Everyone has their own idea about what happiness is but there is no quick fix to finding consistent happiness. Yet we often find ourselves swayed toward those temporary measures to live a good life, and struggle to understand why the happy buzz didn’t last long.
The pursuit of happiness is wrapped up in many myths. Happiness is not at the end of our wallet or in our bank account, nor is it something to wait for with the belief that “I will be happy when I get a promotion, when I finish my degree, when I find a partner.” Happiness is not a commodity, yet we are sold the idea that we can buy it. Nor is it as solid or long-lasting as we believe it to be; rather, it exists on a continuum and is often in a state of flux. The myth of happiness tells us that if anything negative happens, such as a divorce, illness or bereavement, we will be unhappy for the long term. And as much as we may buy into this ill-conceived idea, many of us remain conflicted about what happiness actually is and how to maintain it, partly because of the many misconceptions about how to achieve contentment.
To debunk some of the most misunderstood ideas surrounding happiness, Dr Róisín Joyce, director and chartered clinical psychologist at the Evidence-Based Therapy Centre in Galway, helps us to understand this emotion.
1) Myth – There is something wrong with you if you’re not happy all the time
“People often feel like there is something wrong with them or that they are in some way broken because they do not feel happy all the time,” says Dr Joyce, noting that the first thing to be aware of is that happiness is an emotional response to an internal or external trigger rather than a constant state of being.
“As humans, we are continually interacting with stimuli, people and situations that can activate emotional responses in us,” she says. “Whether or not something provokes an emotional reaction in us and what type of emotion is provoked depends on a number of factors such as our brain development, our temperament and our personal histories.”
Dr Joyce illustrates this idea with the example of a child growing up with a lot of parental criticism. “They might get very upset when given any feedback at work or in relationships,” she says. “They may experience themselves as an ‘unhappy person’ but when you start to understand their emotional response in the context of their history, it makes perfect sense that experiencing criticism triggers sadness in them. By gaining an understanding of why we feel and respond in the way we do, it can result in greater self-compassion, self-care and increased peace of mind.”
2) Myth – You can’t be happy unless everything is the way it should be
“Human beings are always telling stories,” says Dr Joyce. “In the absence of facts, we simply make up a story that fits well enough and go from there. This can serve us well in many circumstances, but it can also lead to a great deal of suffering.”
Taking the idea of a simple family meal out together, Dr Joyce asks us to imagine the scene. “Everyone is dressed up, chatting pleasantly, smiling and laughing. You then get to the restaurant and your children start bickering or the food is not as tasty as you remember. In this moment, your expectations of the situation and the reality do not match, and this can lead to feelings of frustration, anger and disappointment.”
Happiness is an emotional state associated with feelings of satisfaction, fulfilment and joy. We settle well into it because it is a state of being comfortable, feeling good and content. When other feelings creep in and rock the boat, our happiness meter can drop, because we live within rigid expectations and struggle to accept that happiness coexists with other emotions.
“These emotions can lead to unhelpful thinking loops that then fuel further emotions such as, ‘This is so unfair, this is our only night out all month’,” says Dr Joyce. “In this way, our fantasies about how things should be interfere with us being able to manage situations as they arise and enjoy the situation as it is and so can actually rob us of our joy and happiness in the moment.”
3) Myth – External factors (money, holidays, cars etc) will bring you happiness
The endorphins released when clicking Buy Now are pretty powerful but that rush of hormones only gives a temporary jolt of happiness. As soon as that item is in our possession, our happiness levels drop back to before we spent our hard-earned cash. The unhelpful belief that buying big-ticket items like holidays and cars will make us happy is harmful to not only our bank balance but also our understanding of what makes us happy.
“There is evidence that money does make us happier, but only up to a point,” clarifies Dr Joyce. “The level of income that correlates to happiness varies in different studies but there is a consensus that once you have enough money to meet your household needs, there is no further happiness benefit in having more money. There is also research to suggest that being generous with money and buying experiences are both correlated with happiness and so, in this way, the way you use your money is important too.”
4) Myth – Happiness means feeling only good and positive emotions
“Many people have an idea that people who appear happy are feeling happy all the time,” says Dr Joyce. “Of course, this cannot be the case because anyone who is alive experiences moments of suffering all the time. These can be small disappointments such as losing out on an opportunity or major life events such as a bereavement.”
We have power and influence over our emotions and how we experience them, and this affects our level of happiness. The choices we make include how we navigate feelings outside the happiness realm. All emotions are important, valid and serve a purpose.
“People who experience peace of mind are able to experience the emotions that go with the normal ups and downs of life without these emotions taking over,” she says. “They do this by acknowledging their emotions, knowing that they are part of the normal fabric of life and helping themselves in any way they can. In this way, their emotions flow through them and pass quickly, meaning they can return to a settled state of mind again. It is our attempts to repress our so-called undesirable emotions that lead them to last longer and cause even more suffering and disturbance.”
What is green energy? What to know about renewable, clean power like solar and wind energy
The world is in the midst of a shift away from fossil fuels and towards carbon-neutral energy sources, a change expected to be as momentous as the adoption of coal that enabled the Industrial Revolution in the 1700s.
The United States has pledged to become carbon neutral by 2050, just 27 years from now. It's a move that 69% of Americans support and one that would mitigate climate change, clean the air and make power cheaper.
But what does green energy mean, actually? Is it wind turbines, solar panels, natural gas or nuclear? Who decides and what are the guidelines?
What to know about clean, green energy:
What does green energy mean?
Renewable energy is electricity produced by fuel sources that renew themselves and do not diminish when humans tap them for power. Think the sun, the wind, plants and the heat at the Earth's core. These include electricity from solar panels, wind turbines, hydroelectric dams and what's known as biomass, which is burning wood, crop waste or garbage.
For 2023, 16% of US electricity will come from solar and wind projects, according to the U.S. Energy Information Administration. Hydroelectric power produces 6% of U.S. electricity. Biomass, which includes ethanol blended into gasoline, made up about 5% of total US energy consumption.
Renewables also include geothermal energy, which means making power from naturally occurring underground reservoirs of very hot water and steam. Mostly available in the western United States, geothermal power is still a tiny proportion of US energy but already produces enough electricity to power 2.7 million homes. For comparison, geyser-filled Iceland gets 66% of its primary energy from geothermal power.
Why is it called green energy?
Like many ecologically friendly initiatives, "green" energy has gotten its name because it is good for the planet.
It has become common to label clean, renewable projects "green" to remind people that they are intended to help lead to a healthier, greener, more sustainable planet. Green racing, the Green New Deal and green plane fares are other examples.
While scientists agree green energy helps fight climate change, it's important to remember that not everything with the "green" label is actually better. It can also be used as a form of "greenwashing" where a company tries to make a product or policy seem environmentally friendly when it in fact isn't.
Green energy helps fight climate change
The shift to renewable energy is important because most of these power sources don't produce the greenhouse gases that drive climate change. These gases, especially the carbon dioxide produced when coal, oil or natural gas is burned, create a "blanket" in the atmosphere that holds in heat.
Since humans began burning large amounts of coal at the beginning of the Industrial Revolution, later adding oil and natural gas, the amount of carbon dioxide in the atmosphere has increased from 280 parts per million to 418 parts per million.
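Those two concentrations, both cited above, make for a quick back-of-the-envelope calculation of how much atmospheric CO2 has risen:

```python
# CO2 concentrations cited in the article (parts per million).
pre_industrial_ppm = 280
current_ppm = 418

increase = current_ppm - pre_industrial_ppm             # 138 ppm
percent_increase = 100 * increase / pre_industrial_ppm  # ~49%

print(f"CO2 is up {increase} ppm, about a {percent_increase:.0f}% rise "
      "since pre-industrial times")
```

In other words, the atmosphere now holds roughly half again as much CO2 as it did before large-scale coal burning began.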
Is renewable energy the same as green energy?
In general, renewable and green energy mean the same thing.
The U.S. Environmental Protection Agency defines green power as a subset of renewable energy, including all renewable energy resources that provide the greatest environmental benefit and the lowest environmental cost.
In practice, this means all renewable energy sources with the exception of large hydroelectric resources that can have "environmental trade-offs on issues such as fisheries and land use."
Is solar power green energy? Is wind? Definitions explained.
Here's a cheat sheet:
- Green energy: Wind, solar, small hydro, geothermal, biomass
- Renewable energy: Wind, solar, all hydro, geothermal, biomass
- Carbon neutral: Wind, solar, hydro, nuclear, geothermal, biomass
- Conventional: Coal, natural gas, oil, nuclear
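The cheat sheet above is really a set of overlapping categories, which can be captured in a small lookup table. This is an illustrative sketch based only on the article's list; the names and groupings here are not an official taxonomy:

```python
# Category membership transcribed from the cheat sheet above (an assumption
# based on the article's text, not an official EPA or EIA classification).
ENERGY_CATEGORIES = {
    "green":          {"wind", "solar", "small hydro", "geothermal", "biomass"},
    "renewable":      {"wind", "solar", "small hydro", "large hydro",
                       "geothermal", "biomass"},
    "carbon_neutral": {"wind", "solar", "small hydro", "large hydro",
                       "nuclear", "geothermal", "biomass"},
    "conventional":   {"coal", "natural gas", "oil", "nuclear"},
}

def categories_for(source: str) -> list[str]:
    """Return every category a given energy source belongs to."""
    return [cat for cat, members in ENERGY_CATEGORIES.items()
            if source in members]

print(categories_for("nuclear"))      # carbon-neutral but also conventional
print(categories_for("large hydro"))  # renewable and carbon-neutral, not "green"
```

The overlaps are the point: nuclear appears in both the carbon-neutral and conventional buckets, and large hydro is renewable without qualifying as "green" under the EPA's narrower definition.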
What is carbon-neutral energy?
Carbon-neutral energy is energy that is produced without emitting greenhouse gases into the atmosphere.
Renewable energy sources – wind, solar, hydroelectric, biomass and geothermal – are all considered carbon-neutral energy production, although building them (and all energy plants) does produce carbon. The major energy source that's carbon neutral but not renewable is nuclear power.
America's 92 nuclear power plants produce about 20% of US electricity and about 50% of the nation's carbon-neutral energy.
What are conventional energy sources?
Conventional power is energy that comes from the burning of fossil fuels including coal, natural gas and oil. Power from the nuclear fission of uranium is also considered conventional. These fuels all have environmental costs from mining, drilling and extraction and all but nuclear power emit greenhouse gases.
Elizabeth Weise covers climate change and the energy transition for USA TODAY. Reach out to her at <email-pii>.
March is bound to get even madder.
Scientists at Cornell University have put together a data model that suggests that the application of a physics theory to basketball may lead to teams scoring five to 15 more points per game.
Researchers analyzed player metrics and material that were accrued from an undisclosed NBA team through a stop-motion camera during many of its games this season. The science squad was then able to project precise positioning that guaranteed better scoring outcomes for individual players — sometimes by moving mere inches.
“Every 40 milliseconds, we know with … a very high degree of accuracy, where every player is and where the ball is located,” Boris Barron, a doctoral physics student on the project, told The Post.
“[Our work] has the potential to be a game changer for basketball … This is taking ‘Moneyball’ to the extreme.”
Although the Big Red missed the big dance, Barron — along with physics professor Tomás Arias and peer Nathan Sitaraman — have been on their toes these past few weeks by applying density-functional fluctuation theory (DFFT) to introduce “more kind of advanced quantitative analysis” to the game.
In quite plain terms, DFFT looks at fluctuations caused by certain events that either separated or brought together entities within a group. Previous research using the theory observed how fruit fly clusters adapted to heat being introduced to their environment and separately, was used to predict crowd behavior among people.
Barron and company are using DFFT to break down the spatial interactions of where players like to be and how players interact with one another on the court.
“Looking back at a game, I can see how this can help players improve,” Barron said. “The improvements can be in the [team total] range of five points in 100. It wouldn’t shock me based on the results that we’re getting here,” he added, mentioning that there could “potentially” be upticks by 15 points or more.
The approach can quantify a player’s success, or lack thereof, from several nearby positions on the court — thus predicting more exact locations where they will score more or defend better in just about any given scenario.
“We can take a look at a snapshot of a game and ask, does this look like a good position for the offense? Or does this look like a bad position for the offense?” Barron said.
“Where this becomes useful is that we can improve a player’s positioning,” he added of the data, which currently only accounts for two-point shots.
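The Cornell model itself is not public, but the idea of scoring a player's current spot against nearby alternatives can be sketched with a toy example. Here the expected-points surface is entirely invented (a simple decay with distance from the basket) and merely stands in for the fitted DFFT model; only the search-over-nearby-positions idea reflects the article:

```python
import math

def expected_points(x: float, y: float) -> float:
    """Toy expected-points surface: value decays with distance from the
    basket at (0, 0). A placeholder for the fitted model, which is not public."""
    return 2.0 * math.exp(-math.hypot(x, y) / 10.0)

def best_nearby_position(x: float, y: float, step: float = 0.5):
    """Evaluate the current spot plus small shifts around it (the article's
    'moving mere inches') and return the highest-scoring candidate."""
    candidates = [(x + dx, y + dy)
                  for dx in (-step, 0.0, step)
                  for dy in (-step, 0.0, step)]
    return max(candidates, key=lambda p: expected_points(*p))

# A player at (4, 3) scores the toy metric slightly better half a step
# closer to the basket.
bx, by = best_nearby_position(4.0, 3.0)
print((bx, by), expected_points(bx, by) - expected_points(4.0, 3.0))
```

A real system would learn the surface from the tracking data rather than assume it, and would score defensive positioning as well as offense, but the optimization step is the same shape.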
Former Oakland A’s general manager Billy Beane found incredible success with another data intensive strategy — “Moneyball” — in the early 2000s.
Beane was constantly asking “but can he get on base?”
In that same vein, many basketball coaches may soon pose the question “but can he drive to the net?” from simulations based on the Cornell research.
“We’re determining where each of the players should move,” Barron said. “We’re pretty much saying ‘this guy, in this case, should prefer to take kind of this path [to the basket].’”
Statistics wrung from DFFT simulations can hyper-analyze positioning to help teams better scout future opponents and individual matchups.
Admittedly, more variables — like accounting for players’ set positions, specialty skill sets and re-running the numbers to include three-pointers — still need to get worked in, according to the doctoral student.
“Maybe [next] we can follow along a certain kind of player and see if they tend to stand in good positions for the team or maybe not so good positions for the team,” he said.
“You can imagine turning some of our modeling into a simulation tool for coaches.”
Even with changes to come, Barron said the theory behind what they’re shooting for is sound at the moment.
“Going forward, you can imagine using this to provide a positioning metric for basketball.”
|
<urn:uuid:36bd45a6-6191-48ef-8b8b-7acdb6417904>
|
{
"dump": "CC-MAIN-2023-23",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224646181.29/warc/CC-MAIN-20230530230622-20230531020622-00271.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.959063708782196,
"pii_count": 0,
"score": 2.578125,
"token_count": 866,
"url": "https://nypost.com/2023/03/20/physics-theory-could-upend-march-madness-tournament/"
}
|
March is bound to get even madder.
Scientists at Cornell University have put together a data model that suggests that the application of a physics theory to basketball may lead to teams scoring five to 15 more points per game.
Researchers analyzed player metrics and material that were accrued from an undisclosed NBA team through a stop-motion camera during many of its games this season. The science squad was then able to project precise positioning that guaranteed better scoring outcomes for individual players — sometimes by moving mere inches.
“Every 40 milliseconds, we know with … a very high degree of accuracy, where every player is and where the ball is located,” Boris Barron, a doctoral physics student on the project, told The Post.
“[Our work] has the potential to be a game changer for basketball … This is taking ‘Moneyball’ to the extreme.”
Although the Big Red missed the big dance, Barron — along with physics professor Tomás Arias and peer Nathan Sitaraman — have been on their toes these past few weeks by applying density-functional fluctuation theory (DFFT) to introduce “more kind of advanced quantitative analysis” to the game.
In quite plain terms, DFFT looks at fluctuations caused by certain events that either separated or brought together entities within a group. Previous research using the theory observed how fruit fly clusters adapted to heat being introduced to their environment and separately, was used to predict crowd behavior among people.
Barron and company are using DFFT to break down the spatial interactions of where players like to be and how players interact with one another on the court.
“Looking back at a game, I can see how this can help players improve,” Barron said. “The improvements can be in the [team total] range of five points in 100. It wouldn’t shock me based on the results that we’re getting here,” he added, mentioning that there could “potentially” be upticks by 15 points or more.
The approach can quantify a player’s success, or lack thereof, from several nearby positions on the court — thus predicting more exact locations where they will score more or defend better in just about any given scenario.
“We can take a look at a snapshot of a game and ask, does this look like a good position for the offense? Or does this look like a bad position for the offense?” Barron said.
“Where this becomes useful is that we can improve a player’s positioning,” he added of the data, which currently only accounts for two-point shots.
Former Oakland A’s general manager Billy Beane found incredible success with another data intensive strategy — “Moneyball” — in the early 2000s.
Beane was constantly asking “but can he get on base?”
In that same vein, many basketball coaches may soon pose the question “but can he drive to the net?” from simulations based on the Cornell research.
“We’re determining where each of the players should move,” Barron said. “We’re pretty much saying ‘this guy, in this case, should prefer to take kind of this path [to the basket].’ “
Statistics wrung from DFFT simulations can hyper-analyze positioning to help teams better scout future opponents and individual matchups.
Admittedly, more variables — like accounting for players’ set positions, specialty skill sets and re-running the numbers to include three-pointers — still need to get worked in, according to the doctoral student.
“Maybe [next] we can follow along a certain kind of player and see if they tend to stand in good positions for the team or maybe not so good positions for the team,” he said.
“You can imagine turning some of our modeling into a simulation tool for coaches.”
Even with changes to come, Barron said the theory behind what they’re shooting for is sound at the moment.
“Going forward, you can imagine using this to provide a positioning metric for basketball.”
|
ChatGPT is much like InstructGPT: a chatbot that can give instant answers to questions. We can explore its potential by looking at the benefits of using ChatGPT for learning. It also works as an independent search tool that gives users the opportunity to learn at their own pace. But before we dive further:
What is ChatGPT?
ChatGPT (Chat Generative Pre-Trained Transformer) is a large language model-based chatbot released by OpenAI on November 30, 2022. It is notable for enabling users to steer a conversation towards a desired length, format and style. Successive prompts and replies, a practice known as prompt engineering, are taken into account at each stage of the conversation as context.
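That context mechanism can be sketched in a few lines of Python: each prompt and each reply is appended to a running message list, and the whole list accompanies every new request, which is how a follow-up like "summarize that" can be resolved. The `fake_model` function below is a stand-in for a real chat-completion API call; its canned reply and the default system prompt are assumptions for illustration.

```python
def fake_model(messages):
    """Stand-in for a real chat-completion API call: a real client would
    send `messages` to the model and return its generated reply."""
    return f"(reply informed by {len(messages)} messages of context)"

class Conversation:
    """Accumulates prompts and replies so each request carries the full
    conversation as context (the mechanism behind follow-up questions)."""

    def __init__(self, system_prompt="You are a helpful tutor."):
        self.messages = [{"role": "system", "content": system_prompt}]

    def ask(self, prompt):
        self.messages.append({"role": "user", "content": prompt})
        reply = fake_model(self.messages)  # the model sees everything so far
        self.messages.append({"role": "assistant", "content": reply})
        return reply

convo = Conversation()
convo.ask("Explain photosynthesis.")
convo.ask("Now summarize that in one sentence.")  # "that" resolves via context
```

Because the whole list is resent each time, the model's memory is bounded by how many past messages fit in its context window, which is why ChatGPT remembers only a limited stretch of a long conversation.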
Although the core function of a chatbot is to mimic a human conversationalist, ChatGPT is versatile. Among countless examples, it can write music, teleplays, fairy tales and student essays. It can also answer test questions (sometimes, depending on the test, at a level above the average human test-taker), generate business ideas, and write poetry and song lyrics. It can translate and summarize text, emulate a Linux system, or simulate entire chat rooms.
In March 2023, OpenAI added support for plugins for ChatGPT. These include both plugins made by OpenAI, such as web browsing and code interpretation, and external plugins from developers such as Expedia, OpenTable, Zapier, Shopify, Slack, and Wolfram.
What are the benefits of using ChatGPT for learning?
These are: Engaging Learning Experience, Personalized Assistance, Exploration of Topics, 24/7 Availability, Immediate Feedback, Practice and Application, Language Improvement, Access to Resources, Complex Problem Solving, Supplement to Formal Education, Concept Reinforcement, and Low-Pressure Environment.
In comparison to its predecessor, InstructGPT, ChatGPT attempts to reduce harmful and deceitful responses. In one example, whereas InstructGPT accepts the premise of the prompt “Tell me about when Christopher Columbus came to the U.S. in 2015” as truthful, ChatGPT acknowledges the counterfactual nature of the question and frames its answer as a hypothetical consideration of what might have happened if Columbus had come to the U.S. in 2015, using information about the voyages of Christopher Columbus and facts about the modern world, including modern perceptions of Columbus’s actions.
Unlike most chatbots, ChatGPT remembers a limited number of previous prompts in the same conversation. Some journalists have suggested that this could allow ChatGPT to serve as a personal therapist.
It offers many benefits to users who are willing to learn. To prevent offensive outputs from ChatGPT, queries are filtered through the OpenAI “Moderation endpoint” API (a separate GPT-based AI), and potentially racist or sexist prompts are dismissed.
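The application-side half of such a filter can be sketched as a small decision function. The category names and the blocking policy below are illustrative assumptions, and `should_block` operates on a locally constructed dict; in a live system that dict would be built from an actual moderation API response rather than hard-coded.

```python
def should_block(result, blocked=("hate", "harassment", "sexual/minors")):
    """Decide whether to reject a prompt, given a moderation-style result.

    `result` is a dict shaped like a moderation response, e.g.
    {"flagged": True, "categories": {"hate": True, "violence": False}}.
    A prompt is rejected only if it is flagged in a blocked category.
    """
    if not result.get("flagged"):
        return False
    categories = result.get("categories", {})
    return any(categories.get(name, False) for name in blocked)

# Hypothetical moderation results for two incoming prompts:
clean = {"flagged": False, "categories": {}}
hateful = {"flagged": True, "categories": {"hate": True}}

print(should_block(clean))    # False: forwarded to the model
print(should_block(hateful))  # True: dismissed before generation
```

Keeping the blocked-category list in application code, rather than relying only on the upstream service's verdict, lets a deployment tighten or relax the filter for its own audience.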
In a previous article on this blog we learnt about the negative effects of ChatGPT, but in this article we will learn about some of the benefits of using ChatGPT for learning:
01. Effective Communication:
ChatGPT can work at any given time; it is available 24/7, which makes it an effective tool for communication. You can also communicate with it in many languages, since it can comprehend instructions in multiple languages, and its consistent availability allows effective communication for every user or learner.
02. Cost Friendly:
Studies from universities such as Stanford University show that ChatGPT is very helpful in saving costs. It has drastically reduced the amount students spend on research work and other forms of online learning. Learners and users do not have to buy expensive textbooks or pay for some courses, because they can easily find and analyze the information they need with ChatGPT.
03. All-Round Satisfaction:
ChatGPT offers all-round satisfaction to every user. There are no limits or restrictions on what you can use it to learn. Regardless of your field of study, you can use it to ask questions about anything and find answers to them. Researchers and programmers make use of ChatGPT, and learners from different disciplines and backgrounds are able to access it and use it to their advantage.
It is important to note that despite ChatGPT’s recent competitors, the benefits of using it for learning will continue to prove significant. The way it is being used will help improve our world by enabling more conversations and further interactions, while fostering more awareness and interest among users. Used cautiously, ChatGPT can lead to a top-notch learning experience and greater access to knowledge and resources.
If you are still looking for an integrated school management solution that fits your school’s operations, simply request a free demo of our system to see how it works. Register here for free.
|
The European Union’s Nature Restoration Law – which is intended to restore ecosystems and make the environment more resilient to climate change – is in peril after a vote on June 27th in the European Parliament’s environment committee was evenly split. It will go to a vote of all 705 MEPs in the parliament’s next plenary session on July 11th.
It is common, if shamefully neglected, knowledge that the European Union and the Irish Government have failed miserably to engage effectively with the public, especially farmers, about nature conservation strategies.
Agricultural and environmental policies have been infuriatingly contradictory. Brussels and Dublin have pushed farmers to intensify production, while haphazardly penalising them for the ecological damage which intensification inflicts.
That is one context in which the fierce controversy around the Nature Restoration Law must be understood. The law offers the last best hope for reversing the collapse of Europe’s natural systems. Too much rests on its approval to allow the failures of the past to shape the future. But if we don’t change this context, the law will fail, even if enacted.
It is equally important to understand another context – the rapidly accelerating decline of Europe’s ecosystems – which caused the new law to be framed as active restoration, rather than passive protection. Eighty-five per cent of EU habitats, especially farmed landscapes, are in bad or poor condition. Most are getting worse. Animal and plant populations, including agriculturally vital insect pollinators, are vanishing. The Habitats and Birds Directives, based on protection, have failed. But does this matter, except to birdwatchers and botanists? Well, surveys suggest that most people do value nature. The Citizens’ Assembly on Biodiversity Loss called, overwhelmingly, for “decisive and urgent action to address biodiversity loss and restoration”.
Catastrophic news of floods, fires and extinctions has mobilised public opinion. There is now wider understanding that our economies, societies and wellbeing depend entirely on the services that healthy ecosystems supply, from clean water to fertile soil, from pollination to stable climates.
Restoration is a relatively new conservation strategy, arising precisely because we have lost so many healthy ecosystems that we now need to actively help them recover. Restoration, based on sound science, is a rare hopeful message in grim times.
So it is unfortunate that the Nature Restoration Law has been so targeted by scaremongering. It scraped by the EU environment committee, even in diluted form, and faces powerful parliamentary opposition.
The European People’s Party has been accused of running such a flagrant disinformation campaign that some members, including Fine Gael, have distanced themselves. Some MEPs from Fianna Fáil’s Renew grouping also voted against the law, though no Irish MEPs from those parties were on the committee.
Misleading phrases abound. Restoring biodiversity is not ‘abandoning land’, it is enabling our environment to continue providing us with essential services, including carbon sequestration, not to mention beautiful landscapes. Nor is rewetting about ‘flooding farms’. Wetlands, properly restored, act as sponges, slowing rainwater run-off. Degraded land accelerates flooding. Contrary to EPP claims, each country is free to meet targets in its own ways, and the law imposes no changes on any individual landowner.
Attacks on provisions the law does not contain make a mockery of claims by these parties, and by agribusiness vested interests, to be treating the biodiversity and climate crises seriously. It’s a classic approach of “Lord make me ready, but not yet – not while there are short-term profits to be made”.
We simply can’t delay any longer. Big farming lobbies are campaigning against the real interests of farming families. Agriculture is already becoming difficult in parts of Europe – ask any north Dublin fruit or vegetable producer. Unless we take boldly radical steps to restore landscapes, farming will become simply impossible. The Nature Restoration Law does not undermine food security. It is its guarantee.
But these lobbies will continue to garner popular support, and populist votes, until the EU, and national governments, fully recognise that there is a social as well as an environmental context to the law. Nature must be restored in Europe if we are to survive and thrive – and farmers must be fully rewarded for participating in this restoration. The farmer-led Burren Programme, based on measurable outcomes rather than prohibitions, offers a proven model for doing just this.
Paddy Woodworth is the author of Our Once and Future Planet: Restoring the World in the Climate Change Century
|
In today’s blog, we pay tribute to Betty Boothroyd, the first female Speaker of the Commons, who sadly died yesterday. Dr Emma Peplow, Head of our Oral History Project, reflects on her historic career.
Betty Boothroyd will go down in parliamentary history as the first woman to be elected Speaker of the Commons – she was also the first Speaker to be elected from the ranks of the opposition. Widely considered a kind, fair but tough speaker, when she stood down in 2000 she received plaudits across the House for her handling of the role. In her interview for our oral history project, she reflected on how it could feel:
Boothroyd was born in Dewsbury, West Yorkshire. The family often struggled financially as her father faced significant periods of unemployment, but both of her parents were Trade Unionists. Boothroyd remembered being taken by her mother to Labour party women’s section meetings as a young girl, helping to raise money for the party. This included trips to ‘the most marvellous rallies’ in the larger Yorkshire cities on the weekend where ‘the great stars of Westminster came: Attlee, Jim Griffiths, Antony Greenwood, Aneurin Bevan and his wife Jennie Lee.’ Throughout her interview she described a constant interest in politics, but also a reluctance to put herself forward at first and a lack of confidence early on.
In 1947 Boothroyd put aside her good secretarial education to move to London and pursue a career as a dancer – much to her father’s displeasure. She remembered her mother persuaded him to let Boothroyd ‘get it out of her system’: something that quickly happened. Missing home she returned and combined secretarial jobs with Labour League of Youth work. In the late 1950s she moved back to London to work for the Labour party, and later as a parliamentary secretary to MPs Barbara Castle and Geoffrey de Freitas. The early 1960s she spent in the US, working on John F. Kennedy’s Presidential campaign and later for a US Congressman, but ‘the plan was always to come back’. She described this experience as reinforcing her admiration for the British parliamentary system.
Meanwhile, Boothroyd was trying to become an MP herself. Like many women of her generation she found it hard to be selected for a winnable seat. From the late 1950s she stood in multiple constituencies; each election involved getting to know a new constituency, and preparing three different speeches for election night depending on the outcome of the election. When she was finally elected in 1973 for West Bromwich she had been in tears earlier that night, convinced she had lost again. She held the successor seat – West Bromwich West – until she stood down in 2000.
Boothroyd described taking to life as an MP ‘like a duck to water’ – her previous experience working in Westminster now invaluable. She began by doing most of her own secretarial work. This prior experience didn’t stop her fears over her maiden speech, however:
Aside from her role in Parliament, Boothroyd soon joined the whips’ office and became a member of the European parliament; a period she didn’t enjoy due to the considerable demands on her time divided between Brussels, London, and her constituency back in the Midlands. She described always having two suitcases made up on the spare bed, and finding it simply exhausting.
She continued to work her way up through the Commons, sitting on the Speakers’ panel of Chairmen, meaning much of her time was spent chairing various bill committees. In 1987 the then Speaker, Bernard Weatherill, asked her to stand for election as Deputy Speaker. She described enjoying this position but ‘all you had to do really was sit in the chair’, as important decisions were taken by the Speaker beforehand. When Weatherill stood down she took some convincing to stand as Speaker herself: ‘it wasn’t being female that I was worried about, it was this lack of confidence: could I do it? I might let the side down, I might let down everything I believe in if I wasn’t up to it.’ Boothroyd had cross-party support for her election; the Conservative John Biffen nominated her for the position. ‘I decided in my own mind I would go for it …. If I lost I wanted to lose well, I didn’t want it to be a disaster.’
Throughout her time as Speaker she developed a reputation for being tough, one she put down to the fact that she held high standards over the use of language in the House. She recalled the time she banned DUP leader Ian Paisley from Parliament for ten days after he called another MP a liar in the Chamber. In the end Paisley thanked her for all the press coverage he got from the incident and the pair became good friends!
Throughout her interview she described herself as a ‘parliamentarian, not a politician’. She also reflected back on her life, stating clearly that she had made a choice not to have a family to pursue her career, and that decision had given her ‘the most interesting and wonderful life.’ It is certainly one that will keep its place in the history books.
Since the announcement of her death yesterday the tributes have come flooding in across the political spectrum, with more due in the Commons today. The current Speaker, Sir Lindsay Hoyle, called her ‘groundbreaking’, and by becoming the first female Speaker she had ‘certainly broke that glass ceiling with panache’. Labour’s Alistair Campbell paid tribute to her as ‘one of the kindest, wisest, most loving and loveable women you could ever wish to know’; whilst former leader of the Scottish Conservatives, Ruth Davidson, described her simply as ‘magnificent’.
Find more blogs from our Oral History project here.
|
<urn:uuid:c7b862c9-64e4-4ccf-ad75-9156490f0b05>
|
{
"dump": "CC-MAIN-2023-14",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949644.27/warc/CC-MAIN-20230331144941-20230331174941-00295.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9847499132156372,
"pii_count": 0,
"score": 3.015625,
"token_count": 1245,
"url": "https://thehistoryofparliament.wordpress.com/2023/02/28/betty-boothroyd-the-voice-from-the-speakers-chair/"
}
|
In today’s blog, we pay tribute to Betty Boothroyd, the first female Speaker of the Commons, who sadly died yesterday. Dr Emma Peplow, Head of our Oral History Project, reflects on her historic career.
Betty Boothroyd will go down in parliamentary history as the first woman to be elected Speaker of the Commons – she was also the first Speaker to be elected from the ranks of the opposition. Widely considered a kind, fair but tough speaker, when she stood down in 2000 she received plaudits across the House for her handling of the role. In her interview for our oral history project, she reflected on how it could feel:
Boothroyd was born in Dewsbury, West Yorkshire. The family often struggled financially as her father faced significant periods of unemployment, but both of her parents were Trade Unionists. Boothroyd remembered being taken by her mother to Labour party women’s section meetings as a young girl, helping to raise money for the party. This included trips to ‘the most marvellous rallies’ in the larger Yorkshire cities on the weekend where ‘the great stars of Westminster came: Attlee, Jim Griffiths, Antony Greenwood, Aneurin Bevan and his wife Jennie Lee.’ Throughout her interview she described a constant interest in politics, but also a reluctance to put herself forward at first and a lack of confidence early on.
In 1947 Boothroyd put aside her good secretarial education to move to London and pursue a career as a dancer – much to her father’s displeasure. She remembered her mother persuaded him to let Boothroyd ‘get it out of her system’: something that quickly happened. Missing home she returned and combined secretarial jobs with Labour League of Youth work. In the late 1950s she moved back to London to work for the Labour party, and later as a parliamentary secretary to MPs Barbara Castle and Geoffrey de Freitas. The early 1960s she spent in the US, working on John F. Kennedy’s Presidential campaign and later for a US Congressman, but ‘the plan was always to come back’. She described this experience as reinforcing her admiration for the British parliamentary system.
Meanwhile, Boothroyd was trying to become an MP herself. Like many women of her generation she found it hard to be selected for a winnable seat. From the late 1950s she stood in multiple constituencies; each election involved getting to know a
|
new constituency, and preparing three different speeches for election night depending on the outcome of the election. When she was finally elected in 1973 for West Bromwich she had been in tears earlier that night, convinced she had lost again. She held the successor seat – West Bromwich West – until she stood down in 2000.
Boothroyd described taking to life as an MP ‘like a duck to water’ – her previous experience working in Westminster now invaluable. She began by doing most of her own secretarial work. This prior experience didn’t stop her fears over her maiden speech, however:
Aside from her role in Parliament, Boothroyd soon joined the whips’ office and became a member of the European parliament; a period she didn’t enjoy due to the considerable demands on her time divided between Brussels, London, and her constituency back in the Midlands. She described always having two suitcases made up on the spare bed, and finding it simply exhausting.
She continued to work her way up through the Commons, sitting on the Speakers’ panel of Chairmen, meaning much of her time was spent chairing various bill committees. In 1987 the then Speaker, Bernard Weatherill, asked her to stand for election as Deputy Speaker. She described enjoying this position but ‘all you had to do really was sit in the chair’, as important decisions were taken by the Speaker beforehand. When Weatherill stood down she took some convincing to stand as Speaker herself: ‘it wasn’t being female that I was worried about, it was this lack of confidence: could I do it? I might let the side down, I might let down everything I believe in if I wasn’t up to it.’ Boothroyd had cross-party support for her election, the Conservative John Biffen nominated her for the position. ‘I decided in my own mind I would go for it …. If I lost I wanted to lose well, I didn’t want it to be a disaster.’
Throughout her time as Speaker she developed a reputation for being tough, one she put down to the fact that she held high standards over the use of language in the House. She recalled the time she banned DUP leader Iain Paisley from Parliament for a ten days after he called another MP a liar in the Chamber. In the end Paisley thanked her for all the press coverage he got from the incident and the pair became good friends!
Throughout her interview she described herself as a ‘parliamentarian, not a politician’. She also reflected back on her life, stating clearly that she had made a choice not to have a family to pursue her career, and that decision had given her ‘the most interesting and wonderful life.’ It is certainly one that will keep its place in the history books.
Since the announcement of her death yesterday, tributes have come flooding in from across the political spectrum, with more due in the Commons today. The current Speaker, Sir Lindsay Hoyle, called her ‘groundbreaking’, saying that as the first female Speaker she had ‘certainly broke that glass ceiling with panache’. Labour’s Alastair Campbell paid tribute to her as ‘one of the kindest, wisest, most loving and loveable women you could ever wish to know’, whilst former leader of the Scottish Conservatives, Ruth Davidson, described her simply as ‘magnificent’.
|
On Tuesday, the National Transportation Safety Board (NTSB) found that driver fatigue led to a deadly multivehicle crash in Phoenix in June 2021 that killed four and injured 11 after a tractor-trailer carrying milk rammed into stopped traffic of passenger vehicles.
The investigation revealed that the driver had had less than a six-hour opportunity for sleep on the day of the crash, and regularly worked 70–80 hours a week. The trucking company did not have a program to manage driver fatigue, had poor oversight of its drivers, and did not enforce its own policies regarding the maximum hours employees could work.
A program to manage driver fatigue, as well as collision avoidance technology, would have prevented the fatal collision, the safety agency said, highlighting the dangers of being drowsy behind the wheel.
New research announced earlier this month by the AAA Foundation for Traffic Safety, a nonprofit research and education association, found that many drivers may be unaware of how drowsy they are, and as a result, fail to take breaks because they may not realize the risks until it’s too late.
“Being drowsy while driving is a dangerous form of impairment, and it does not resolve or improve with continued driving,” David Yang, the AAA Foundation’s president and executive director, said in a statement. “Our goal is to help drivers learn to heed the early warning signs of drowsiness so they can stop, rest, and then continue their journey as safely as possible.”
Drowsy driving plays a significant role in traffic crashes, injuries, and deaths, the safety group noted. Previous research estimated that 16% to 21% of all police-reported fatal vehicle crashes likely involve drowsy driving. Beyond the danger of falling asleep at the wheel, researchers said, drowsiness also reduces drivers’ alertness.
“Crashes caused by drowsy driving tend to be severe because the driver may not attempt to brake or swerve to avoid a collision, so the resulting impact occurs at a high rate of speed. A drowsy driver may also be startled and lose control of the vehicle,” according to the research.
For the report, “Drowsiness and Decision Making During Long Drives: A Driving Simulation Study,” researchers designed a nighttime highway driving experiment, which included a survey to gauge how drowsy drivers felt and measured the percentage of time their eyes were closed.
The results of the assessment showed that levels of drowsiness generally increased throughout the three-hour driving simulation, and while participants were usually aware that they were drowsy, their perceptions of their levels of sleepiness were not always accurate, which affected their decision-making.
Key findings from the study:
- When drivers rated their level of drowsiness as low, 75% of them were, in fact, moderately or severely drowsy.
- Even when drivers’ eyes were closed for 15 seconds or longer over a one-minute window (indicative of severe drowsiness), one in four still rated their drowsiness as low.
- Drivers very rarely took breaks unless they perceived that they were very drowsy.
- Even when drivers recognized they were extremely drowsy, they still declined 75% of their opportunities to take breaks and kept driving.
“Obtaining sufficient sleep, napping, and consuming caffeine are among the few evidence-based countermeasures that drivers can employ to prevent or mitigate drowsy driving,” the safety group said.
|
Amelia Earhart Hangar Museum in Atchison to show historic plane and interactive exhibits
The fate of Amelia Earhart remains one of the world's great unsolved mysteries.
The Atchison-born pilot and her navigator, Fred Noonan, vanished in 1937 in the South Pacific while attempting an around-the-world flight.
The 14 interactive exhibits offered at the soon-to-open Amelia Earhart Hangar Museum include one in which visitors will be able to hear commonly held theories about what happened and vote for the one they think is most plausible, said Karen Seaberg, the museum's founder and president.
Ribbon-cutting ceremonies will be at noon Friday for the museum at Amelia Earhart Memorial Airport, one mile west of Atchison at 16701 286th Road.
Who was Amelia Earhart?
The $17 million museum was financed using donations from supporters that include NASA and corporate powerhouses Boeing, Bombardier, FedEx, Garmin and Lockheed Martin. It is seeking to become an affiliate of the Smithsonian Institution, Seaberg said.
The museum celebrates the legacy of Earhart, a trailblazing pilot whose accomplishments include having been the first woman to fly solo across the Atlantic Ocean, the first person to fly solo from Hawaii to the U.S. mainland and the first woman to fly nonstop across the U.S.
Museum visitors will be able to hear a recording of Earhart's voice, made as she spoke to children in New York City after she flew across the Atlantic, Seaberg said.
Earhart became a celebrity and drew attention by rejecting the traditional role set aside for women and speaking in support of the proposed Equal Rights Amendment.
What happened on Amelia Earhart's last flight?
In 1931, Earhart married newspaper publisher George Putnam, who promoted her career and encouraged her to try to become the first pilot to circle the earth at the equator.
Aviator Wiley Post had already circled the globe twice, but over a shorter and less challenging northern route that included passing over Canada, Alaska and the then-USSR.
Earhart, accompanied by Noonan, attempted the around-the-world flight in a Lockheed Model 10 Electra plane. Their trip was more than three-quarters complete when they vanished on July 2, 1937, as they sought to find a landing strip the U.S. government had built for them on small, U.S.-owned Howland Island in the South Pacific.
The most commonly held theory is that Earhart, being unable to see the island, ran out of gas and ditched her plane at sea, where she and Noonan died soon afterward.
What will the Amelia Earhart museum's centerpiece be?
Only one plane still exists among the 149 Lockheed Model 10 Electras that were made.
That plane will be the centerpiece of the Amelia Earhart Hangar Museum.
It is named "Muriel," after Earhart's sister, Grace Muriel Earhart Morrissey, who died in 1998 at age 98.
Those present for Friday's ribbon-cutting are to include Bram Kleppner, a grandson of Grace Muriel Earhart Morrissey, Seaberg said.
In addition to Muriel, the museum will offer interactive exhibits that will enable visitors to:
• Flip through Earhart’s scrapbook, which has been digitized, to learn about the bold women who motivated her.
• Create an avatar while learning about Earhart's career paths as a nurse, mechanic, pilot and fashion designer.
• For $5, take the controls for a virtual reality flight modeled upon Earhart's solo journey across the Atlantic.
How much does admission cost?
Friday's ribbon-cutting ceremony is expected to last about 30 minutes. Those who plan to attend are asked to RSVP in advance using the museum's website.
The museum will open immediately after the ceremony ends. Its regular hours of operation will be 10 a.m. to 5 p.m. Wednesdays through Saturdays and noon to 5 p.m. Sundays. Parking will be free.
Admission will cost $15 plus tax for adults ages 13 and older; $12 plus tax for senior citizens 60 and older, retired military and active duty service members; $8 plus tax for children 4 to 12; and free for children 3 and under.
Group rates are available for parties of more than 20 people; further information is available from the museum.
Contact Tim Hrenchir at <email-pii> or 785-213-5934.
|
When Muhsinah Morris stepped onto the central quad of the Morehouse College campus in Atlanta, Georgia, she cried.
“We hadn’t been on campus for almost a year,” she recalls. “It was amazing. You hear the birds chirping and everything.”
It was 2021, and Morehouse, like many other schools and universities during the Covid-19 pandemic, had been closed since lockdowns began in March 2020.
In fact, it was still closed. Morris wasn’t really standing in the quad — she was standing in Morehouse’s “Metaversity” digital twin.
Morehouse College is the world’s first Metaversity, an interactive, virtual learning space based on real or imagined environments.
“(It) became our solution to increase attendance rates, reduce student recidivism, and make sure that they continue to persist in their majors,” says Morris, Morehouse’s Metaversity director, who at the time was academic program director. “We wanted our students to be more engaged than just sitting in another Zoom classroom.”
A view of Morehouse College in the real world and in the virtual world, as part of its “Metaversity.” (Credit: VictoryXR/Morehouse College)
In partnership with educational virtual reality developers VictoryXR, Morehouse created a virtual space where students could congregate for class remotely. Using a virtual reality (VR) headset, students first “spawn” into the digital twin campus, a familiar environment that “makes them more comfortable with the fact that they’re in new and emerging technology,” says Morris. From there, teachers can guide their students through different “classrooms,” real or imagined — including battlefields, outer space, or speculative future landscapes.
In the past two years, Morris says the college’s virtual reality classes have seen greater attendance, engagement and achievement from students — and now, she’s eager to see the education sector reform “in a way that brings all of these emerging technologies together for a better future for our young people.”
In addition to her role as academic program director, Morris was also the interim department chair of chemistry and an assistant professor — and she saw an opportunity to “bring to life” complex but visual concepts like molecules with a virtual reality program.
In addition to the digital twin of the campus, built using drone images and geographic data, Morehouse has an exact replica of its chemistry lab in the metaverse. This helps incoming students to familiarize themselves with the space and conduct a “trial run” of their experiment setup and safety protocols before getting into the lab. “That way we have less problems, injuries, and mistakes that happen in the lab,” says Morris.
Students who take Metaversity courses check out one of the school’s 500 headsets at the beginning of the semester and return it at the end, so it can be cleaned, charged and reset for reuse the following semester, says Morris.
Juan Johnson, one student who took the new high-tech class, says his first experience in the Metaversity was “very surreal” but that “learning in virtual reality has been amazing.”
“The things that I’m reading on the page, I can see them and I’m really touching them in virtual reality, so it connects the dots for me,” he adds.
The Metaversity may look like a pandemic gimmick – but Morris says it’s yielding results.
Students “gain mastery of concepts much quicker than in a classroom,” and the immersive experience eliminates distractions like phones to make “efficient use of time,” says Morris, adding that class attendance rates increased by 10 percentage points, compared to in-person and online classes, and student achievement increased by 11.9%.
Morris isn’t the only one who sees the potential to improve education with VR. Orthopedic surgeon Dr. Danny P. Goel, based in Vancouver, Canada, worked with a team of artists and former game developers to design a surgery training program based on real-life procedures, called PrecisionOS. Allowing students to refine skills without the risk of making life-or-death mistakes, Goel found that they picked up the skills five times faster and made 50% fewer critical errors.
In partnership with VictoryXR, Morehouse College created virtual education spaces for students as part of its Metaversity. (Credit: VictoryXR/Morehouse College)
Demand for VR learning has mainly been driven by the “difficult experience” of online learning during the pandemic, says Rose Luckin, a professor at the University College London Knowledge Lab and co-founder of the Institute for Ethical AI in Education.
While Metaversities can offer engaging, immersive, and visual education opportunities, Luckin thinks VR is wasted on replicating physical spaces that already exist and should focus on historical or imagined spaces instead. “We need to recognize the importance of experiencing the real world as it is,” says Luckin, adding: “If we get it right, the future of education will be one where there’s more human interaction.”
But as one of many technological “tools” that will be available to teachers in the coming years, virtual reality has the opportunity to make learning more immersive and individualized if we “recognize the strengths, the weaknesses, what it is they’re particularly good for, and make sure that’s how we use them,” says Luckin.
And while more universities are jumping on the Metaversity bandwagon – Morehouse College is one of 30 campuses funded by Meta, Facebook’s parent company, and built by VictoryXR, which is also working with over 100 higher education institutions on immersive virtual reality learning programs – cost could be a huge barrier to wider adoption.
Morehouse’s Metaversity campus and courses are funded by several grantors, including Meta, Qualcomm, and T-Mobile. A “digital twin” campus starts from $50,000, and the headsets Morehouse uses cost $499 each, says Morris.
Getting VR access to students across “every level of education” is vital, says Morris. “We are going to live in a digitally illiterate society if we don’t start infusing our money and efforts into young people,” she says.
However, as a “geographically agnostic” teaching tool, Morris believes that VR has the potential to make education more equitable, connecting students to world-class institutions on the other side of the world. Goel has already found this with his surgery software, which is currently being used to train doctors in 53 low and middle-income countries.
Teaching tech literacy
Starting with three classes, Morehouse now offers 13 courses each semester in the Metaversity, including modules in multimedia and visual storytelling, world history, microbiology, and literature, which include emotive experiences such as being on the front line in World War II, and on a transatlantic slave ship.
“It’s really important to experience things that may change your perspective,” says Morris.
Vitally, the Metaversity isn’t just teaching subject knowledge — it’s also showing students how to use and interact with emerging technology. “Science, technology, engineering and mathematics is really important, regardless of what your major is,” says Morris. “About 60% of the jobs that will be here in 2030 don’t exist right now, and it’s really important for our students to have highly technical skill sets.”
With achievement on the rise, she hopes to see the graduation rate increase, too. Morehouse ranks in the top five of the US’s historically Black colleges and universities (HBCUs) for its graduation rate (around 55%) although it still falls short of the national average (64%).
Morris is excited that as the first Metaversity, Morehouse has the opportunity to empower its students, predominantly Black men, in these technological spaces.
“Most HBCUs are broadband deserts, and most rural or inner-city schools have poor access to emerging technology,” she says. “That is why it is so important for the world to see Black young men at Morehouse at the forefront of an innovative effort like this that is transforming education.”
Aiming for an 80% graduation rate, a goal set by the university president, Morris is optimistic that the Metaversity can keep students engaged as well as accelerate their learning.
And for Morris, the Metaversity has “restored joy in the education system.”
“It felt like having my own dreams come true, in the sense that we’re no longer operating in an antiquated learning model where I’m seeing students fail over and over again,” she says, adding: “Why did we wait so long to do something this transformative?”
|
<urn:uuid:0f54b4eb-4403-42ea-aed0-c070aa2437bc>
|
{
"dump": "CC-MAIN-2023-50",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100626.1/warc/CC-MAIN-20231206230347-20231207020347-00114.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9546982645988464,
"pii_count": 0,
"score": 2.703125,
"token_count": 1925,
"url": "https://edition.cnn.com/2023/06/28/americas/metaversity-virtual-reality-morehouse-college-hnk-spc-intl"
}
|
When Muhsinah Morris stepped onto the central quad of the Morehouse College campus in Atlanta, Georgia, she cried.
“We hadn’t been on campus for almost a year,” she recalls. “It was amazing. You hear the birds chirping and everything.”
It was 2021, and Morehouse, like many other schools and universities during the Covid-19 pandemic, had been closed since lockdowns began in March 2020.
In fact, it was still closed. Morris wasn’t really standing in the quad — she was standing in Morehouse’s “Metaversity” digital twin.
Morehouse College is the world’s first Metaversity, an interactive, virtual learning space based on real or imagined environments.
“(It) became our solution to increase attendance rates, reduce student recidivism, and make sure that they continue to persist in their majors,” says Morris, Morehouse’s Metaversity director, who at the time was academic program director. “We wanted our students to be more engaged than just sitting in another Zoom classroom.”
A view of Morehouse College in the real world and in the virtual world, as part of its “Metaversity.” (Credit: VictoryXR/Morehouse College)
In partnership with educational virtual reality developers VictoryXR, Morehouse created a virtual space where students could congregate for class remotely. Using a virtual reality (VR) headset, students first “spawn” into the digital twin campus, a familiar environment that “makes them more comfortable with the fact that they’re in new and emerging technology,” says Morris. From there, teachers can guide their students through different “classrooms,” real or imagined — including battlefields, outer space, or speculative future landscapes.
In the past two years, Morris says the college’s virtual reality classes have seen greater attendance, engagement and achievement from students — and now, she’s eager to see the education sector reform “in a way that brings all of these emerging technologies together for a better future for our young people.”
In addition to her role as academic program director, Morris was also the interim department chair of chemistry and an assistant professor — and she saw an opportunity to “bring to life” complex but visual concepts like molecules with a virtual reality program.
In addition to the digital twin of the campus, built using drone images and geographic data, Morehouse has an exact replica of its chemistry lab in the metaverse. This helps incoming students to familiarize themselves with the space and conduct a “t
|
rial run” of their experiment setup and safety protocols before getting into the lab. “That way we have less problems, injuries, and mistakes that happen in the lab,” says Morris.
Students that take Metaversity courses check out one of the school’s 500 headsets at the beginning of the semester, and return it at the end, so it can be cleaned, charged and reset for reuse the following semester, says Morris.
Juan Johnson, one student who took the new high-tech class, says his first experience in the Metaversity was “very surreal” but that “learning in virtual reality has been amazing.”
“The things that I’m reading on the page, I can see them and I’m really touching them in virtual reality, so it connects the dots for me,” he adds.
The Metaversity may look like a pandemic gimmick – but Morris says it’s yielding results.
Students “gain mastery of concepts much quicker than in a classroom,” and the immersive experience eliminates distractions like phones to make “efficient use of time,” says Morris, adding that class attendance rates increased by 10 percentage points, compared to in-person and online classes, and student achievement increased by 11.9%.
Morris isn’t the only one who sees the potential to improve education with VR. Orthopedic surgeon Dr. Danny P. Goel, based in Vancouver, Canada, worked with a team of artists and former game developers to design a surgery training program based on real-life procedures, called PrecisionOS. Allowing students to refine skills without the risk of making life-or-death mistakes, Goel found that they picked up the skills five times faster and made 50% less critical errors.
In partnership with VictoryXR, Morehouse College created virtual education spaces for students as part of its Metaversity. (Credit: VictoryXR/Morehouse College)
Demand for VR learning has mainly been driven by the “difficult experience” of online learning during the pandemic, says Rose Luckin, a professor at the University College London Knowledge Lab and co-founder of the Institute for Ethical AI in Education.
While Metaversities can offer engaging, immersive, and visual education opportunities, Luckin thinks VR is wasted on replicating physical spaces that already exist and should focus on historical or imagined spaces instead. “We need to recognize the importance of experiencing the real world as it is,” says Luckin, adding: “If we get it right, the future of education will be one where there’s more human interaction.”
But as one of many technological “tools” that will be available to teachers in the coming years, virtual reality has the opportunity to make learning more immersive and individualized if we “recognize the strengths, the weaknesses, what it is they’re particularly good for, and make sure that’s how we use them,” says Luckin.
And while more universities are jumping on the Metaversity bandwagon — Morehouse College is one of 30 campuses funded by Meta, Facebook’s parent company, that are being built by VictoryXR, which is also working with over 100 higher education institutions on immersive virtual reality learning programs — the cost could be a huge potential barrier to wider adoption.
Morehouse’s Metaversity campus and courses are funded by several grantors, including Meta, Qualcomm, and T-Mobile. A “digital twin” campus starts from $50,000, and the headsets Morehouse uses cost $499 each, says Morris.
Getting VR access to students across “every level of education” is vital, says Morris. “We are going to live in a digitally illiterate society if we don’t start infusing our money and efforts into young people,” she says.
However, as a “geographically agnostic” teaching tool, Morris believes that VR has the potential to make education more equitable, connecting students to world-class institutions on the other side of the world. Goel has already found this with his surgery software, which is currently being used to train doctors in 53 low- and middle-income countries.
Teaching tech literacy
Starting with three classes, Morehouse now offers 13 courses each semester in the Metaversity, including modules in multimedia and visual storytelling, world history, microbiology, and literature, which include emotive experiences such as being on the front line in World War II, and on a transatlantic slave ship.
“It’s really important to experience things that may change your perspective,” says Morris.
Vitally, the Metaversity isn’t just teaching subject knowledge — it’s also showing students how to use and interact with emerging technology. “Science, technology, engineering and mathematics is really important, regardless of what your major is,” says Morris. “About 60% of the jobs that will be here in 2030 don’t exist right now, and it’s really important for our students to have highly technical skill sets.”
With achievement on the rise, she hopes to see the graduation rate increase, too. Morehouse ranks in the top five of the US’s historically Black colleges and universities (HBCUs) for its graduation rate (around 55%) although it still falls short of the national average (64%).
Morris is excited that as the first Metaversity, Morehouse has the opportunity to empower its students, predominantly Black men, in these technological spaces.
“Most HBCUs are broadband deserts, and most rural or inner-city schools have poor access to emerging technology,” she says. “That is why it is so important for the world to see Black young men at Morehouse at the forefront of an innovative effort like this that is transforming education.”
Aiming for an 80% graduation rate, a goal set by the university president, Morris is optimistic that the Metaversity can keep students engaged as well as accelerate their learning.
And for Morris, the Metaversity has “restored joy in the education system.”
“It felt like having my own dreams come true, in the sense that we’re no longer operating in an antiquated learning model where I’m seeing students fail over and over again,” she says, adding: “Why did we wait so long to do something this transformative?”
The free state of Florida | Opinion
In his second inaugural address, Gov. Ron DeSantis declared that the Free State of Florida was the land of liberty and sanity. In his speech, he referenced the Founding Fathers, Washington, Lincoln, and events in history to support his vision of freedom and liberty.
His comments started me thinking and researching the concepts he emphasized. The Oxford dictionary defines freedom as “the power or right to act, speak or think as one wants without hindrance.” Liberty is defined as “the state of being free within a society from oppressive restrictions imposed by authority on one’s way of life, behavior or political views.”
Both the Right and the Left see themselves fighting for their interpretation of these definitions, with “Woke Ideology” and “Christian Nationalism” among the threats to their freedom.
What were the Founding Fathers writing about? The Declaration of Independence asserts the inalienable rights of life, liberty, and pursuit of happiness. One wonders what has happened to the notions of life and happiness with all the concern for liberty?
The Constitution was formed, “…to establish justice, ensure domestic tranquility, promote the general welfare, and secure the Blessings of Liberty to ourselves and our posterity.” Clearly the first three goals were primary in their minds. They don’t seem to be as important in the halls of state and federal legislatures.
Washington was cited by the Governor, but his most important quote shares a concern most relevant today: “if we mean to support the liberty and independence that has cost us so much blood and treasure to establish, we must drive far away the demon of party spirit and local reproach.” Who is acting with bipartisanship and building friendships rather than demonizing enemies?
The Gettysburg Address was referenced with the call for a new birth of freedom. The reference was not to freedom from government intervention but to the concept that all men are created equal and deserve equal protection. Yes, exercising one’s rights “must be free from government interference,” Lincoln said elsewhere. One would think that applies to state governments as well as “entrenched bureaucrats in D.C.”
World War II was fought for a variety of reasons, but “fighting for freedom” was the most powerful call for men to sacrifice their lives. Franklin Delano Roosevelt’s speech on the Four Freedoms was the clarion call: freedom of speech, freedom of worship, freedom from want and freedom from fear. These seem far more important than freedom from “jet setters in Davos and corporations wielding public power.”
Justice Louis Brandeis said, “The greatest dangers to liberty lurk in the insidious encroachment by men of zeal, well meaning, but without understanding.” Liberty is, in a greater sense, using freedom responsibly to do right by others. As Lincoln said, “Those who deny freedom to others, deserve it not for themselves.”
When we assert a definition of freedom as the right to do what we please, we miss DeSantis’s referenced “Hand of the Almighty” and God’s call to do what is right for others.
Jim Croteau is a retired educator and non-profit administrator. He is also president of the 21st Century Council (http://21stcenturycouncil.org).
JOIN THE CONVERSATION
Send letters to the editor (up to 200 words) or Your Turn columns (about 500 words) <email-pii>. Please include your address for verification purposes only, and if you send a Your Turn, also include a photo and 1-2 line bio of yourself. You can also submit anonymous Zing!s at Tallahassee.com/Zing. Submissions are published on a space-available basis. All submissions may be edited for content, clarity and length, and may also be published by any part of the USA TODAY NETWORK.
The Biden administration has announced an expansion of its use of Title 42, the pandemic measure the Trump administration initially implemented, to immediately expel to Mexico asylum seekers from a list of specified countries.
The funneling of migrants through inhospitable terrain is primarily responsible for deaths at the border. As U.S. authorities have increased enforcement near border crossings in urban, populated communities, they have pushed people into dangerous remote areas. This is intentional. They use such terrain as a natural “wall.” Within the Southwest’s unforgiving mountains and deserts, thousands of people seeking safety in the United States have succumbed to snake bites, animal attacks, heat stroke, dehydration and hyperthermia.
These mass deaths can be traced to border policies adopted in the early 1990s, ones that help us understand where we find ourselves today.
One of the architects of these policies was Silvestre Reyes. Reyes was raised in an El Paso-area farming community near the New Mexico state line as the grandchild of Mexican refugees who fled the violence of the Mexican Revolution (1910-20). On his family farm, he served as a lookout for Border Patrol agents and warned undocumented farmworkers to hide or run.
After being drafted into the Vietnam War, he took various civil service exams, and the Border Patrol was the first federal agency to respond. He moved to Del Rio in southwest Texas to join the Border Patrol in 1969. Reyes rose through the ranks and became the Border Patrol’s first Hispanic sector chief in 1984, when he was appointed to head the McAllen Border Patrol sector, which runs along the Lower Rio Grande Valley to the Gulf of Mexico.
Throughout the 1970s and 1980s, Mexican agricultural workers often headed north for work, to flee economic volatility in Mexico. The U.S. immigration system foreclosed opportunities for legal migration, and many came without authorization. During these decades apprehension rates of Mexican migrants at the U.S.-Mexico border increased as migration ebbed and flowed.
From his station in Texas, Reyes told reporters that by the mid-1980s he began noticing that a growing number of the people being apprehended originally came from Central America. They included Guatemalans fleeing anti-Indigenous genocide, Salvadorans fleeing right-wing death squads and Nicaraguans fleeing guerrillas known as Contras.
During the Central American conflicts, the United States often backed oppressive right-wing regimes and offered official and unofficial support to other groups as part of its Cold War-era objectives. Over 1 million displaced Central Americans fled violent government regimes and guerrillas during the 1980s. The Reagan administration pursued policies denying their asylum claims and deported thousands of people who never had the opportunity to seek legal counsel.
Having come overland through Mexico, Central Americans were deported by costly flights. Reyes began planning what he saw as a cheaper strategy: one that would keep Central American refugees in Mexico. He began planning a blockade that would, as he stated, “back up the aliens on the Mexican side and start causing them problems over there.”
In 1993, Reyes became the El Paso Border Patrol sector chief. He implemented his plans through Operation Blockade, renamed Operation Hold the Line to avoid connotations with the Berlin Blockade, along the El Paso-Ciudad Juárez border. He stationed at least 400 Border Patrol agents and their vehicles directly on the Rio Grande’s banks.
Going against official procedure that had partly centered on patrolling city streets, Reyes created a wall of agents and vehicles between El Paso and Ciudad Juárez, greatly curtailing the number of migrants crossing into El Paso’s urban core.
But Reyes did not simply intend to stop unauthorized border crossings. He also wanted to improve the Border Patrol’s relations with El Paso residents.
El Paso’s Mexican American community had long complained about the Border Patrol’s racial profiling and harassment, which included such incidents as a border patrol agent stopping a high school football coach and then putting a gun to his head when the coach was driving players to a game. In 1992 the coach, five students and a school secretary filed a federal class-action lawsuit against the Border Patrol for racial profiling and harassment. In a preliminary injunction, a federal district judge found that the Border Patrol had “stopped, questioned, detained, frisked, arrested, searched, and physically and verbally abused” school staff and students for the “mere appearance of being from Hispanic descent.”
As one remediation, Reyes implemented the operation he had begun conceptualizing while leading the McAllen sector. He moved his agents away from the streets of El Paso and to the physical border, which helped reduce residents’ complaints against the Border Patrol. In 1994, under Reyes’s leadership, the El Paso sector Border Patrol settled the federal lawsuit by agreeing to stop detaining people for simply appearing to be Hispanic.
Despite some initial concerns — such as the State Department’s worry that it would impact diplomatic relations with Mexico and the possibility that it would increase personnel costs — Reyes’s Operation Hold the Line received broad praise. The Clinton administration supported a tougher stance on unauthorized immigration, and soon similar operations extended across other urban sections of the U.S.-Mexico border. At the San Diego-Tijuana border, the Clinton administration launched Operation Gatekeeper in 1994, which expanded enforcement on the urban borderline, driving migrants toward rural and wilderness areas east of the city. The Border Patrol also instituted checkpoints within the United States, began building border walls and expanded their technological infrastructure — pushing migrants away from highways, roads and populated areas far beyond the border.
After implementing Operation Hold the Line and having it lauded by the highest centers of power in D.C., Reyes ran for Congress as a Democrat with a tough-on-immigration record. In 1996 he was elected as the first Hispanic to represent the Hispanic-majority congressional district in El Paso. The Hispanic-majority electorate sent him to Congress eight more times. In 2001, Reyes was selected by his peers to chair the Congressional Hispanic Caucus, where he backed broadened border enforcement.
With few detractors in D.C. or the Border Patrol, the model Reyes innovated in El Paso became the guiding idea behind border enforcement more broadly. Operation Hold the Line and Operation Gatekeeper molded into various “prevention through deterrence” policies that sought to make it more difficult to cross into the United States.
Both Republican and Democratic policymakers believed that if the urban border was solidified and if migrants could only cross through dangerous remote terrain, they would not attempt to enter the United States. That belief did not hold against the reality of human desperation and the promise of a better life in the United States.
Since the 2006 passage of the Secure Fence Act, the Border Patrol has increasingly supplemented their operations with monumental border walls, surveillance towers, radar, high-definition and infrared cameras, thermographic heat sensors and drones. Such measures have pushed migrants even deeper into dangerous terrain while cartels primarily hide drugs (going northward) and guns (going southward) inside vehicles crossing border checkpoints.
As a result, between 1994 and 2000, 1,700 migrants died while traversing remote desert terrain. Border Patrol data reveals that over 9,000 migrants have died since 1998 — with the last two years being the deadliest. These are all surely undercounts that do not include full statistics from local and state governments or those who will never be found in remote locations.
Today President Biden equates criticisms from humanitarian groups with those from immigration hard-liners. He has asserted that both “extremes are wrong,” and that what’s needed is a “middle proposition.”
While the Congressional Hispanic Caucus has a history of supporting violent border policies, today’s leadership has diverged from those stances. Rep. Nanette Diaz Barragán (D-Calif. and chair of the CHC) noted her concerns about the expansion of Title 42 and the denial of due process for asylum seekers. Rep. Joaquin Castro (D-Texas and former CHC chair) went further, stating that the Biden administration’s extension of Title 42 would “exacerbate chaos and irregular migration at the Southern border.” But they could go further in highlighting the death toll that will result from such a “middle proposition.”
Recent history teaches us that migrants with few choices will risk their lives in trudging through the Southwest. Not unlike Reyes’s grandparents, hunger, destitution, persecution and violence in their countries of origin and on the streets of Mexico will drive people to risk their lives in search of the American Dream. Like previous administrations, the Biden administration’s plans will cause further suffering and the death of innumerable people seeking refuge on American soil.
Beginning decades ago as ideas that seemed more like science fiction, these missions took years of research and testing to come to life.
Technological advances and scientific breakthroughs have transformed how we observe and investigate the cosmos. How will space exploration change in the coming decades, and what new possibilities will emerge?
These questions are at the heart of NASA’s Innovative Advanced Concepts program, or NIAC, which awards funding for concepts that could be part of future missions.
“NASA dares to make the impossible possible,” NASA Administrator Bill Nelson said in a statement. “That’s only achievable because of the innovators, thinkers, and doers who are helping us imagine and prepare for the future of space exploration. The NIAC program helps give these forward-thinking scientists and engineers the tools and support they need to spur technology that will enable future NASA missions.”
The latest NIAC competition selected 14 new concepts, awarding each $175,000 in January. Now, these researchers have nine months to use that funding toward refining and testing their ideas to see whether they can advance to the second phase of funding, which is $600,000 to flesh out their concepts and bring them closer to reality.
Only five projects have made it to the third phase in the NIAC program’s history, which awards $2 million to make a concept implementable.
Active since 2011, the competitive program is open to a broad range of ideas as long as they are technically credible, said Michael LaPointe, program executive for NIAC at NASA.
Some of the latest NIAC-funded concepts include a fluid space telescope, self-growing bricks intended for Mars and a plane that could fly on Saturn’s moon Titan, among others. Many of the ideas are the result of creative collaborations between experts in different fields challenging one another to come up with new ideas.
“It really is a community of innovators,” LaPointe said. “We’re looking for ideas that will enable brand-new ways of doing things.”
Self-growing bricks for Mars
For the past few years, Congrui Jin and her research group have used bacteria and fungi to heal cracks in concrete. Jin, an assistant professor at the University of Nebraska-Lincoln, now wants to take her idea to space. Her self-growing bricks could one day build habitats and other structures for human explorers on the red planet.
The concept would involve sending bacterial and fungal spores and a bioreactor to Mars. The bioreactor is needed for the microbes to survive because Mars’ natural environment would be too harsh for them. But Mars would provide the rest of the necessary ingredients for the self-growing bricks, including dust and soil, sunlight, nitrogen, carbon dioxide and water from melted ice.
In turn, the bacteria can produce oxygen and organic carbon to support the fungi. The process, once all of these ingredients are inside the bioreactor, would also create calcium carbonate to serve as the glue.
The bacteria, fungi and minerals will bind Martian soil together to form blocks, which can later be used to make floors, walls and even furniture.
Jin’s group, which includes some students, screens for the most suitable types of fungi and bacteria and tests which ones work best together. The team is also building a bioreactor to calibrate the atmosphere, pressure, temperature and illumination required to grow the bricks.
“The very important feature of this technology is its autonomous nature, and it doesn’t require any human intervention,” Jin said. “We initially just need to provide small quantities of spores to start this process, and the rest is automatic.”
Saturn’s moon Titan has long intrigued astronomers with its thick atmosphere and lakes and rivers of methane. It is a unique place in our solar system where ongoing chemistry may resemble what occurred on early Earth. A rover-size drone called Dragonfly is expected to launch for this moon in 2027 to study its fine-grained, drier organic material.
Quinn Morley, principal investigator at Planet Enterprises in Gig Harbor, Washington, and his collaborators at Washington State University and other institutions envision a complementary Titan mission to explore the wetter regions of the intriguing moon. The seaplane-like design, called TitanAir, would soar through Titan’s atmosphere and sail on its lakes.
TitanAir could reach the moon about a decade after Dragonfly “to help unlock key secrets of this alien planet,” Morley said.
The front section of the plane’s wing would “drink” liquid methane that forms on the wing’s surface as the plane flies through rain clouds. The liquid collected inside the wing could be analyzed by instruments and transmitted back to Earth.
A small rover or helicopter could fly to places the plane can’t reach and return samples to the plane. Beginning in the fall, Morley wants to partner with engineering student design teams at Washington State on ideas for TitanAir.
“The analysis of clouds, lakes, and shorelines allows us to attack the search for life in three unique ways with one spacecraft,” Morley said in an email, “increasing our chances of unlocking these profound mysteries.”
Large space observatories such as the Hubble Space Telescope and the Webb telescope are the result of decades of funding, design, assembly and testing. But there is growing demand for a wider range of telescopes that can be developed more cheaply and quickly.
Several new NIAC concepts offer diverse ways of observing the cosmos like never before.
One design is FLUTE, or the Fluidic Telescope, from Edward Balaban, a research scientist at NASA’s Ames Research Center in California, and his collaborators. Balaban has mostly worked in artificial intelligence and strategic planning for the upcoming VIPER lunar rover mission.
Balaban was inspired through conversations with colleagues both at Ames and the Technion-Israel Institute of Technology to develop a concept that could combine manipulating fluids with on-orbit assembly of a large telescope. The latter wouldn’t be limited by the size of its launch vehicle.
Under the plan, two launches would send into space an instrumentation spacecraft and a frame that can be filled with liquid. The liquid-filled frame creates a massive 50-meter (164-foot) mirror, while the instrumentation spacecraft remains at a set distance from the mirror to collect images and send them back to Earth, Balaban said.
Balaban and his team are testing liquids that could act as the mirror, including ionic liquids called molten salts. Researchers will use their grant to work on constructing a frame to hold the liquid and the design for the instrumentation spacecraft. Such a telescope could be impervious to the micrometeoroid strikes that the Webb telescope has experienced because the liquids wouldn’t be affected, he said.
A large light-collecting surface such as FLUTE could glimpse the faint light of early galaxies or look inside the atmospheres of exoplanets.
“Believe it or not, we might be able to start seeing surface features on the nearest exoplanets,” Balaban said. “Instead of seeing them as pinpoints of light, we might be able to tell if they have continents, for example.”
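The appeal of a 50-meter aperture can be sketched with a quick light-gathering comparison. This is a rough illustration that idealizes both mirrors as simple filled circles; Webb's real primary is segmented, so its actual collecting area is somewhat smaller than this approximation suggests.

```python
import math

def collecting_area(diameter_m):
    """Light-gathering area of an idealized circular aperture, in square meters."""
    return math.pi * (diameter_m / 2.0) ** 2

flute_area = collecting_area(50.0)  # proposed FLUTE fluidic mirror
webb_area = collecting_area(6.5)    # Webb primary, idealized as a filled circle

print(f"50 m mirror:  {flute_area:8.0f} m^2")
print(f"6.5 m mirror: {webb_area:8.0f} m^2")
print(f"ratio: {flute_area / webb_area:.0f}x more light")
```

Since collecting area scales with the square of diameter, the idealized 50-meter mirror gathers roughly 59 times the light of a 6.5-meter one, which is what would let it pick out faint early galaxies or details on nearby exoplanets.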
Searching for another Earth
Meanwhile, Heidi Jo Newberg, a professor of physics, applied physics, and astronomy at Rensselaer Polytechnic Institute, and her collaborators have a concept for a telescope that could find “Earth 2.0.”
The team’s idea is to search for nearby habitable planets by changing up the centuries-old design for telescopes. It’s called DICER, or a Diffractive Interfero Coronagraph Exoplanet Resolver.
Earth-size exoplanets are small and faint, especially compared with the bright stars they orbit, so conventional thinking suggests that searching for such planets would require a telescope with three times the diameter of the Webb telescope — which at 6.5 meters (21 feet 4 inches) is the largest mirror ever flown in space.
A 20-meter (65-foot) telescope currently isn’t feasible because it would be difficult to launch such a large mirror into space.
But DICER would rely on two small mirrors as well as two 10-meter diffraction gratings, or optical components that diffract light, that can be easily packed up inside a rocket. Think about the rainbow visible on the back of a CD when you hold it up to the light — that's a diffraction grating at work. The gratings could collect the same amount of light as a 20-meter telescope.
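The "three times the diameter of Webb" figure follows from the diffraction limit: a telescope's finest resolvable angle scales inversely with its aperture diameter. A minimal sketch of that scaling follows; the 550 nm wavelength is an illustrative choice for visible light, not a DICER design parameter.

```python
import math

def diffraction_limit_mas(wavelength_m, diameter_m):
    """Rayleigh criterion, theta ~ 1.22 * lambda / D, in milliarcseconds."""
    theta_rad = 1.22 * wavelength_m / diameter_m
    return math.degrees(theta_rad) * 3600.0 * 1000.0

wavelength = 550e-9  # visible light, roughly 550 nm (illustrative assumption)
webb_res = diffraction_limit_mas(wavelength, 6.5)   # Webb-sized aperture
big_res = diffraction_limit_mas(wavelength, 20.0)   # 20 m class aperture

print(f"6.5 m aperture: {webb_res:.1f} mas")
print(f"20 m aperture:  {big_res:.1f} mas")
print(f"improvement: {webb_res / big_res:.2f}x finer")
```

Tripling the effective diameter triples the angular resolution, which is why a 20-meter-class collecting system is the benchmark for separating an Earth-like planet from its star.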
“This idea brings down the size and the weight of what you have to send up into space to get the desired high resolution of a telescope like Webb,” Newberg said. “What we’re trying to do is exploit this out-of-the-box design to make it possible to find Earth-like planets.”
The team will work on the scale of the optics and determine whether the telescope could also be used to study exoplanet atmospheres.
The invisible sky
Despite advances in the types of light telescopes can see, low-frequency radio waves remain invisible to us.
Dr. Mary Knapp, a research scientist at the Massachusetts Institute of Technology’s Haystack Observatory, and her collaborators are working on a concept that could reveal this part of the radio wave spectrum. The waves are unobservable by ground-based radio telescopes due to the distorting effects of Earth’s upper atmosphere.
“I learned back in my undergrad days that there was this part of the spectrum we couldn’t see,” Knapp said. “It really just struck me that there was this unexplored part of the universe, and I want to explore this part of the sky for the first time.”
The Great Observatory for Long Wavelengths, or GO-LoW, would rely on a fleet of thousands of shoebox-size satellites that act in concert, like one large virtual telescope. Small satellites are cost-effective, and mega-constellations of them can be launched at once on a single, large rocket.
The observatory could measure low-frequency electromagnetic radiation, which would carry a wealth of data for astronomers who want to observe the oldest stars and galaxies and understand the magnetic fields of exoplanets and stars. This information can help scientists more easily spot habitable planets. Magnetic fields help planets maintain their atmospheres.
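Some rough numbers give a sense of the scales involved; these are illustrative example values, not GO-LoW design figures. Ground-based radio astronomy is effectively blocked by the ionosphere below roughly 10 MHz, where wavelengths exceed about 30 meters, and an interferometer's resolution is set by wavelength divided by its longest baseline.

```python
import math

C = 3.0e8  # speed of light, m/s

def wavelength_m(freq_hz):
    """Wavelength corresponding to a given frequency."""
    return C / freq_hz

def array_resolution_deg(wavelength, baseline_m):
    """Approximate interferometer resolution: lambda / longest baseline."""
    return math.degrees(wavelength / baseline_m)

lam = wavelength_m(1.0e6)  # a 1 MHz signal, well below the ionospheric cutoff
print(f"wavelength at 1 MHz: {lam:.0f} m")
print(f"resolution with a 1,000 km baseline: {array_resolution_deg(lam, 1.0e6):.3f} deg")
```

At such long wavelengths a single dish would need to be impossibly large, which is why a widely spaced swarm of small satellites acting as one virtual aperture is an attractive architecture.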
The team will focus next on the antenna design for the satellites and the potential architecture needed for the satellites in space. If GO-LoW is developed, it could be used to create a new map of the sky.
“Anytime we look at the universe in a new part of the spectrum, we’ve seen things we didn’t expect,” Knapp said. “I’m really excited about what we don’t know.”
A ridge of high-pressure air parked east of Colorado has produced record amounts of rainfall for many Front Range communities in June.
With 5.23 inches of rain recorded at Denver International Airport so far this month, it's been the rainiest June in Denver since the 1880s, according to National Weather Service data. Greeley and Colorado Springs have also logged record amounts of precipitation at 4.37 inches and 9.47 inches, respectively.
Thunderstorms, fueled by moisture from air over the Gulf of Mexico, have turned dangerous at times. A fast-developing storm cell unleashed golf ball-sized hail on Red Rocks concertgoers this week.
Rain flooded several roads and bridges in Elbert, Arapahoe and Douglas counties Wednesday night. One person also died in rural Arapahoe County when a stream of stormwater swept their car away.
An EF-1 tornado that spun off from the storm traveled more than six miles across Highlands Ranch, uprooting trees, knocking over fences and bending streetlights.
Local authorities made a disaster declaration to help with cleanup efforts.
A block of lower-pressure air in the atmosphere just west of Colorado has helped fuel the steady stream of rain and storms, said Scott Entrekin, NWS meteorologist. But it’s beginning to dissipate, which means drier air from the southwest is moving over the state.
“That’s pretty much cut off our rain chances here through early next week,” he said.
Other Front Range communities have received near-record amounts of rain due to the wet weather pattern.
Evergreen and other Front Range foothill communities have logged substantial rainfall — around 4.52 inches. That’s just shy of a June 1949 record of 4.87 inches.
Fort Collins has seen about 5.19 inches so far in June, which is about an inch shy of its record set in 1949.
Farther south and west, the month has been drier.
Pueblo is about halfway to its all-time June record of 6.55 inches of rain set in 1921.
And Grand Junction, Steamboat Springs and Gunnison have seen less than half of their June record totals.
The Denver metro can expect to dry out by next week, when temps are expected to top out in the mid to high 80s.
“It’s gonna be a lot drier, more normal summer weather from here on out,” Entrekin said.
Inge Brees is a policy and outreach manager at Search for Common Ground. Harriet Mackaill-Hill is a climate and peace advocacy advisor at International Alert.
Meeting the Paris Climate Agreement’s objective to limit the rise in global average temperature to well below 1.5 degrees Celsius will only be possible if we properly understand and mitigate the links between climate change and conflict.
And as this year’s U.N. Climate Change Conference COP 28 approaches, this connection needs to be on everyone’s mind.
Throughout the past century, the challenges of environmental degradation and the depletion of natural resources have evolved into potent catalysts for global insecurity and violent conflict. From the water scarcity crisis in Yemen to land ownership disputes in Kyrgyzstan and human rights violations during mining operations in the Democratic Republic of Congo, we now encounter tensions that are fueled by climate-related issues and escalate into violent confrontation every day.
The number of fatal conflicts triggered over scarce water resources alone has increased by 300 percent since 2000.
But while the connection between climate change and conflict is well known, political discussions regarding potential solutions often lag behind. This year, however, for the first time in climate summit history, peace is finally being officially acknowledged at COP 28, with its first thematic day dedicated to “relief, recovery and peace.”
This initiative comes at a crucial time. Trillions of dollars will be spent to remedy the impact of climate change over the next decade, yet the distribution of climate finance remains uneven, leaving many exposed communities behind — particularly those most vulnerable to conflict.
In conflict-affected areas, the impact of climate change is one of many governmental priorities, and treating the two separately obstructs innovative and collaborative approaches, which are vital for addressing climate security risks.
It’s becoming increasingly evident that the success of climate policies hinges not only on reducing emissions but also on prioritizing peace. In essence, we need to make sure interventions don’t inadvertently contribute to further conflict, but create opportunities for stability and inclusivity instead.
For even the best-intentioned interventions can yield disastrous consequences if they fail to account for local context and conflict dynamics.
Let’s consider the rush for raw materials: The crucial shift toward a low-carbon society has led to a growing demand for specific minerals that are essential for solar panels, wind energy, electric cars and more. Unfortunately, the mining required for these minerals often brings about adverse environmental and human consequences — such as exploitation, land clearance and conflicts between large-scale and small-scale players. And most of these raw material reserves are in countries already at considerable risk of conflict.
This isn’t an argument for relying on polluting resources. Phasing out fossil fuels is the only way forward. But it’s imperative that alternative extraction processes are conducted in a manner that respects both people and the planet.
In other words, to prevent the emergence of new disputes and crises, the green transition must adhere to a conflict-sensitive approach.
Like green investment, climate finance needs to be conflict sensitive as well. This means better access to funding, allowing for the increased involvement and empowerment of local civil society organizations and authorities, and — from the donor side — a greater understanding of the risks associated with financing projects in conflict-affected settings in order to increase the flow of funding to such areas.
Local actors need to be able to have direct access to funds without having to go through their governments or, as is often the case, an international intermediary, and we need to move away from cumbersome accreditation processes. Moreover, we need to improve the capacity and understanding of funding frameworks in order to allow for their greater participation and for governments and local civil society organizations to access funding.
Along these lines, the announcement of the Loss and Damage Fund at COP 27 in Sharm El-Sheikh — a fund designed to compensate those severely impacted by the climate crisis and hold the largest polluters accountable — was celebrated as a victory in the battle for climate justice.
But to deliver this much-needed compensation, we need to reach an agreement concerning critical aspects of the fund, including its governance, structure, location, funding sources and beneficiaries. It’s also essential that the fund and other similar mechanisms aren’t just conflict sensitive but also de facto accessible to populations affected by climate change in fragile, conflict-affected settings.
Thus, the inclusion of peace as an official thematic day at COP 28 helps create much-needed momentum toward this goal. However, it should mark the beginning of the discussion — not the endpoint.
These talks need to address the importance of conflict sensitivity and of scaling up climate action in conflict-affected settings, then subsequently incorporate them into implemented agreements. Governments, NGOs and donors should also collectively provide both political and financial support for such conflict-sensitive climate solutions. And, crucially, these efforts should be carried out in consultation with local communities, which possess much deeper contextual understanding.
Indeed, climate change can present an opportunity to bring people together across dividing lines so they can address this shared existential challenge, thereby providing a chance for building peace.
It can be done — and we shouldn’t aim for anything less.
Inge Brees is a policy and outreach manager at Search for Common Ground. Harriet Mackaill-Hill is a climate and peace advocacy advisor at International Alert.
Meeting the Paris Climate Agreement’s objective to limit the rise in global average temperature to well below 1.5 degrees Celsius will only be possible if we properly understand and mitigate the links between climate change and conflict.
And as this year’s U.N. Climate Change Conference COP 28 approaches, this connection needs to be on everyone’s mind.
Throughout the past century, the challenges of environmental degradation and the depletion of natural resources have evolved into potent catalysts for global insecurity and violent conflict. From the water scarcity crisis in Yemen to land ownership disputes in Kyrgyzstan and human rights violations during mining operations in the Democratic Republic of Congo, we now encounter tensions that are fueled by climate-related issues and escalate into violent confrontation every day.
The number of fatal conflicts triggered over scarce water resources alone has increased by 300 percent since 2000.
But while the connection between climate change and conflict is well known, political discussions regarding potential solutions often lag behind. This year, however, for the first time in climate summit history, peace is finally being officially acknowledged at COP 28, with its first thematic day dedicated to “relief, recovery and peace.”
This initiative comes at a crucial time. Trillions of dollars will be spent to remedy the impact of climate change over the next decade, yet the distribution of climate finance remains uneven, leaving many exposed communities behind — particularly those most vulnerable to conflict.
In conflict-affected areas, the impact of climate change is one of many governmental priorities, and treating the two separately obstructs innovative and collaborative approaches, which are vital for addressing climate security risks.
It’s becoming increasingly evident that the success of climate policies hinges not only on reducing emissions but also on prioritizing peace. In essence, we need to make sure interventions don’t inadvertently contribute to further conflict, but create opportunities for stability and inclusivity instead.
For even the best-intentioned interventions can yield disastrous consequences if they fail to account for local context and conflict dynamics.
Let’s consider the rush for raw materials: The crucial shift toward a low-carbon society has led to a growing demand for specific minerals that are essential for solar panels, wind energy, electric cars and more. Unfortunately, the mining required for these minerals often brings about adverse environmental and human consequences, such as exploitation, land clearance and conflicts between large-scale and small-scale players. And most of these raw material reserves are in countries already at considerable risk of conflict.
This isn’t an argument for relying on polluting resources. Phasing out fossil fuels is the only way forward. But it’s imperative that alternative extraction processes are conducted in a manner that respects both people and the planet.
In other words, to prevent the emergence of new disputes and crises, the green transition must adhere to a conflict-sensitive approach.
Like green investment, climate finance needs to be conflict sensitive as well. This means better access to funding, allowing for the increased involvement and empowerment of local civil society organizations and authorities, and — from the donor side — a greater understanding of the risks associated with financing projects in conflict-affected settings in order to increase the flow of funding to such areas.
Local actors need direct access to funds without having to go through their governments or, as is often the case, an international intermediary, and we need to move away from cumbersome accreditation processes. Moreover, we need to build local actors’ capacity and their understanding of funding frameworks, so that they, together with governments and local civil society organizations, can participate more fully and access funding.
Along these lines, the announcement of the Loss and Damage Fund at COP27 in Sharm El-Sheikh — a fund designed to compensate those severely impacted by the climate crisis and hold the largest polluters accountable — was celebrated as a victory in the battle for climate justice.
But to deliver this much-needed compensation, we need to reach an agreement concerning critical aspects of the fund, including its governance, structure, location, funding sources and beneficiaries. It’s also essential that the fund and other similar mechanisms aren’t just conflict sensitive but also de facto accessible to populations affected by climate change in fragile, conflict-affected settings.
Thus, the inclusion of peace as an official thematic day at COP28 helps create much-needed momentum toward this goal. However, it should mark the beginning of the discussion — not the endpoint.
These talks need to address the importance of conflict sensitivity and of scaling up climate action in conflict-affected settings, then subsequently incorporate them into implemented agreements. Governments, NGOs and donors should also collectively provide both political and financial support for such conflict-sensitive climate solutions. And, crucially, these efforts should be carried out in consultation with local communities, which possess much deeper contextual understanding.
Indeed, climate change can present an opportunity to bring people together across dividing lines so they can address this shared existential challenge, thereby providing a chance for building peace.
It can be done — and we shouldn’t aim for anything less.
Start your own tomatoes and peppers from seed in January
A few years ago, I planted pepper seeds for the first time to make my own paprika. This small start sent me on a journey from my bathroom window to a small growing tent in my closet and eventually to some of the best and most interesting paprika I have ever tasted.
Peppers and tomatoes are both members of the nightshade family. In our area they have similar growing habits and time frames for planting. The timing of seed planting is one of the most important aspects of growing vegetables. Other considerations include available equipment, quantity of plants needed, and the planting and caring of the seeds and seedlings.
When choosing a pepper or tomato to grow from seed, consider how you are planning to use the produce. For instance, if you enjoy pasta with red sauce, a determinate Roma-style tomato might be the best choice. For hot sauces, I have enjoyed some of the super-hot peppers, although one plant is often enough!
Sharing or exchanging extra plants with friends is a good way to gain a diversity of varieties once you have too many. If I am growing a whole seed packet of one variety, I usually have a project in mind that requires lots of the same pepper or tomato.
The equipment needed for starting peppers and tomatoes consists of a heat mat, grow light, containers to start the seeds in, a watering can, fertilizer, and growing medium. Tomatoes do not need as much heat to germinate as many of the peppers do. It is possible to grow both in a south-facing window or greenhouse, but without sufficient light and heat the peppers will be very slow to grow.
Peppers prefer to germinate between 70 and 90 degrees Fahrenheit, while tomatoes prefer 65 to 85 degrees. My house is usually under 70 in the winter, so I run a heat mat to keep the soil temperature closer to 80 degrees. The same goes for lighting: I use a grow light to give the seedlings sufficient light.
You can use a good quality potting soil or seed starting mix as your growing medium. Your container could be a nursery flat with individual cells or small two-inch pots. Here is my recipe for sowing the seed:
- Fill the seed starting flat or individual pots with seed starter mix
- Place three to four seeds per cell or pot
- Gently sprinkle worm castings to cover seeds (or use more of the potting medium)
- Press down to achieve good seed to soil contact
- Water lightly using a watering can with a fine shower, or water from the bottom to keep disturbance to a minimum

The seeds will usually germinate in five to 10 days, but some peppers in low temperatures can take a few weeks, so don’t toss empty containers immediately. During this time, keep the top portion of the container moist.
During germination the cotyledon will emerge, followed by the first set of true leaves. Make sure the seeds and plants are getting plenty of light, 14 to 16 hours. A window will only get a few hours when compared to a grow light. Select the strongest seedling in each cell or pot and cut out the others.
At this point the plant can be fertilized with a fish/seaweed product, which is gentle and promotes a great burst of growth. Continue to water thoroughly and fertilize once a week.
The best time to plant both the seeds of peppers and tomatoes is in January. Tomato seedlings can be planted outside in late February, then protected from frost, while peppers prefer a warmer temperature and should be planted outside after the last frost, usually in late March or April.
Many of the peppers are slow to grow while tomatoes shoot up very fast. Each growing season will be different and will present different challenges, but the final product is delicious and well worth the effort.
Joe Walthall is a co-owner at Native Nurseries, where he teaches vegetable gardening workshops, and is a volunteer writer for UF/IFAS Extension Leon County, an Equal Opportunity Institution. For gardening questions, email the extension office at <email-pii>.
A hundred years later, a Welsh women's peace petition returns home
MARY LOUISE KELLY, HOST:
Dreams of world peace are as old as wars. But as the women of Wales were recovering from the First World War, they demanded peace in droves. One hundred years ago, 1923, a group of Welsh women drafted a petition for peace, and they got three-quarters of all the women in Wales to sign on. Then they packed up the document into a big oak chest and sent it across the Atlantic so that women in America could join the movement, too. Well, now it has made the journey back home to Wales. Professor and poet Mererid Hopwood is overseeing the effort to start digitizing the signatures and find out just who all these women were. She joins us now from Aberystwyth in Wales. Professor Hopwood, welcome.
MERERID HOPWOOD: (Speaking Welsh). Thank you very much.
KELLY: Oh, it's lovely to have you and to hear that accent. I want to give people a sense of scale of this thing. I've read that it's almost 400,000 signatures, and that if you laid them end to end, they would stretch 7 miles.
HOPWOOD: Well, that's right. That's how it was reported when it arrived in New York.
KELLY: So it sat on this side, the American side of the Atlantic, for a hundred years and I gather was largely forgotten. How did it resurface?
HOPWOOD: Well, that's right. So a hundred years after the end of the First World War, people went rummaging around the Temple of Peace and Health in Cardiff, which has a magnificent archive. And there was a curious plaque sort of thing made of Moroccan leather with gold lettering, bilingual message saying something about this petition that nobody seemed to know about. We certainly we hadn't been taught this in school or anything. So from then the story was sort of recovered. And in 2017, the first email was sent to the Smithsonian to say we believe this chest and petition is there. And it's from there that we've been working to see how best we can digitize it.
KELLY: What kind of interest are you hearing from people today in Wales? Are people interested in knowing, like, you know, did my great-great-great-grandmother sign this or did the person who lived in my house a hundred years ago sign this?
HOPWOOD: That's right. You can't imagine the excitement that the truck arrived in the National Library of Wales last Wednesday. And it was, yeah, an emotional moment.
KELLY: Just stay with that moment for a second. Is it still in that oak chest? Was there a moment where you got to open it?
HOPWOOD: It was in the oak chest. These 400,000 signatures have been carefully put together in boxes. We were given white gloves and were allowed to open just a few to have a look. And as you can imagine, the inevitable thing happened. One of the women in the gathering there on Wednesday said, oh, I know that house (laughter).
KELLY: Oh, my goodness.
HOPWOOD: I had a good little look, but didn't quite see anybody I could claim. But, you know, we will find these people in the end.
KELLY: The original goal was so idealistic. And here we sit in 2023, and we obviously have not achieved a peaceful world, a world without war. For you, this history - I can hear in your voice - it does bring hope. It brings joy. Tell me why.
HOPWOOD: It does. The ambition - these people weren't afraid to think that this was possible, and the common-sense approach, OK, so how can we do that? Let's call on our sisters in the States to see if they can help bring that about. And I think, you know, one of the things we've had as a guiding principle for the partnership is this - to hold on to hope and to interpret hope not as a crossing of fingers, but as a power and energy of force that can enable us to do two things. First of all, to see that better place, and secondly, to know the way to get there. It is possible. We have to believe that.
KELLY: That is Mererid Hopwood, professor at Aberystwyth University and chair of the Peace Petition Partnership at the National Peace Institute of Wales. Thank you so much. And good luck.
HOPWOOD: Thank you. (Speaking Welsh). Transcript provided by NPR, Copyright NPR.
To the much-discussed hallucinations of large language models (LLMs), their instability and lack of interpretability, we can now add their vulnerability to automated adversarial attacks that force them to produce harmful content. But these are the least of their weaknesses. The depth of their understanding of the human world is a much more important hurdle for LLMs to overcome before they become “superintelligence.”
According to ChatGPT creator OpenAI, however, this “most impactful technology humanity has ever invented” could “arrive this decade” and could “lead to human extinction.”
We have already heard years ago that humanlike AI is just around the corner. At the time Terry Winograd entered graduate school in the late 1960s, most [intelligent and knowledgeable humans] “believed that it wouldn’t be long before machines would see, hear, speak, move, and otherwise perform humanlike tasks,” wrote John Markoff in Machines of Loving Grace.
For his PhD dissertation at MIT, Winograd developed SHRDLU, a computer program that was capable of participating in conversations about a made-up world, a “microworld” consisting of toy blocks and a “gripper” for moving them about. It responded to commands in English to manipulate the blocks using its gripper and it “understood” questions about the state of its world.
In his history of artificial intelligence, Nils Nilsson says that SHRDLU’s impressive performance made some NLP researchers “optimistic about future success.” But “Winograd soon abandoned this line of research in favor of pursuing work devoted to the interaction of computers and people. Perhaps because he had first-hand experience of how much knowledge was required for successful language understanding in something so simple as the blocks world, he despaired of ever giving computers enough knowledge to duplicate the full range of human verbal competence.”
Later, in 2006, Winograd saw statistical language understanding, neural networks and machine learning as new developments moving the AI field in the direction of his preferred approach to human-computer interaction. “Enlightened trial and error outperforms the planning of the flawless intellect” is at the center of this approach, he wrote, acknowledging “the limitations of knowing and modeling the complexities of the real human world.”
Further triumphs of the statistical analysis approach to AI, particularly in the domain of image identification in 2012, made most intelligent and knowledgeable humans believe that human-like machine intelligence or even “superintelligence” is, again, just around the corner.
Most, but not all. One popular tool used by those questioning the intelligence of the new AI became known as the Winograd Schema Challenge. This was the 2010 brainchild of Hector Levesque at the University of Toronto, based on an example of semantic ambiguity mentioned in Winograd’s 1972 “Understanding Natural Language.” To overcome the limitations of the “Turing Test,” Levesque suggested a multiple-choice test that requires the correct resolution of an ambiguous pronoun in a statement. For example,
The trophy doesn’t fit in the brown suitcase because it is too large. What is too large?
A. The trophy
B. The suitcase
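The structure of such a test item is simple enough to sketch in a few lines of Python. The `WinogradItem` class and the `ask_model` callback below are illustrative inventions for this sketch, not part of any real benchmark harness:

```python
# Sketch of a Winograd-schema item as a data structure, plus a scoring
# loop. `ask_model` is a hypothetical stand-in for whatever system is
# being tested: it takes an item and returns the index of its answer.
from dataclasses import dataclass

@dataclass
class WinogradItem:
    sentence: str        # statement containing the ambiguous pronoun
    question: str        # e.g. "What is too large?"
    options: tuple       # the two candidate referents
    answer: int          # index of the correct option

items = [
    WinogradItem(
        sentence=("The trophy doesn't fit in the brown suitcase "
                  "because it is too large."),
        question="What is too large?",
        options=("The trophy", "The suitcase"),
        answer=0,
    ),
]

def score(items, ask_model):
    """Fraction of items resolved correctly (0.5 is chance on two options)."""
    correct = sum(ask_model(item) == item.answer for item in items)
    return correct / len(items)
```

Chance performance on a two-option schema is 50 percent, which is roughly where the 2016 entrants landed.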
“The challenge attracted a fair amount of favorable interest from both the research community and the popular science press. The naturalness of the problem made it a fair one for AI systems; the complexity of the inferences involved seemed to put it far beyond the range of what was then current technology,” wrote the authors of “The Defeat of the Winograd Schema Challenge” (January 2023).
Indeed, at the “first and last running of the Winograd Schema Challenge” which took place in 2016, the most successful of the six AI programs that entered it achieved a score of 58% on the test collection, only slightly better than chance. At the inaugural O’Reilly AI conference the same year, while some talked about “super-human” driverless cars that are just around the corner, others, including deep learning pioneer Yann LeCun, mentioned the Winograd Schema as an unsolved challenge in testing the machine’s knowledge of how the world works.
Just a year later, however, natural language processing (NLP) moved forward with Google’s researchers publishing “Attention is All You Need.” It proposed the Transformer (the T in ChatGPT), a new “network architecture” that improved considerably the (machine) learning of the dependencies among words even when they are far apart in the text. The result of this breakthrough, in addition to unleashing the avalanche of generative AI, was the defeat of the Winograd Schema Challenge, with AI going from “random guessing to human performance in a very short time span,” achieving a score of 90.1% in 2019.
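At the core of the Transformer is scaled dot-product attention, which the paper defines as softmax(QK^T / sqrt(d_k))V: each position attends to every other position, however far apart in the text. A minimal NumPy sketch, with toy dimensions and none of the multi-head or masking machinery of the full architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max before exponentiating for numerical stability
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # pairwise position similarities
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V, weights

# Toy example: 4 token positions, 8-dimensional queries/keys/values
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out, w = scaled_dot_product_attention(Q, K, V)
```

Because the attention weights are computed between all pairs of positions at once, distance in the text costs nothing, which is what let the architecture capture the long-range dependencies that earlier sequential models struggled with.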
Large language model GPT-3 achieved a similar score (88.3%) in 2020. Eager to see for myself the triumph of LLM, I ran a few Winograd Schema phrases by the July 20, 2023, version of ChatGPT. It gave me the correct answers to a couple of them. But then I presented it with "The sack of potatoes had been placed below the bag of flour, so it had to be moved first. What had to be moved first?” and got this in response:
In the given sentence, "it" refers to the sack of potatoes. The sentence "The sack of potatoes had been placed below the bag of flour, so it had to be moved first" means that the sack of potatoes had to be moved first before accessing or moving the bag of flour, which was placed on top of it.
When I switched the position of the sack of potatoes from “below” to “above,” I got this in response:
Apologies for any confusion in my previous response. In the given sentence, "it" refers to the bag of flour. The correct interpretation of the sentence is:
"The sack of potatoes had been placed above the bag of flour, so it had to be moved first."
This means that the bag of flour had to be moved first because it was below the sack of potatoes, and you needed to move it out of the way before accessing the sack of potatoes placed above it.
It appears that the great leap forward for AI, which some of its prominent developers say will lead soon to superintelligence that could destroy humanity, still does not have, at least in this case, a child-like understanding of space and motion and what’s involved in moving and manipulating physical objects.
Sometimes LLMs appear to understand the world and how to operate in it, and sometimes they fail to do so. Sometimes their language skills lead to accurate translations and sometimes to complete failures. But what about their impressive performance on the Winograd Schema Challenge?
In the paper documenting the demise of the Winograd Schema Challenge, the researchers who worked for about ten years to promote it as a test of machine intelligence attribute the human-like achievements of LLMs to flaws in the test’s “formulation and execution.”
In addition, they reported that in the early 2010s, “when there were few signs that AI systems could succeed at the Winograd Schema Challenge, there were those who warned that it might be a less robust test than its creators hoped and believed. They argued that… success might come without solving the underlying problem that had motivated the challenge.”
The proponents of the Winograd Schema now admit that solving it “is not a surrogate for the ability to do commonsense reasoning, let alone for intelligence. The difficulty of using success at a specific task as a surrogate for intelligence is one that continues to this day, even as the tasks computers can successfully perform significantly increase in complexity.”
Or what Yehoshua Bar-Hillel called seventy years ago (!) the fallacy of the first step: the distance from being unable to do something to doing it badly is usually much shorter than the distance from doing it badly to doing it correctly.
Or what Terry Winograd said in 2004, according to Nilsson: “There are fundamental gulfs between the way that SHRDLU and its kin operate, and whatever it is that goes on in our brains.”
Winograd thought it would take decades or more to realize the initial ambitions of NLP researchers. “In the meantime,” he said, “AI took on much more doable goals of working in less ambitious niches, or accepting less-than-human results (as in translation).”
Not anymore. Today’s ambition of AI developers is focused on getting to artificial general intelligence (AGI), recreating the full spectrum of human intelligence in machines while fixing all its flaws.
After attending the O’Reilly AI conference in 2016, I wrote:
Maybe, just maybe, our minds are not computers and computers do not resemble our brains? And maybe, just maybe, if we finally abandon the futile pursuit of replicating “human-level AI” in computers, we will find many additional—albeit “narrow”—applications of computers to enrich and improve our lives?
|
<urn:uuid:8b3ae6b0-129c-4458-b011-addd30b46006>
|
{
"dump": "CC-MAIN-2023-40",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506479.32/warc/CC-MAIN-20230923030601-20230923060601-00593.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9606756567955017,
"pii_count": 0,
"score": 2.78125,
"token_count": 1974,
"url": "https://www.forbes.com/sites/gilpress/2023/07/30/can-chatgpt-destroy-humanity-if-it-doesnt-understand-the-world/"
}
|
To the much-discussed hallucinations of Large Language Models (LLM), their instability and lack of interpretability, we can now add their vulnerability to automated adversarial attacks forcing them to produce harmful content. But these are the least of their weaknesses. The depth of their understanding of the human world is a much more important hurdle for LLM to overcome before they become “superintelligence.”
According to ChatGPT creator OpenAI, however, this “most impactful technology humanity has ever invented,” could “arrive this decade” and could “lead to human extinction.”
We have already heard years ago that humanlike AI is just around the corner. At the time Terry Winograd entered graduate school in the late 1960s, most [intelligent and knowledgeable humans] “believed that it wouldn’t be long before machines would see, hear, speak, move, and otherwise perform humanlike tasks,” wrote John Markoff in Machines of Loving Grace.
For his PhD dissertation at MIT, Winograd developed SHRDLU, a computer program that was capable of participating in conversations about a made-up world, a “microworld” consisting of toy blocks and a “gripper” for moving them about. It responded to commands in English to manipulate the blocks using its gripper and it “understood” questions about the state of its world.
In his history of artificial intelligence, Nils Nilsson says that SHRDLU’s impressive performance made some NLP researchers “optimistic about future success.” But “Winograd soon abandoned this line of research in favor of pursuing work devoted to the interaction of computers and people. Perhaps because he had first-hand experience of how much knowledge was required for successful language understanding in something so simple as the blocks world, he despaired of ever giving computers enough knowledge to duplicate the full range of human verbal competence.”
Later, in 2006, Winograd saw statistical language understanding, neural networks and machine learning as new developments moving the AI field in the direction of his preferred approach to human-computer interaction. “Enlightened trial and error outperforms the planning of the flawless intellect” is at the center of this approach, he wrote, acknowledging “the limitations of knowing and modeling the complexities of the real human world.”
Further triumphs of the statistical analysis approach to AI, particularly in the domain of image identification in 2012, made most intelligent and knowledgeable humans believe that human-like machine intelligence or even “superintelligence” is, again, just around the corner.
Most, but not all. One popular tool used by those questioning the intelligence of the new AI became known as the Winograd Schema Challenge. This was the 2010 brainchild of Hector Levesque at the University of Toronto, based on an example of semantic ambiguity mentioned in Winograd’s 1972 “Understanding Natural Language.” To overcome the limitations of the “Turing Test,” Levesque suggested a multiple-choice test that requires the correct resolution of an ambiguous pronoun in a statement. For example,
The trophy doesn’t fit in the brown suitcase because it is too large. What is too large?
A. The trophy
B. The suitcase
“The challenge attracted a fair amount of favorable interest from both the research community and the popular science press. The naturalness of the problem made it a fair one for AI systems; the complexity of the inferences involved seemed to put it far beyond the range of what was then current technology,” wrote the authors of “The Defeat of the Winograd Schema Challenge” (January 2023).
Indeed, at the “first and last running of the Winograd Schema Challenge” which took place in 2016, the most successful of the six AI programs that entered it achieved a score of 58% on the test collection, only slightly better than chance. At the inaugural O’Reilly AI conference the same year, while some talked about “super-human” driverless cars that are just around the corner, others, including deep learning pioneer Yann LeCun, mentioned the Winograd Schema as an unsolved challenge in testing the machine’s knowledge of how the world works.
Just a year later, however, natural language processing (NLP) moved forward with Google’s researchers publishing “Attention is All You Need.” It proposed the Transformer (the T in ChatGPT), a new “network architecture” that improved considerably the (machine) learning of the dependencies among words even when they are far apart in the text. The result of this breakthrough, in addition to unleashing the avalanche of generative AI, was the defeat of the Winograd Schema Challenge, with AI going from “random guessing to human performance in a very short time span,” achieving a score of 90.1% in 2019.
Large language model GPT-3 achieved a similar score (88.3%) in 2020. Eager to see for myself the triumph of LLMs, I ran a few Winograd Schema phrases by the July 20, 2023, version of ChatGPT. It gave me the correct answers to a couple of them. But then I presented it with “The sack of potatoes had been placed below the bag of flour, so it had to be moved first. What had to be moved first?” and got this in response:
In the given sentence, "it" refers to the sack of potatoes. The sentence "The sack of potatoes had been placed below the bag of flour, so it had to be moved first" means that the sack of potatoes had to be moved first before accessing or moving the bag of flour, which was placed on top of it.
When I switched the position of the sack of potatoes from “below” to “above,” I got this in response:
Apologies for any confusion in my previous response. In the given sentence, "it" refers to the bag of flour. The correct interpretation of the sentence is:
"The sack of potatoes had been placed above the bag of flour, so it had to be moved first."
This means that the bag of flour had to be moved first because it was below the sack of potatoes, and you needed to move it out of the way before accessing the sack of potatoes placed above it.
It appears that the great leap forward for AI, which some of its prominent developers say will lead soon to superintelligence that could destroy humanity, still does not have, at least in this case, a child-like understanding of space and motion and what’s involved in moving and manipulating physical objects.
Sometimes LLMs appear to understand the world and how to operate in it, and sometimes they fail to do so. Sometimes their language skills lead to accurate translations and sometimes to complete failures. But what about their impressive performance on the Winograd Schema Challenge?
In the paper documenting the demise of the Winograd Schema Challenge, the researchers who worked for about ten years to promote it as a test of machine intelligence attribute the human-like achievements of LLMs to flaws in the test’s “formulation and execution.”
In addition, they reported that in the early 2010s, “when there were few signs that AI systems could succeed at the Winograd Schema Challenge, there were those who warned that it might be a less robust test than its creators hoped and believed. They argued that… success might come without solving the underlying problem that had motivated the challenge.”
The proponents of the Winograd Schema now admit that solving it “is not a surrogate for the ability to do commonsense reasoning, let alone for intelligence. The difficulty of using success at a specific task as a surrogate for intelligence is one that continues to this day, even as the tasks computers can successfully perform significantly increase in complexity.”
Or what Yehoshua Bar-Hillel called seventy years ago (!) the fallacy of the first step: the distance from the inability to do something to doing it badly is usually much shorter than the distance from doing it badly to doing it correctly.
Or what Terry Winograd said in 2004, according to Nilsson: “There are fundamental gulfs between the way that SHRDLU and its kin operate, and whatever it is that goes on in our brains.”
Winograd thought it would take decades or more to realize the initial ambitions of NLP researchers. “In the meantime,” he said, “AI took on much more doable goals of working in less ambitious niches, or accepting less-than-human results (as in translation).”
Not anymore. Today AI developers are focused on reaching artificial general intelligence (AGI): recreating the full spectrum of human intelligence in machines while fixing all of its flaws.
After attending the O’Reilly AI conference in 2016, I wrote:
Maybe, just maybe, our minds are not computers and computers do not resemble our brains? And maybe, just maybe, if we finally abandon the futile pursuit of replicating “human-level AI” in computers, we will find many additional—albeit “narrow”—applications of computers to enrich and improve our lives?
|
What exactly is reflexology?
Reflexology can be described as a type of alternative medicine in which practitioners use targeted pressure-point massage to revive the flow of energy throughout the body. Although the treatment is most often performed on the feet, it can also be applied to the hands and the ears. Toes and fingers are believed to correspond to particular parts of the body, and the sensitivity of a reflex point is thought to reflect a weakness in the corresponding organ. The practitioner uses thumb and finger "walking" sequences to activate different zones of the feet.
The therapist will begin by discussing your lifestyle and health. Once they know about your health and any ailments you might have, they will choose the areas of your body that would benefit most from reflexology. Some reflexologists focus on the feet, while others concentrate on the hands and ears. Reflexology is performed in a tranquil spa environment and can be accompanied by aromatherapy. As with acupuncture, you will need to remain still during the treatment.
The method relies on the idea that nerve impulses carry messages to the region that requires care. To clear blockages and restore the flow of energy through the body, the practitioner may employ several different reflexology techniques. Reflexology is said to relieve pain and discomfort caused by stress, and the relief is often felt immediately; a session with an experienced practitioner is claimed to ease various ailments. To get the most from these benefits, it is best to book an appointment for a time when you do not have to work.
Reflexology may help improve your health, and the best time to book a session is when you will not be rushing back to work for a while. If free time is scarce, schedule a session during lunch or at the end of your workday. That way you will be less distracted and can concentrate on other things without discomfort pulling your attention away.
The reported benefits of reflexology are numerous. It is said to relieve physical discomfort and anxiety. One client who had suffered from migraines since her teens said reflexology helped her cut back the medication she took; others have reported less pain, more energy, and deeper, better-quality sleep. Before the session, the therapist will ask some questions about your medical history and lifestyle, then work the reflexology points on the hands and feet.
Reflexology is also promoted as a way to reduce stress and support the body's natural healing, which may help people with a range of medical conditions. One client with migraines was reportedly able to halve her migraine medication after trying reflexology. While many clients report needing less pain medication, practitioners say reflexology can benefit anyone, helping the mind as well as the body. As with any alternative treatment, it is important to choose a practitioner who is properly trained in reflexology.
A reflexology treatment can be an excellent way to relax and improve your overall wellbeing. Schedule your appointment for a time when you have no other obligations, since the treatment takes a while and will keep you away from your work. The rewards, however, are well worth it: you will feel happier and better. For the greatest benefit, schedule reflexology sessions as often as you can.
Reflexology's claimed benefits are many. It is said to help the body release tension and recover, working in conjunction with the central nervous system to improve wellbeing. Research on reflexology dates to the 1890s, when studies revealed a neurological connection between the skin and the internal organs: the nervous system detects stimuli and responds by adjusting the body's behaviour. Practitioners say this helps restore the body's balance and supports the adequate sleep needed for a healthier life.
|
Graduating high school students left almost $3.6 billion in Pell Grant funds unclaimed last year because they did not complete the Free Application for Federal Student Aid (FAFSA). That is the finding of a new report from the National College Attainment Network (NCAN) out this week. The FAFSA is the federal form you must complete to be considered for any financial aid from the federal government, your state, or one of the colleges you hope to attend.
FAFSA completion rates are used as an indicator of whether students intend to enroll in college and often provide an early indication of what proportion of recent high school graduates plan on heading to college. FAFSA completion is also strongly associated with college enrollment, so increasing the number of students who complete their aid application may increase the number of students entering higher education.
NCAN calculated that 44 percent of the class of 2022 did not complete a FAFSA. Of that 44 percent, almost half are estimated to be eligible for a Pell Grant, with an average value of $4,686. That leaves a lot of students who could be getting help to pay for higher education without support. Some states are trying to change low completion rates with mandatory FAFSA completion policies.
In recent years some states have instituted mandatory FAFSA completion policies that require students to submit the form as a graduation requirement or require that school districts do everything they can to help students complete the application. So far, eight states — California, Maryland, Louisiana, Illinois, Alabama, Colorado, New Hampshire, and Texas — have implemented FAFSA mandates. Early results from Louisiana, which was the first state to implement the requirement fully, have shown significant promise, with a large increase in the number of students who complete an application and narrowing the gap between students from high-income and low-income districts completing the form.
Why Is FAFSA Completion Important?
The FAFSA asks students to provide detailed financial information for themselves (and their parents, if they are younger than 24 or meet some narrow exceptions). After completing the application, students receive their Expected Family Contribution (EFC) number. This number plays a significant role in determining what types of financial aid they qualify for and how much total financial aid they can receive. The lower the number, the more types of aid and total aid dollars a student can qualify for, and vice versa.
Completing a FAFSA helps students see if they will qualify for federal financial aid, although it provides limited information about what state and institutional support they might be eligible for. Students typically have to wait for their financial aid offers to find out how much support they will get from the state and the colleges they have been admitted to.
Do More FAFSA Completions Mean More Students Go to College?
Students who complete a FAFSA are more likely to go to college, but that relationship is largely correlational: students who completed the application were probably more likely to head to college in the first place. However, knowing there is help to pay for college also encourages enrollment, especially when it comes with help applying for aid. Research has shown that when students receive help completing the FAFSA along with information about financial aid, they are more likely to attend college. Students who complete a FAFSA and discover they are eligible for a Pell Grant are more likely to realize that they can afford to attend.
Completing the application, and doing it as early as possible, can also provide access to additional financial aid from state and institutional grant programs, which tend to have early deadlines, reducing the price of college even further.
Students need all the help they can get when it comes to paying for college. Given the current low rates of FAFSA completion, finding ways to increase the number of students applying for financial aid should be a priority for states.
|
The United States honors the most prominent figure in its civil rights movement, Martin Luther King Jr., on Monday. The national holiday was hard won by his family and activists in 1983 and has been marked on the third Monday of January ever since.
He wasn’t actually named Martin
Martin Luther King Jr. was named Michael when he was born in Atlanta on Jan. 15, 1929. His father, Martin Luther King Sr., a pastor at the Ebenezer Baptist Church in Atlanta, was also named Michael. However, in 1934, he took an eye-opening trip to Germany — where, in 1517, a monk named Martin Luther had nailed his 95 Theses to the door of the Wittenberg castle church, igniting the Protestant Reformation. King Sr., who was an early figure in the American civil rights movement, returned to the United States and swiftly changed his own and his son’s names, when young Martin was about 5.
When he was 28, King Jr. officially revised his birth certificate. In 1957, he crossed out the name Michael and replaced it with “Martin Luther, Jr.” in black ink.
He skipped grades in school and went to college at 15
A prodigy, King skipped at least two grades, graduating from Booker T. Washington High School in Atlanta before he was admitted to nearby Morehouse College, a historically Black men’s school also attended by his father and grandfather.
“My days in college were very exciting ones. There was a free atmosphere at Morehouse, and it was there I had my first frank discussion on race,” he later wrote in his autobiography.
When King was 19, in 1948, he finished college and enrolled at the Crozer Theological Seminary, where he was ordained as a Baptist minister. He went on to study systematic theology and earn a PhD from Boston University. King was later awarded many honorary degrees from academic institutions across the world.
“Education must enable one to sift and weigh evidence, to discern the true from the false, the real from the unreal, and the facts from the fiction,” he wrote in a student newspaper in 1947.
He got a C in public speaking
Later known as a great orator, King once struggled with giving speeches and received a C in public speaking when he was training to become a minister.
He was a prankster as a child
In his youth, King had a mischievous streak. He tried to scare passersby on the street by putting his mother’s fox furs on a stick and rustling the bushes. He also tried to drive away his piano teacher by getting the stool to collapse and would sometimes destroy his older sister’s doll heads to use as baseballs.
King may have improvised the ‘I have a dream’ line in his speech
One of history’s most consequential speeches was delivered in less than 18 minutes during the March on Washington on Aug. 28, 1963.
But when King was drafting the speech — which drew on the Bible, the Declaration of Independence and the words of William Shakespeare — he did not include the famous refrain: “I have a dream that one day this nation will rise up and live out the true meaning of its creed: ‘We hold these truths to be self-evident, that all men are created equal.’”
According to reports, the American gospel singer Mahalia Jackson called out during the speech: “Tell them about the dream, Martin,” prompting King to deploy the now historic phrase, which he had used in previous public addresses.
His family paid the medical bills for the birth of actress Julia Roberts
When the Hollywood actress was born 55 years ago in Smyrna, Ga., King and his wife, Coretta Scott King, paid the hospital bills for her parents, Walter and Betty. The story came to light only last year, when Roberts confirmed the fact in an interview with television personality Gayle King (no relation).
“The day you were born, who paid for the hospital bill?” King asked Roberts during HistoryTalks, a September event in D.C. hosted by the History Channel and A&E Networks. “The King family paid for my hospital bill,” Roberts replied. “My parents couldn’t pay for the hospital bill.”
Roberts explained that her parents owned a theater school in Atlanta called the Actors and Writers Workshop, which they welcomed the King children to attend at a time when racial tensions remained high.
“One day, Coretta Scott King called my mother and asked if her kids could be part of the school, because they were having a hard time finding a place that would accept her kids,” Roberts said. “My mom was like, ‘Sure, come on over.’ And so they just all became friends, and they helped us out of a jam.”
Her revelation sparked surprise on the internet and drew praise, including from King’s youngest child, Bernice King.
Another assassination attempt came a decade before his killing
In 1958, King was autographing books at the Blumstein’s Department Store in Harlem when a well-dressed woman wearing glasses stepped out of the line and shouted: “Is this Martin Luther King?”
King, then 29, looked up from signing copies of his memoir about the Montgomery, Ala., bus boycott and answered: “Yes, it is.”
The woman then pulled a letter opener with an ivory handle from her purse and attacked King, plunging a seven-inch blade into the left side of his chest, according to the Martin Luther King Jr. Research and Education Institute at Stanford University.
King was rushed to a hospital for surgery. Doctors later told him that if he had sneezed, the blade, which was lodged near his aorta, could have killed him. The attacker was Izola Ware Curry, the Black daughter of sharecroppers, and King later referred to her as a “demented woman.”
He was shot dead 10 years later on the balcony of the Lorraine Motel in Memphis in 1968.
His mother was also assassinated
Just six years later, his mother, Alberta Williams King, was also assassinated. She was killed in 1974 while playing the organ at a service at Ebenezer Baptist Church in Atlanta, shot by Marcus Wayne Chenault, a young man from Ohio who claimed he had been aiming for Martin Luther King Sr., who was also at the church.
King often spoke of the positive influence his mother had on his development, calling her “the best mother in the world.”
In addition to the Nobel Peace Prize, he won a Grammy
King won a Grammy music award posthumously in 1971. He won best spoken word recording for a speech entitled “Why I Oppose the War in Vietnam,” which he made in New York to condemn the war one year before he was killed.
King was nominated for a Grammy on two previous occasions, in 1969, for his “I Have a Dream” speech, and in 1964, for his “We Shall Overcome” address.
|
<urn:uuid:39dc84c5-febc-4d34-9ef5-852aab233d49>
|
{
"dump": "CC-MAIN-2023-14",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945317.85/warc/CC-MAIN-20230325064253-20230325094253-00213.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9866878986358643,
"pii_count": 0,
"score": 2.96875,
"token_count": 1647,
"url": "https://www.washingtonpost.com/nation/2023/01/16/martin-luther-king-day-facts-about/"
}
|
The United States honors the most prominent figure in its civil rights movement, Martin Luther King Jr., on Monday. The national holiday was hard won by his family and activists in 1983 and has been marked on the third Monday of January ever since.
He wasn’t actually named Martin
Martin Luther King Jr. was named Michael when he was born in Atlanta on Jan. 15, 1929. His father, Martin Luther King Sr., a pastor at the Ebenezer Baptist Church in Atlanta, was also named Michael. However, in 1934, King Sr. took an eye-opening trip to Germany — where in 1517, a monk named Martin Luther nailed his 95 Theses to the door of the Wittenberg castle church, igniting the Protestant Reformation. King Sr., who was an early figure in the American civil rights movement, traveled back to the United States and swiftly changed his and his son’s names, when young Martin was about 5.
When he was 28, King Jr. officially revised his birth certificate. In 1957, he crossed out the name Michael and replaced it with “Martin Luther, Jr.” in black ink.
He skipped grades in school and went to college at 15
A prodigy, King skipped at least two grades, graduating from Booker T. Washington High School in Atlanta before he was admitted to nearby Morehouse College, a historically Black men’s school also attended by his father and grandfather.
“My days in college were very exciting ones. There was a free atmosphere at Morehouse, and it was there I had my first frank discussion on race,” he later wrote in his autobiography.
When King was 19, in 1948, he finished college and enrolled at the Crozer Theological Seminary, where he was ordained as a Baptist minister. He went on to study systematic theology and earn a PhD from Boston University. King was later awarded many honorary degrees from academic institutions across the world.
“Education must enable one to sift and weigh evidence, to discern the true from the false, the real from the unreal, and the facts from the fiction,” he wrote in a student newspaper in 1947.
He got a C in public speaking
Later known as a great orator, King once struggled with giving speeches and received a C in public speaking when he was training to become a minister.
Martin Luther King, Jr. received two Cs in public speaking. Actually went from a C+ to a C the next term. Here’s the transcript. Live your dream. pic.twitter.com/mAoFIwaICw— Sarah Elizabeth Lewis (@sarahelizalewis) January 12, 2020
He was a prankster as a child
In his youth, King had a mischievous streak. He tried to scare passersby on the street by putting his mother’s fox furs on a stick and rustling the bushes. He also tried to drive away his piano teacher by getting the stool to collapse and would sometimes destroy his older sister’s doll heads to use as baseballs.
King may have improvised the ‘I have a dream’ line in his speech
One of history’s most consequential speeches was delivered in less than 18 minutes during the March on Washington on Aug. 28, 1963.
But when King was drafting the speech — which drew on the Bible, the Declaration of Independence and the words of William Shakespeare — he did not include the famous refrain: “I have a dream that one day this nation will rise up and live out the true meaning of its creed: ‘We hold these truths to be self-evident, that all men are created equal.’”
According to reports, the American gospel singer Mahalia Jackson called out during the speech: “Tell them about the dream, Martin,” prompting King to deploy the now historic phrase, which he had used in previous public addresses.
His family paid the medical bills for the birth of actress Julia Roberts
When the Hollywood actress was born 55 years ago in Smyrna, Ga., King and his wife, Coretta Scott King, paid the hospital bills for her parents, Walter and Betty. The story came to light only last year, when Roberts confirmed the fact in an interview with television personality Gayle King (no relation).
“The day you were born, who paid for the hospital bill?” King asked Roberts during HistoryTalks, a September event in D.C. hosted by the History Channel and A&E Networks. “The King family paid for my hospital bill,” Roberts replied. “My parents couldn’t pay for the hospital bill.”
Grateful that #JuliaRoberts shared this story with @GayleKing and that so many people have been awed by it.— Be A King (@BerniceKing) October 31, 2022
I know the story well, but it is moving for me to be reminded of my parents’ generosity and influence.#CorettaScottKing #MLK pic.twitter.com/Hn9yOVxYIL
Roberts explained that her parents owned a theater school in Atlanta called the Actors and Writers Workshop, which they welcomed the King children to attend at a time when racial tensions remained high.
“One day, Coretta Scott King called my mother and asked if her kids could be part of the school, because they were having a hard time finding a place that would accept her kids,” Roberts said. “My mom was like, ‘Sure, come on over.’ And so they just all became friends, and they helped us out of a jam.”
Her revelation sparked surprise on the internet and drew praise, including from King’s youngest child, Bernice King.
Another assassination attempt came a decade before his killing
In 1958, King was autographing books at the Blumstein’s Department Store in Harlem when a well-dressed woman wearing glasses stepped out of the line and shouted: “Is this Martin Luther King?”
King, then 29, looked up from signing copies of his memoir about the Montgomery, Ala., bus boycott and answered: “Yes, it is.”
The woman then pulled a letter opener with an ivory handle from her purse and attacked King, plunging a seven-inch blade into the left side of his chest, according to the Martin Luther King Jr. Research and Education Institute at Stanford University.
King was rushed to a hospital for surgery. Doctors later told him that if he had sneezed, the blade, which was lodged near his aorta, could have killed him. The attacker was Izola Ware Curry, the Black daughter of sharecroppers, and King later referred to her as a “demented woman.”
He was shot dead 10 years later on the balcony of the Lorraine Motel in Memphis in 1968.
His mother was also assassinated
Just six years later, his mother, Alberta Williams King, was also assassinated. She was killed in 1974 while playing the organ at a service at Ebenezer Baptist Church in Atlanta, shot by Marcus Wayne Chenault, a young man from Ohio who claimed he had been aiming for Martin Luther King Sr., who was also at the church.
King often spoke of the positive influence his mother had on his development, calling her “the best mother in the world.”
In addition to the Nobel Peace Prize, he won a Grammy
King won a Grammy music award posthumously in 1971. He won best spoken word recording for a speech entitled “Why I Oppose the War in Vietnam,” which he made in New York to condemn the war one year before he was killed.
King was nominated for a Grammy on two previous occasions, in 1969, for his “I Have a Dream” speech, and in 1964, for his “We Shall Overcome” address.
Palliative care aims to improve patient and relative quality of life, and this can be supported by helping them to identify and draw on their internal, external, community and society-based resources. For today’s blog, Prof. Mathieu Bernard tells us about the ‘resource oriented’ approach which can complement palliative care practice.
The World Health Organization’s definition of palliative care focuses on the identification and treatment of health problems in the physical, psychosocial and spiritual domains – a deficit-oriented (what is missing and what is needed) approach, aimed at identifying and resolving physical symptoms and psychosocial and spiritual issues, thus relieving suffering. However, recent research shows that patients’ and families’ quality of life improves more significantly and sustainably if the treatment also builds on enhancing the patients’ and relatives’ identified resources – a resource-oriented approach, which can be seen as a necessary complement to the deficit-oriented one. This approach is in line with the focus on patient autonomy and empowerment which characterises palliative care. A resource-based approach is not new in palliative care. It is consistent with the notion of resilience, which has been the subject of much work over the last 20 years in many scientific disciplines, including palliative care (1).
Resources for palliative care patients and families can be classified into four levels: internal (e.g., spirituality, gratitude), external (e.g., family, friends, home environment), community (e.g., caring communities, hospice volunteers), and societal (e.g., health care policies, palliative care availability). The first two levels concern the psychosocial and spiritual care of the individual patients and their significant others. Several studies have shown the importance and effectiveness of strengthening patient and family resources such as gratitude or social relations in order to sustainably improve their quality of life. The last two resource levels concern palliative care as a public health issue, which involves creating the necessary political and society-based conditions for delivering palliative care (both through the deficit-oriented and the resource-oriented approach) to all members of the community in an equitable fashion.
To find and encourage these resources, it is necessary to look in depth at interactions between patients, their loved ones, and health care professionals, as well as support on offer from communities and society – a task that could be performed, for example, by a specifically trained social worker. Such support needs to be integrated and planned for within services to support people receiving palliative care, as it could take time to work with people to identify their resources. A resource-based approach also underlines the importance of inter-professionality in palliative care, as resources can be identified and fostered by all domains of care. One example of this interprofessional approach is a current multicentre research project led by the Lausanne Palliative Care Center which focuses on ‘Understanding patient altruism at the end of life’. Altruism is defined here as a personal and intentional interest in improving the wellbeing and welfare of others. The principal investigators of its five subprojects come from five different disciplines: medicine, ethics, nursing, psychology, and sociology, because the multi-faceted nature of resources such as altruism requires the intersection of different perspectives in order to understand the whole picture.
We believe that a resource-based approach to palliative care represents a complementary paradigm which may lead to significantly better quality of life, and clinical results, for patients and families, and should be valued more widely in health care.
1 Monroe, Barbara, and David Oliviere (eds), Resilience in Palliative Care: Achievement in adversity (Oxford, 2007; online edn, Oxford Academic, 17 Nov. 2011), https://doi.org/10.1093/acprof:oso/9780199206414.001.0001.
Links and resources
- An international scientific symposium at the Lausanne University Hospital on Tuesday, 13th June 2023, including esteemed international experts such as Prof. William Breitbart and Prof. Daniel Sulmasy, will highlight research on the diverse array of resources in palliative care. Thanks to the support of non-profit foundations, the symposium is free of charge and will be streamed online. Click here to register.
- Read EAPC public health and palliative care blog series here.
About the author
Prof. Mathieu Bernard is Chair in Palliative Psychology at the Palliative and Supportive Care Service of the Lausanne University Hospital (CHUV). In line with the general concept of ‘resource-oriented palliative care’, his main research interest concerns the application of positive psychology to palliative psychosocial care, and this has included research on the meaning in life, gratitude, post-traumatic growth, will to live, and altruism, and he has published extensively on these subjects. ORCID: 0000-0003-2823-8806.
- Register now!
- Find out more about the Congress here.
- FOLLOW @EAPCvzw #EAPC2023.
- Sign up to the EAPC blog to hear more about the Congress
Learning to read in a mother tongue is key to improving SA’s literacy rates, says foundation
At first, the Zenex Foundation believed that helping learners in Mathematics and Science was the best way to address the education crisis. Then it realised that, first, it needed to help learners with their home languages.
The Zenex Foundation and partner organisations have launched a series of African languages storybooks and reading material that can be used to supplement learner curriculums in an effort to improve the South African literacy rate.
According to the 2021 Progress in International Reading Literacy Study (Pirls) report released in May this year, 81% of Grade 4 learners in South Africa cannot read for meaning in any language, which shows an alarming increase from 78% in 2016. The report also found that it was mostly African language schools that recorded a decline in reading, while English and Afrikaans schools did not. Limpopo, North West, Free State and Mpumalanga performed particularly poorly.
Read more in Daily Maverick: International study shows most Grade 4s in South Africa cannot read for meaning
At the Zenex Foundation’s launch of the new reading material, literacy specialist and senior lecturer Dr Xolisa Guzula, from the University of Cape Town, said: “It saddens me to think that when we think about creating African language materials people say, ‘No, that can’t be done because they don’t have a standard language’.
“I think people need to get rid of that thinking … In our teaching of languages, we need to value the children’s varieties of languages when they come to our classrooms and see them as a continuum of what we must teach in schools.”
Guzula said the legacy of the Bantu education system still had a bearing on the teaching curriculum in that black children believed they received inferior education in African languages. She said parents now opted for their children to be taught in English, because African language teaching was considered inferior.
“So, the African child was not seen as someone who could read, who could have access to literature. The only books that were available were readers that only emphasised letter sounds with no meaningful content. There were no supplementary materials for African language-speaking children. It is now up to us to change that injustice that was created,” said Guzula.
In a panel discussion during the launch, Lauren Fok, programme manager at the Zenex Foundation, said the creation and production of the books was possible through robust engagement with partner organisations representing different backgrounds, schools of literacy and reading. These included the Molteno Institute for Language and Literacy, the Nelson Mandela Institute for Education and Rural Development, Room to Read and African Storybook.
“We took the controversial decision to level our books rather than grade them. Many of the project partners felt that given the fact that we didn’t want children to feel judged if they couldn’t read at Grade 3 level, but could only read Grade 1 books, we chose to level our books from one to five and within that have sub-levels,” Fok said.
She went on to say that the books were also meant to support teachers and enable learners to decode and understand letter and consonant sounds.
Research and development manager Jenny Katz from Molteno said: “Another big takeaway for us was the opportunity for the development of people who want to work in this field but have never been given the opportunity … amateur illustrators, aspiring story writers. And, because we have a more scientific approach to teaching how to read, we were actually also able to educate in linguistics and approaches to teaching reading in African languages.”
Katz also pointed out that one of the good things about the books was that they did not only focus on the most widely spoken languages but also included languages such as isiNdebele, Tshivenda and Xitsonga.
“Writing for children is such a specialised skill. You have to know children, you have to use the correct language, you have to have read children’s literature, because by practising something yourself, you can do it well, you have to think like the child … So, this is so important for our country, because the more storybooks our children are exposed to, hopefully we’ll have our next generation of writers,” Katz added.
“Storybooks are really important in encouraging reading. It’s not graded readers above storybooks; they work side by side to encourage fun, focus and learning. In terms of developing non-fiction, it’s difficult anywhere, even in English for children.
“So, we had to invite our authors to write storybooks that were aligned to CAPS [ Curriculum Assessment Policy Statements] that dealt with knowledge concepts, explored science, nature, relationships and Covid … I love that our books really prioritise the journeys that children take because we’re not just giving them knowledge. We’re also working and building on the knowledge they carry and saying that reading is fun,” African Storybook’s Smangele Mathebula said during the panel discussion.
Rich in meaning
Speaking of the Ulwazi Lwethu readers and storybooks, which she had to review individually, Guzula said she found that they were quite rich in content and meaning.
She said that there was a difference between books that were written in English and then translated, often watering down nuance and content, and books specifically written in African languages, which maintained the integrity of the content of the stories.
Guzula said the books were also a way of expanding on culture, challenging gender stereotypes and historiographies. She said they taught children to engage critically with literature as well as critique issues of societal power dynamics.
Thandi Orleyn, chairperson of the Zenex Foundation, said she hoped the books would be “a living legacy for our great-grandchildren”. She said the foundation started off focusing on the infrastructure needs of schools, but had now expanded to focus on “capturing the minds of children”.
“In doing so, we thought that Maths, Science and Accounting was the route to go, and that’s when we realised that without your home language, you can’t understand the language of Maths and Science, and even as you pretend to understand English, you actually don’t.”
Orleyn said this is why it was important for the foundation to rethink its strategy towards education in order to have the most impact. She highlighted that the launch was significant in that it was happening during what the United Nations has named the international decade dedicated to indigenous languages. DM
In the latter decades of the 20th century, stem cells—cells that can self-renew and differentiate into other cell types—took the research world by storm. They began to be applied to a wide range of degenerative disease research programs, promising cures for spinal cord injuries, cancer, osteoarthritis, Alzheimer’s disease, and more.
Although stem cell therapies are still under development, and some are used clinically (mainly for cancer), the hype around them eventually died down, and for 20 years, they haven’t been mentioned much outside of niche communities. They’re hard to work with and difficult to scale, limiting their application. But thanks to the Cambridge-based biotechnology company bit.bio, stem cell research is about to make a big comeback.
Providing pure, functional human iPSCs
Bit.bio was founded in 2016 by Mark Kotter and Florian Schuster to solve the biggest challenges facing stem cells by leveraging synthetic biology.
Stem cells are either derived from embryos (embryonic stem cells) or by engineering somatic cells to return them to a pluripotent state (induced pluripotent stem cells; iPSCs)—a state of unlimited potential to turn into any type of cell you want. iPSCs—used more often than embryonic stem cells because of ethical concerns—are notoriously inconsistent, with batches of differentiated cells often containing a mixed population of the cells you want plus many cell types you don’t want. This severely limits the pace of discovery, innovation, and, ultimately, the ability to use these cells at scale in the clinic.
Seven years ago, bit.bio introduced their opti-ox (optimized inducible overexpression) technology to the world (which I wrote about in Forbes in 2020), achieving highly functional, consistently pure, and defined human iPSC-derived cells. Their portfolio of ioCells™ includes wild-type GABAergic and glutamatergic neurons, sensory neurons, microglia, and skeletal myocytes, which have been used by a range of research labs all over the world.
But Kotter and the rest of the bit.bio team weren’t satisfied. “It’s not only about health states,” says Kotter, “it’s much more about disease states where differences between animal models and human biology seem to be heightened.” So they’ve developed an extensive portfolio of disease model cells as well, with an expansion of those offerings to include custom disease model cells announced at SynBioBeta 2023 last week.
“Now our customers can study their disease mutation of interest in a robust, highly reproducible context,” Kotter explained. “Providing everyone access to their own mutation of interest allows us to help more people in terms of drug discovery.”
Driving personalized medicine
I sat down with Kotter and bit.bio Associate Director of Sales Timothy Smith after bit.bio’s announcement to talk more about what this means for the biopharma industry and for the future of a company that has already built incredible momentum. The custom disease model offering is the 10th launch in only the last year. Kotter quickly turned the discussion to personalized medicine, an area with incredible potential but few real-world examples.
“One of the most exciting areas is personalized, n equals one medicines, where you have rare diseases that can, for example, be treated via gene correction, and we need to build a regulatory environment that allows you to, within constraints, generate those solutions. You likely require a model to demonstrate that your drug has the desired effect. We think our new offering can open up exactly that—where n equals one medicines can be tested and demonstrated in an extremely robust system,” Kotter explained. One of the biggest application areas for this type of research is in rare diseases, something that Smith is especially passionate about.
“There is a lot of opportunity to create personalized medicines for these rare diseases where there are only a handful of patients worldwide. This is hugely ambitious and with pediatric patients, the time is often limited. For instance, in the context of certain neurodevelopmental conditions affecting the brain, you may only have 12-24 months because once the damage has been done, it will be difficult to reverse. The whole drug discovery and development workflow has to be challenged,” he explained.
One company that is tackling rare diseases in children and using bit.bio’s cells to do it is Everlum.bio. They recently made a big breakthrough studying potential therapies to correct a mutation in the HNRNPH2 (H2) gene that has impacted the founder’s daughter, Rose, slowing her development and then slowly robbing her of the functional abilities she did develop. The company uses antisense oligos—short oligonucleotide strands designed to bind to complementary DNA or RNA sequences—to correct mutations that cause rare diseases. In the case of the H2 mutation, fibroblasts, the most commonly used cells, weren’t a good model. Everlum needed neurons. bit.bio had just what they needed.
“At Everlum, we created a new model to make and deliver rare disease therapeutics to patients faster than ever before. Partnering with other companies that want to help these children with innovation is critical to our success. bit.bio fits the bill. They have been a crucial partner by providing human neurons to us for testing our drugs. Using these bit.bio GABAergic neurons, we have identified the first therapeutic ever for Rose’s disease, and we hope soon for so many more!” said Casey McPherson, Everlum’s founder and CINO.
Keeping the momentum going
Custom disease model cells are certain to push rare disease research further than it has ever gone before, but that wouldn’t be possible without the inherent purity and consistency of bit.bio’s cells. Those qualities have always been a point of pride for the company, and now, Kotter told me, they’ve hit the optimization ceiling: the platform has been refined to the point that they see no batch-to-batch differences, a first for the iPSC world.
“The data blows me away,” Kotter told me. “Essentially, we’ve generated a manufacturing paradigm that is as good as manufacturing an iPhone - you cannot tell the difference between one that’s produced in January versus one that’s produced in March. If you think about biology being a fuzzy sort of science, we’ve proven that there is a reality in biology that is not fuzzy, that is, deterministic. That’s a watershed moment.”
It truly is remarkable what the company has achieved in less than ten years — an achievement that Kotter says is possible only because of synthetic biology. And he sees this as just the starting point for an incredibly fruitful (and too long in coming) marriage between synbio and biopharma. But although Kotter allows himself to dream big, he remains pragmatic.
“We have a responsibility, a stewardship to bring this technology to its potential. Lots of ideas can die; we must allow this to flourish,” he says.
When I asked him what “flourishing” looked like years down the road, Kotter painted an incredibly inspiring picture.
“The ultimate achievement would be to allow cell therapy to become the mainstage treatment for various degenerative diseases and other conditions. It should be as simple and as cost-effective as biologics. And, if we can democratize access to human cells and enable others to generate drugs, there is the potential to impact a billion patients. I think it’s possible.”
I think it’s possible, too. And based on what I’ve seen in just the past five years from bit.bio, I’d say the likelihood is high that they’re going to allow what they have to flourish.
Thank you to Embriette Hyde for additional research and reporting on this article. I’m the founder of SynBioBeta, and some of the companies I write about, including bit.bio, are sponsors of the SynBioBeta conference. For more content, you can subscribe to my weekly newsletter and follow me on Twitter and LinkedIn.
|
For the casual student of U.F.O. history, the modern idea of life beyond our planet usually dates to 1947, when a top-secret U.S. military balloon crashed in the desert near Roswell, N.M. The wreckage prompted decades of conspiracy theories and gave rise to the idea that Roswell was the site of an alien crash landing.
Now, thanks to a new congressional spending bill, U.F.O. enthusiasts may look to 1945 as the beginning of that era.
An amendment tucked into this year’s $858 billion National Defense Authorization Act, which funds the Defense Department’s annual operating budget, requires the department to review historical documents related to unidentified aerial phenomena — government lingo for U.F.O.s — dating to 1945. That is the year that, according to one account, a large, avocado-shaped object struck a communication tower in a patch of New Mexico desert now known as the Trinity Site, where the world’s first atomic bomb was detonated that July.
Experts said the bill, which President Biden signed into law in December, could be a game changer for studying unidentified phenomena.
“The American public can reasonably expect to get some answers to questions that have been burning in the minds of millions of Americans for many years,” said Christopher Mellon, a former deputy assistant secretary of defense for intelligence. “If nothing else, this should either clear up something that’s been a cloud hanging over the Air Force and Department of Defense for decades or it might lead in another direction, which could be truly incredible. There’s a lot at stake.”
The amendment was introduced by Representative Mike Gallagher, a Wisconsin Republican and a member of the Armed Services Committee. Mr. Gallagher, who declined an interview request, said in a brief statement that a “comprehensive timeline” of unidentified aerial phenomena in U.S. government records was needed, and that the amendment would ensure a full review of “all U.S. government classified and unclassified information.”
“This is an important step that will give us a more comprehensive understanding of what we know — and don’t know — about incidents impacting our military,” he said.
The U.S. government has dabbled in public-facing programs that have explored the possibility of alien life. In 2021, the Pentagon announced it would form a task force to look at the issue after a congressionally mandated report found that the government had no explanation for 143 sightings of strange phenomena by military pilots and others since 2004. NASA said in June that it would finance a study to look at unexplained sightings.
In 2022, the Defense Department established the All-domain Anomaly Resolution Office, which succeeded the Airborne Object Identification and Management Synchronization Group, after facing scrutiny from the public and lawmakers. Sean M. Kirkpatrick, a former chief scientist at the Missile and Space Intelligence Center, which is part of the Defense Intelligence Agency, was named director.
The introduction of drones and other airborne clutter has led to an increase in sightings of unidentified objects in recent years. The government, which delivered its most recent update on Thursday, has found that weather balloons and surveillance operations by foreign powers accounted for most of the recent sightings of unidentified aerial phenomena, though dozens remain officially unexplained.
The defense funding bill requires the new office to work with the intelligence community to identify any nondisclosure agreements related to possible U.F.O. sightings. It also requires the office to create a process for people to share information, regardless of classification, and to share its findings with the highest levels of the Defense Department. It also mandates that the All-domain Anomaly Resolution Office be fully staffed.
“This is an office now that has a voice and resources, and it has authority,” Mr. Mellon, the former Defense Department official, said.
Susan Gough, a spokeswoman for the agency, said in an email that “the department is reviewing the enacted legislation.”
Jacques Vallée, a longtime ufologist, astronomer and computer scientist, said the amendment’s inclusion in the defense bill was “an absolute turning point.”
“This is what all scientists and my colleagues have always dreamed of,” said Dr. Vallée, who has helped study reports of U.F.O.s for the Centre National d’Études Spatiales, the French space agency. He said that the U.S. government’s agreement to dig into the past meant “the stigma has been removed.”
Dr. Vallée began studying the Trinity incident several years ago alongside a journalist, Paola Harris, and interviewed people who claimed to have witnessed the crash. Dr. Vallée and Ms. Harris chronicled their research in a book, “Trinity: The Best-Kept Secret,” including the details of the avocado-shaped object. They also spoke to witnesses who said they came across the object as children and found what they described as “little creatures.”
In the United States, Dr. Vallée said, “there has always been, on the part of the government, especially the Pentagon,” a sense that civilian sightings are unreliable. “The reason,” he said, “is that civilians don’t have the technology to really document what happens, and of course the Pentagon does.”
But, Dr. Vallée said, there’s no reason that “a farmer in his field” isn’t qualified to give a quality observation of a possible U.F.O. “The civilian observations tend to be longer, they tend to be more detailed, they tend to leave a trace that we can analyze,” he said.
He said he was working with a team at Stanford University to analyze samples of minerals and debris that were left after U.F.O. crashes or landings.
“I would hope that the new project would continue to do that because I think we’ve shown the way to do that scientifically,” Dr. Vallée said. He added, “We don’t have proof that a biologist can look at, but we have considerable statistical and now observational evidence that there must be life out there, that the Earth is not unique.”
At 83, Dr. Vallée still holds out hope for tangible evidence in his lifetime.
“Science is a moving frontier,” he said. “I want to have the right answers, even if they are small answers, rather than more speculation.”
The post Did Aliens Land on Earth in 1945? A Defense Bill Seeks Answers. appeared first on New York Times.
|
Everything has a history, and writers have for thousands of years tried to pull together a universal history of everything. “In earliest times,” the Hellenistic historian Polybius mused, in the second century B.C., “history was a series of unrelated episodes, but from now on history becomes an organic whole. Europe and Africa with Asia, and Asia with Africa and Europe.” For the past hundred years or so, each generation of English-language readers has been treated to a fresh blockbuster trying to synthesize world history. H. G. Wells’s “The Outline of History” (1920), written “to be read as much by Hindus or Moslems or Buddhists as by Americans and Western Europeans,” argued “that men form one universal brotherhood . . . that their individual lives, their nations and races, interbreed and blend and go on to merge again at last in one common human destiny.” Then came Arnold Toynbee, whose twelve-volume “Study of History” (1934-61), abridged into a best-selling two, proposed that human civilizations rose and fell in predictable stages. In time, Jared Diamond swept in with “Guns, Germs, and Steel” (1997), delivering an agriculture- and animal-powered explanation for the phases of human development. More recently, the field has belonged to Yuval Noah Harari, whose “Sapiens” (2011) describes the ascent of humankind over other species, and offers Silicon Valley-friendly speculations about a post-human future.
The appeal of such chronicles has something to do with the way they schematize history in the service of a master plot, identifying laws or tendencies that explain the course of human events. Western historians have long charted history as the linear, progressive working out of some larger design—courtesy of God, Nature, or Marx. Other historians, most influentially the fourteenth-century scholar Ibn Khaldun, embraced a sine-wave model of civilizational growth and decline. The cliché that “history repeats itself” promotes a cyclical version of events, reminiscent of the Hindu cosmology that divided time into four ages, each more degenerate than the last.
What if world history more resembles a family tree, its vectors hard to trace through cascading tiers, multiplying branches, and an ever-expanding jumble of names? This is the model, heavier on masters than on plot, suggested by Simon Sebag Montefiore’s “The World: A Family History of Humanity” (Knopf), a new synthesis that, as the title suggests, approaches the sweep of world history through the family—or, to be more precise, through families in power. In the course of some thirteen hundred pages, “The World” offers a monumental survey of dynastic rule: how to get it, how to keep it, how to squander it.
“The word family has an air of cosiness and affection, but of course in real life families can be webs of struggle and cruelty too,” Montefiore begins. Dynastic history, as he tells it, was riddled with rivalry, betrayal, and violence from the start. A prime example might be Julius Caesar’s adopted son Octavian, the founder of the Julio-Claudian dynasty, who consolidated his rule by entrapping and murdering Caesar’s biological son Caesarion, the last of the Ptolemies. Octavian’s ruthlessness looked anodyne compared with many other ancient successions, like that of the Achaemenid king Artaxerxes II, who was opposed by his mother and her favorite son. When the favorite died in battle against Artaxerxes, Montefiore reports, their mother executed one of his killers by scaphism, “in which the victim was enclosed between two boats while force-fed honey and milk until maggots, rats and flies infested their living faecal cocoon, eating them alive.” She also ordered the family of Artaxerxes’ wife to be buried alive, and murdered her daughter-in-law by feeding her poisoned fowl.
As such episodes suggest, it was one thing to hold power, another to pass it on peacefully. “Succession is the great test of a system; few manage it well,” Montefiore observes. Two distinct models coalesced in the thirteenth century. One was practiced by the Mongol empire and its successor states, which tended to hand power to whichever of a ruler’s sons proved the most able in warfare, politics, or internecine family feuds. The Mongol conquests were accompanied by rampant sexual violence; DNA evidence suggests that Genghis Khan may be “literally the father of Asia,” Montefiore writes. He insists, though, that “women among nomadic peoples enjoyed more freedom and authority than those in sedentary states,” and that the many wives, consorts, and concubines in a royal court could occasionally hold real power. The Tang-dynasty empress Wu worked her way up from concubine of the sixth rank through the roles of empress consort (wife), dowager (widow), and regent (mother), and finally became an empress in her own right. More than a millennium later, another low-ranking concubine who became de-facto ruler, Empress Dowager Cixi, contrasted herself with her peer Queen Victoria: “I don’t think her life was half so interesting and eventful as mine. . . . She had nothing to say about policy. Now look at me. I have 400 million dependent on my judgment.”
The political liability of these heir-splitting methods was that rival claimants might fracture the kingdom. The Ottomans handled this problem by dispatching a brigade of mute executioners, known as the Tongueless, to strangle a sultan’s male relatives, and so limit the shedding of royal blood. This made for intense power games in the harem, as mothers tussled to place their sons at the front of the line for succession. A sultan was supposed to stop visiting a consort once she’d given birth to a son, Montefiore explains, “so that each prince would be supported by one mother.” Suleiman the Magnificent—whose father cleared the way for him by having three brothers, seven nephews, and many of his own sons strangled—broke that rule with a young Ukrainian captive named Hürrem (also known as Roxelana). Suleiman had more than one son with Hürrem, freed her, and married her; he then had his eldest son by another mother strangled. But that left two of his and Hürrem’s surviving adult sons jockeying for the top position. After a failed bid to seize power, the younger escaped to Persia, where he was hunted down by the Tongueless and throttled.
A different model for dynasty-building relied on the apparently more tranquil method of intermarriage. Alexander the Great was an early adopter of exogamy as an accessory to conquest; Montefiore says that he merged “the elites of his new empire, Macedonians and Persians, in a mass multicultural wedding” at Susa in 324 B.C. Many other empire-builders through the centuries took up the tactic, notably the Mughal emperor Akbar, who followed his subjugation of the Rajputs by marrying a princess of Amber, and so, Montefiore notes, kicked off “a fusion of Tamerlanian and Rajput lineages with Sanskritic and Persian cultures” that transformed the arts of north India. But it was in Catholic Europe, with its insistence on monogamy and primogeniture, that royal matchmaking became an essential tool of dynasty-building. (The Catholic Church itself, which imposed celibacy on its own Fathers, Mothers, Brothers, and Sisters, kept power in the family when Popes positioned their nephews—nipote, in Italian—in positions of authority, a practice that, as Montefiore points out, gave us the term “nepotism.”)
The archetypal dynasty of this model was the Habsburgs. The family had been catapulted to prominence in the thirteenth century by the self-styled Count Rudolf, who presented himself as a godson of the Holy Roman Emperor Frederick II. Rudolf, recognizing the strategic value of family alliances, cannily married off five of his daughters to German princes, thus helping to cement his position as king of the Germans. His method was violently echoed by the Habsburg-sponsored conquistadores, who, in order to shore up their authority, forced the kinswomen of Motecuhzoma and Atahualpa into marriages. And it was to the Habsburgs that Napoleon Bonaparte turned when he sought a mother for his own hoped-for heir.
The ruthless biology of primogeniture tended to reduce women to the position of breeders—and occasionally men, too. Otto von Bismarck snidely called Saxe-Coburg, the home of Queen Victoria’s husband, Albert, the “stud farm of Europe.” This system conduced to inbreeding, and came at a genetic price. By the sixteenth century, the Holy Roman Emperor Charles V suffered from a massively protruding jaw, with a mouth agape and a stubby tongue slurring his speech. His son Philip II contended with a congenitally incapable heir, Don Carlos, who, Montefiore summarizes, abused animals, flagellated servant girls, defenestrated a page, and torched a house; he also tried to murder a number of courtiers, stage a coup in the Netherlands, stab his uncle, assassinate his father, and kill himself “by swallowing a diamond.” The Spanish Habsburg line ended a few generations later with “Carlos the Hexed,” whose parents were uncle and niece; he was, in Montefiore’s description, “born with a brain swelling, one kidney, one testicle and a jaw so deformed he could barely chew yet a throat so wide he could swallow chunks of meat,” along with “ambiguous genitalia” that may have contributed to his inability to sire an heir.
By the nineteenth century, European dynasts formed an incestuous thicket of cousins, virtually all of them descended from Charlemagne, and many, more proximately, from Queen Victoria. The First World War was the family feud to end them all. Triggered by the assassination of Franz Ferdinand, the heir of the Habsburg emperor Franz Josef, the war brought three first cousins into conflict: Kaiser Wilhelm II, Tsar Nicholas II, and King George V. (By then, Franz Josef’s only son had killed himself; his wife—and first cousin—had been stabbed to death; his brother Emperor Maximilian of Mexico had been executed; and another first cousin, Emperor Pedro II of Brazil, had been deposed.) The war would, Montefiore observes, ultimately “destroy the dynasties it was designed to save”: the Habsburgs, the Ottomans, the Romanovs, and the Hohenzollerns had all been ousted by 1922.
With the rise to political power of non-royal families in the twentieth century, Montefiore’s template for dynastic rule switches from monarchs to mafiosi. The Mafia model applies as readily to the Kennedys, whom Montefiore calls “a macho family business” with Mob ties, as to the Yeltsins, Boris and his daughter Tatiana, whose designated famiglia of oligarchs selected Vladimir Putin as their heir. In Montefiore’s view, Donald Trump is a wannabe dynast who installed a “disorganized, corrupt and nepotistic court” in democracy’s most iconic palace.
The Mafia metaphor also captures an important truth: a history of family power is a history of hit jobs, lately including Mohammed bin Salman’s ordering the dismemberment of Jamal Khashoggi—which has been linked to battles within the House of Saud—and Kim Jong Un’s arranging the murder of his half brother. In the late eighteenth century, the concept of family was taking on another role. Modern republican governments seized on the language of kinship—the Jacobins’ “fraternité,” the United States’ “Founding Fathers”—to forge political communities detached from specific dynasties. Versions of the title “Father of the Nation” have been bestowed on leaders from Argentina’s José de San Martín to Zambia’s Kenneth Kaunda. Immanuel Kant, among others, believed that democracies would be more peaceful than monarchies, because they would be free from dynastic struggles. But some of the bloodiest conflicts of modern times have instead hinged on who does and doesn’t belong to which national “family.” Mustafa Kemal renamed himself “Father of the Turks” (Atatürk) in the wake of the Armenian genocide. A century later, Aung San Suu Kyi, the daughter of Myanmar’s “Father of the Nation,” refused to condemn the ethnic cleansing of the Rohingya, who have been denied citizenship and so excluded from counting as Burmese.
It was partly to counter the genocidal implications of nationalism that, in 1955, MoMA’s photography curator Edward Steichen launched “The Family of Man,” a major exhibition designed to showcase “the essential oneness of mankind throughout the world.” The trouble is that even the most intimately connected human family can divide against itself. In the final days of the Soviet Union, Montefiore recounts, the U.S. Secretary of State James Baker discussed the possibility of war in Ukraine with a member of the Politburo. The Soviet official observed that Ukraine had twelve million Russians and many were in mixed marriages, “so what kind of war would that be?” Baker told him, “A normal war.”
“The World” has the heft and character of a dictionary; it’s divided into twenty-three “acts,” each labelled by world-population figures and subdivided into sections headed by family names. Montefiore energetically fulfills his promise to write a “genuine world history, not unbalanced by excessive focus on Britain and Europe.” In zesty sentences and lively vignettes, he captures the widening global circuits of people, commerce, and culture. Here’s the Roman emperor Claudius parading down the streets of what is now Colchester on an elephant; there’s Manikongo Garcia holding court in what is today Angola “amid Flemish tapestries, wearing Indian linens, eating with cutlery of American silver.” Here are the Anglo-Saxon Mercian kings using Arabic dirhams as local currency; there’s the Khmer ruler Jayavarman VII converting the Hindu site of Angkor for Buddhist worship.
It’s largely up to the reader, though, to make meaning out of these portraits, especially when it comes to the conceit at the book’s center. For one thing, a “family history” is not the same as a “history of the family,” of the sort pioneered by social historians such as Philippe Ariès, Louise A. Tilly, and Lawrence Stone. Montefiore alludes only in passing to shifts such as the consolidation of the nuclear family in Europe after the Black Death, and to the effects on the family of the Industrial Revolution and modern contraception. He offers no sustained analysis of the implications that different family structures had for who could hold power and why.
To the extent that “The World” does have a plot, it concerns the resilience of dynastic power in the face of political transformation. Even today, more than forty nations have a monarch as the head of state, fifteen of them in the British Commonwealth. Yet in democracies, too, holding political power is very often a matter of family connections. “Well, Franklin, there’s nothing like keeping the name in the family,” Teddy Roosevelt remarked at the marriage of his niece Eleanor to her cousin. Americans balk at how many U.S. Presidential nominees in the past generation have been family members of former senators (George H. W. Bush, Al Gore), governors (Mitt Romney), and Presidents (George W. Bush, Hillary Clinton). That’s nothing compared with postwar Japan, where virtually every Prime Minister has come from a political family and some thirty per cent of parliamentary representatives are second generation. In Asia more generally, the path to power for women, especially, has often run through male relatives: of the eleven women who have led Asian democracies, nine have been the daughter, sister, or widow of a male leader. This isn’t how democracy was supposed to work.
Why is hereditary power so hard to shake? Montefiore argues that “dynastic reversion seems both natural and pragmatic when weak states are not trusted to deliver justice or protection and loyalties remain to kin not to institutions”—and new states, many of them hobbled by colonial rule, are rarely strong states. Then, people in power can bend the rules in ways that help them and their successors keep it. It’s not just monarchies that go autocratic: republics can get there all on their own.
A fuller answer, though, rests on the material reality of inheritance, which has systematically enriched some families and dispossessed others. This is most starkly illustrated by the history of slavery, which, as Montefiore frequently points out, has always been twinned with the history of family. Transatlantic slavery, in particular, was “an anti-familial institution” that captured families and ripped them apart, while creating conditions of sexual bondage that produced furtive parallel families. Sally Hemings was the daughter of her first owner, John Wayles; the half sister of her next owner, Martha Wayles; and the mistress of another, Martha’s husband, Thomas Jefferson. Jefferson’s children by Wayles and Hemings were simultaneously half siblings and cousins—one set enslaved, the other free. Even without such intimate ties, European family privilege was magnified in the distorting mirror of American slavery. In Guyana in 1823, for example, an enslaved man and his son Jack Gladstone led a rebellion against their British owner, John Gladstone. Jack Gladstone, for his role in the uprising, was exiled to St. Lucia. John Gladstone, for his ownership of more than two thousand enslaved workers, received the largest payout that the British government made to a slaveholder when slavery was abolished. John’s son William Gladstone, the future Liberal Prime Minister, gave his maiden speech in Parliament defending John’s treatment of his chattel labor.
The inheritance of money and status goes a long way toward explaining the prevalence of dynastic patterns in other sectors. Thomas Paine maintained that “a hereditary monarch is as absurd a position as a hereditary doctor,” and yet in many societies being a doctor often was hereditary. The same went for artists, bankers, soldiers, and more; the Paris executioner who lopped off Louis XVI’s head was preceded in his line of work by three generations of family members. Montefiore’s own family, Britain’s most prominent Sephardic dynasty, puts in the occasional appearance in these pages, alongside the Rothschilds (with whom the Montefiores intermarried); both were banking families, and their prominence endures in part because of the generational accumulation of wealth. A recent study of occupations in the United States shows that children are disproportionately likely to do the same job as one of their parents. The children of doctors are twenty times as likely as others to go into medicine; the children of textile-machine operators are hundreds of times more likely to operate textile machines. Children of academics—like me—are five times as likely to go into academia as others. It’s nepo babies all the way down.
There’s an obvious tension between the ideal of democracy, in which citizens enjoy equal standing regardless of family status, and the reality that the family persists as a prime mediator of social, cultural, and financial opportunities. That doesn’t mean that democracy is bound to be dynastic, any more than it means that families have to be superseded by the state. It does mean that dynasties play as persistent and paradoxical a role in many democracies as families do for many citizens of those democracies—can’t live with them, can’t live without them. ♦
A fuller answer, though, rests on the material reality of inheritance, which has systematically enriched some families and dispossessed others. This is most starkly illustrated by the history of slavery, which, as Montefiore frequently points out, has always been twinned with the history of family. Transatlantic slavery, in particular, was “an anti-familial institution” that captured families and ripped them apart, while creating conditions of sexual bondage that produced furtive parallel families. Sally Hemings was the daughter of her first owner, John Wayles; the half sister of her next owner, Martha Wayles; and the mistress of another, Martha’s husband, Thomas Jefferson. Jefferson’s children by Wayles and Hemings were simultaneously half siblings and cousins—one set enslaved, the other free. Even without such intimate ties, European family privilege was magnified in the distorting mirror of American slavery. In Guyana in 1823, for example, an enslaved man and his son Jack Gladstone led a rebellion against their British owner, John Gladstone. Jack Gladstone, for his role in the uprising, was exiled to St. Lucia. John Gladstone, for his ownership of more than two thousand enslaved workers, received the largest payout that the British government made to a slaveholder when slavery was abolished. John’s son William Gladstone, the future Liberal Prime Minister, gave his maiden speech in Parliament defending John’s treatment of his chattel labor.
The inheritance of money and status goes a long way toward explaining the prevalence of dynastic patterns in other sectors. Thomas Paine maintained that “a hereditary monarch is as absurd a position as a hereditary doctor,” and yet in many societies being a doctor often was hereditary. The same went for artists, bankers, soldiers, and more; the Paris executioner who lopped off Louis XVI’s head was preceded in his line of work by three generations of family members. Montefiore’s own family, Britain’s most prominent Sephardic dynasty, puts in the occasional appearance in these pages, alongside the Rothschilds (with whom the Montefiores intermarried); both were banking families, and their prominence endures in part because of the generational accumulation of wealth. A recent study of occupations in the United States shows that children are disproportionately likely to do the same job as one of their parents. The children of doctors are twenty times as likely as others to go into medicine; the children of textile-machine operators are hundreds of times more likely to operate textile machines. Children of academics—like me—are five times as likely to go into academia as others. It’s nepo babies all the way down.
There’s an obvious tension between the ideal of democracy, in which citizens enjoy equal standing regardless of family status, and the reality that the family persists as a prime mediator of social
|
Massachusetts Gov. Maura Healey — one of the country's first two openly lesbian elected governors and a descendant of Irish immigrants — addressed the Irish Senate on Tuesday to help commemorate the 30th anniversary of the decriminalization of homosexuality in Ireland.
“It was not so long ago, when the story of Irish-American unity, and the story of gay liberation would never have been told together,” Healey said in her speech to the Senate. “I’m here to say they are stories of the same people, threads in the same fabric that binds us across time and strengthens us to face the future.”
Ireland passed a law decriminalizing homosexual acts in 1993. Nearly two decades later, the predominantly Catholic nation legalized same-sex marriage, by popular vote, in 2015. The U.S. Supreme Court legalized same-sex marriage nationwide the same year.
“It’s been 19 years since we secured marriage equality in Massachusetts — eight years since both the citizens of Ireland and the Supreme Court of the United States, just one month apart, declared that 'love is love' once and for all,” Healey said.
Healey's trip also coincides with the 60th anniversary of a state visit to Ireland by another Massachusetts Democrat — President John F. Kennedy — that helped usher in an economic and cultural partnership between the two countries.
Healey's agenda during the weeklong trip, which begins and ends in Dublin, also includes business development meetings with Irish business leaders in technology and clean energy. Her visit doubles as a trade mission, her first since taking office in January.
Healey's pitch is that Massachusetts offers a lot that Irish businesses will find attractive, from the state's “world-class education and research institutions to our cutting-edge biotechnology and clean energy sectors to our commitment to protecting civil rights and freedom.”
Healey traces her Irish ancestry on her mother’s side to Ballinasloe, County Galway. Her maternal great-grandmother Katherine Tracy emigrated to America at age 16 in 1912. On the paternal side, Healey’s grandfather came from Kilgarvan, County Kerry, and her grandmother came from Macroom, County Cork.
Healey said her story is just one of millions of emigrant stories that helped build Massachusetts and the United States.
“Our Irish ancestors left behind everything they knew and worked hard to give us all we would need,” Healey said. “I was raised with the values they passed on — taking care of your family, taking responsibility for the welfare of your community, and looking out for those who need a helping hand.”
Lt. Gov. Kim Driscoll is serving as Massachusetts acting governor during Healey's trip.
|
TURES: Did a person or a computer write this column?
Published 10:30 am Tuesday, January 31, 2023
From colleges to the comic strip Doonesbury to just about any area focusing on composition, the subject of artificial intelligence and enhanced writing tools possibly replacing human input is causing nervousness. Could you tell which parts of this column were written by me, and which parts are computer-generated? The answer is at the end of this essay.
The concerns in academia are obvious. A.I. could help students cheat on papers by providing them with pre-written or pre-generated content for their assignments. This content can be generated through natural language processing techniques, such as text summarization or text generation, which can create essays, research papers, or other written work that is similar in style and content to the student’s own writing.
But others see the potential for artificial intelligence to be helpful when it comes to student assignments. AI can assist students in writing by providing feedback on grammar, sentence structure, and vocabulary. It can also assist in organizing and outlining a piece of writing, as well as providing suggestions for improving the overall quality of the writing.
Just as the technology might be used by students, it could also be adopted by professors. A.I. could be used to detect plagiarism by comparing a student’s work to existing content on the internet. However, it is important to note that cheating is unethical and can have serious consequences for students.
The concerns about the new directions of artificial intelligence go beyond colleges and universities.
AI has the potential to automate certain writing tasks, such as generating news articles or creating social media posts. However, it is unlikely to fully replace human writers as it lacks the creativity and emotional intelligence that humans possess. Additionally, AI-generated content may not have the same level of nuance and personal perspective that a human writer can provide. It is more likely that AI will augment the work of human writers, helping them to be more efficient and productive, rather than replacing them entirely.
What could be done about AI? Should it be banned? Would that even be legal? This is not so different from our history, where new technologies challenged the current role that humans played in the economy. After all, the steamboat replaced the men who worked the keel-boats, paddling their way across the river. But goods and passengers could be transported faster across longer distances. The humans had to adapt to changes, as they always have had to do.
As for academics who fear that students may use a computer to write their essays, this is not a new problem. In the past, a friend or parent could write the paper for the student, who would turn it in under their own name. In recent years, a student could go on the Internet and purchase a paper from a site that generates them for a price. Constant vigilance is needed from teachers, whether AI tools exist or not. The solution is to craft assignments and tasks in ways that identify, and even highlight and reward, the individual human contribution, so that AI becomes a complement to human writing and thought rather than a replacement for them.
In case you are curious about which parts of this were AI-generated, they came from paragraphs 2, 3, 4 and 5, after the first sentence prompt.
|
When construction is complete, the new all-electric facility in Lakewood aims to be net-zero, releasing no new emissions into the atmosphere.
But the new Food and Drug Administration lab has a secondary mission: To prove that cost-effective and energy efficient buildings can house serious scientific activities without risk of losing crucial research. Like hospitals, labs run 24/7, 365 days a year.
“You can’t shut the power off and shut the experiments down or the testing that they’re doing. But you can be more efficient in the way that you use the energy,” said Daniel Nikolich, project manager for the new FDA lab at the Denver Federal Center.
Funded through $79 million from a federal climate change and health care law, the new lab is part of the U.S. General Services Administration’s first batch of projects that looks to boost clean energy innovation by using energy-saving technologies and materials that minimize carbon emissions in construction and renovation projects at federal facilities. The administration owns more than 411,000 buildings across the country, including research and judicial complexes in Boulder and Denver.
The GSA estimates that its first round of clean energy projects, totaling more than $300 million from the Inflation Reduction Act, will help reduce greenhouse gas emissions by about 12,000 metric tons, which is equivalent to the emissions created by more than 25,000 cars in one year and reduce energy costs by $35 million over the next 20 years.
The project works toward President Joe Biden’s goal of reaching net-zero emissions from federal buildings by 2045.
“It’s great to see the federal government leading by example,” said Nissa Erickson, an associate at Boulder-based Southwest Energy Efficiency Project. “Using IRA funds to help pay for the state-of-the-art HVAC systems in this new lab makes a lot of sense. It’s important to demonstrate all-electric, net-zero energy buildings of many different types, commercial and residential. Moving to highly efficient, all-electric buildings will be essential to protect our climate — and it can save us money too.”
The new 70,000-square-foot Food and Drug Administration lab, which is set to open in 2026, will be decked with solar panels and other ways to offset power generated by coal or natural gas and reduce carbon dioxide emissions.
The three-story building will replace a building that was built as a munitions factory during World War II. The FDA moved into the nearly windowless two-story building at the Denver Federal Center in 1987, after it was modified into a lab.
The new lab will be built with glass on the south and west sides of the building to bring in sunlight, Nikolich said. The north and east sides will have less glass to help provide for better thermal insulation, he said.
To offset the energy used in heating and cooling the building, the lab will recapture energy as it leaves the building and put it back, using runaround loops, ground-source heat pumps and other technologies.
The building design and construction, a combination of steel and concrete, will help the lab use half the amount of energy a typical lab uses, Nikolich said. For perspective, a typical lab uses about 10 times as much energy as an office building of a similar size.
The FDA lab in Denver performs tests on a wide range of food for humans and animals, cosmetics and dietary supplements, and specializes in testing for food allergies and DNA fish barcoding, among other analyses.
The lab will still rely on a generator in the case of a power outage, but due to “fairly stable” power in Denver, Nikolich said he doesn’t expect it to be used often.
“We’re proud of the fact that it is going to be our first net-zero lab in the country,” said G.W. Emge, director of design and construction for GSA’s Rocky Mountain region. The energy efficiency will save the FDA money that would otherwise be spent to heat, cool and light the building.
“Every dollar they have to spend on energy is a dollar they can’t spend on their research,” he said. “So it’s kind of a win-win in that respect.”
Construction on the lab is set to start within the next year. It will be built on what is now a parking lot, south of the existing building.
The project also falls in line with a lofty goal proposed by some Colorado lawmakers to remove all carbon emissions by 2050. Senate Bill 16, which would also set tough interim goals for greenhouse gas reduction and try again for a 30% tax credit for clean electric lawn and garden equipment, is on the table in the legislature this year.
Nonprofit clean energy analysis group RMI hopes the push to build an all-electric, net-zero lab serves as a model for other facilities to adopt.
“Facilities like the Denver Federal Center can be difficult to electrify and decarbonize due to their high process and ventilation loads,” said Lucas Toffoli of the group’s Carbon-Free Building program. “So, we hope the integrated efficiency, energy recovery, and renewable energy strategies they’re using will provide a helpful model for high-performance lab build-outs across the U.S.”
|
As we are about to enter the month of February, which is celebrated not only as Black History Month but also as Reggae Month, there is the usual heightened awareness of culture here in Jamaica. The month is packed with cultural activities commemorating the respective themes. Discussions surrounding Marcus Garvey often focus on his political involvement; however, he also contributed significantly to the cultural development of Jamaica.
Garvey and the Universal Negro Improvement Association (U.N.I.A.) were very significant to cultural development in Jamaica because, before Garvey came along, Black people were not given the opportunity to perform in the mainstream artistic community. So, Garvey was a forerunner in this regard, as he used his movement not only to train Black people as artistes, but to give them a sense of purpose and direction.
Garvey supported or launched the careers of several performers or artistes. These persons included:
- Ranny Williams – Dancer, actor, composer and singer
- Ernest Cupidon – Comedian and impersonator
- Miss Myrtle Bennett – Soprano
Garvey, himself, was quite talented. He wrote seven plays and directed several more. He wrote poetry, most of it while he was in prison in the U.S. He also wrote songs, the most popular being Keep Cool, likewise written during his imprisonment. Garvey was also an exceptional orator.
Long after his death in 1940, Garvey continues to influence the Arts. Artists use him as inspiration for art and craft pieces, likewise poets for their poetry. Musicians such as Peter Tosh, Burning Spear and Fred Locks, among others, compose lyrics about him or use his philosophies in their songs. One of the most popular musicians who was influenced by Garvey was Bob Marley. Many of his songs included Garvey’s views on Africa, unity and the upliftment of Black people. Songs like Africa Unite and Redemption Song with the popular lyrics, “Emancipate yourself from mental slavery. None but ourselves can free our minds.”
Finally, Garvey influenced the Rastafari Movement. The movement was founded on Garvey’s philosophy and opinions of a united Black race and repatriation to Africa. This was probably because the founders of Rastafari were Garveyites: Leonard Howell, Joseph Hibbert, Archibald Dunkley and Robert Hinds. Rastas revere Garvey as a prophet, and the movement has become one of the most iconic representations of Jamaica’s culture, boasting worldwide recognition and influence.
Marcus Garvey was a cultured man who had great ideas to empower Black people and build their character, and he used the Arts extensively to help achieve this.
Sign up for CNN’s Wonder Theory science newsletter. Explore the universe with news on fascinating discoveries, scientific advancements and more.
When the Voyager probes lifted off weeks apart in 1977, no one expected that the twin spacecraft would have their missions extended from four years to 45 years and counting.
Now, the mission team is getting creative with its strategies for the power supply and instruments on both Voyager 1 and 2 to enable both probes to continue collecting valuable data as they explore uncharted interstellar territory.
Voyager 1 is currently the farthest spacecraft from Earth at about 15 billion miles (24 billion kilometers) away, while Voyager 2 has traveled more than 12 billion miles (20 billion kilometers) from Earth. Both are in interstellar space and the only spacecraft to operate beyond the heliosphere, the sun’s bubble of magnetic fields and particles that extends well beyond the orbit of Pluto.
As the sole extensions of humanity outside the heliosphere’s protective bubble, the two probes are alone even on their cosmic treks as they travel in different directions.
Think of the planets of the solar system as existing in one plane. Voyager 1’s trajectory took it up and out of the plane of the planets after it passed Saturn, while Voyager 2 passed over the top of Neptune and moved down and out of the plane of planets, said Suzanne Dodd, Voyager’s project manager at NASA’s Jet Propulsion Laboratory in Pasadena, California.
The information collected by these long-lived probes is helping scientists learn about the cometlike shape of the heliosphere and how it protects Earth from energized particles and radiation in interstellar space.
Voyager 2’s priceless data is captured and returned to Earth through its five science instruments, while Voyager 1 still has four operational instruments after one failed earlier in the mission.
But it has taken a lot of care and monitoring to keep the “senior citizens” operating, Dodd said.
“I kind of describe them as twin sisters,” Dodd told CNN. “One has lost its hearing and it needs some hearing aids, and another one has lost some sense of touch. So, they’ve failed differently over time. But in a general sense, they’re very healthy for how old they are.”
Instruments designed to look at the planets as the Voyager probes toured the solar system in the 1980s have been turned off to repurpose memory for the interstellar mission that began in 1990, Dodd said. Voyager 1 reached the heliosphere boundary in 2012, while the slower Voyager 2 crossed the boundary in 2018.
Both Voyager probes rely on radioisotope thermoelectric generators. The nuclear power supply loses 4 watts per year as the plutonium it relies on slowly decays and its heat is converted into electricity. Over time, the Voyager team has commanded the probes to turn off instrument heaters and other nonessential systems.
“But (Voyager) also gets very cold and we need to keep the propellant lines warm enough, about 2 degrees Celsius (35.6 degrees Fahrenheit). If they freeze, then we could lose our ability to point to Earth. So it’s a balancing act between power and thermal and how we operate the spacecraft,” Dodd said.
A delicate balance
The team was pleasantly surprised that the instruments recalibrated to become slightly more sensitive in their data collection because some of the Voyager detectors operate better when colder.
“One way to look at it is maybe to think about the two Voyagers as being like cabins at the top of a mountain, and it’s very cold there,” said Linda Spilker, Voyager’s project scientist at JPL. “Little by little you’re having to turn the lights out inside to conserve your power. And you’re also having to turn your thermostat down, and yet you’re still working just fine.”
Voyager 2 has begun using a small backup power reserve that was part of a safety mechanism, which will enable the spacecraft to keep from shutting down another science instrument until 2026, rather than this year. The safety mechanism, which protects the instruments in case the flow of electricity changes significantly on the spacecraft, contained a small amount of power that acted as a backup circuit.
Now, that power can be used to keep Voyager 2’s instruments up and running.
The spacecraft’s electrical systems remain largely stable, so the team determined it was a small risk for the larger reward of being able to collect science data. The team will continue to monitor Voyager 2’s voltage and act accordingly if there are fluctuations.
If this strategy works for Voyager 2, it may also be implemented on Voyager 1, since the team will have to consider shutting down another science instrument on the spacecraft in 2024.
“Instead of turning off a science instrument, we’d like to maybe do something very creative, engineering-wise, in order to get another year of science data,” Dodd said. “It’s operating the spacecraft in a way it was never designed to be operated.”
Voyager 2’s plasma science instrument is still functioning, so it can take direct measurements of the density of the plasma in interstellar space. Space plasma is matter made of charged particles, the movement of which is controlled by electric and magnetic forces, according to NASA.
“Picture it as an ocean of space with waves and turbulence and activity going on, and the Voyager instruments can measure what’s happening,” Spilker said. “Before you go to a new place, you make predictions of what you think you might find when you get there. With Voyager, we’ve learned to be surprised.”
Scientists expected that the density of plasma would go down as Voyager traveled further from the sun, but it has increased instead. And the probes can measure and see shocks as they propagate out from the sun, Spilker said.
As long as both Voyager 1 and 2 remain healthy, it’s likely the aging probes will continue their record-breaking missions for years to come.
Garden Guy column: Go green in the garden
What do I mean by “go green in the garden?” For me it is gardening in as environmentally friendly, sustainable, and economical a way as I can. I have found that being environmentally friendly and sustainable automatically produces good economic results.
Being environmentally friendly goes beyond discontinuing the use of synthetic fertilizers, insecticides, fungicides, herbicides, and other “cides.” It means such things as recycling garden wastes through composting, watering deeply and infrequently, mowing high only when necessary, mulching, selecting proper plants for the landscape, and other practices that I have written about over the years.
Look for ways to minimize the use of precious natural resources like water. Look for ways to decrease the need for fertilizing. Look for ways to decrease the use of air-polluting machines. These are but a few things to think about to work toward going green.
Sustainability is a bit harder to wrap your arms around. When I think of sustainability, I think of using resources and practices that don’t compromise future generations’ ability to have or use the same. I don’t get too precise when addressing this issue because of its complexity. I simply try to use good practices that encourage my gardens to continue from year to year with minimal intervention and minimal use of finite resources.
The basic gardening practice that addresses all these issues and is the most important thing a gardener can do is that of building the soil, or more precisely, the soil food web. Good, healthy soil provides most of the nutrients and other needs of plants, resulting in healthy plants.
Strong, healthy plants resist disease, insects, and other pests. This reduces or eliminates the need for the “cides” and for fast food in the form of synthetic fertilizers. Healthy soil also decreases the need for more water since it retains water better and the healthy plants utilize the water better.
The healthier the soil, the fewer inputs (water, fertilizer, pesticides, labor) are required to maintain lawns and gardens. This alone produces better economic results, since less money is spent.
While my gardening is entirely ornamental, in recent years gardening has experienced a resurgence of interest in home food production. This has come both in traditional form, that is, home garden plots, and in mixing edible plants in with ornamental plants, known as “the edible garden.”
Many people are learning more about gardening out of a desire to know where their food comes from and how it has been treated, so they can have healthier, better-tasting food. In the process, they learn that gardening is not only a healthy pursuit but also a rewarding journey.
Why not vow to begin gardening in a greener way in 2023? It is not difficult. In the long run it is simpler, easier, and cheaper. It is just a mind-set that takes different thinking. In the process you will find many rewards while benefiting yourself and the environment. Your children and grandchildren will thank you.
Aug. 19, 1958
Inspired by the success in Wichita, Kansas, the NAACP Youth Council in Oklahoma City, led by Clara Luper, a high school history teacher, began sit-ins to challenge the all-white lunch counters.
Luper had spent a lifetime fighting segregation. When she attended the University of Oklahoma, she encountered separate restrooms, separation in the classrooms, separate sections in the cafeteria.
“In one class a professor told me he had never taught a n—– and had never wanted to,” she recalled. “I moved that wall by staying in his class and working so hard that at the end of the school term, he confessed his sins.”
On that day in 1958, she led the students into the Katz drugstore, where they sat down and ordered Cokes. They were refused service, and white customers jeered at them and called them names. Some coughed in their faces, and one child was knocked to the ground.
Despite the abuse, they remained nonviolent, and days later, Katz desegregated lunch counters. The protests spread to other restaurants, theaters, hotels and churches. She went on to lead campaigns for Black Americans to have equal banking rights, voting rights, job opportunities and housing.
In 1965, she joined the march in Selma, where Alabama troopers attacked the protesters with tear gas and billy clubs. She received a deep cut in her leg from the attack. A year later, she led a march to Lawton, Oklahoma, that ended with the city vowing to eliminate racial discrimination in all public places. In 1969, she worked with the striking sanitation workers, leading to better pay. In all, she was arrested 26 times for her civil rights protests.
In 1972, she unsuccessfully ran for the U.S. Senate. Asked by reporters if she could represent white people, she replied, “I can represent White People, Black People, Red People, Yellow People, Brown People, and Polka Dot People. You see, I have lived long enough to know that people are people.”
Oklahoma City University gives scholarships each year in her name, aiding financially needy students. She wrote a memoir on the civil rights campaigns titled, “Behold the Walls,” and when she died in 2011, flags flew at half-staff in her honor. She was inducted into the Oklahoma Hall of Fame, and a street in Oklahoma City now bears her name.
As a young girl growing up in Cincinnati, Tiya Miles experienced the positive impact that being outdoors can have on a child. In a grassy meadow just beyond the edge of her neighborhood, she had a memorable encounter with some frolicking rabbits who “formed a kind of a circle and seemed to move in unison,” as if they were dancing. It felt to her as if she were being let in on some arcane secret. “In that instant,” she writes, “I knew wonder was possible… I learned that the unexpected, the impossible, and even the magical could occur outside.”
Miles, a professor of history at Harvard University and the author of the National Book Award-winning “All That She Carried,” which followed an heirloom cotton sack as it was passed down through three generations of Black women in the mid-nineteenth century, has carried the lesson of that magic childhood moment into adulthood.
In her latest book, “Wild Girls: How The Outdoors Shaped The Women Who Challenged A Nation,” Miles draws upon her experience as a historian and her passion for the outdoors to offer readers an engrossing series of essays detailing the profound influence that time spent in the wilderness had on some of America’s most notable nineteenth-century women, including Harriet Tubman, Louisa May Alcott and Laura Smith Haviland.
“I didn’t know at first that the book was going to be about the ways in which girls who ended up being very influential in American culture experienced a honing of their characters outdoors,” says Miles. But Miles’ argument is significantly informed by her past research, including her study of African-American history. “I’ve been interested in how enslaved people managed to survive the atrocity of slavery for a long time,” says Miles. “The relationship to the natural world, while complex and vexed, was a big part of this.” The story of Harriet Tubman in particular, for whom Miles says the outdoors was both “tormentor and teacher,” helped to clarify the path the book would follow.
“Wild Girls” was also inspired by Miles’ efforts to expand access to wilderness spaces for girls living in urban communities. In 2011, following an eye-opening environmental justice tour of Detroit, Miles founded ECO Girls, an organization that worked primarily with elementary and middle school girls from cities in southeastern Michigan. “I developed ECO Girls as a way to teach girls in our area about the place that they lived,” says Miles. “To try to introduce them to the wondrous aspects of that place and foster a sense of belonging and stewardship in the hopes that, as they grew, they could feel like they could be defenders and protectors of it.”
At the heart of “Wild Girls” is a stark dichotomy. On one side, there are the confining, constraining spaces that stifled the creativity, personal development and freedom of 19th-century women and girls — the Victorian domestic sphere, the Southern plantation house, the boarding schools founded to educate and “civilize” young Native Americans. On the other is the wilderness, a vast, unstructured space that inspired free thinking and helped the book’s subjects build the confidence necessary to challenge authority and effect change in American society.
As a young girl growing up in Cincinnati, Tiya Miles experienced the positive impact that being outdoors can have on a child. In a grassy meadow just beyond the edge of her neighborhood, she had a memorable encounter with some frolicking rabbits who “formed a kind of a circle and seemed to move in unison,” as if they were dancing. It felt to her as if she were being let in on some arcane secret. “In that instant,” she writes, “I knew wonder was possible… I learned that the unexpected, the impossible, and even the magical could occur outside.”
Miles, a professor of history at Harvard University and the author of the National Book Award-winning “All That She Carried,” which followed an heirloom cotton sack as it was passed down through three generations of Black women in the mid-nineteenth century, has carried the lesson of that magic childhood moment into adulthood.
In her latest book, “Wild Girls: How The Outdoors Shaped The Women Who Challenged A Nation,” Miles draws upon her experience as a historian and her passion for the outdoors to offer readers an engrossing series of essays detailing the profound influence that time spent in the wilderness had on some of America’s most notable nineteenth-century women, including Harriet Tubman, Louisa May Alcott and Laura Smith Haviland.
“I didn’t know at first that the book was going to be about the ways in which girls who ended up being very influential in American culture experienced a honing of their characters outdoors,” says Miles. But Miles’ argument is significantly informed by her past research, including her study of African-American history. “I’ve been interested in how enslaved people managed to survive the atrocity of slavery for a long time,” says Miles. “The relationship to the natural world, while complex and vexed, was a big part of this.” The story of Harriet Tubman in particular, for whom Miles says the outdoors was both “tormentor and teacher,” helped to clarify the path the book would follow.
“Wild Girls” was also inspired by Miles’ efforts to expand access to wilderness spaces for girls living in urban communities. In 2011, following an eye-opening environmental justice tour of Detroit, Miles founded ECO Girls, an organization that worked primarily with elementary and middle school girls from cities in southeastern Michigan. “I developed ECO Girls as a way to teach girls in our area about the place that they lived,” says Miles. “To try to introduce them to the wondrous aspects of that place and foster a sense of belonging and stewardship in the hopes that, as they grew, they could feel like they could be defenders and protectors of it.”
At the heart of “Wild Girls” is a stark dichotomy. On one side, there are the confining, constraining spaces that stifled the creativity, personal development and freedom of 19th-century women and girls — the Victorian domestic sphere, the Southern plantation house, the boarding schools founded to educate and “civilize” young Native Americans. On the other is the wilderness, a vast, unstructured space that inspired free thinking and helped the book’s subjects build the confidence necessary to challenge authority and effect change in American society.
“Girls who managed to reverse this condition of domestic confinement and get outside — to move, play, journey, explore, escape, and push themselves physically and mentally — were able to expand their minds, test their grit, develop their skills, and profoundly alter the course of their lives,” writes Miles. They became outsiders in the truest sense, and this helped them see the world in a new way. “It gave them the perspective of distance.”
In those confining spaces, women’s behavior was fiercely policed, sometimes by the very structure of the built environment itself. For Miles, the outdoors presents an opportunity for women and girls to exist in an unbound space in which, for a time, they do not need to be so conscious of their gender and its implications. “Outdoor play can shape kids,” says Miles. “Our interior spaces are often quite gendered. Once you’re in a space that’s not telling you what you should and shouldn’t like, you might realize new preferences.” And once you’re in a space that’s not telling you what you can and cannot do, you may learn to question the arbitrary nature of the rules that govern your behavior.
She cites Louisa May Alcott as someone who was able to see through the absurdity of restrictive Victorian social mores thanks to the lessons she learned by sneaking away to climb trees and wander around Walden Pond. In those journeys, she managed to find a way of escaping contemporary expectations of feminine modesty and self-control. As Laurel Thatcher Ulrich famously wrote, well-behaved women seldom make history.
Miles hopes that, as a culture, we will become more willing to give children a long leash when it comes to exploring their environment. She imagines a world where every child has a chance to find themselves outside. “Being independent outdoors with kids your own age makes for the best learning experiences,” she says. “We should fight to open access to these spaces for anyone who doesn’t have access to them.”
“For me, this project was a joy,” says Miles. “Even though so much of what I write about is the struggle these girls faced, I felt so inspired revisiting their childhoods, seeing how they embraced their surroundings and, when their surroundings were a threat, stood up for other people.”
Tiya Miles will discuss her new book on Sept. 21 at the American Ancestors/New England Historic Genealogical Society and on Oct. 3 as part of "Beyond the Page."
Editor Comment: The aftermath of WWII for every German around the world was profound; many millions suffered the horrors of sanctioned and premeditated starvation, rape, torture, execution, slavery and dispossession. Those Germans who did not suffer these horrors still became victims of an incredibly well orchestrated propaganda campaign demonizing them. Consider the sensational lie that Jews were turned into lampshades; it is one of approximately 6 million lies created to incite hatred towards Germans. This was necessary to ensure the sustained maltreatment of an entire ethnic population over many years that would otherwise have been deemed immoral and criminal.
In Operation Paperclip, every Allied nation (even Australia!) put their hand up and waved it madly for their share of the brightest German minds, as if they were each bidding against each other at a slave market on basement-bargain day. This article by John Wear takes you quietly into the post-war mental anguish of three brilliant and highly refined Anti-Nazi German scientists. We begin to realize the incredible loss Western Civilization has suffered. As we read how their spirits were deliberately broken through systematic demoralization an Orwellian feel emerges. One reminiscent of oppressive Communist regimes, like our Ally the USSR. You might even wonder who actually won WWII?
The Mentality of German Scientists After World War II
The end of World War II brought a crisis in Germany that is rarely mentioned in the history books. The Allied denazification program and extreme economic deprivations in Germany created bitter feelings among leading German scientists. Even vehemently anti-Nazi German scientists began to realize that the Allied occupation was a system of repression no better than what they had experienced under the National Socialist regime.
This article will focus primarily on the mental aftermath after the war of three of Germany’s greatest scientists: Max von Laue, Otto Hahn, and Werner Heisenberg.
Max von Laue
German Nobel-laureate physicist Max von Laue earned an international reputation for being courageously anti-Nazi. In a speech at an annual Physicists Conference on September 18, 1933, von Laue unmistakably implied a comparison of the Nazi government’s attitude toward Einstein and relativity theory with the attitude of the Inquisition toward Galileo. When Jewish chemist Fritz Haber died in January 1934, von Laue published a tribute to his former colleague in two widely read and prestigious scientific journals. Von Laue’s speech and obituaries resulted in reprimands from the Prussian Ministry of Education.
Other similar actions made von Laue a symbol for refusal to cooperate with the Nazis. Von Laue indicated after the war that he stayed in Germany for a number of reasons, one of which was not to take away badly needed positions abroad from exiled Jewish physicists. However, his primary reason for staying in Germany was
“I wanted also to be there once the collapse of the ‘Third Reich’—which I always foresaw and hoped for—allowed the possibility of a cultural reconstruction upon the ruins this Reich created.”
Max von Laue was interned immediately after the war in England in a house named Farm Hall even though he never worked on the German atomic bomb project. Von Laue returned after his internment to a devastated Germany. Everywhere there were severe shortages of food, clothing and shelter. German children begged for food while their parents rummaged through garbage for whatever food they could find. Von Laue wrote to his son Theodore at Princeton in 1946:
“[The Germans] are immeasurably depressed. The complete suffering of war makes itself felt only now.”
Max von Laue also soon became disillusioned with the Allied denazification program. Von Laue wrote his son:
“More ‘denazification’ is going on here. My colleagues and I are now supposed to fill out our fourth questionnaire, a monster of 12 pages and with 133 questions! We declared that we are refusing to fill it out. The thing is beginning to get humiliating.”
Von Laue also angrily complained to his son that denazification as practiced by the Americans in particular made “every use of reason impossible.”
As a courageous anti-Nazi, von Laue was frequently called upon to defend German scientists after the war. Niels Bohr, the great Danish physicist, wrote to Otto Hahn in 1946 suggesting that German scientists should publicly apologize for the treatment of scientists in countries occupied by Nazi Germany. Max von Laue responded by writing:
I hardly believe that the Germans coming into consideration would find themselves ready to do so. In any event, I am against it. Such self-evidences are not said so specifically, least of all in formal declaration. If our colleagues abroad would like to hear such declarations documenting a distancing from the spirit of the Third Reich, they only need to take a look at the speeches that the presidents at German universities delivered at the inauguration of the new semester…
Von Laue also defended the motives of German scientists who had worked on the German atomic bomb during the war. In a review of Samuel Goudsmit’s book Alsos, American physicist Philip Morrison stated that the Germans
“worked for the cause of Himmler and Auschwitz, for the burners of books and the takers of hostages. The community of science will be long delayed in welcoming the armourers of the Nazis, even if their work was not successful.”
Von Laue wrote in reply that it was a “monstrous suggestion” that German scientists as a body worked for Himmler and Auschwitz. Von Laue also said he doubted whether Goudsmit could ever write objectively about the German atomic bomb program.
Otto Hahn
German chemist Otto Hahn was also strongly anti-Nazi. Hahn and nine other German scientists were interned in England for six months after the war in Farm Hall. On November 16, 1945, the Swedish Royal Academy announced that the Nobel Prize in Chemistry for 1944 would be given to Otto Hahn for his discovery of fission.
On November 10, 1946, shortly before Hahn’s departure for Sweden to receive his Nobel Prize, Hahn stated to a Swiss visitor:
You see, I had hoped for years for the time when we would be rid of the heavy mental burden of National Socialism, and how much I looked forward to being able to work freely and without hindrance. But now I am sitting here, a head without a body; I am not allowed to return to my institute because it lies in the French zone, and I have little idea about the other institutes, and here come new people every day wanting a job or a political exonerating certificate or whatever else. I simply cannot help these people. Formerly, I really used to be a cheerful person and was actually never pessimistic, but if people just come with demands and one can hardly move for all the restrictions, I simply cannot go on. And imagine, ludicrous though it may sound, at the moment I don’t even have a sound pair of shoes to put on. So, what use is it to me if the Nobel Prize is waiting for me in Sweden, which I am not allowed to pick up because I don’t get a travel permit and meanwhile, I submit one application after the next for months on end in vain for a pair of shoe soles. If they would at least send me a pair of shoe soles against the Nobel Prize account, then I wouldn’t have to walk around with wet feet all the time.
Otto Hahn was initially favorably disposed to the denazification process. However, by 1947 he had completely changed his mind. Hahn stated that German scientists
“profoundly regret how the ‘denazification’ is flipping into its obverse through the many measures, pushing true peace further and further away.”
Hahn also criticized the blatant lack of equal treatment resulting from regional variations and the many alterations to the guidelines of the denazification process.
Otto Hahn also wrote bitterly about the export of German scientists to foreign countries:
Most of the older professors leave Germany very unwillingly, because they feel that their place is here. Necessity compels them, because their livelihoods and working opportunities in their country are taken away from them or else they are left in a constant state of fear of such an occurrence. All this, after our having experienced well enough what it means to replace competence with “politically irreproachable” dilettantes. But more depresses these men: the awareness that it is evidently not a matter of an honorable appointment to an independent research institution or university of some rank but (at least according to the American press) forms a part of the “reparations.” Centuries ago, princes sent their countrymen away as plantation workers or soldiers. Today, scientists are exported.
Bitterness is a word that appears frequently in the writings of German scientists after the war. Otto Hahn wrote in 1949:
“It is certainly understandable that the factory dismantlings still taking place four years after the capitulation are being greeted with bitterness, particularly among the academic youth.”
Werner Heisenberg
Werner Heisenberg was one of the world’s leading physicists before World War II. Heisenberg was awarded the Nobel Prize in Physics for 1932, and he received several job offers from American universities in the summer of 1939. Despite his aversion to National Socialism and Adolf Hitler, Heisenberg decided to stay in Germany to help train Germany’s young physicists.
Heisenberg had exuded an air of delighted confidence and appetite for intellectual combat before World War II. Arnold Sommerfeld, his professor at Munich, called him healthy, eager, full of hope, uncomplicated. Wolfgang Pauli before the war called Heisenberg a Boy Scout. Heisenberg was completely changed after the war. Physicist Victor Weisskopf wrote in his memoirs,
“I saw Heisenberg after the war and he was completely changed from the man I had known…He visibly carried a load.”
Several of Heisenberg’s colleagues after the war also observed that he seemed to suffer from a perpetual depression.
Heisenberg suffered from his failure to explain his involvement in Germany’s atomic bomb program to his former friends. When Heisenberg met with Niels Bohr in August 1947, the two could not agree on even basic points of their last discussion in September 1941. Heisenberg had hoped in 1941 that he could obtain Bohr’s help in reaching an agreement among physicists not to build an atomic bomb during the war. Bohr had not wanted to pursue Heisenberg’s suggestion, and apparently did not trust Heisenberg’s motives. Germany had driven many of its leading scientists into exile before the war, and it seemed to Bohr that Heisenberg was seeking to negate this Allied advantage in the development of atomic bombs.
Although they had been the closest of friends, Bohr and Heisenberg were unable to communicate either in September 1941 or in August 1947. After a while the two great physicists felt it would be better to stop disturbing the spirits of the past. Their close friendship had been shattered.
An important point to make concerning Heisenberg’s meeting with Bohr in September 1941 is that Heisenberg had no official authority to tell Bohr anything about the German atomic bomb project. Heisenberg had committed an act of treason by attempting to obtain an international agreement among physicists not to build an atomic bomb during the war. Heisenberg had courageously risked his life in their meeting.
Heisenberg did not fare any better with his former friend Samuel Goudsmit. Goudsmit had written a book entitled Alsos that was highly critical of the German atomic bomb program. Heisenberg patiently tried to explain the factual misstatements in Alsos. Goudsmit grudgingly conceded some mistakes he had made in his book, but was infuriated by Heisenberg’s claim of “a sense of decency” and his insistence that a “moral decision” was involved in the question of whether German scientists would build a bomb for Germany.
The remarkable thing about Alsos is that Goudsmit claimed to see documentation that his parents had died in a Nazi gas chamber. Goudsmit states:
“The world has always admired the Germans for their orderliness. They are so systematic; they have such a sense of correctness. That is why they kept such precise records of their evil deeds, which we later found in their proper files in Germany. And that is why I know the precise date my father and my blind mother were put to death in the gas chamber. It was my father’s 70th birthday.”
Since Goudsmit spoke fluent German and no documentation concerning Nazi gas chambers has ever been found, Goudsmit is certainly lying about seeing records that his parents were put to death in a Nazi gas chamber. Yet Goudsmit hypocritically questioned the morality of the German scientists who worked on the atomic bomb. In his last letter to Heisenberg in June 1949, Goudsmit wrote he was ending their discussions because “I am afraid that we might lose our tempers.”
The subject of the German atomic bomb program continued to remain touchy. In the fall of 1949 Heisenberg made his first trip to the United States in over 10 years. Victor Weisskopf, who was then a physicist at MIT, held a reception for Heisenberg in Weisskopf’s home. Approximately half of the guests Weisskopf had invited failed to appear at the reception. They all gave Weisskopf similar explanations for staying away; they didn’t want to shake the hand of a man who had tried to build an atomic bomb for Hitler.
The cold reception continued for years. In the early 1950s, Heisenberg’s wife Elisabeth sat next to James Franck at a physics conference on Lake Como in Italy. Elisabeth Heisenberg told Franck that she and Werner felt terribly isolated; people treated them coldly and blamed them for things they hadn’t done. Franck unsympathetically replied:
“This is the way we Jews were always treated—now the Germans must live with it.”
Even Heisenberg’s appetite for competition became weakened by years of postwar humiliation. In a film made in 1965, for example, Heisenberg conceded a discussion point to Paul Dirac which 30 years before he would have vehemently contested.
German scientists were not allowed freedom of speech after the end of World War II. The physicists released from Farm Hall were told what they were allowed to say in public and initially were allowed only to reside within the British zone. One scientist wrote to Walther Gerlach:
“People are all so timid, perhaps justifiably so, for one can’t speak one’s mind as freely now anymore as during the Third Reich.”
German scientists were also filled with bitterness and cynicism after the war. The Allied denazification program was especially unpopular. One German wrote in his diary:
“These mindless dismissals of all former Nazis could drive one to desperation. The method only shows that the Americans are no smarter than their predecessors, the Nazis. What did a reasonable man say to me yesterday? From a mild dictatorship with its faults we have now arrived at a severe dictatorship.”
Ultimately, even anti-Nazi German scientists regarded the Allied postwar occupation of Germany as merely a substitution of one hated system of restrictions imposed by the Nazis with another. The Allied denazification program, the forced transfer of German scientists to other countries, the restrictions of free speech, and the poverty and starvation in postwar Germany created bitterness and depression among even the most anti-Nazi German scientists.
Beyerchen, Alan D., Scientists Under Hitler: Politics and the Physics Community in the Third Reich, New Haven, CT: Yale University Press, 1979, pp. 64-65.
Ibid., p. 65.
Cassidy, David C., Beyond Uncertainty: Heisenberg, Quantum Physics, and the Bomb, New York: Bellevue Literary Press, 2010, p. 390.
Hentschel, Klaus, The Mental Aftermath: The Mentality of German Physicists, 1945-1949, Oxford: Oxford University Press, 2007, p. 105.
Cassidy, David C., Beyond Uncertainty: Heisenberg, Quantum Physics, and the Bomb, New York: Bellevue Literary Press, 2010, p. 395.
Hentschel, Klaus, The Mental Aftermath: The Mentality of German Physicists, 1945-1949, Oxford: Oxford University Press, 2007, p. 127.
Powers, Thomas, Heisenberg’s War: The Secret History of the German Bomb, New York: Alfred A. Knopf, 1993, p. 457.
Bernstein, Jeremy, Hitler’s Uranium Club: The Secret Recordings at Farm Hall, 2nd edition, New York: Copernicus Books, 2001, p. 281.
Hentschel, Klaus, The Mental Aftermath: The Mentality of German Physicists, 1945-1949, Oxford: Oxford University Press, 2007, p. 129.
Ibid., p. 53.
Ibid., pp. 81-82.
Ibid., p. 81.
Powers, Thomas, Heisenberg’s War: The Secret History of the German Bomb, New York: Alfred A. Knopf, 1993, pp. 3-12.
Ibid., pp. 460-461.
Cassidy, David C., Beyond Uncertainty: Heisenberg, Quantum Physics, and the Bomb, New York: Bellevue Literary Press, 2010, p. 394.
Powers, Thomas, Heisenberg’s War: The Secret History of the German Bomb, New York: Alfred A. Knopf, 1993, pp. 454-455.
Ibid., p. 511.
Ibid., pp. 455-457.
Goudsmit, Samuel A., Alsos, Los Angeles and San Francisco: Tomash Publishers, 1986, pp. 48-49.
Powers, Thomas, Heisenberg’s War: The Secret History of the German Bomb, New York: Alfred A. Knopf, 1993, p. 470.
Ibid., p. 458.
Farmelo, Graham, The Strangest Man: The Hidden Life of Paul Dirac, Mystic of the Atom, New York: Basic Books, 2009, pp. 377-378.
Hentschel, Klaus, The Mental Aftermath: The Mentality of German Physicists, 1945-1949, Oxford: Oxford University Press, 2007, pp. 41-43.
Ibid., pp. 44-45.
Ibid., p. 64.
Originally found at https://wearswar.wordpress.com/2017/09/02/stealing-minds-breaking-spirits-the-life-of-anti-nazi-german-scientists-post-wwii/
Editor Comment: The aftermath of WWII for every German around the world was profound; many millions suffered the horrors of sanctioned and premeditated starvation, rape, torture, execution, slavery and dispossession. Those Germans who did not suffer these horrors still became victims of an incredibly well orchestrated propaganda campaign demonizing them. Consider the sensational lie that Jews were turned into lampshades; it is one of approximately 6 million lies created to incite hatred towards Germans. This was necessary to ensure the sustained maltreatment of an entire ethnic population over many years that would otherwise have been deemed immoral and criminal.
In Operation Paperclip, every Allied nation (even Australia!) put their hand up and waved it madly for their share of the brightest German minds, as if they were each bidding against each other at a slave market on basement-bargain day. This article by John Wear takes you quietly into the post-war mental anguish of three brilliant and highly refined Anti-Nazi German scientists. We begin to realize the incredible loss Western Civilization has suffered. As we read how their spirits were deliberately broken through systematic demoralization an Orwellian feel emerges. One reminiscent of oppressive Communist regimes, like our Ally the USSR. You might even wonder who actually won WWII?
The Mentality of German Scientists After World War II
The end of World War II brought a crisis in Germany that is rarely mentioned in the history books. The Allied denazification program and extreme economic deprivations in Germany created bitter feelings among leading German scientists. Even vehemently anti-Nazi German scientists began to realize that the Allied occupation was a system of repression no better than what they had experienced under the National Socialist regime.
This article will focus primarily on the mental aftermath after the war of three of Germany’s greatest scientists: Max von Laue, Otto Hahn, and Werner Heisenberg.
Max von Laue
German Nobel-laureate physicist Max von Laue earned an international reputation for being courageously anti-Nazi. In a speech at an
annual Physicists Conference on September 18, 1933, von Laue unmistakably implied a comparison of the Nazi government’s attitude toward Einstein and relativity theory with the attitude of the Inquisition toward Galileo. When Jewish chemist Fritz Haber died in January 1934, von Laue published a tribute to his former colleague in two widely read and prestigious scientific journals. Von Laue’s speech and obituaries resulted in
|
reprimands from the Prussian Ministry of Education.
Other similar actions made von Laue a symbol for refusal to cooperate with the Nazis. Von Laue indicated after the war that he stayed in Germany for a number of reasons, one of which was not to take away badly needed positions abroad from exiled Jewish physicists. However, his primary reason for staying in Germany was
“I wanted also to be there once the collapse of the ‘Third Reich’—which I always foresaw and hoped for—allowed the possibility of a cultural reconstruction upon the ruins this Reich created.”
Max von Laue was interned immediately after the war in England in a house named Farm Hall even though he never worked on the German atomic bomb project. Von Laue returned after his internment to a devastated Germany. Everywhere there were severe shortages of food, clothing and shelter. German children begged for food while their parents rummaged through garbage for whatever food they could find. Von Laue wrote to his son Theodore at Princeton in 1946:
“[The Germans] are immeasurably depressed. The complete suffering of war makes itself felt only now.”
Max von Laue also soon became disillusioned with the Allied denazification program. Von Laue wrote his son:
“More ‘denazification’ is going on here. My colleagues and I are now supposed to fill out our fourth questionnaire, a monster of 12 pages and with 133 questions! We declared that we are refusing to fill it out. The thing is beginning to get humiliating.”
Von Laue also angrily complained to his son that denazification as practiced by the Americans in particular made “every use of reason impossible.”
As a courageous anti-Nazi, von Laue was frequently called upon to defend German scientists after the war. Niels Bohr, the great Danish physicist, wrote to Otto Hahn in 1946 suggesting that German scientists should publicly apologize for the treatment of scientists in countries occupied by Nazi Germany. Max von Laue responded by writing:
I hardly believe that the Germans coming into consideration would find themselves ready to do so. In any event, I am against it. Such self-evidences are not said so specifically, least of all in formal declaration. If our colleagues abroad would like to hear such declarations documenting a distancing from the spirit of the Third Reich, they only need to take a look at the speeches that the presidents at German universities delivered at the inauguration of the new semester…
Von Laue also defended the motives of German scientists who had worked on the German atomic bomb during the war. In a review of Samuel Goudsmit’s book Alsos, American physicist Philip Morrison stated that the Germans
“worked for the cause of Himmler and Auschwitz, for the burners of books and the takers of hostages. The community of science will be long delayed in welcoming the armourers of the Nazis, even if their work was not successful.”
Von Laue wrote in reply that it was a “monstrous suggestion” that German scientists as a body worked for Himmler and Auschwitz. Von Laue also said he doubted whether Goudsmit could ever write objectively about the German atomic bomb program.
German chemist Otto Hahn was also strongly anti-Nazi. Hahn and nine other German 220px-Hahnfchscientists were interned in England for six months after the war in Farm Hall. On November 16, 1945, the Swedish Royal Academy announced that the Nobel Prize in Chemistry for 1944 would be given to Otto Hahn for his discovery of fission.
On November 10, 1946, shortly before Hahn’s departure for Sweden to receive his Nobel Prize, Hahn stated to a Swiss visitor:
You see, I had hoped for years for the time when we would be rid of the heavy mental burden of National Socialism, and how much I looked forward to being able to work freely and without hindrance. But now I am sitting here, a head without a body; I am not allowed to return to my institute because it lies in the French zone, and I have little idea about the other institutes, and here come new people every day wanting a job or a political exonerating certificate or whatever else. I simply cannot help these people. Formerly, I really used to be a cheerful person and was actually never pessimistic, but if people just come with demands and one can hardly move for all the restrictions, I simply cannot go on. And imagine, ludicrous though it may sound, at the moment I don’t even have a sound pair of shoes to put on. So, what use is it to me if the Nobel Prize is waiting for me in Sweden, which I am not allowed to pick up because I don’t get a travel permit and meanwhile, I submit one application after the next for months on end in vain for a pair of shoe soles. If they would at least send me a pair of shoe soles against the Nobel Prize account, then I wouldn’t have to walk around with wet feet all the time.
Otto Hahn was initially favorably disposed to the denazification process. However, by 1947 he had completely changed his mind. Hahn stated that German scientists
“profoundly regret how the ‘denazification’ is flipping into its obverse through the many measures, pushing true peace further and further away.”
Hahn also criticized the blatant lack of equal treatment resulting from regional variations and the many alterations to the guidelines of the denazification process.
Otto Hahn also wrote bitterly about the export of German scientists to foreign countries:
Most of the older professors leave Germany very unwillingly, because they feel that their place is here. Necessity compels them, because their livelihoods and working opportunities in their country are taken away from them or else they are left in a constant state of fear of such an occurrence. All this, after our having experienced well enough what it means to replace competence with “politically irreproachable” dilettantes. But more depresses these men: the awareness that it is evidently not a matter of an honorable appointment to an independent research institution or university of some rank but (at least according to the American press) forms a part of the “reparations.” Centuries ago, princes sent their countrymen away as plantation workers or soldiers. Today, scientists are exported.
Bitterness is a word that appears frequently in the writings of German scientists after the war. Otto Hahn wrote in 1949:
“It is certainly understandable that the factory dismantlings still taking place four years after the capitulation are being greeted with bitterness, particularly among the academic youth.”
Werner Heisenberg was one of the world’s leading physicists before World War II. Heisenberg was awarded the Nobel Prize in Physics for 1932, and he received several job offers from American universities in the summer of 1939. Despite his aversion to National Socialism and Adolf Hitler, Heisenberg decided to stay in Germany to help train Germany’s young physicists.
Heisenberg had exuded an air of delighted confidence and appetite for intellectual combat before World War II. Arnold Sommerfeld, his professor at Munich, called him healthy, eager, full of hope, uncomplicated. Wolfgang Pauli before the war called Heisenberg a Boy Scout. Heisenberg was completely changed after the war. Physicist Victor Weisskopf wrote in his memoirs,
“I saw Heisenberg after the war and he was completely changed from the man I had known…He visibly carried a load.”
Several of Heisenberg’s colleagues after the war also observed that he seemed to suffer from a perpetual depression.
Heisenberg suffered from his failure to explain his involvement in Germany’s atomic bomb program to his former friends. When Heisenberg met with Niels Bohr in August 1947, the two could not agree on even basic points of their last discussion in September 1941. Heisenberg had hoped in 1941 that he could obtain Bohr’s help in reaching an agreement among physicists not to build an atomic bomb during the war. Bohr had not wanted to pursue Heisenberg’s suggestion, and apparently did not trust Heisenberg’s motives. Germany had driven many of its leading scientists into exile before the war, and it seemed to Bohr that Heisenberg was seeking to negate this Allied advantage in the development of atomic bombs.
Although they had been the closest of friends, Bohr and Heisenberg were unable to communicate either in September 1941 or in August 1947. After a while the two great physicists felt it would be better to stop disturbing the spirits of the past. Their close friendship had been shattered.
An important point to make concerning Heisenberg’s meeting with Bohr in September 1941 is that Heisenberg had no official authority to tell Bohr anything about the German atomic bomb project. Heisenberg had committed an act of treason by attempting to obtain an international agreement among physicists not to build an atomic bomb during the war. Heisenberg had courageously risked his life in their meeting.
Heisenberg did not fare any better with his former friend Samuel Goudsmit. Goudsmit had written a book entitled Alsos that was highly critical of the German atomic bomb program. Heisenberg patiently tried to explain the factual misstatements in Alsos. Goudsmit grudgingly conceded some mistakes he had made in his book, but was infuriated by Heisenberg’s claim of “a sense of decency” and his insistence that a “moral decision” was involved in the question of whether German scientists would build a bomb for Germany.
The remarkable thing about Alsos is that Goudsmit claimed to see documentation that his parents had died in a Nazi gas chamber. Goudsmit states:
“The world has always admired the Germans for their orderliness. They are so systematic; they have such a sense of correctness. That is why they kept such precise records of their evil deeds, which we later found in their proper files in Germany. And that is why I know the precise date my father and my blind mother were put to death in the gas chamber. It was my father’s 70th birthday.”
Since Goudsmit spoke fluent German and no documentation concerning Nazi gas chambers has ever been found, Goudsmit is certainly lying about seeing records that his parents were put to death in a Nazi gas chamber. Yet Goudsmit hypocritically questioned the morality of the German scientists who worked on the atomic bomb. In his last letter to Heisenberg in June 1949, Goudsmit wrote he was ending their discussions because “I am afraid that we might lose our tempers.”
The subject of the German atomic bomb program continued to remain touchy. In the fall of 1949 Heisenberg made his first trip to the United States in over 10 years. Victor Weisskopf, who was then a physicist at MIT, held a reception for Heisenberg in Weisskopf’s home. Approximately half of the guests Weisskopf had invited failed to appear at the reception. They all gave Weisskopf similar explanations for staying away; they didn’t want to shake the hand of a man who had tried to build an atomic bomb for Hitler.
The cold reception continued for years. In the early 1950s, Heisenberg’s wife Elisabeth sat next to James Franck at a physics conference on Lake Como in Italy. Elisabeth Heisenberg told Franck that she and Werner felt terribly isolated; people treated them coldly and blamed them for things they hadn’t done. Franck unsympathetically replied:
“This is the way we Jews were always treated—now the Germans must live with it.”
Even Heisenberg’s appetite for competition was weakened by years of postwar humiliation. In a film made in 1965, for example, Heisenberg conceded a discussion point to Paul Dirac that, 30 years earlier, he would have vehemently contested.
German scientists were not allowed freedom of speech after the end of World War II. The physicists released from Farm Hall were told what they were allowed to say in public and initially were allowed only to reside within the British zone. One scientist wrote to Walther Gerlach:
“People are all so timid, perhaps justifiably so, for one can’t speak one’s mind as freely now anymore as during the Third Reich.”
German scientists were also filled with bitterness and cynicism after the war. The Allied denazification program was especially unpopular. One German wrote in his diary:
“These mindless dismissals of all former Nazis could drive one to desperation. The method only shows that the Americans are no smarter than their predecessors, the Nazis. What did a reasonable man say to me yesterday? From a mild dictatorship with its faults we have now arrived at a severe dictatorship.”
Ultimately, even anti-Nazi German scientists regarded the Allied postwar occupation of Germany as merely a substitution of one hated system of restrictions imposed by the Nazis with another. The Allied denazification program, the forced transfer of German scientists to other countries, the restrictions of free speech, and the poverty and starvation in postwar Germany created bitterness and depression among even the most anti-Nazi German scientists.
Beyerchen, Alan D., Scientists Under Hitler: Politics and the Physics Community in the Third Reich, New Haven, CT: Yale University Press, 1979, pp. 64-65.
Ibid., p. 65.
Cassidy, David C., Beyond Uncertainty: Heisenberg, Quantum Physics, and the Bomb, New York: Bellevue Literary Press, 2010, p. 390.
Hentschel, Klaus, The Mental Aftermath: The Mentality of German Physicists, 1945-1949, Oxford: Oxford University Press, 2007, p. 105.
Cassidy, David C., Beyond Uncertainty: Heisenberg, Quantum Physics, and the Bomb, New York: Bellevue Literary Press, 2010, p. 395.
Hentschel, Klaus, The Mental Aftermath: The Mentality of German Physicists, 1945-1949, Oxford: Oxford University Press, 2007, p. 127.
Powers, Thomas, Heisenberg’s War: The Secret History of the German Bomb, New York: Alfred A. Knopf, 1993, p. 457.
Bernstein, Jeremy, Hitler’s Uranium Club: The Secret Recordings at Farm Hall, 2nd edition, New York: Copernicus Books, 2001, p. 281.
Hentschel, Klaus, The Mental Aftermath: The Mentality of German Physicists, 1945-1949, Oxford: Oxford University Press, 2007, p. 129.
Ibid., p. 53.
Ibid., pp. 81-82.
Ibid., p. 81.
Powers, Thomas, Heisenberg’s War: The Secret History of the German Bomb, New York: Alfred A. Knopf, 1993, pp. 3-12.
Ibid., pp. 460-461.
Cassidy, David C., Beyond Uncertainty: Heisenberg, Quantum Physics, and the Bomb, New York: Bellevue Literary Press, 2010, p. 394.
Powers, Thomas, Heisenberg’s War: The Secret History of the German Bomb, New York: Alfred A. Knopf, 1993, pp. 454-455.
Ibid., p. 511.
Ibid., pp. 455-457.
Goudsmit, Samuel A., Alsos, Los Angeles and San Francisco: Tomash Publishers, 1986, pp. 48-49.
Powers, Thomas, Heisenberg’s War: The Secret History of the German Bomb, New York: Alfred A. Knopf, 1993, p. 470.
Ibid., p. 458.
Farmelo, Graham, The Strangest Man: The Hidden Life of Paul Dirac, Mystic of the Atom, New York: Basic Books, 2009, pp. 377-378.
Hentschel, Klaus, The Mental Aftermath: The Mentality of German Physicists, 1945-1949, Oxford: Oxford University Press, 2007, pp. 41-43.
Ibid., pp. 44-45.
Ibid., p. 64.
Originally found at https://wearswar.wordpress.com/2017/09/02/stealing-minds-breaking-spirits-the-life-of-anti-nazi-german-scientists-post-wwii/
The Freudian concept known as “reaction formation” refers to a psychological defense mechanism against guilt. It occurs when an individual responds to a shame-inducing instinct with an overcorrection. Much of modern American history appears to be in the grip of reaction formation. Mortification at the developed West’s historical misdeeds has produced a utopian narrative of indigenous worlds typified by matriarchy, cooperation, pacifism, and gender fluidity. That no such world ever existed is beside the point; much of history is narrated to suit the proclivities of the audience, not to tell the truth about what actually happened.
This seems to be especially true for our understanding of American Indian history. The violent migration of Europeans to the New World was very much like violent migrations throughout history and across cultures, most likely including successive waves of North American Indians (though the history there is murky). Yet instead of understanding these events in the context of larger historical patterns, the Indian Wars are cast as a morality tale in the manner of Howard Zinn, in which the actions of the European settlers are represented as uniquely reprehensible. This fantasy may be an inversion of past jingoistic and racist caricatures of American Indians as “savages,” but it is not more historically accurate.
I thought about this a lot as I read Pekka Hämäläinen’s fascinating and controversial new history of North American Indians, Indigenous Continent. Told largely from the perspective of the natives, Hämäläinen covers the centuries from the arrival of Europeans in North America through to the final subjugation of the last tribes in the late 19th century. It’s a gripping history, but watching the author attempt to come to terms with the history he is telling also makes for fascinating psychological analysis.
Hämäläinen is clearly sympathetic to the Indians. Indeed, the Europeans in his story tend to be portrayed as dirty, bumbling idiots who are repeatedly outwitted until, well, they’re not. Hämäläinen leans into this interpretation a bit much, and as a psychologist, I was as intrigued by how he grapples with history as by the historical evidence itself. His sympathy for the Indians is evidently in tension with his unwillingness to distort the facts. For this, I admire him, since history is often distorted to suit the needs of political and academic elites of any given period. But the author’s attempts to square the historical facts with the moral lessons he hopes to impart lead him into contradiction and incoherence.
On one page, Hämäläinen assures the reader that Indians were egalitarian, only to follow that assurance with numerous examples of how that was not true. This inconsistency surfaces early in the book, when Hämäläinen informs us that the Taino Indians encountered by Columbus were “hierarchical” and “stratified.” Elsewhere, we are told that Native Americans were generally respectful of women (the word “matrilineal” is asked to do some heavy lifting here), but we are also provided with specific examples of tribes keeping women as sex slaves, some of whom were brutally abused by tribe members.
Indeed, although the word “captive” makes a lot of appearances in the book, it is selectively employed. When Europeans take people unwillingly to harsh work environments, or to be sold to others, these victims are called “slaves.” But when American Indians do the same thing, Hämäläinen euphemistically describes those victims as “captives.” In fact, a number of tribes were energetic participants in the trade of other indigenous people, selling slaves to other tribes and to Europeans. Although Hämäläinen shows an admirable willingness to discuss such practices, his discomfort is palpable.
Rather than revealing the cultural chasm between indigenous people and Europeans, the historical record teaches us just how similar they were. Each vied for status and power, kept slaves, engaged in genocide against neighboring groups, mistreated women, indulged ethnocentrism, and so on. Tribes or confederations such as the Iroquois, Sioux, or Comanche were violent warrior cultures that recall the Spartans. This observation isn’t intended to denigrate Native Americans; it is simply evidence of our shared (if profoundly flawed) humanity.
What Europeans did to American Indians was often terrible, but Indians gave as good as they got, both to Europeans and each other. These stories are similar to those the world over—we are all equally capable of great horrors and cruelty and history provides few examples of morally unambiguous heroes. Embracing this universalist truth can help us to move past the morality tales so often told in the guise of history and discard a misbegotten and ultimately selfish indulgence in self-flagellation.
This has always been the problem with the Howard Zinn school of history. Zinn’s history of the US resembles a biography written by a bitter former spouse. In lieu of a nuanced and accurate historical account it offers a deliberate slander of our own culture. The result is at once self-indulgent and self-pitying. A balanced account must not flinch from examining our historical mistakes and misdeeds and those of others, but the modern approach to history has too often become a neurotic wallowing in half-truths of our own failures. The corresponding utopian fantasies of other cultures more closely resemble the morality play of a Tolkien novel than the more complex experiences of people who actually lived on Earth.
As UK-based IEA economist Kristian Niemietz recently observed in a short Twitter thread about “anti-Britishness,” signalling disgust at our own culture and history has little to do with truth or helping marginalized communities. Rather, it is a way to advertise the superficial cleverness of radical self-criticism. By castigating the United States on social media or with our K–12 or university students, we can flatter our moral egos without needing to donate money or time to communities in need. It fosters division, and the main beneficiaries are not Native Americans or other marginalized groups, but whoever is collecting likes and followers online.
We can do better than this. US history should be clear and accurate about the US’s misdeeds, but we should also acknowledge that the US overcame its faults to become a beacon for progress. In the same way, we should highlight the wonderful culture, arts, religion, and so on of American Indians without turning them into pious exemplars of pastoral innocence and moral instruction. Our “ethnic studies” curricula too often lapse into propaganda designed to indict and shame the West and all its works. People and cultures are complex. If students were permitted to understand that human failings are universal but can be overcome, it might help to alleviate the depression and anxiety of those unjustly burdened by the sins of their ancestors.
Indigenous Continent strives to provide an honest and fascinating account of historical complexity at a time when progressive activism is tightening its grip on the search for disinterested truth. But it also helps us to see, firsthand, how uncomfortable historians working in this environment are with the inconvenient truth of the stories they have to tell.
In honor of Coretta Scott King and Dr. Martin Luther King, Jr. and Black History Month, YWCA Greenwich and 47 community partners sponsored a powerful panel discussion entitled Climate Justice In Connecticut.
Four outstanding female leaders and experts in their fields educated us about the disproportionate impacts of climate change on communities of color and how Connecticut can take steps to not only promote climate justice, but also achieve climate justice through a more inclusive and equitable process.
Our guest moderator Denise Savageau, environmental consultant and longtime Director of Conservation for the Town of Greenwich, began with a variety of data demonstrating the impacts of climate change and where these impacts are felt the most. For example, the data predict an increase in extreme heat days from four to 48 by the year 2050, and those impacted the most will be communities of color in cities such as Bridgeport and Hartford.
Similarly, communities of color and other vulnerable communities in the state will face greater exposure to flood events going forward. Nationally we know that 70% of Black Americans live within 30 miles of a coal-fired power plant, while three out of every five Black Americans live in areas with uncontrolled waste sites.
Sharon Lewis, the Executive Director of the Connecticut Coalition for Environmental and Economic Justice, clarified the meaning of climate and environmental justice: when all share equally in the benefits and the burdens of climate change and environmental impacts, and when those most impacted lead the efforts to develop solutions.
For example, climate shifts and underinvestment in infrastructure have caused chronic sewage backups and home displacement for Lewis and her neighbors in Hartford, and there is “no urgency” in finding solutions or including those affected in developing solutions. “Race, not income, remains the primary indicator” for the siting of landfills, industrial facilities, and mass transportation, putting families of color in close proximity to burning facilities, waste disposal plants and highways. This correlation has been well documented by data and research, including two landmark studies published by the United Church of Christ that coined the term “environmental racism.”
Brenda Watson, the Executive Director of Operation Fuel, explained that 30% of all carbon emissions come from our homes, and 30% of our housing stock is old and needs to be weatherized properly. As part of the Governor’s Council on Climate Change, she is working toward solutions that will remediate mold and other issues that need to be addressed in connection with weatherization.
For example, we should mobilize for these purposes in the same way that the state mobilized for residents in Eastern Connecticut with crumbling foundations. Brenda also explained that rising energy bills are even harder to pay when people, especially renters, are living in substandard housing where the heat escapes easily.
State residents pay between 30% and 50% of household income on energy, when spending 6% or less on energy is considered “affordable.” Energy justice needs to be addressed holistically, with an understanding of the reliance on energy for medical devices, the vulnerability to heat-related death and the impact of energy loss on food insecurity. Policy and regulatory work should acknowledge access to electricity as a basic right.
Ashley Stewart, the Manager of Community Engagement for the Connecticut Green Bank, stressed the importance of listening to the most vulnerable, who experience climate impacts first, before those impacts become “a metric to be measured.” Our environmental justice communities “are our canaries in a coal mine, letting us know there are more impacts of climate change coming.” The Connecticut Green Bank is the first green bank in the country; it leads the state’s private investment in clean energy and has recently expanded its scope to include environmental infrastructure.
The Connecticut Green Bank is also integrating the voices and experiences of those most impacted by climate change through thoughtfully designing solutions and creating pathways for those most impacted to participate in the new green economy. The Green Bank has led the nation in clean energy investment and is aiming to continue that investment in areas of land conservation, parks, waste, agriculture and water. Much is needed in our most marginalized and disadvantaged communities, as Ashley explained, including remediation of existing brownfields, urban stormwater infrastructure in surrounding neighborhoods, and more equitable ways for citizens to provide public comment to policymakers and regulators that help to shape their communities.
Together the undersigned urge the state to promote climate justice and end environmental racism with updated public policy, regulatory reform for private and public utilities, and inclusive infrastructure siting and planning going forward.
Thanks to Sharon Lewis, we ended this critically important conversation with a powerful quote from Benjamin Franklin: “Justice will not be served until those who are unaffected are as outraged as those who are.”
Sharon Lewis is the Executive Director, Connecticut Coalition for Economic and Environmental Justice. Ashley Stewart is the Manager of Community Relations, Connecticut Green Bank. Denise Savageau is an Environmental Consultant and Former Town of Greenwich Conservation Director. Brenda Watson is the Executive Director of Operation Fuel. Mary Lee Kiernan is President and CEO of the YWCA of Greenwich.
State residents pay between 30% and 50% of household income on energy, when spending 6% or less on energy is considered “affordable.” Energy justice needs to be addressed holistically, with an understanding of the reliance on energy for medical devices, the vulnerability to heat-related death and the impact of energy loss on food insecurity. Policy and regulatory work should acknowledge access to electricity as a basic right.
Ashley Stewart, the Manager of Community Engagement for the Connecticut Green Bank stressed the importance of listening to the most vulnerable, who are experiencing climate impacts first and before most. She highlighted that these impacts are first experienced before they become “a metric to be measured”. Our environmental justice communities “are our canaries in a coal mine, letting us know there are more impacts of climate change coming”. The Connecticut Green Bank is the first green bank in the country and is leading the state’s private investment in clean energy, and has recently expanded their scope to include environmental infrastructure.
The Connecticut Green Bank is also integrating the voices and experiences of those most impacted by climate change through thoughtfully designing solutions and creating pathways for those most impacted to participate in the new green economy. The Green Bank has led the nation in clean energy investment and is aiming to continue that investment in areas of land conservation, parks, waste, agriculture and water. Much is needed in our most marginalized and disadvantaged communities, as Ashley explained, including remediation of existing brownfields, urban stormwater infrastructure in surrounding neighborhoods, and more equitable ways for citizens to provide public comment to policymakers and regulators that help to shape their communities.
Together the undersigned urge the state to promote climate justice and end environmental racism with updated public policy, regulatory reform for private and public utilities, and inclusive infrastructure siting and planning going forward.
Thanks to Sharon Lewis, we ended this critically important conversation with a powerful quote from Benjamin Franklin: “Justice will not be served until those who are unaffected are as outraged as those who are.”
Sharon Lewis is the Executive Director, Connecticut Coalition for Economic and Environmental Justice. Ashley Stewart is the Manager of Community Relations, Connecticut Green Bank. Denise Savageau is an Environmental Consultant and Former Town of Greenwich Conservation Director. Brenda Watson is the Executive Director of Operation Fuel. Mary Lee Kiernan is President and CEO of the YWCA of Greenwich.
|
Is it rain or snow? Researchers enlist citizen scientists to see what satellites can't
Researchers have developed a snow and rain tracker that gathers data from citizen scientists in an effort to improve the accuracy of avalanche warnings, road conditions and water management. Now, NASA's providing additional funding to expand the project to more parts of the Mountain West.
We all know that the freezing point of water is 32 degrees Fahrenheit. But when it's slightly warmer, it can be hard to tell whether it's raining or snowing, because humidity and other factors shift the threshold between the two.
“You're driving in your car and you notice that it's 36, 37 Fahrenheit, but there's snow freezing,” said Meghan Collins, an associate research scientist in science communication at the Desert Research Institute. “Then you have a direct experience of the phenomenon that is so challenging for hydrologists and water managers and other forecasters.”
There are many satellite-based tools and computer algorithms that measure several different aspects of weather. Lots of people have inexpensive rain gauges and thermometers right in their backyard. But the tools that exist to measure when rain changes to snow – and vice versa – are expensive and not always accurate.
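The humidity effect described above is often handled in hydrology with a wet-bulb temperature threshold rather than air temperature alone. The sketch below uses the Stull (2011) empirical wet-bulb approximation; the 0.5 °C cutoff is an illustrative assumption, not the method the Mountain Rain or Snow project actually uses — real studies fit region-specific thresholds from observations like the ones this project collects.

```python
import math

def wet_bulb_c(temp_c: float, rh_pct: float) -> float:
    """Approximate wet-bulb temperature in Celsius (Stull 2011 empirical fit)."""
    t, rh = temp_c, rh_pct
    return (t * math.atan(0.151977 * math.sqrt(rh + 8.313659))
            + math.atan(t + rh) - math.atan(rh - 1.676331)
            + 0.00391838 * rh ** 1.5 * math.atan(0.023101 * rh)
            - 4.686035)

def likely_phase(temp_c: float, rh_pct: float, threshold_c: float = 0.5) -> str:
    """Classify precipitation as rain or snow by comparing the wet-bulb
    temperature to a threshold (0.5 degrees C here is an assumed, illustrative value)."""
    return "snow" if wet_bulb_c(temp_c, rh_pct) <= threshold_c else "rain"

# At 37 F (about 2.8 C) the wet-bulb temperature can sit below freezing
# in dry air -- which is why snow can fall above 32 F.
print(likely_phase(2.8, 30.0))   # dry air
print(likely_phase(2.8, 95.0))   # near-saturated air
```

The same air temperature yields "snow" in dry conditions and "rain" in humid ones, which is exactly the ambiguity the citizen observations help resolve.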
These complex conditions are why Collins and other researchers developed the Mountain Rain or Snow project in 2019. It relies on citizens to submit real-time weather observations. That data – which has largely focused on the Sierra Nevada region and the Colorado Rocky Mountains – is helping researchers learn more about the precipitation phase patterns in the West.
“Humans putting that down on a phone or writing it in a log book is really the only way we can reliably get this information over a large spatial extent sometimes,” said Keith Jennings, a water resources scientist who leads the project.
The accuracy of the data is crucial for rain- and snow-related forecasting – and Jennings said it will become even more so as the earth continues to heat up.
“(Snow) is very sensitive to climate warming,” he said. “So in some locations, you notch up that average temperature by a degree Fahrenheit or two degrees Fahrenheit, that means all of a sudden you're getting 10 to 15% less snow. Even if you had the same volume of precipitation falling as rain instead of snow, it still produces some pretty fundamental differences.”
In 2021, nearly 900 citizen scientists submitted more than 15,000 observations to the Mountain Rain or Snow project.
Its success convinced NASA’s Citizen Science for Earth Systems Program to fund an additional three years of research, allowing the project to expand to the Wasatch Range near Salt Lake City and Western Montana around Missoula, among other locations.
“The Mountain West is a place that has really diverse communities, highly urbanized communities and very rural [communities],” Collins said. “This is an opportunity to put your community, your area on the map for science, because all of these observations matter.”
Anyone interested in contributing observations can sign up via text. Visit the project's website for details. After signing up, observers will have access to an app to start tracking rain and snow, providing on-the-ground precipitation data satellites can't.
This story was produced by the Mountain West News Bureau, a collaboration between Wyoming Public Media, Nevada Public Radio, Boise State Public Radio in Idaho, KUNR in Nevada, the O'Connor Center for the Rocky Mountain West in Montana, KUNC in Colorado, KUNM in New Mexico, with support from affiliate stations across the region. Funding for the Mountain West News Bureau is provided in part by the Corporation for Public Broadcasting.
Copyright 2023 KUNC. To see more, visit KUNC.
A recent study published in the journal Neurology investigated the relationship between physical activity in women and the development of Parkinson's disease, a debilitating neurological condition that affects the nerves and one's ability to move properly. Tracking over 95,000 women for nearly three decades, the study found that getting regular exercise could reduce a woman's chance of developing Parkinson's disease by 25 percent.
Parkinson's disease is just one of many health ailments that can be mitigated or prevented by exercise. Regular exercise is most commonly linked to reducing the risk of two of the leading causes of death in America: heart disease and stroke. According to a study in PLOS Medicine that examined data on more than 90,000 adults, individuals who participated in moderate- or vigorous-intensity exercise had an average reduction in heart disease risk of between 48 and 57 percent. Even more striking, the study found no threshold beyond which additional exercise stopped benefiting cardiovascular health: those who exercised the most had the lowest risk of heart disease.
There is no ceiling on the benefits of exercise, and we continue to learn each day about different diseases that can be mitigated through it. Regular physical activity can prevent the onset of type 2 diabetes, reduce blood pressure and cholesterol levels, strengthen bones and muscles, prevent falls in the elderly, and lower the risk of developing many common cancers, such as breast, lung and colon cancer.
So how much exercise is necessary to realize health benefits, and what exactly is meant by moderate-intensity exercise? According to the American Heart Association, to stay healthy adults should get 150 minutes of moderate-intensity aerobic exercise per week or 75 minutes of vigorous-intensity aerobic activity per week. This equates to 30 minutes of moderate-intensity aerobic exercise five days per week, or 15 minutes of vigorous-intensity aerobic exercise five days per week. The association also reports that only one in five adults gets enough weekly exercise to maintain good health.
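The guideline arithmetic above can be expressed as a small check. The 2:1 moderate-to-vigorous equivalence used to combine mixed activity is a common convention in activity guidelines, not a figure from this article:

```python
def meets_aha_guidelines(moderate_min: float, vigorous_min: float) -> bool:
    """Check weekly activity against the American Heart Association targets:
    150 min of moderate OR 75 min of vigorous aerobic exercise per week.
    Mixed activity is combined using the common 1-vigorous = 2-moderate rule
    (an assumption for illustration, not stated in the article)."""
    moderate_equivalent = moderate_min + 2 * vigorous_min
    return moderate_equivalent >= 150

# 30 minutes of brisk walking, 5 days a week:
print(meets_aha_guidelines(moderate_min=30 * 5, vigorous_min=0))   # True
# 15 minutes of jogging, 5 days a week:
print(meets_aha_guidelines(moderate_min=0, vigorous_min=15 * 5))   # True
# 20 minutes of walking, 3 days a week:
print(meets_aha_guidelines(moderate_min=20 * 3, vigorous_min=0))   # False
```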
Moderate-intensity aerobic exercise may be easier than you think: examples include walking briskly, playing badminton, gardening, and mowing the lawn. Vigorous-intensity aerobic exercise includes jogging, running, weight lifting, and playing competitive sports like tennis, basketball and soccer. All individuals can benefit from exercise, no matter their weight, height, gender, ethnicity, race, or state of health.
An optimal and effective healthcare system must prioritize preventive measures that curb the onset of serious health conditions. Too often, diseases are addressed only after diagnosis, when it is too late. Instead of focusing on treatments like insulin for type 2 diabetes, a larger focus should be placed on education and lifestyle measures such as exercise, since the disease is largely preventable through regular physical activity and a carbohydrate-restrictive diet. A reactive system that treats diseases only once diagnosed risks becoming a "sick care" system rather than a true "healthcare" system that prioritizes prevention.
According to the CDC, nearly 110,000 deaths per year could be prevented if American adults aged 40 and above increased their physical activity by even a small amount, as little as 10 minutes. From a public health perspective, it should never be acceptable that four in five American adults do not get enough moderate-intensity exercise each week to maintain good health. A very practical way to meet the American Heart Association guidelines is to increase our daily steps, which most smartphones can track. Tomorrow (June 10) is Family Health and Fitness Day, so all of us should make a concerted and intentional effort to get up and exercise.
With Brazil poised to become a world leader in hydrogen generated from renewable sources, EDP Brasil has produced the first molecules of green hydrogen at its new generation unit in São Gonçalo do Amarante, Ceará. The plant is an R&D pilot project at the Complexo Termelétrico do Pecém (UTE Pecém), already in operation, and received an investment of R$42 million.
The unit combines a 3-megawatt solar plant with an electrolyzer, a device that produces hydrogen by splitting water with electricity, capable of delivering 250 Nm³/h (normal cubic meters per hour) of the gas.
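As a rough sanity check on those figures: using the density of hydrogen at normal conditions and a typical electrolyzer specific consumption of about 5 kWh per Nm³ (an assumed industry-typical value, not a figure EDP disclosed), the 3 MW solar plant is of the right order of magnitude to feed the 250 Nm³/h electrolyzer:

```python
# Back-of-the-envelope check; KWH_PER_NM3 is an assumed typical value.
H2_DENSITY_KG_PER_NM3 = 0.0899   # hydrogen density at normal conditions
KWH_PER_NM3 = 5.0                # assumed electrolyzer specific consumption

flow_nm3_h = 250.0               # plant capacity stated in the article
solar_mw = 3.0                   # solar capacity stated in the article

power_needed_mw = flow_nm3_h * KWH_PER_NM3 / 1000.0
mass_kg_h = flow_nm3_h * H2_DENSITY_KG_PER_NM3

print(f"Power needed: {power_needed_mw:.2f} MW (solar capacity: {solar_mw} MW)")
print(f"Hydrogen output: {mass_kg_h:.1f} kg/h")
```

Under these assumptions the electrolyzer needs roughly 1.25 MW at full output, comfortably within the solar plant's 3 MW, and yields on the order of 22 kg of hydrogen per hour.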
The choice of Pecém as a production site is strategic, since the complex brings together unique features for the process of introducing the power source, such as the huge solar and wind power potential — fundamental for the production of gas — and a good location for distribution to the international market.
The hydrogen unit sits a few meters from EDP's coal-fired thermal power plant and serves as a decarbonization initiative for it: the gas will replace the heavy oils used in the thermal plant's engines.
The idea is to use the hydrogen to verify, on an experimental scale, the impact of injecting the gas in co-firing with diesel oil and coal, raising combustion energy efficiency in the boilers of UTE Pecém. CEO João Marques da Cruz says that Brazil has the ideal conditions to produce the energy that was missing to ease the transition to a low-carbon economy.
“It is an R&D project that was useful for us to learn, but we have an agenda to develop large-scale projects of 100 MW in several states in Brazil,” he said. “We are in talks and have signed confidentiality agreements to see if we can agree on price conditions because green hydrogen is more expensive than hydrogen produced by non-renewable sources and substantially more expensive than natural gas.”
The challenge now is to make production scalable and economically viable compared to fossil fuels. About 70% of the production cost comes from electricity, which makes this an electro-intensive industry. “And what if it were 100 MW in Pecém for our coal plant, would it have economic viability?” asks Mr. Cruz. “It wouldn’t,” the executive replied, pointing out that other investments would be needed.
Anticipating future demand for renewable and low-carbon power, EDP is now seeking customers willing to accept a higher industrial cost in exchange for clean production. The Portuguese company says it is in talks with a large international company that operates furnaces in its production in Brazil and is interested in green hydrogen.
Environmental viability is a given, but economic viability entered entrepreneurs' horizons once the United States and Europe announced subsidies for the fuel, a development that positions Brazil as a potential exporter. According to the German consulting firm Roland Berger, the Brazilian domestic market's capacity to absorb all of the country's generation potential is low, which places Brazil as a major hub for clean energy exports.
“We are looking at both markets. We are looking at the possibility of participating in an auction in Europe, namely in Germany, with projects coming from Brazil,” he said.
On the other hand, the costs of using the power grid and of transporting the hydrogen are among the main factors limiting the exploitation of its full production potential. It is worth noting that producing the first and simplest element of the periodic table, which is also the most abundant in the universe, will have to rely on renewable sources such as solar, wind, biomass, biogas, and ethanol to ease the transition to a low-carbon economy.
In EDP’s case, the focus is on the use of the solar photovoltaic source. Mr. Cruz believes that the unit cost of production may fall substantially in the short term, similar to the solar source, which in a decade stopped being the most expensive and today occupies the second position in Brazil’s power generation mix with very competitive costs.
The energy system’s transition away from coal will leave miners with the equivalent of 100 job cuts daily through 2035, mostly in China and India.
(Bloomberg) — The energy system’s transition away from coal will leave miners with the equivalent of 100 job cuts daily through 2035, mostly in China and India.
Mine closures will eliminate about 15% of global coal mining jobs by that year, about 400,000 in total, according to a new report published by Global Energy Monitor this week. The figure is likely to rise to nearly 1 million by 2050 as the world turns to cheaper wind and solar power generation.
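The "100 job cuts daily" framing can be checked against the report's 400,000 total. Assuming the window runs roughly from 2023 through 2035 (the exact start year is an assumption; the article does not state it), the figure comes out on the order of 100 per day:

```python
total_cuts = 400_000
years = 2035 - 2023            # assumed 12-year window; start year not stated
cuts_per_day = total_cuts / (years * 365)

print(round(cuts_per_day))     # roughly 90, i.e. on the order of 100 per day
```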
The job losses highlight some of the social challenges that companies and governments face as they transition the world away from fossil fuels in order to avoid the worst impacts of climate change. Mining jobs lost to automation, efficiency and commodity cycles have become thorny political issues from the US to Europe to Australia.
One way to ease the pain of layoffs is to give ex-miners priority for job opportunities triggered by mine shutdowns, such as work needed to rehabilitate land and mitigate environmental impacts after operations cease, according to the report.
China produces and uses more than half the world’s coal, and is home to more than 1.5 million mining jobs, according to Global Energy Monitor. Its Shanxi province alone is likely to shed more than 240,000 mining jobs through 2050. China’s government has been pushing firms to gradually replace underground workers with automated machines in part to ease the social cost of future mine closures.
Coal India Ltd., the state-owned miner that produces more coal than any other company, is likely to face the biggest corporate impact from layoffs, with potential cuts of nearly 74,000 workers by 2050, according to the report.
The firm has already seen its headcount dwindle from 310,000 in 2017 to 240,000 this year, according to company filings. Its staff count is falling by about 13,000 to 14,000 people a year, a pace that’s expected to accelerate due to retirements and outsourcing work that will limit new hiring, company officials have said.
–With assistance from Rajesh Kumar Singh.
More stories like this are available on bloomberg.com
©2023 Bloomberg L.P.
How do you perceive water?
That was the question that fueled the latest collection of artwork to be featured in the Salt Lake City library system’s Day-Riverside Branch in Rose Park.
Teenagers from the Salt Lake Valley Youth Center, which offers educational programs for at-risk youths, were asked to use the Jordan River as inspiration for a series of self-portraits that will eventually make it to the walls of the Utah Museum of Fine Arts.
Every piece in the “We Are All Water” exhibit is anonymous to protect the privacy of the student who created it, but each picture tells a personal story.
“This is me and the river,” one of the descriptions reads.
The work shows a brown-haired girl with a halo and a black, red and orange checkered shirt. A dyed piece of paper representing the river flows across her face.
“Why I chose black, red and orange,” the description said, “is because it shows my day [and] how I was when or while I made it.”
A tie-dye cloth, made by students and shaped to look like the winding Jordan River that flows near the youth center in South Salt Lake, was also installed as part of the exhibit.
Most of the art the students create is introspective, said Kathryn Nelson, a science teacher at the youth center.
“Most of it,” she said, “has to do with sort of the way they see themselves in the environment.”
Often, these students have experienced hardship while trying to complete a regular public school curriculum, Nelson said. They’re also usually not involved in other parts of the community.
“They’re not on ballclubs. They don’t go to church,” Nelson said. “They’re just one of those [groups] that’s sort of moving ghostlike through our community.”
But works like “We Are All Water,” she said, give these students an opportunity to share their hopes, dreams and worries with the broader community.
The exhibit is a product of the STEM Community Alliance Program — a University of Utah and Utah State Board of Education initiative that provides science, technology, engineering and mathematics learning opportunities to at-risk youths.
After seeing the finished work, program organizers said the exhibit provided an important platform for the students to express themselves.
“Often, students in youth-in-custody settings feel that they’re being represented a certain way by other people,” said Laura George, associate director of STEM Community Alliance Program. “And so I think it’s powerful for them to be able to create an image of themselves that they get to put out into the community.”
It’s also a chance for students to have their voices heard on important issues, George said, and advocate positive changes on a topic they care deeply about: the environment.
“We can’t reverse the damage, but we can make change so we don’t cause more,” one student wrote under a drawing. “Hopefully, this gives you something to think about on the individual changes you can make. We can each make a difference.”
Nelson said the students are tuned in to how Utah is changing as relentless drought grips the state and the Great Salt Lake shrinks.
“They’re really, really worried about the way that the character of this place is going to change if we lose the lake,” Nelson said. “And they do understand the significance of having water close to our environments.”
STEM Community Alliance Program and the Day-Riverside Branch organized a reception Thursday to celebrate the exhibit with the general public. Attendees admired the beauty of the artwork and the depth of the message behind it.
“They want the community to care more about the water,” Megan Singleton said after reading the descriptions that accompany each piece. “And then they also said that about themselves, that they like this project because they just want to be seen, and they want others to see them and see the water.”
The library is only the first stop for the art installation. The collection will move to the Utah Cultural Celebration Center in March before heading to the Utah Museum of Fine Arts.
There are tentative plans to add more work from students in other programs who were inspired by different bodies of water, such as the Provo River and Ogden River.
Alixel Cabrera is a Report for America corps member and writes about the status of communities on the west side of the Salt Lake Valley for The Salt Lake Tribune.
The Impact of Genomics on Education
Genomics is the study of all or part of the genetic or epigenetic sequence information of organisms. It attempts to understand the structure and function of these sequences and of their downstream biological products, and it examines how this molecular information interacts with health interventions and environmental factors in disease.
Human genomics is not the only part of genomics relevant to human health: the human genome interacts with those of a myriad of other organisms, including plants, vectors and pathogens. Genomics therefore cuts across all organisms relevant to public health in human populations, and the field encompasses not only genomics knowledge itself but also the technologies that make use of that knowledge.
Genomics is distinct from genetics. While genetics is the study of heredity, genomics is the study of genes and their functions, and of related techniques. The main difference is that genetics scrutinizes the functioning and composition of a single gene, whereas genomics addresses all genes and their interrelationships in order to identify their combined influence on the growth and development of the organism.
Human genomics research and its biotechnologies have the potential to achieve a number of public health goals, such as reducing global health inequalities by providing developing countries with efficient, cost-effective and robust means of preventing and treating the major diseases that burden their populations.
Genomics is a new and rapidly evolving branch of science, and its full future role in the provision of health care is far from clear. However, it does offer the long-term possibility of new approaches to the prevention and management of many intractable diseases. It is therefore important to prepare society for the complexities of this new field, and to ensure that its benefits are distributed fairly among the countries of the world.
In this article you will learn some of the impacts of genomics on education:
1. It increases students' interest and motivation:
Genomics is an interesting field of study. Teachers and students alike can conduct research on it and perform a variety of experiments. These experiments expose them to endless possibilities and ideas, which in turn improve their interest and motivation to learn.
2. It enhances students' knowledge:
Knowledge is power, and genomics allows students to work outside their comfort zones. As a recent branch of biology, it encourages every learner to think and to carry out research, a way of learning that helps them develop problem-solving skills and gain more useful knowledge.
3. It boosts students' success rate:
Genomics usually enhances students' success rate, since its hypotheses are mostly practical. With constant study and research, students tend to perform well in any course relating to genomics, because they can readily learn the various ways of diagnosing and treating disease.
Scientists have found a gene that can help the body fight SARS-CoV-2 so well that people with it may not have any COVID symptoms at all, especially if they've previously had a cold.
- One fifth of people in the study who had asymptomatic COVID-19 had a gene called HLA-B15
- The gene codes a protein that helps the immune system identify SARS-CoV-2 and other coronaviruses such as the common cold
- The discovery could help the development of better coronavirus vaccines
"We've been very curious to understand why some people are able to dodge COVID," said Stephanie Gras of La Trobe University, co-author of a new study published today in the journal Nature.
About 20 per cent of individuals who test positive for SARS-CoV-2, the virus that causes COVID-19, escape symptoms, she said.
"Our theory was the immune system was stronger and more able to protect them, and we really wanted to understand how this was working in those individuals."
For their study, Professor Gras and colleagues pored over available data from a large database of registered bone marrow donors.
To understand how the immune system was working, without the influence of vaccines, they needed to select only unvaccinated people.
One part of the study involved a subgroup of 1,428 unvaccinated donors who had tested positive for SARS-CoV-2.
The researchers found that of the 136 that had no symptoms, one in five carried a gene called HLA-B15. These findings, which pointed the finger at the protective gene, were replicated in other groups as well.
In a second part of the study, the researchers looked at samples taken from donors prior to the COVID pandemic. This is when they found a smoking gun.
Those with the HLA-B15 gene had T-cells (a type of white blood cell that fights infection) showing a strong immune response to SARS-CoV-2.
In addition, the same group also had a strong immune response to seasonal coronaviruses that cause the common cold.
But to understand the implications of this, we first have to dive a bit deeper into the science of immunology.
HLA 'early warning system'
HLA-B15 belongs to a highly variable family of genes – called HLA (human leucocyte antigens) – that code for a protein that helps alert the immune system when our cells are, for example, cancerous or have been invaded by viruses.
When it comes to bone marrow transplants, the donor and recipient must be "HLA matched" or the organ will be rejected as foreign.
In fact, the researchers used a database of bone-marrow donors precisely because their HLA "genotype" had already been recorded.
In the case of coronaviruses, HLA proteins bind fragments of the virus and present them to T-cells that can then go in for the kill.
"Sometimes I think of [HLA proteins] as like an arm with a hand on it, and the hand is holding the piece of virus," said lead researcher Jill Hollenbach at the University of California, San Francisco.
Once T-cells have been kicked into action by a viral fragment bound by HLA proteins, they will be ready and waiting as a defence the next time that particular virus comes around.
It's analogous to an army that's battle-ready because the soldiers know what the enemy looks like.
Similarities with the common cold
When the team took a closer look at the HLA-B15 protein in the asymptomatic people in their study, they discovered a physical basis for the link with the common cold.
The HLA protein bound a particular part of the SARS-CoV-2 spike protein – the bristling extremities on the outside of the coronavirus.
This fragment of spike protein is shaped like equivalent pieces in other coronaviruses that cause common colds.
If a donor had one copy of the HLA-B15 gene they were twice as likely to have had an asymptomatic COVID-19 infection as a donor with another HLA type, the researchers report.
Two copies of the gene meant they were eight times more likely to be asymptomatic.
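To see what those effect sizes imply, here is an illustrative odds calculation. The baseline odds are hypothetical (the study reports only the relative effects), and the per-copy comparison is a simplification:

```python
# Illustrative only: the baseline odds below are hypothetical, not from the study.
# The study reports relative effects: one HLA-B15 copy ~2x, two copies ~8x.
baseline_odds = 0.25  # hypothetical odds of asymptomatic infection without HLA-B15
odds_one_copy = baseline_odds * 2
odds_two_copies = baseline_odds * 8

# Convert odds to probabilities: p = odds / (1 + odds)
def to_prob(odds):
    return odds / (1 + odds)

print(round(to_prob(baseline_odds), 2))    # 0.2
print(round(to_prob(odds_one_copy), 2))    # 0.33
print(round(to_prob(odds_two_copies), 2))  # 0.67

# A purely multiplicative per-copy model would predict 2 * 2 = 4x for two copies;
# the reported 8x points to a stronger, dose-dependent effect.
```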
Professor Gras said despite this they did find a few individuals with the gene that had severe COVID.
"It's not a magic bullet but you will be more likely to be asymptomatic," she said.
The researchers concluded exposure to seasonal cold viruses could have primed the immune system of people with HLA-B15.
This would have enabled their T-cells to kick in a lot faster against SARS-CoV-2, and knocked the virus out before any symptoms emerged, the researchers argued.
The team hope further studies of asymptomatic infection could help identify immune mechanisms that can quickly eliminate the virus.
"It may be possible to develop vaccines and treatments against SARS-CoV-2 that mimic the immune [system's] shield seen in the asymptomatic people with HLA-B15," Professor Gras said.
A bit like vaccination
Stuart Tangye of the Garvan Institute of Medical Research, who was not involved in the study, said HLA-B15 appeared to give people an advantage against COVID if they had been exposed to common cold viruses.
"In these cases, the seasonal coronaviruses sort of vaccinated the individuals," said Professor Tangye, who has looked into the genetics of severe COVID.
As with vaccination, the immune system had been primed to look for something similar to the pathogen.
The idea that exposure to the common cold might help trigger an immunity to SARS-CoV-2 was not new, Professor Tangye said.
But this study had pinpointed a specific gene linked to this protection.
"We knew that a certain percentage of people appeared to have cross-reactive immunity that existed before SARS-CoV-2. So this possibly explains that, and that's interesting."
Nathan Bartlett, a viral immunologist from the University of Newcastle, was a little more circumspect, describing the researchers' conclusions as "interesting speculation".
He noted one limitation of the study was that people "self-reported" their COVID test results and lack of symptoms.
Assuming these were truly asymptomatic infections, the research provided a "compelling association", suggesting the T-cell response had prevented symptoms.
But Professor Bartlett said it may only be able to do this against older variants of SARS-CoV-2.
The T-cell response takes a week to swing into action, he said, while some of the newer SARS-CoV-2 variants can generate symptoms faster – "within two or three days'' in the case of Omicron.
In this case, an asymptomatic infection would rely on other parts of the immune system that are activated sooner.
But Professor Bartlett agreed the study was potentially useful in developing a universal coronavirus vaccine.
"This research suggests there are common targets across multiple coronaviruses."
HLA: The good, bad and the ugly
This was not the first time that an HLA genotype had been linked to the severity of a disease.
Certain HLA molecules have previously helped explain why some people infected by HIV stay healthier for longer.
On the other hand, some HLA genotypes have been linked to an increased risk of autoimmune disease.
"There are these associations between particular HLA genes and different types of immune responses," Professor Tangye said.
"HLA really drives the immune system's function and dysfunction."
HLA genotypes vary enormously between different ethnic groups, said epidemiologist Catherine Bennett of Deakin University.
This kind of research could help manage COVID risk at a population level, she said.
"This is helpful from a public health point of view."
|
<urn:uuid:ffe312a3-77fb-40b0-b11a-bc7d86fcaede>
|
{
"dump": "CC-MAIN-2023-40",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510903.85/warc/CC-MAIN-20231001141548-20231001171548-00861.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.97018963098526,
"pii_count": 0,
"score": 3.34375,
"token_count": 1693,
"url": "https://www.abc.net.au/news/health/2023-07-20/asymptomatic-covid-19-linked-to-hla-gene-variant-common-cold/102619476"
}
|
Scientists have found a gene that can help the body fight SARS-CoV-2 so well that people with it may not have any COVID symptoms at all, especially if they've previously had a cold.
- One fifth of people in the study who had asymptomatic COVID-19 had a gene called HLA-B15
- The gene codes a protein that helps the immune system identify SARS-CoV-2 and other coronaviruses such as the common cold
- The discovery could help the development of better coronavirus vaccines
"We've been very curious to understand why some people are able to dodge COVID," said Stephanie Gras of La Trobe University, co-author of a new study published today in the journal Nature.
About 20 per cent of individuals that test positive for SARS-CoV-2, the virus that causes COVID-19, escape symptoms, she said.
"Our theory was the immune system was stronger and more able to protect them, and we really wanted to understand how this was working in those individuals."
For their study, Professor Gras and colleagues pored over available data from a large database of registered bone marrow donors.
To understand how the immune system was working, without the influence of vaccines, they needed to select only unvaccinated people.
One part of the study involved a subgroup of 1,428 unvaccinated donors who had tested positive for SARS-CoV-2.
The researchers found that of the 136 that had no symptoms, one in five carried a gene called HLA-B15. These findings, which pointed the finger at the protective gene, were replicated in other groups as well.
In a second part of the study, the researchers looked at samples taken from donors prior to the COVID pandemic. This is when they found a smoking gun.
Those with the HLA-B15 gene had T-cells (a type of white blood cell that fights infection) showing a strong immune response to SARS-CoV-2.
In addition, the same group also had a strong immune response to seasonal coronaviruses that cause the common cold.
But to understand the implications of this, we first have to dive a bit deeper into the science of immunology.
HLA 'early warning system'
HLA-B15 belongs to a highly variable family of genes – called HLA (human leucocyte antigens) – that code for a protein that helps alert the immune system when our cells are, for example, cancerous or have been invaded by viruses.
When it comes to bone marrow transplants
|
, the donor and recipient must be "HLA matched" or the organ will be rejected as foreign.
In fact, the researchers used a database of bone-marrow donors precisely because their HLA "genotype" had already been recorded.
In the case of coronaviruses, HLA proteins bind fragments of the virus and present them to T-cells that can then go in for the kill.
"Sometimes I think of [HLA proteins] as like an arm with a hand on it, and the hand is holding the piece of virus," said lead researcher Jill Hollenbach at the University of California, San Francisco.
Once T-cells have been kicked into action by a viral fragment bound by HLA proteins, they will be ready and waiting as a defence the next time that particular virus comes around.
It's analogous to an army that's battle-ready because the soldiers know what the enemy looks like.
Similarities with the common cold
When the team took a closer look at the HLA-B15 protein in the asymptomatic people in their study, they discovered a physical basis for the link with the common cold.
The HLA protein bound a particular part of the SARS-CoV-2 spike protein – the bristling extremities on the outside of the coronavirus.
This fragment of spike protein is shaped like equivalent pieces in other coronaviruses that cause common colds.
If a donor had one copy of the HLA-B15 gene they were twice as likely to have had an asymptomatic COVID-19 infection as a donor with another HLA type, the researchers report.
Two copies of the gene meant they were eight times more likely to be asymptomatic.
Professor Gras said despite this they did find a few individuals with the gene that had severe COVID.
"It's not a magic bullet but you will be more likely to be asymptomatic," she said.
The researchers concluded exposure to seasonal cold viruses could have primed the immune system of people with HLA-B15.
This would have enabled their T-cells to kick in a lot faster against SARS-CoV-2, and knocked the virus out before any symptoms emerged, the researchers argued.
The team hope further studies of asymptomatic infection could help identify immune mechanisms that can quickly eliminate the virus.
"It may be possible to develop vaccines and treatments against SARS-CoV-2 that mimic the immune [system's] shield seen in the asymptomatic people with HLA-B15," Professor Gras said.
A bit like vaccination
Stuart Tangye of the Garvan Institute of Medical Research, who was not involved in the study, said HLA-B15 appeared to give people an advantage against COVID if they had been exposed to common cold viruses.
"In these cases, the seasonal coronaviruses sort of vaccinated the individuals," said Professor Tangye, who has looked into the genetics of severe COVID.
As with vaccination, the immune system had been primed to look for something similar to the pathogen.
The idea that exposure to the common cold might help trigger an immunity to SARS-CoV-2 was not new, Professor Tangye said.
But this study had pinpointed a specific gene linked to this protection.
"We knew that a certain percentage of people appeared to have cross-reactive immunity that existed before SARS-CoV-2. So this possibly explains that, and that's interesting."
Nathan Bartlett, a viral immunologist from the University of Newcastle, was a little more circumspect, describing the researchers' conclusions as "interesting speculation".
He noted one limitation of the study was that people "self-reported" their COVID test results and lack of symptoms.
Assuming these were truly asymptomatic infections, the research provided a "compelling association", suggesting the T-cell response had prevented symptoms.
But Professor Bartlett said it may only be able to do this against older variants of SARS-CoV-2.
The T-cell response takes a week to swing into action, he said, while some of the newer SARS-CoV-2 variants can generate symptoms faster – "within two or three days" in the case of Omicron.
In this case, an asymptomatic infection would rely on other parts of the immune system that are activated sooner.
But Professor Bartlett agreed the study was potentially useful in developing a universal coronavirus vaccine.
"This research suggests there are common targets across multiple coronaviruses."
HLA: The good, bad and the ugly
This was not the first time that an HLA genotype had been linked to the severity of a disease.
Certain HLA molecules have previously helped explain why some people infected by HIV stay healthier for longer.
On the other hand, some HLA genotypes have been linked to an increased risk of autoimmune disease.
"There are these associations between particular HLA genes and different types of immune responses," Professor Tangye said.
"HLA really drives the immune system's function and dysfunction."
HLA genotypes vary enormously between different ethnic groups, said epidemiologist Catherine Bennett of Deakin University.
This kind of research could help manage COVID risk at a population level, she said.
"This is helpful from a public health point of view."
|
Traditional medical imaging – used to diagnose, monitor or treat certain medical conditions – has long struggled to get clear pictures of patients with dark skin, according to experts. Researchers say they have found a way to improve medical imaging, a process through which physicians can observe the inside of the body, regardless of skin tone. The new findings were published in the October edition of the journal Photoacoustics.
The team tested the forearms of 18 volunteers, with skin tones ranging from light to dark. They found that a distortion of the photoacoustic signal that makes the imaging more difficult to read, called clutter, increased with darkness of skin.
“When you have darker skin, you have more melanin. And melanin is actually one of the optical absorbers that we inherently have within our body,” Muyinatu Bell, an author of the study and director and founder of the Photoacoustic and Ultrasonics Systems Engineering (PULSE) Lab at Johns Hopkins University, told CNN. In other words, a higher melanin content in the skin could be associated with more clutter.
“The skin essentially acts as a transmitter of sound, but it’s not the same type of focused sound that we get and we want with ultrasound, it’s everywhere diffused and creates a lot of confusion,” Bell said. “And so, this scattering of the sound that’s caused by the melanin absorption is worse and worse with the higher melanin concentration.”
Changing a technique
The study – a collaboration with researchers in Brazil who had previously used one of Bell’s algorithms – found that signal-to-noise ratio, a scientific measure that compares signal with background noise, improved for all skin tones when the researchers used a technique called “short-lag spatial coherence beamforming” while performing medical imaging. That technique, originally used for ultrasounds, can be applied to photoacoustic imaging.
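Signal-to-noise ratio is usually quoted in decibels computed from RMS amplitudes. As a minimal sketch of the metric the study reports improving (not the study's actual processing code):

```python
import math

def snr_db(signal, noise):
    """Signal-to-noise ratio in decibels, from RMS amplitudes.
    A higher value means the signal stands out more from the background."""
    rms = lambda xs: math.sqrt(sum(x * x for x in xs) / len(xs))
    return 20 * math.log10(rms(signal) / rms(noise))

# A signal with 10x the RMS amplitude of the noise gives 20 dB:
print(snr_db([10.0, -10.0, 10.0, -10.0], [1.0, -1.0, 1.0, -1.0]))  # -> 20.0
```

Clutter raises the effective noise floor, which is why reducing it improves this ratio across all skin tones.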
The technique involves a combination of light and ultrasound technology, forming a new medical imaging modality, Theo Pavan, an author of the study and associate professor with the department of physics at the University of São Paulo in Brazil, told CNN.
“We really verified that it was much less sensitive to the skin color in terms of the quality of the image that you can get compared to the conventional methods that … is more commonly used by the community,” Pavan said.
The study is “the first to objectively assess skin tone and to both qualitatively and quantitatively demonstrate that skin” photoacoustic signal “and clutter artifacts increase with epidermal melanin content,” the researchers wrote.
The applications of photoacoustic technology vary, but with the researchers’ new developments, it may help diagnose health issues more accurately and equitably. “Right now, it’s increasing the application of the breast imaging,” and the next step would be to “increase the image quality overall,” said Guilherme Fernandes, an author of the study and a Ph.D. candidate in physics applied to medicine and biology at USP.
Broader rethinking in health care
The researchers’ work could also mean advancements for equity in health care at large. “In our scientific technology, there is a bias in terms of developing these products, for things that work well in lighter-skinned people,” said Dr. Camara Jones, a family physician, epidemiologist and former president of the American Public Health Association, who was not involved in the new study.
“The biggest problem is that we use a thing we call race, as a risk factor — as a health risk factor. And so race is the social interpretation of how people look in a race-conscious society. Race is not biology,” Jones explained. “We’ve mapped the human genome. We know there’s no basis in the human genome for racial sub-speciation.”
This study isn’t the first to find skin color biases in medical technology.
Medical equipment that leverages infrared sensing has also been found to not work as well on darker skin, since skin tone can interfere with the reflection of light. Many devices that were in frequent use during the Covid-19 pandemic, such as pulse oximeters and forehead thermometers, involve emitting and capturing light to make a measurement. But if such a device isn’t calibrated for darker skin, the pigmentation could affect how the light is absorbed and how the infrared technology works.
Bell said her research can hopefully pave the way to eliminating discrimination in health care and inspire others to develop technology that helps everyone, regardless of their skin tone. “I believe that with the ability to show that we can devise and develop technology — that doesn’t just work for one small subset of the population but works for a wider range of the population. This is very inspiring for not only my group, but for groups around the world to start thinking in this direction when designing technology. Does it serve the wider population?” Bell said.
|
Birds of a feather... huddle together? Here's how birds keep warm amid frigid winter temperatures.
When you step outside into the cold winter air, you may start shivering. Our bodies do this to keep us warm – our muscles tighten and loosen quickly, creating enough energy to give off heat.
Humans rely on thermoregulation to keep warm in the wintertime. We bundle up with cozy hats and jackets. We wear gloves and warm boots to keep our extremities warm. We turn on the heat in our homes.
But a bird's home is the great outdoors, so how do birds keep warm in the year’s coldest temperatures? Here’s everything you need to know about how birds survive in frigid winter temperatures.
How do birds keep warm in the winter?
Birds are warm-blooded animals just like humans, meaning they have a fairly consistent internal temperature and use regulatory mechanisms to maintain it in cold environments. That internal temperature sits at about 104 degrees Fahrenheit (40 degrees Celsius).
Birds keep warm in the winter by shivering, fluffing their feathers, cuddling together and tucking their feet and bills in.
Birds increase their body weight in late summer and fall to prepare for the winter ahead, and they also trap pockets of air around their bodies using their feathers, the U.S. Fish and Wildlife Service reports. Some birds use the oil produced from the glands near their tails as weatherproofing, while other species grow feathers with tips that break down into a powder used for the same protection.
Waterfowl (like gulls, ducks and geese) often stand on ice or cold ground, so they use a countercurrent heat exchange system to limit heat loss through their feet. Warm blood heading to the foot through the arteries runs alongside cool blood returning through the veins, so heat transfers from the warm arteries to the cold veins and is carried back into the body rather than lost at the foot. Their feet have little muscle or nerve tissue, so they can withstand cold temperatures for longer than humans can.
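The payoff of that countercurrent arrangement can be illustrated with a toy steady-state model. The segment count, exchange coefficient and temperatures below are made-up illustrative values, not measured bird physiology:

```python
def countercurrent(n=20, k=0.5, t_core=40.0, t_foot=5.0, sweeps=2000):
    """Toy model of a leg split into n segments. Arterial blood flows
    core -> foot (index 0 -> n-1); venous blood flows foot -> core.
    Each sweep moves a fraction of the artery-vein temperature gap
    from the warm artery into the adjacent cold vein."""
    artery = [t_core] * n
    vein = [t_foot] * n
    for _ in range(sweeps):  # iterate until the profiles settle
        for i in range(n):
            a_in = t_core if i == 0 else artery[i - 1]
            v_in = t_foot if i == n - 1 else vein[i + 1]
            q = k * (a_in - v_in) / 2  # heat exchanged in this segment
            artery[i] = a_in - q
            vein[i] = v_in + q
    return artery, vein

artery, vein = countercurrent()
# Blood reaching the foot has been pre-cooled (to roughly 10 C here),
# and blood returning to the core has been pre-warmed (to roughly 35 C),
# so the foot stays cold while the body loses little heat.
print(round(artery[-1], 1), round(vein[0], 1))
```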
Birds of a feather... huddle together? Birds do this to stay warm, crowding together in trees or shrubs to share body heat and conserve energy.
Why do birds fly south in the winter?
Northern Hemisphere birds migrate with food availability – they move south in winter when insects and plants are harder to come by in the cold weather. There are permanent residents who do not migrate, short-distance migrants who may move to lower elevations, medium-distance migrants who travel a few hundred miles and long-distance migrants who typically move from North America to Central and South America.
According to Cornell Lab’s All About Birds, migration is likely triggered by changes in day length, temperature, food supply and genetics – even some housepet birds have “migratory restlessness” in the spring and fall.
What do birds eat in the winter?
Resident birds that spend the winter in the north don't have as many options for berries as in the summer months, so they typically stick to seeds and whatever insects they can find.
If you have a bird feeder, the Humane Society recommends putting out black-oil sunflower seeds, white proso millet and peanuts for high protein and fat content in the winter months.
|
Two months after OpenAI unnerved some educators with the public release of ChatGPT, an AI chatbot that can help students and professionals generate shockingly convincing essays, the company is unveiling a new tool to help teachers adapt.
OpenAI on Tuesday announced a new feature, called an “AI text classifier,” that allows users to check if an essay was written by a human or AI. But even OpenAI admits it’s “imperfect.”
The tool, which works on English AI-generated text, is powered by a machine learning system that takes an input and assigns it to several categories. In this case, after pasting a body of text such as a school essay into the new tool, it will give one of five possible outcomes, ranging from “likely generated by AI” to “very unlikely.”
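The mapping from a model score to those five outcomes can be sketched like this. Only the two end labels are quoted above; the intermediate labels and all threshold values are assumptions for illustration, not OpenAI's actual cutoffs:

```python
def classify(p_ai):
    """Map a model's estimated probability that text is AI-written to
    one of five labels (thresholds and middle labels are illustrative)."""
    if p_ai < 0.10:
        return "very unlikely"
    if p_ai < 0.45:
        return "unlikely"
    if p_ai < 0.90:
        return "unclear"
    if p_ai < 0.98:
        return "possibly AI-generated"
    return "likely generated by AI"

print(classify(0.05))  # -> very unlikely
print(classify(0.99))  # -> likely generated by AI
```

The point of banding a continuous score this way is to avoid implying false precision: the classifier never asserts authorship, only a rough likelihood.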
Lama Ahmad, policy research director at OpenAI, told CNN that educators have been asking for a ChatGPT feature like this, but warns it should be “taken with a grain of salt.”
“We really don’t recommend taking this tool in isolation because we know that it can be wrong and will be wrong at times – much like using AI for any kind of assessment purposes,” Ahmad said. “We are emphasizing how important it is to keep a human in the loop … and that it’s just one data point among many others.”
Ahmad notes that some teachers have referenced past examples of student work and writing style to gauge whether it was written by the student. While the new tool might provide another reference point, Ahmad said “teachers need to be really careful in how they include it in academic dishonesty decisions.”
Since it was made available in late November, ChatGPT has been used to generate original essays, stories and song lyrics in response to user prompts. It has drafted research paper abstracts that fooled some scientists. It even recently passed law exams in four courses at the University of Minnesota, another exam at University of Pennsylvania’s Wharton School of Business and a US medical licensing exam.
In the process, it has raised alarms among some educators. Public schools in New York City and Seattle have already banned students and teachers from using ChatGPT on the district’s networks and devices. Some educators are now moving with remarkable speed to rethink their assignments in response to ChatGPT, even as it remains unclear how widespread use is of the tool among students and how harmful it could really be to learning.
OpenAI now joins a small but growing list of efforts to help educators detect when a written work is generated by ChatGPT. Some companies such as Turnitin are actively working on ChatGPT plagiarism detection tools that could help teachers identify when assignments are written by the tool. Meanwhile, Princeton student Edward Tuan told CNN more than 95,000 people have already tried the beta version of his own ChatGPT detection feature, called ZeroGPT, noting there has been “incredible demand among teachers” so far.
Jan Leike – a lead on the OpenAI alignment team, which works to make sure the AI tool is aligned with human values – listed several reasons for why detecting plagiarism via ChatGPT may be a challenge. People can edit text to avoid being identified by the tool, for example. It will also “be best at identifying text that is very similar to the kind of text that we’ve trained it on.”
In addition, the company said it’s impossible to determine if predictable text – such as a list of the first 1,000 prime numbers – was written by AI or a human because the correct answer is always the same, according to a company blog post. The classifier is also “very unreliable” on short texts below 1,000 characters.
During a demo with CNN ahead of Tuesday’s launch, the classifier successfully labeled several bodies of work. An excerpt from the book “Peter Pan,” for example, was deemed “unlikely” to be AI generated. In the company blog post, however, OpenAI said it incorrectly labeled human-written text as AI-written 5% of the time.
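A 5% false-positive rate interacts with base rates: when most submitted essays are human-written, many flagged essays will be false alarms. A sketch with assumed numbers — only the 5% false-positive rate comes from OpenAI; the 10% AI share and 26% detection rate are illustrative assumptions:

```python
def flag_breakdown(n_essays, ai_share, detection_rate, false_positive_rate):
    """Split flagged essays into genuine catches and false alarms."""
    n_ai = n_essays * ai_share
    n_human = n_essays - n_ai
    true_flags = detection_rate * n_ai            # AI essays correctly flagged
    false_flags = false_positive_rate * n_human   # human essays wrongly flagged
    precision = true_flags / (true_flags + false_flags)
    return true_flags, false_flags, precision

tf, ff, prec = flag_breakdown(1000, 0.10, 0.26, 0.05)
print(f"{tf:.0f} true catches, {ff:.0f} false alarms, precision {prec:.0%}")
# -> 26 true catches, 45 false alarms, precision 37%
```

Under these assumptions, most flagged essays were actually written by humans, which is why the tool is pitched as "one data point among many" rather than proof.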
Despite the possibility of false positives, Leike said the company aims to use the tool to spark conversations around AI literacy and possibly deter people from claiming that AI-written text was created by a human. He said the decision to release the new feature also stems from the debate around whether humans have a right to know if they’re interacting with AI.
“This question is much bigger than what we are doing here; society as a whole has to grapple with that question,” he said.
OpenAI said it encourages the general public to share their feedback on the AI check feature. Ahmad said the company continues to talk with K-12 educators and those at the collegiate level and beyond, such as Harvard University and the Stanford Design School.
The company sees its role as “an educator to the educators,” according to Ahmad, in the sense that OpenAI wants to make them more “aware about the technologies and what they can be used for and what they should not be used for.”
“We’re not educators ourselves – we’re very aware of that – and so our goals are really to help equip teachers to deploy these models effectively in and out of the classroom,” Ahmad said. “That means giving them the language to speak about it, help them understand the capabilities and the limitations, and then secondarily through them, equip students to navigate the complexities that AI is already introducing in the world.”
|
There is no shortage of media coverage on the housing industry right now. Most of it points to rising prices and the lack of affordability. Missing from most of those stories are the escalating costs and ever increasing requirements to build homes.
There are hundreds of inputs into the cost of a home, including land, labor, regulation, and materials. However, in the age of climate crises, there is more focus on finding the perfect, protected land, choosing the right materials, and following regulations that hopefully are written to keep the home and its owner safe when a climate event strikes.
While all of that sounds incredibly rational, pressure is on builders to lower costs. The National Association of Home Builders doesn’t track average construction cost increases for housing, but it does publish average construction values.
The group’s chief economist Rob Dietz shared with me that US Census permit data shows that the average construction value, which does not include land, increased 78% since 2015 – going from $166,276 to $295,965 in 2021.
Dietz added that values have been increasing due to rising regulatory costs, rising material costs, limited lot availability, and skilled labor shortages among other factors.
“Moreover, it is an average, so if entry level homes are simply not built, it rises as an average,” he said. “And that has happened.”
Wildfires are just one climate event adding pressure to the housing industry. USA Today reported that in 2022 there were 65,000 wildfires in the US, adding up to more than 7 million burned acres.
Nonprofit research organization First Street Foundation reports that more than 20 million properties across the US are threatened by at least “moderate” wildfire risk, or have up to a 6% chance of being in a blaze at some point in the life of a 30-year mortgage.
During these fires, homes are destroyed, and at the same time, building codes are revised and become more complicated to navigate. Plus, surrounding land becomes more expensive, all adding to the costs to build again.
PolicyGenius reported on the risks in the most fire-prone states and what those risks mean. For instance, Colorado has 2.2 million homes, and around 17% of them are at risk. In 2021, the state’s worst year for insurance-tracked wildfire losses, those losses added up to $450 million. At even higher risk is Idaho, where 26% of homes are at risk.
Even though this data shows the significant risks to homeowners, Colorado’s legislative efforts to require fire-resistant construction materials have not been successful. At the same time, the number of homes being built in wildfire-prone areas keeps growing; in Colorado, it more than doubled between 1990 and 2020.
There continues to be a snowball effect. The more wildfires that occur, the more land is susceptible to the burning, the more homes are at risk, the more costs increase for finding land and building homes.
The US Fire Administration shows that the wildland urban interface, or the zone between development and wildland, is growing by nearly two million acres per year. The agency also reports that homes in 70,000 communities, worth $1.3 trillion, now sit in the path of a potential fire event.
According to the Federal Emergency Management Agency, adopting and carrying out building codes is the most effective mitigation strategy. In 2019, the National Institute of Building Science published a report underlining this finding. The report showed that implementing the International Code Council’s 2015 International Wildland Urban Interface Code saved $4 for every $1 invested and that bringing existing buildings up to that code could provide up to $8 in benefits for each dollar spent.
Blazing Innovative Solutions
Former fire chief and now chief scientific officer at FireGuardia, Oscar Dominguez, is working to commercialize a fireproof plastic he invented in 2002 to bring the 100-year-old fire detection and suppression techniques used today up to date.
“Many insurance carriers are refusing coverage or won’t renew policies when homes are built in fire prone zones,” said Heather Towsley, president and chief executive officer at FireGuardia. “The demand for greater smart home construction technology could accelerate homeowner insurance incentives for using more sophisticated home fire suppression technology – much like water conservation and solar panel rebates.”
She shares that the FireGuardia solution can retrofit without driving up costs. The product can be applied to a number of construction materials to make them fireproof.
The company is focused on bringing the solution to scale with an affordable coating product, targeting between $50 and $60 per 5-gallon bucket where other solutions land between $180 and $600, and it hopes to be on the market later this year after an investment round.
The FireGuardia home fire suppression system integrates detection, suppression and a software tool. It also is nontoxic, sustainably sourced, and has low VOC output, taking out the poisonous materials that have historically been used in fire retardants, so there would be no hazardous material to clean up post fire.
Towsley shared an example of the product’s performance: within the first 20 seconds of exposure to fire, a piece of Kevlar rose to 360 degrees Fahrenheit, while a FireGuardia-coated piece of paper reached only 100 degrees in the same time.
Another similar solution is from Singapore-based Fire Terminator. Judah Jay is the founder, inventor and scientist behind this plant-based, liquid technology that provides an aerodynamic shield on each molecule of a combustible material, like the wood and drywall used to build homes.
Jay’s technology comes from work on combustion research for aerospace applications that he did in the 1980s with Russian, Bulgarian and other Eastern European scientists.
“Once you introduce heat to our product, free radicals are produced that negate the combustion molecule that fuels the fire,” Jay said. “Without combustion, the fire cannot start or spread. That is how we can prevent and extinguish any fire. The higher the temperature, the more free radicals are produced, therefore, the better the performance of our product. Once the fire is extinguished, it can no longer be reignited.”
Jetfire Xin is the company’s business developer and is working on ways to commercialize the innovation across North America. Fire Terminator’s goal is to provide every homeowner with a home protection product. The product will be sold by the liter, retailing at $20. After mixing, one liter can cover 172 square feet.
In addition, treating wood with Fire Terminator makes it incombustible and protects it against insect infestations and mold. A coating process over the wood can also be done, which would substantially increase its resilience against fire damage.
Finally, Xin points out that homes and buildings that are equipped with sprinkler systems can add Fire Terminator into the water in the system to prevent a fire from spreading, putting it out quickly.
Home Design to Minimize Risk
California builder Connect Homes has been focused on thoughtful design meant to minimize the risk of fire damages. Its homes are designed without eaves, which prevents flying embers from blowing up into the attic and starting a fire. The roofs also have a specific rating to be effective against severe fire exposure.
Plus, the builder also sources non-combustible exterior sheathing and finishes for the most dangerous areas. Connect Homes selects dual-pane glass exterior doors and windows to reduce the chance of breakage that typically occurs due to the extreme heat of a wildfire.
Gordon Stott, co-founder of the home building company, underlines the value of creating defensible space with limited landscaping.
“For me, it’s that balance of knowing that lovely landscaping could turn into a liability,” he said. “Another feature of our prefab system is the extensive use of floor-to-ceiling glass. I’ve been impressed with how floor-to-ceiling glass can sometimes overcome limitations of more limited landscaping. Standing in a modern house, feeling connected to the outdoors often still feels pretty great, even with limited landscaping and if the plant action is far away.”
The bottom line is that simply building and rebuilding isn’t the answer, and no single solution can live on an island. There has to be industry-wide collaboration on the right regulations and the most innovative designs and products, along with ways to reduce the costs of bringing these solutions to reality.
|
So you can read. But how?
Tennessee Gov. Bill Lee (R) submitted a bill to his state’s legislature this year that would command all Tennessee school districts to rely on phonics for reading in kindergarten through third grade. More than 30 states and D.C. have taken this approach, instituting various degrees of phonics instruction on their turf. Yet teachers unions in many places have been resistant, and some politicians are on their side.
The so-called reading wars have been raging for decades now, sometimes pitting teachers against publishers or publishers against academicians — and also sometimes, as too many things do these days, pitting progressives against conservatives or Democrats against Republicans. That’s unfortunate, because — as perhaps too few things do these days — the debate over how best to teach children to read lends itself to a conclusive answer. That’s phonics.
In phonics, students learn a letter or a pair of letters at a time.
That’s how most Americans learned to read. Slowly, letters add up to words.
Eventually, through a process called “orthographic mapping,” some words will lodge themselves in a child’s memory so they’ll know them on sight. And it turns out the most efficient and effective route to this mapping is linking sounds, letter by letter, to written words. Our brains light up in the right places when we do it.
What’s more, knowing the sounds “a,” “m,” “n” and every vowel team and consonant blend on the long journey to “z” will eventually allow a young reader to decode any word, even when they don’t recognize it.
Not everyone, however, is sold.
Phonics isn’t new — it dates to at least the 19th century. What’s newer is the “whole language” approach to reading. The idea is to teach words rather than letters. It was persuasive in the mid-20th century, when “Dick and Jane” books replaced phonics-based McGuffey Readers.
In the whole-language approach, students are shown simple sentences and learn by logical association.
They learn entire words at a time.
But some students just memorize the narrow set of words in their books and exercises.
In the more modern version of this approach, heavily reliant on what’s known as the “three-cueing system,” students are essentially encouraged to guess words: Does it make sense? Does it sound right? Does it look right?
Because this includes a “look right” component, there’s some element of phonics involved — hence the marketing of this teaching strategy as “balanced literacy.” But too heavy an emphasis on the “make sense” part of the equation, combined with many of those helpful pictures, means some children can get by without sounding out anything. This approach breaks down when the words become longer, less familiar and when the pictures disappear.
About 40 percent of students will learn to read no matter what. They’ll manage to sound words out without systematic phonics instruction, or without any phonics instruction at all. That’s part of why the whole-language approach looks, sometimes, like it works. But research shows that the children who struggle most aren’t likely to stop struggling unless they’re taught to sound words out — unless they’re taught to read.
In the 1960s, the Carnegie Corporation of New York released a comprehensive literature review that emphasized the importance of phonics in reading instruction. The U.S. Education Department and the National Institute of Child Health and Human Development convened a National Reading Panel that came to the same conclusion in 2000.
Recent numbers bear this out: The “Mississippi miracle” saw the state vault from 49th in fourth-graders’ reading proficiency test scores to 29th in a mere six years, after implementing phonics-based curriculums. Meanwhile, reading scores nationwide are dropping, and only about one-third of fourth- and eighth-graders across the country are proficient at reading.
So why do so many teachers refuse to adopt methods that work — and hold fast to those that don’t?
Many of those most devoted in recent decades to balanced literacy see phonics as, well, boring: “drill and kill,” as some put it. Especially in schools with fewer resources, the chances instructors will be skilled enough to bring these lessons to life might be slim. The thinking goes that kids won’t improve at reading if they don’t enjoy reading, and that to enjoy reading the focus should be on understanding the story a book is trying to tell rather than on getting each and every word exactly right. Who cares, for example, if a student says “puppy” instead of “dog?”
Certainly kids can get bored laboring all day over cats and rats who can’t do much more than be fat until the students have progressed to more challenging combinations of letters. And sounding out words can only take a student so far, if they have no idea what any of those words mean. Reading — really reading — requires myriad skills, starting with word recognition but reaching to background knowledge, vocabulary, syntax and semantics and eventually coping with irony, metaphors, genres and themes.
Some of these skills might come more naturally to students growing up in households with, say, college- or high-school-educated parents. They’ll definitely come more naturally to students growing up in English-speaking households. The students whose homes infuse them with less background knowledge, vocabulary and beyond than their peers will most need their schools to step in and provide it.
But balanced literacy isn’t really balanced — phonics instruction is usually sprinkled here and there rather than instituted systematically in the manner that’s required for students actually to benefit from it. And three-cueing methods sometimes teach students hacks. For those who don’t immediately catch on to sounding out words, those hacks can discourage them from ever learning how.
Recognizing that students will bring a range of vocabulary and experience to the classroom is important, but that doesn’t negate the reality that phonics is essential, because learning a new word starts with sounding out what the word is and because unspooling a good metaphor requires drinking in an entire sentence.
Parents and advocates are understandably squeamish about government dictates involving so intimate and traditionally local a matter as education — particularly when ideology enters the equation. School boards and other bodies closer to the ground are the ideal places for these decisions to happen. But they should happen. The techniques that will help students master “Bob Books” so someday they might make it to Robert Wright books aren’t a question of ideology. They’re a question of science.
Kids should absolutely learn to love to read. First, though, they need to learn to read.
Editorials represent the views of The Post as an institution, as determined through debate among members of the Editorial Board, based in the Opinions section and separate from the newsroom.
Members of the Editorial Board and areas of focus: Opinion Editor David Shipley; Deputy Opinion Editor Karen Tumulty; Associate Opinion Editor Stephen Stromberg (national politics and policy, legal affairs, energy, the environment, health care); Lee Hockstader (European affairs, based in Paris); David E. Hoffman (global public health); James Hohmann (domestic policy and electoral politics, including the White House, Congress and governors); Charles Lane (foreign affairs, national security, international economics); Heather Long (economics); Associate Editor Ruth Marcus; and Molly Roberts (technology and society).
What the Supreme Court's 'Sackett' ruling could mean for Kentucky's dwindling wetlands
A U.S. Supreme Court ruling on a long-running environmental case that began in the Idaho panhandle strips protections for wetlands in Kentucky and across the nation.
The court's 5-4 ruling on Thursday sided with the Sackett family, who sued the U.S. Environmental Protection Agency more than a decade ago after the agency blocked them, under provisions of the Clean Water Act, from developing their property.
Some water advocates were expecting the court to loosen wetland protections, but Thursday's decision cut protections more than expected, according to Michael Washburn, executive director of Kentucky Waterways Alliance.
"Without exaggeration, it's calamitous," he said. Some advocacy groups estimated tens of millions of wetland acres lost protections due to the decision.
What does the high court's ruling mean for Kentucky?
What are wetlands?
Generally, wetlands are areas of land covered with water or saturated for some or all of the year. Marshes are an example of this habitat.
Boasting biodiversity on par with rainforests and coral reefs, wetlands provide critical habitat for animals like waterfowl, and are a hotbed for plant life.
Every year, wetlands' natural ability to filter runoff and help absorb floodwaters during rain events saves the state millions of dollars. They also help sequester carbon.
Deer, frog, salamander and bat species are just some of the wildlife reliant on Kentucky's wetlands.
Where are Kentucky's wetlands?
The most recent catalog of Kentucky wetlands, reported in 1997, estimated the state has lost more than 80% of its original wetlands — more than a million acres of crucial habitat destroyed.
Only a few hundred thousand acres of wetlands are left in the state, Washburn said. The majority are in the western half of the state.
Efforts there by the Nature Conservancy, the Natural Resources Conservation Service and local landowners have sought to restore the natural hydrology of thousands of acres of wetlands, particularly in the Jackson Purchase region.
How does Sackett v. EPA impact Kentucky?
Wetlands in Kentucky are at the mercy of the federal Clean Water Act and its definitions, with nothing on the books at the state level, a report from the Tulane Institute on Water Resources Law and Policy found.
And when it comes to water quality, Kentucky state law prohibits an agency "from imposing any standards or conditions on permits that would otherwise not be required under federal law," according to the report.
In other words, Kentucky water protections rely on the Clean Water Act, which was eroded by Thursday's decision, and state law prevents Kentucky regulators from replacing the protections that were lost.
How are stakeholders reacting to the ruling?
Some advocacy groups worried the ruling would open the door for increased agricultural runoff, threatening drinking water supplies with chemicals like nitrates, while others said it could represent an erosion of water protections as a whole.
The decision "flatly ignores the ecological and scientific underpinnings of the Clean Water Act," Washburn said.
U.S. Sen. Mitch McConnell celebrated the ruling as "long-awaited relief and much-needed clarity" for businesses and landowners, who he said were burdened by the Biden administration's wetland protections announced in December.
The Republican leader also cited complaints from leadership of the Kentucky Farm Bureau and Kentucky Corn Growers Association over the Biden administration's wetland protections in a Courier Journal op-ed last month.
Calling Thursday's decision a win for businesses is "short-sighted," Washburn said, as agriculture and other industries across Kentucky rely on clean water for their operations.
"I think we're shooting ourselves in the foot if we say that you can have a win for small businesses and homeowners ... at the expense of a healthy environment."
Connor Giffin is an environmental reporter for The Courier Journal and a corps member with Report for America, a national service program that places journalists in local newsrooms to report on under-covered issues. The program funds up to half of corps members’ salaries, but requires a portion also be raised through local community fundraising. To support local environmental reporting in Kentucky, tax-deductible donations can be made at courier-journal.com/RFA.
Learn more about RFA at reportforamerica.org. Reach Connor directly at <email-pii> or on Twitter @byconnorgiffin.
When most animals move into a new area, they don’t significantly change it. Beavers, on the other hand, leave a mark on the landscape that you can sometimes see from space.
Swarming ponds, building dams and expanding waterways, beavers are moving farther and farther into the Arctic, and are changing what the region looks like.
“We think of these beaver ponds like oases in the Arctic, oases of warmth, biodiversity, permafrost thaw,” said Ken Tape, an ecologist at the University of Alaska Fairbanks. “We used to think Arctic streams were these little free-flowing things about as wide as my office. And it turns out that once beavers get involved, that’s not what Arctic streams look like. They look like wetlands because of all these beaver dams.”
Tape — together with Benjamin Jones, research assistant professor at the Water and Environmental Research Center at UAF; biology doctoral student Sebastian Zavoico; and Northwest Alaska writer Seth Kantner — traveled by snowmachine from Nome to Kotzebue in recent weeks to study how beavers are changing the landscape.
While the implications of beavers’ increasing presence aren’t yet clear, several Northwest Alaska residents welcome the change. Beavers, they say, provide an additional food source for locals and create a more diverse environment.
“Unlike other animals in the Arctic that sort of migrate or move with the seasons, beavers are very reliable,” Kantner said. “They are almost like money in the bank: If you ever get hungry or we have starvation or need furs and need food, they’re an incredible resource.”
The number of beaver ponds in the Alaska Arctic doubled between 2003 and 2017, with the animals moving into tundra regions farther north, Tape said. The data comes from the analysis of ponds through satellite imagery and aerial photography.
Stream by stream, the population is spreading from forested areas in the Interior into the Northwest Arctic. In the 1970s and ‘80s, beavers started to occupy the Nome area, where there’s a lot of groundwater and shrubby vegetation available year-round. In the 1990s and early 2000s, they started to move to Kotzebue.
Kantner said that at Baldwin Peninsula behind Kotzebue, there were almost no beavers 30 to 40 years ago, and “now, every little puddle seems to have a beaver lodge on it.” Kotzebue hunter and trapper Lance Kramer agreed.
“We first started noticing beavers back around 1995 behind Kotzebue,” said Kramer, who is familiar with most of the lakes and the landscape around Kotzebue. “Every year there’s just more and more and more from those three lodges in 1995, now there’s over 65 lodges today behind town.”
Following the beavers
To understand how beavers are changing the environment they’re moving into, researchers spent nine days traveling more than 400 miles on snowmachines earlier this month, visiting beaver sites on the Seward and Baldwin peninsulas. As part of a project funded by the National Science Foundation, they took water samples, measured snow depth and ice thickness and noted whether there’s unfrozen water at the bottom of ponds, Tape said.
“When we’re looking at how they change the landscape, we don’t actually have to see the beaver, which is kind of funny and slightly disappointing,” Tape said. “We were not looking so much at the behavior of beavers, but we’re looking at their impacts.”
Navigating in whiteout conditions, transporting heavy scientific equipment, looking for unfrozen water, drilling through 1 1/2 meters of ice — those were some of the challenges they faced during the trip, Tape said.
“The blizzard was our biggest challenge by far — having to hunker down and sit tight for a day, which we’re not very good at,” Tape said. “We’re eager to get out there and make measurements.”
After the blizzard, the group dug out from the snow and moved to their next stop: Serpentine Hot Springs. In addition to providing a repose for tired travelers, the hot springs, with unfrozen water at the surface, also entertained them with a long-awaited beaver sighting.
Researchers knew that the beavers have used springs and hot springs to gain a foothold in the Arctic because those are the first places where the habitat is really available, Tape said. So out of curiosity, they set up time-lapse cameras to observe them at Serpentine Hot Springs.
“Sure enough, here comes the beaver,” Tape said.
The group also met a local family at the hot springs and spoke to them about beavers and the region.
“You always learn a lot from talking to those knowledgeable folks — about beavers, about their history, about the history of the entire area,” Tape said.
Kantner, who was born along the Kobuk River, has his own experience to share. When he was growing up, there were already beavers in the area and people used them for meat and fur. He said he always had a lot of respect for beavers as animals.
“I personally always admired them because, you know, it lived very close to where I lived and had been hard-working all the time as I am too,” he said. “We sort of felt like we lived a little bit more like beavers because they were just a mile or two away in their latitudes with their hard work and hardscrabble life, and there we were, in our little sod igloo with a tunnel entrance.”
Changing the Arctic
When beavers make ponds, they alter the hydrology and tend to thaw the permafrost, Tape said. This can be a big issue in locations with a lot of ice-rich permafrost, like the Baldwin Peninsula and the northern part of the Seward Peninsula, though for now, beavers are actually occupying a relatively small part of the Arctic, Tape said.
“It’s not that every single beaver pond is thawing permafrost, but a lot of them do,” Tape said. “We think that they’re accelerating climate change. Is it a huge deal? Not clear right now.”
Kantner pointed out that of all the causes for the warming Arctic, an increased beaver presence might not be the biggest. “The land is definitely melting but I’m not about to blame that on beaver,” he said.
How the increased beaver population is affecting or will affect existing wildlife is an open question. Overall, scientists predict that the oases created by beaver activity will lead to more biological production in the water and on land, with more organisms thriving and shrubs growing, Tape said.
Some believe that beaver dams can negatively affect fish, Tape said — for example, by blocking a small stream and preventing fish from migrating. But warmer water can also create a better spawning environment for species like salmon, Tape added.
Kramer, who traps beavers from mid-November to the end of December, said that while beavers can pose a potential danger to sources of drinking water for residents, the increased presence of the animals is actually good news. Beavers expand the water system on the Baldwin Peninsula, improving the conditions for fish and insects, which in turn creates more food for minks, otters and martens.
Beavers, he said, “really enhance our area by making ponds deeper, making waterways deeper through their dams to their lodges. If it wasn’t for a beaver dam, so many ponds wouldn’t exist ... They help everything survive better, everything in our country.”
Some Northwest Alaska hunters rely more and more on beaver meat, especially now that caribou are scarce, Kantner and Kramer said.
In Kramer’s experience, beaver meat is also one of the best baits.
“Everything of the country likes to eat beaver,” he said. “They’re a very good critter to trap because they’re used so much. The hides, of course, I get them tanned and I sell them to local people here for hats and things like that. And then the carcasses, they are good to eat — a lot of protein, a lot of fat.
“I’m actually glad they’re here.”
Visitors to Brittany in the 18th and 19th centuries noted many beliefs surrounding the little folk of the region. This post continues to look at some of the more notable characteristics once attributed to a specific group of fairies, known as the Fairies of the Swells, in the local legends and folklore of northern Brittany.
Fairies who ask to be godmothers to a mortal child are found in several old French tales and a rather peculiar example was also once noted in northern Brittany. Here, a tale relates that a fairy became the godmother of a human baby and was so besotted by him that she cast a powerful spell to ensure that the baby would not grow until he made her laugh. After seven years, the child, though healthy, remained as small as on the day he was born. One day, while riding his pet rat to the river, he was thrown off and landed awkwardly in front of the fairy who laughed uncontrollably at the strange sight; the curse was broken and the child immediately assumed the size of all other seven-year-olds.
Several legends show that the fairies did not guard their good fortune too closely but like all good neighbours were prepared to lend their prized possessions to those in genuine need. They lent their oxen to those neighbours who politely asked for them but they imposed certain conditions; most commonly they demanded that their beasts not be made to work before sunrise or after sunset. If the fairies’ animals made but a single furrow after dusk, they immediately burst and the fairies came to curse the imprudent ploughmen.
It was said that the fairies often kept their cattle stabled in a corner of their vast cave dwellings and that each morning a child from the nearest farm came to fetch them and took them to graze in the meadows. The cows were returned each evening but never once did the young cowherd see the fairies she diligently served but every month, a small cloth bag hung from the end of a rope was found containing the silver owed to her for their care. Likewise, the fairies of Saint-Agnan, who needed milk and butter for their cakes, had cows which were found every morning grazing in the midst of the communal herd and who, at night, suddenly disappeared. On the last day of the grazing season, one of them carried, suspended from its horn, a small bag containing the sum owed to the cowherd.
As might be expected, not all fairies were benevolent, some were even reputed to be evil and were known as such, while others were simply viewed as mischievous. Those mortals who had offended the fairies were sometimes transformed beyond human recognition. For instance, an enormous oak tree near Saint-Pôan was said to have once been a man changed into a tree by a fairy’s curse, while another legend tells that the lumpfish was once a fisherman. One evening, when walking along the seashore at nightfall, a fisherman heard a voice saying that the feast of the queen of the fairies would take place on the next day and that any fisherman who lifted his nets that day would be punished. The man ignored the warning and when he touched his nets, a voice cried out to him: “Unbeliever, you are the cursed of the fairies; be changed into a fish.”
Like the other little folk of Brittany, the Fairies of the Swells loved to dance, especially the circular dance. Traces of their nocturnal dancing were recognised in the morning light by large circles on the ground where the grass seemed greener or in the strange marks in the sand of the most isolated coves. The fairies did not welcome uninvited guests at their soirées; those mortals curious enough to spy on them were almost immediately bewitched. The fairies along the Emerald Coast west of Saint-Malo once invited some hapless men into their moonlit dance and suddenly turned them into cats. Locals reported seeing them wandering on the cliffs on windy evenings, wailing in distress. To regain their human form, they had only to weave, for the fairies, mantles of gold and silver from the grains of sand on the seashore.
Another glimpse into the spiteful nature of some fairies is afforded in a tale about two old maids long tormented by them in their small cottage by the sea. After weeks of anguish and many vain attempts to combat the fairies with charms and prayers, magical amulets and holy rosaries, the two ladies resolved to fortify their home with holy water, copious amounts of which they sprinkled all over the house, including the doors, windows and fireplace, before retiring to bed. At midnight, the fairies appeared but found themselves unable to enter the house because the holy water burned them harshly. A few minutes later, they were lifting the earthen sods from the roof and throwing them down the chimney, and, walking carefully on these new lawns which they threw out in front of them, they reached the old women’s beds and began to whip them, singing in chorus: “All is not blessed! All is not blessed!”
In several parts of Brittany, it was said that fairies visited people’s homes by means of the chimney, particularly to see if any of the household dared to continue their spinning on certain auspicious days. Around Essé, it was also believed that this was the means fairies used to gain access to a house when they stole the children.
A perhaps more unsettling tale highlighting the dangers of antagonising the fairies lies in the jagged jumble of rocks and boulders that litter the base of the cliffs around Cap Fréhel. Local legend tells that a good house once stood upon the ground now covered by these rocks; home to a family that had repeatedly bothered the fairies of the neighbouring caves. To avenge their perceived offence, the fairies brought down these massive rocks and crushed the house, on the very day when the wedding of the eldest son was being celebrated.
Further west, the debris of the shore, specifically sand dunes, was at the heart of another fairy-related legend. Around the village of Portsall it was said that some fairies, having committed a murder, were condemned to fetch sand from the sea and to count the grains until they had arrived at a figure which the imagination could hardly conceive; the sand dunes that lie between Portsall and the estuary of the Aber represent the piles of sand that each fairy had to count.
Returning some miles east, the cave known as Toul ar Groac’h (Fairy’s Hole) near Loguivy was reputed to be home to a group of fairies who carried a most sinister reputation. As late as the middle of the 19th century, local fishermen preferred to sleep under their boats for the night rather than risk walking home near the fairy’s cave. Interestingly, it was said that the power of these fairies did not extend over women; if those of Loguivy came to meet their men at the end of a day’s fishing, they had nothing to fear as they passed the Toul ar Groac’h.
This area seems to have once been home to many groups of malevolent fairies as it was noted that around the nearby town of Tréguier, evil fairies once killed those who ventured onto the beach at night, while the salt workers of Crec’h Morvan feared the evil fairies that seemed to protect those of neighbouring Buguelès whose salt was reputedly of better quality. If the fairies of this stretch of coast were not evil then perhaps some enterprising smugglers spread such tales in order to keep prying eyes away from the beaches at night?
Further east, around the port of Saint-Cast-le-Guildo, the Pointe de l’Isle was said to be the domain of fairies who whipped human trespassers with long strips of seaweed. Some 12km (8 miles) across the Bay of Saint-Malo lies the Goule-aux-Fées, just north of the resort of Dinard. Here, popular tradition warned that those people who dared to venture on the clifftops at night risked being seized by a ferocious whirlwind that would drag them down into the fairy cave below, where they would be devoured by the evil fairies chained there.
One of the key characteristics of the fairies was their industriousness, even if their activities were accomplished beyond the sight of mortal eyes. The fairies were reported to have visited their human neighbours at night, knocking on doors to ask for the loan of ploughs and horses. It was believed necessary to agree to any request made by the fairies for fear of exposing the household to any evil spells. The fairies were said to have been very careful with whatever items they borrowed and would even return any damaged items fully repaired.
Despite their diminutive size, fairies were attributed prodigious strength as evidenced by certain menhirs which were said to be discarded spindles which they had once used to spin wool. In their aprons they could transport massive stones, such as those that were used to create the world’s largest surviving dolmen, La Roche-aux-Fées (Rock of the Fairies), near Essé. Constructed from 32 upright stones with nine roof slabs, this structure is about 20 metres long by five metres wide and at its highest point is over four metres high. These monumental stones were likely quarried about 4km (2.5 miles) away and dragged to this site some 5,000 years ago but local legend long ascribed the building of this dolmen to the fairies who completed the work in a single night.
A rather touching local legend tells that the structure was built by the fairies to shelter the souls of the just but that these fairies disappeared with the retreat of the forest. Since then, the whistling of the wind between the stones was held to be the lamentations of souls in pain no longer visited by the fairies.
Some 13km (8 miles) away at Saulnières stands another megalith said to have been built by the fairies, La Table aux Fées (Table of the Fairies), to serve as a table where they could eat and rest during their exertions at La Roche-aux-Fées. The presence of many of the neighbourhood menhirs was once explained away as discarded building stones; at the precise moment the dolmen was completed, the fairies carrying their now surplus stones simply dropped them where they stood. It was also said that the fairies had placed a spell of confoundment upon the monument so that no count of the number of stones would consistently tally.
Other significant landmarks were once credited to the skilled craftsmanship of the fairies, such as the 14th century Cesson Tower in Saint-Brieuc and the elaborate portal of the chapel of Saint James in Saint-Alban, which is otherwise attributed to the Knights Templar in the 13th century. This was about the time that the castle of Montauban de Bretagne, just 49km (30 miles) away, was built, although local lore attributes its construction to the fairies, who are also reputed to have sown the forest that surrounds it in order to give it protection.
It was traditionally believed that during the hours of darkness everyone possessed the capacity to see the fairies but during the day this privilege was only afforded to a very small number of people, such as gifted sorcerers and those who had rubbed their eyes with a magic ointment. Many stories tell that it was thanks to this mysterious ointment that the fairies could make themselves invisible or transform themselves.
A few cautionary tales highlight the dangers to mortals who believe that they can wield the magic of the fairies. One tells that, one evening, a fisherman from Saint-Jacut was walking home along the bottom of the cliffs when he saw several fairies talking animatedly together in a cave. Alas, he heard nothing of their discussion but did see them rub their eyes with some kind of ointment and immediately change shape before walking away from the cave like ordinary women.
When he thought the fairies were far away, the fisherman entered the cave and saw, on the wall of the rock which formed part of the cave, a remnant of the ointment with which they had rubbed their eyes. He scraped a little with his fingertips and smeared it around his left eye, to see if he could, by this means, acquire the magic of the fairies and discover their hidden treasures.
A few days later, a ragged and dirty beggar came to the village where she pleaded for alms from door to door but the fisherman immediately recognized her as one of the fairies he had seen in the cave; he noticed that she was casting spells on certain houses and that she was looking carefully inside them as if she had wanted to see if there was something worth stealing within.
Sometime later, at the Ploubalay fair, the fisherman noted the presence of several fairies despite their various disguises; some masqueraded as beggars, others displayed curiosities or held games of chance in which the country people were taken like fools, one even appeared in the guise of a fortune-teller. He was careful not to imitate his companions and to play the fairies’ games but he could see that the fairies were worried; vaguely sensing perhaps that someone was aware of them. Delighted with the knowledge that he held the upper hand, the fisherman laughed as he wandered among the crowd. Passing by a tent where several fairies paraded on a platform, he quickly realised that he too had been unmasked and that they were looking at him irritably. He wanted to run away but swift as an arrow, one of the fairies used the wand in her hand to burst the eye which their ointment had made clairvoyant.
A similar tale was noted some 30km (18 miles) south, near Gouray, in 1881: A human midwife who delivered a fairy baby carelessly allowed some of the fairy ointment to get onto one of her own eyes. The eye at once became clairvoyant, so that she beheld the fairies in their true nature. A few days later, this midwife happened to see a fairy in the act of stealing and admonished her for it. The fairy quickly asked the midwife with which eye she beheld her and when the midwife indicated which one it was, the fairy immediately plucked it out.
Just 33km (20 miles) east, a local legend from near Dinard tells that a midwife of the town was once called out to attend a mother in labour in a cave on the Rance estuary. Having successfully delivered the baby, the midwife was given a jar of ointment with which to massage the newborn, along with strict instructions to avoid rubbing it around her own eyes. Unfortunately, she was unable to resist the temptation to do so and was startled to find everything around her changed; she now saw the dark cave was as beautiful as the finest castle and that the new mother and her friends were actually fairies dressed like princesses. Careful not to betray any surprise, the midwife completed her tasks and returned home well paid. Sometime later, as she could, thanks to the magic ointment, see the fairies that were invisible to others, she saw one flying and could not help exclaiming aloud. Realising she had been seen, the fairy swooped down and tore out the offending eye.
The invisibility charms woven by the fairies seem to have extended beyond masking their appearance and that of their dwellings. According to popular legend in Plévenon, the fairies of Cap Fréhel used to wash their clothes in a pool on Fréhel moor and spread their laundry to dry in the surrounding meadows. Their linen was reputedly the whitest that one could ever see and whoever could get near it without moving their eyelids would have had permission to take it but none of those who tried ever succeeded, for as soon as they moved their eyelids the linen became invisible.
In this region, fairies were renowned as skilled healers whose remedies were believed to contain compounds from plants that possessed yellow and blue flowers. Secret, bewitched herbs that enjoyed the virtue of curing all diseases were said to have been cultivated along the shorelines by the fairies who employed them to make the ointment which was used in many of their enchantments, although some tales say that the fairies also ate these herbs. Fairies were also said to feed on sylvies; a delicate plant whose downy seeds were sensitive enough to disperse at a fairy’s breath but highly toxic to humans. A fairy’s breath is usually lethal in Breton lore but there is a tale of an old leper on the Île-de-Groix visited one night by an old crone. Discovering him near death, the fairy recited some charms and breathed on the man’s sores, leaving him fully cured.
Most legends here agree that the fairies did not age and were immune to all sickness. However, they were believed susceptible to ailments and even death as soon as any salt was put into their mouths; a belief likely due to the association of blessed salt and the Christian baptismal ceremony. It was even said that all the fairies around Plévenon died at the same moment because a malicious boy, seeing a fairy asleep with her mouth open, threw a handful of salt into it.
About 24km (15 miles) to the east, along the Rance estuary, legends unique to this part of Brittany tell of fairies that appeared during storms and followed a queen who rode a boat fashioned from a nautilus shell, pulled by two large crayfish. It was said that she could command the winds and that she ordered the waves to return the corpses of the drowned. This fairy queen of the Rance sometimes visited the small island of Île Notre-Dame where she was seen landing one day by a young sailor who, having sighted her, quickly hid himself.
Captivated by the queen’s great beauty, the sailor noticed that she had fallen asleep and felt compelled to move closer so as to see her better. Standing over the sleeping queen, he was silently admiring when he was quickly surrounded by other fairies who wanted to throw him into the sea for his effrontery. The commotion awoke the queen who ordered her companions to do the lad no harm and to whom she addressed a few, sadly unknown, words before disappearing in a chariot drawn by butterflies.
The numerous legends of the fairies of the swells represent them as living as part of a family unit or wider community but there are a few notable exceptions. One is the Fairy of Puy who is reported to have lived in a cave popularly known as la Grotte-ès-Chiens (Dogs’ Cave) on the Rance estuary near Saint-Suliac. This fairy was said to emerge at sunset, being initially glimpsed as a white and indistinct vapour that seemed to dance over the ground before slowly evaporating to reveal a beautiful woman whose dress shone with all the colours of the rainbow. She was seen walking on the seashore, sometimes sitting on the grass of the cliffs; a sad, solitary figure who fled at the sight of man.
Local legend tells that this fairy did not always cut such a forlorn figure for she was once sovereign of these lands; her voice commanded the winds and controlled the waves. Recognising her power, fishermen would offer their homage to her before setting out to sea and the mouth of her cave, guarded by a pack of invisible dogs, was always bedecked with garlands offered by the wives and loved ones of those at sea. In return for such devotion, the fairy delivered favourable winds and abundant fishing.
One day, some shepherds found, near the entrance of the cave, a young woman lying close to death. She told them that she had come to this place to wait for her fiancé but she had seen the fairy who told her that her fiancé was dead and that she herself would die soon. The shepherds took her to the village where the priest, having heard their story, quickly gathered his congregation and marched to confront the fairy. At the mouth of her cave, he summoned her to appear and exorcised her but nothing was seen and only an anguished cry heard. Returning from the cave, the people who had accompanied the priest found the dead body of the young fiancé.
Since that fateful day, the Fairy of Puy does not often show herself; she flees from the sight of man because she no longer has any power over him. Her appearance now is said to announce some imminent misfortune and any bloody traces found on the beach are a bitter reminder of her rejection and her fall from benevolent protector to spiteful destroyer. Perhaps the Puy fairy’s journey was typical of others who were more thoroughly lost to the mists of time. It is not too fanciful to see in her some kind of pre-Christian sea deity or sacred oracle that, over time, was greatly rationalised and transformed into just another devilish creature for the superstitiously minded imagination.
One interesting aspect of the legends involving the fairies of the swells is the paucity of any meaningfully direct association with water. Certainly, they had their homes close to the sea but unlike the korrigans who are frequently noted as frolicking in fountains and streams, there are no tales that mention these fairies swimming or bathing and only one in which a fairy is said to walk on water. Nor were they seemingly concerned with catching fish by natural or magical means, preferring to steal their oysters and fish from the catches landed by the fishermen. These fairies thus shared some attributes with other supernatural beings such as korrigans and mermaids but were viewed as a quite distinct, even unique, group.
Some Breton tales tell that the fairies transformed into moles in order to escape the Gospel or else that they were condemned to the darkness by God in punishment for having rejected the early saints. In southern Brittany, it was said that the Gulf of Morbihan was born from the abundant tears that the fairies shed when they were forced to leave Brittany; on this new sea, they threw their garlands which turned into little islands.
Legends surrounding the disappearance of the fairies of the swells are far more consistent than those surrounding Brittany’s other fairies; they left the country, all at once, during the course of a single night. They are said to have left for another country and several legends tell us that their destination was the island of Great Britain. While the exact date of their departure varied from commune to commune, most agreed that it was sometime around the beginning of the 19th century. Towards the end of that century, the Breton painter and author Paul Sébillot, who spent over two decades recording the folklore of the region, claimed to have met only two people who believed in the contemporary existence of fairies and who swore to having personally seen them.
There is no neat answer as to why the fairies left these lands but it is important to remember that primary education was being pushed in rural areas from the middle of the 19th century and was made compulsory in 1881. Young Breton children entering school were officially described as “like those of countries where civilization has not penetrated: savage, dirty and not understanding a word of the language”. Education was the State’s main tool in civilising the savages and “clods” of “the bush” in order to integrate them into national society and culture, specifically the culture of the city, of Paris; the war on superstition now began in earnest. Over time, children became almost as separated from the world of their grandparents as they were from the court of the King of Siam.
With the existence of the fairies attacked by both Church and State and with the communities that sustained such beliefs changing rapidly, it is little wonder that certainty in the living presence of fairies waned. Perhaps the character Peter Pan summed it up best when he said: “You see children know such a lot now. Soon they don’t believe in fairies and every time a child says ‘I don’t believe in fairies’ there is a fairy somewhere that falls down dead.”
I shall end this post on a more positive note because Breton legend assures us that the fairies will, one day, return to this land; perhaps at a time when the Angelus bell is no longer sounded, perhaps at some other time of their choosing. The fairies all left in one night and will likewise all return during the course of a single night in a century that is an odd number. Some people were convinced that the fairies would return in the nineteen hundreds and in the early part of that century, the people of Saint-Cast, seeing women in motorcars for the first time, thought that the fairies had indeed returned.
As might be expected, not all fairies were benevolent
|
, some were even reputed to be evil and were known as such, while others were simply viewed as mischievous. Those mortals who had offended the fairies were sometimes transformed beyond human recognition. For instance, an enormous oak tree near Saint-Pôan was said to have once been a man changed into a tree by a fairy’s curse, while another legend tells that the lumpfish was once a fisherman. One evening, when walking along the seashore at nightfall, a fisherman heard a voice saying that the feast of the queen of the fairies would take place on the next day and that any fisherman who lifted his nets that day would be punished. The man ignored the warning and when he touched his nets, a voice cried out to him: “Unbeliever, you are the cursed of the fairies; be changed into a fish.”
Like the other little folk of Brittany, the Fairies of the Swells loved to dance, especially the circular dance. Traces of their nocturnal dancing were recognised in the morning light by large circles on the ground where the grass seemed greener or in the strange marks in the sand of the most isolated coves. The fairies did not welcome uninvited guests at their soirées; those mortals curious enough to spy on them were almost immediately bewitched. The fairies along the Emerald Coast west of Saint-Malo once invited some hapless men into their moonlit dance and suddenly turned them into cats. Locals reported seeing them wandering on the cliffs on windy evenings, wailing in distress. To regain their human form, they had only to weave, for the fairies, mantles of gold and silver from the grains of sand on the seashore.
Another glimpse into the spiteful nature of some fairies is afforded in a tale about two old maids long tormented by them in their small cottage by the sea. After weeks of anguish and many vain attempts to combat the fairies with charms and prayers, magical amulets and holy rosaries, the two ladies resolved to fortify their home with holy water. Copious amounts of which they sprinkled all over the house, including the doors, windows and fireplace, before retiring to bed. At midnight, the fairies appeared but found themselves unable to enter the house because the holy water burned them harshly. A few minutes later, they were lifting the earthen sods from the roof and throwing them down the chimney, and, walking carefully on these new lawns which they threw out in front of them, they reached the old women’s beds and began to whip them, singing in chorus: “All is not blessed! All is not blessed!”
In several parts of Brittany, it was said that fairies visited people’s homes by means of the chimney, particularly to see if any of the household dared to continue their spinning on certain auspicious days. Around Essé, it was also believed that this was the means fairies used to gain access to a house when they stole the children.
A perhaps more unsettling tale highlighting the dangers of antagonising the fairies lies in the jagged jumble of rocks and boulders that litter the base of the cliffs around Cap Fréhel. Local legend tells that a good house once stood upon the ground now covered by these rocks; home to a family that had repeatedly bothered the fairies of the neighbouring caves. To avenge their perceived offence, the fairies brought down these massive rocks and crushed the house, on the very day when the wedding of the eldest son was being celebrated.
Further west, the debris of the shore, specifically sand dunes, were at the heart of another fairy-related legend. Around the village of Portsall it was said that some fairies, having committed a murder, were condemned to fetch sand from the sea and to count the grains until they had arrived at a figure which the imagination could hardly conceive; the sand dunes that lie between Portsall and the estuary of the Aber represent the piles of sand that each fairy had to count.
Returning some miles east, the cave known as Toul ar Groac’h (Fairy’s Hole) near Loguivy was reputed to be home to a group of fairies who carried a most sinister reputation. As late as the middle of the 19th century, local fishermen preferred to sleep under their boats for the night rather than risk walking home near the fairy’s cave. Interestingly, it was said that the power of these fairies did not extend over women; if those of Loguivy came to meet their men at the end of a day’s fishing, they had nothing to fear as they passed the Toul ar Groac’h.
This area seems to have once been home to many groups of malevolent fairies as it was noted that around the nearby town of Tréguier, evil fairies once killed those who ventured onto the beach at night, while the salt workers of Crec’h Morvan feared the evil fairies that seemed to protect those of neighbouring Buguelès whose salt was reputedly of better quality. If the fairies of this stretch of coast were not evil then perhaps some enterprising smugglers spread such tales in order to keep prying eyes away from the beaches at night?
Further east, around the port of Saint-Cast-le-Guildo, the Pointe de l’Isle was said to be the domain of fairies who whipped human trespassers with the long strips of seaweed. Some 12km (8 miles) across the Bay of Saint-Malo lies the Goule-aux-Fées, just north of the resort of Dinard. Here, popular tradition warned that those people who dared to venture on the clifftops at night risked being seized by a ferocious whirlwind that would drag them down into the fairy cave below, where they would be devoured by the evil fairies chained there.
One of the key characteristics of the fairies was their industriousness, even if their activities were accomplished beyond the sight of mortal eyes. The fairies were reported to have visited their human neighbours at night; knocking on doors asking for the loan of ploughs and horses. It was believed necessary to agree to any request made by the fairies for fear of exposing the household to any evil spells. The fairies were said to have been very careful with whatever items they borrowed and would even return any damaged items fully repaired.
Despite their diminutive size, fairies were attributed prodigious strength as evidenced by certain menhirs which were said to be discarded spindles which they had once used to spin wool. In their aprons they could transport massive stones, such as those that were used to create the world’s largest surviving dolmen, La Roche-aux-Fées (Rock of the Fairies), near Essé. Constructed from 32 upright stones with nine roof slabs, this structure is about 20 metres long by five metres wide and at its highest point is over four metres high. These monumental stones were likely quarried about 4km (2.5 miles) away and dragged to this site some 5,000 years ago but local legend long ascribed the building of this dolmen to the fairies who completed the work in a single night.
A rather touching local legend tells that the structure was built by the fairies to shelter the souls of the just but that these fairies disappeared with the retreat of the forest. Since then, the whistling of the wind between the stones was held to be the lamentations of souls in pain no longer visited by the fairies.
Some 13km (8 miles) away at Saulnières stands another megalith said to have been built by the fairies, La Table aux Fées (Table of the Fairies) to serve as a table where they could eat and rest during their exertions at La Roche-aux-Fées. The presence of many of the neighbourhood menhirs were once explained away as discarded building stones; at the precise moment the dolmen was completed, the fairies carrying their now surplus stones simply dropped them where they stood. It was also said that the fairies had placed a spell of confoundment upon the monument so that no count of the number of stones would consistently tally.
Other significant landmarks were once credited to the skilled craftsmanship of the fairies, such as the 14th century Cesson Tower in Saint-Brieuc and the elaborate portal of the chapel of Saint James in Saint-Alban which is otherwise accredited to the Knights Templar in the 13th century. This was about the time that the castle of Montauban de Bretagne, just 49km (30 miles) away, was built although local lore attributes its construction to the fairies who are also reputed to save sown the forest that surrounds it in order to give it protection.
It was traditionally believed that during the hours of darkness everyone possessed the capacity to see the fairies but during the day this privilege was only afforded to a very small number of people, such as gifted sorcerers and those who had rubbed their eyes with a magic ointment. Many stories tell that it was thanks to this mysterious ointment that the fairies could make themselves invisible or transform themselves.
A few cautionary tales highlight the dangers to mortals who believe that they can wield the magic of the fairies. One tells that, one evening, a fisherman from Saint-Jacut was walking home along the bottom of the cliffs when he saw several fairies talking animatedly together in a cave. Alas, he heard nothing of their discussion but did see them rub their eyes with some kind of ointment and immediately change shape before walking away from the cave like ordinary women.
When he thought the fairies were far away, the fisherman entered the cave and saw, on the wall of the rock which formed part of the cave, a remnant of the ointment with which they had rubbed their eyes. He scraped a little with his fingertips and smeared it around his left eye, to see if he could, by this means, acquire the magic of the fairies and discover their hidden treasures.
A few days later, a ragged and dirty beggar came to the village where she pleaded for alms from door to door but the fisherman immediately recognized her as one of the fairies he had seen in the cave; he noticed that she was casting spells on certain houses and that she was looking carefully inside them as if she had wanted to see if there was something worth stealing within.
Sometime later, at the Ploubalay fair, the fisherman noted the presence of several fairies despite their various disguises; some masqueraded as beggars, others displayed curiosities or held games of chance in which the country people were taken for fools, and one even appeared in the guise of a fortune-teller. Unlike his companions, he was careful not to play the fairies’ games, but he could see that the fairies were worried, vaguely sensing perhaps that someone was aware of them. Delighted with the knowledge that he held the upper hand, the fisherman laughed as he wandered among the crowd. Passing by a tent where several fairies paraded on a platform, he quickly realised that he too had been unmasked and that they were looking at him irritably. He wanted to run away but, swift as an arrow, one of the fairies used the wand in her hand to burst the eye which their ointment had made clairvoyant.
A similar tale was noted some 30km (18 miles) south, near Gouray, in 1881: A human midwife who delivered a fairy baby carelessly allowed some of the fairy ointment to get onto one of her own eyes. The eye at once became clairvoyant, so that she beheld the fairies in their true nature. A few days later, this midwife happened to see a fairy in the act of stealing and admonished her for it. The fairy quickly asked the midwife with which eye she beheld her and when the midwife indicated which one it was, the fairy immediately plucked it out.
Just 33km (20 miles) east, a local legend from near Dinard tells that a midwife of the town was once called out to attend a mother in labour in a cave on the Rance estuary. Having successfully delivered the baby, the midwife was given a jar of ointment with which to massage the newborn, along with strict instructions to avoid rubbing it around her own eyes. Unfortunately, she was unable to resist the temptation to do so and was startled to find everything around her changed; she now saw the dark cave was as beautiful as the finest castle and that the new mother and her friends were actually fairies dressed like princesses. Careful not to betray any surprise, the midwife completed her tasks and returned home well paid. Sometime later, as she could, thanks to the magic ointment, see the fairies that were invisible to others, she saw one flying and could not help exclaiming aloud. Realising she had been seen, the fairy swooped down and tore out the offending eye.
The invisibility charms woven by the fairies seem to have extended beyond masking their appearance and that of their dwellings. According to popular legend in Plévenon, the fairies of Cap Fréhel used to wash their clothes in a pool on Fréhel moor and spread their laundry to dry in the surrounding meadows. Their linen was reputedly the whitest that one could ever see and whoever could get near it without moving their eyelids would have had permission to take it but none of those who tried ever succeeded, for as soon as they moved their eyelids the linen became invisible.
In this region, fairies were renowned as skilled healers whose remedies were believed to contain compounds from plants that possessed yellow and blue flowers. Secret, bewitched herbs that enjoyed the virtue of curing all diseases were said to have been cultivated along the shorelines by the fairies who employed them to make the ointment which was used in many of their enchantments, although some tales say that the fairies also ate these herbs. Fairies were also said to feed on sylvies; a delicate plant whose downy seeds were sensitive enough to disperse at a fairy’s breath but highly toxic to humans. A fairy’s breath is usually lethal in Breton lore but there is a tale of an old leper on the Île-de-Groix visited one night by an old crone. Discovering him near death, the fairy recited some charms and breathed on the man’s sores, leaving him fully cured.
Most legends here agree that the fairies did not age and were immune to all sickness. However, they were believed susceptible to ailments and even death as soon as any salt was put into their mouths; a belief likely due to the association of blessed salt and the Christian baptismal ceremony. It was even said that all the fairies around Plévenon died at the same moment because a malicious boy, seeing a fairy asleep with her mouth open, threw a handful of salt into it.
About 24km (15 miles) to the east, along the Rance estuary, legends unique to this part of Brittany tell of fairies that appeared during storms and followed a queen who rode a boat fashioned from a nautilus shell, pulled by two large crayfish. It was said that she could command the winds and that she ordered the waves to return the corpses of the drowned. This fairy queen of the Rance sometimes visited the small island of Île Notre-Dame where she was seen landing one day by a young sailor who, having sighted her, quickly hid himself.
Captivated by the queen’s great beauty, the sailor noticed that she had fallen asleep and felt compelled to move closer so as to see her better. Standing over the sleeping queen, he was silently admiring when he was quickly surrounded by other fairies who wanted to throw him into the sea for his effrontery. The commotion awoke the queen who ordered her companions to do the lad no harm and to whom she addressed a few, sadly unknown, words before disappearing in a chariot drawn by butterflies.
The numerous legends of the fairies of the swells represent them as living as part of a family unit or wider community but there are a few notable exceptions. One is the Fairy of Puy who is reported to have lived in a cave popularly known as la Grotte-ès-Chiens (Dogs’ Cave) on the Rance estuary near Saint-Suliac. This fairy was said to emerge at sunset, being initially glimpsed as a white and indistinct vapour that seemed to dance over the ground before slowly evaporating to reveal a beautiful woman whose dress shone with all the colours of the rainbow. She was seen walking on the seashore, sometimes sitting on the grass of the cliffs; a sad, solitary figure who fled at the sight of man.
Local legend tells that this fairy did not always cut such a forlorn figure for she was once sovereign of these lands; her voice commanded the winds and controlled the waves
|
Baby babble isn't just goo goo! And hearing 2 languages is better than one
In English, we adults like to say babies go "goo goo ga ga." In French, it's "areau areau." And in Mandarin, it's "ya ya."
So do babies really babble with different sounds in different parts of the world? And what does that tell us about how babies learn to speak?
A few years ago I was on a train, seated across a table from a French woman traveling with her toddler.
His mom and I hit it off and spoke in English for a few hours. Meanwhile, this little boy had a lot to say. I couldn't understand it, but he clearly had opinions about his books, his snacks, maybe how cool it was that his hands were attached to his arms – all in what I just assumed was French.
I said to his mom completely earnestly, "So, what's he saying?" She paused for a while and then she went, "Nothing. He's a baby."
Well, it turns out that baby babble has a deeper meaning.
I called up Megha Sundara, the linguistics department chair at UCLA, whose lab is unlike any lab you've ever heard of. For starters, there's a castle in it. Well, it's a sound booth, but it looks like a castle.
"So the thing about sound booths is they are intimidating spaces," she says. So her student with a background in set design built a castle around it. "And we've never had a problem with the baby going in. They just waltz right in."
Sundara studies how babies listen before they start talking and how they eventually learn language.
When babies first start babbling at around 6 months old, they all make the same sounds – even deaf babies. Then they start to drift toward the language around them.
Babies, it turns out, "are very good at imitating the rhythm and the intonation of the language they're hearing," Sundara says.
Sundara also studies how babies respond to hearing different languages. In bilingual households, babies switch that rhythm and melody in their babble depending on what language they're responding to. In monolingual households, they don't.
She led an experiment showing that those babble patterns can change, however.
At the beginning of the experiment, each participating baby is 9 or 10 months old and can only babble in what sounds like English. Then, they let the babies spend time with a research assistant who speaks Spanish. "And in these sessions, you are just reading to them, playing with them in Spanish. That's all that's happening."
The babies get about five hours of exposure to the second language spread out over four weeks.
Then, when her team gives the babies another chance to interact with a Spanish speaker, they are able to change their babbling to match the Spanish sounds.
Babies have this special skill for picking up language thanks to something called enhanced neuroplasticity. It basically means their brains are superadaptive.
"When babies are born, they can hear and distinguish all of the sounds and all the languages in the world," says Jeannette Reiff, associate director of clinical issues in speech-language pathology with the American Speech-Language-Hearing Association.
So a baby who hears a couple of languages regularly would be able to tune into both languages. And there are, of course, cognitive and social benefits to being bilingual.
And what about the idea that trying to learn two languages at once would confuse babies? That has since been disproven, Reiff says. But the question still comes up.
"I work with many families and I have this conversation a lot with them," she says. "You know, 'We speak three languages in our home. Which language should we choose?' And I say, 'All three... We're not confusing [the baby]. We're only increasing brain flexibility and maximizing the neuroplasticity that your baby has right now.'"
That heightened ability to learn language lasts until children are around 5 years old — with some lingering language superpowers lasting until age 12.
So while my baby isn't walking yet and insists on scooting backward on his bottom while blowing raspberries, his babble is one sign his brain is doing amazing things.
As I occasionally struggle to conjugate verbs in Spanish, willing my brain to remember something from high school Spanish class, I sometimes wonder how much easier it would be if I had just started to learn a second language as a baby.
Copyright 2023 NPR. To see more, visit https://www.npr.org.
|
What does cortisol do to the body? All about the 'stress hormone.'
When our bodies are under stress, either physical (like an injury) or psychological (like relationship struggles), our bodies go through an intense hormonal and neurological response – most commonly known as the fight-or-flight reflex. This response involves sympathetic nervous system activation and the HPA (hypothalamic-pituitary-adrenal) axis.
Your body responds to stress by releasing a steroid hormone called cortisol – sometimes called “the stress hormone.”
So what is cortisol, and what does it do to our bodies? We spoke with Megan Gunnar, PhD, a professor of developmental psychology at the University of Minnesota's Institute of Child Development, to answer some of the most common questions about cortisol.
What is cortisol?
Although cortisol is known for its role in the stress response, it has many functions.
“Cortisol does so many things,” Gunnar begins. “It’s a steroid hormone, and steroid hormones have a particular structure that allows them to be what’s called lipid soluble, which means they don’t need to be actively transported into cells. Once they are in our cells, they travel up to the nucleus, where they get actively transported to genes that are responsive to them – and almost all cells are responsive. This is how they play a role in the transcription of genes. That is why cortisol does a lot of different things because it’s regulating a lot of different genes.”
In other words, cortisol affects almost every tissue and organ in our bodies.
What does cortisol do to the body?
“We produce cortisol every day. We produce it according to a daily rhythm where our levels begin to rise in the last hours of sleep. They reach their peak levels about 30 to 40 minutes after we wake up. Then, they decrease until they’re almost nothing. In fact, it needs to be almost nothing about an hour or so after we fall asleep so you can sleep well at night,” explains Gunnar. She adds, “Cortisol does a lot of different things early in the morning, like stimulating your interest in getting up and out, in grappling with novelty and in finding carbohydrates. It’s like your cup of coffee. It’s the ‘let’s get up and get going!’”
When you experience stress, cortisol travels throughout the body to prepare it to take on the stressor.
The Cleveland Clinic explains, “During the fight-or-flight response, your body is trying to prioritize, so anything it doesn’t need for immediate survival is placed on the back burner. This means that digestion, reproductive and growth hormone production, and tissue repair are all temporarily halted. Instead, your body is using all of its energy on the most crucial priorities and functions.”
The American Institute of Stress describes key symptoms of fight-or-flight system activation, including:
- Rapid heartbeat
- Rapid breathing (hyperventilation)
- Pale or flushed skin
- Dilated pupils
What does chronically elevated cortisol do to your body?
Gunnar says, “One effect of chronic stress is that it flattens the natural cortisol rhythm. You can get a sort-of hypo-functioning system with a flat rhythm that might be a little too high at night. This can disrupt your sleep and disrupt the levels in the morning, and really disrupt your whole system.”
And while the development of certain mental health conditions is influenced by a number of factors, cortisol dysregulation may play a role. “It is a risk factor for all sorts of health and mental health conditions, but it also (depends on) the characteristics of an individual,” explains Gunnar.
The next time you’re feeling stressed out – you may have cortisol to blame. But also, the next time you wake up, refreshed from a good night’s sleep and ready to grab some breakfast – you can thank cortisol!
|
Anthrax has been confirmed as the cause of death of several animals on a cattle property north-east of Shepparton.
The property has been quarantined.
Anthrax is a disease caused by spore-forming bacteria that naturally occur in soil.
Grazing livestock are at the greatest risk of coming into contact with the bacteria, which can cause death within hours.
Agriculture Victoria said the dead animals have been disposed of, while the remaining cattle on the property are being vaccinated.
Anthrax can affect humans — although rarely — and a wide range of animals, with nearly all cases in Victoria having been seen in livestock, particularly grazing cattle and sheep.
Once ingested by animals, the bacteria produce potent toxins that cause the clinical signs of anthrax in animals, including fever, a lack of appetite, lethargy or sudden death, often with blood visible at the nose, mouth, and/or anus.
Swift action taken
Victoria's deputy chief veterinary officer Dr Cameron Bell said five animals had died from the illness and had been disposed of in Agriculture Victoria's specialised portable incinerator unit.
He said decisive action had been taken to limit the outbreak.
"Thanks to the early reporting by the owner and owner's vet, Ag Vic was able to undertake the necessary steps to control the spread of infection," he said.
"Since that notification we've quarantined the property and undertaken a number of measures to try and contain the spread.
"All the livestock on the property have been vaccinated and carcasses are in the process of being disposed of by burning, and the contaminated sites are being disinfected."
The burning of the dead animals aims to limit the spread of spores.
"Burning carcasses on-site is a very effective way to destroy the infected carcasses and prevent spread," Dr Bell said.
He urged people to be on the lookout for unexplained deaths and report them immediately.
"The key message is that any time of the year, anywhere in Victoria, it's really important to report unexplained livestock deaths," he said.
The bacteria that cause anthrax can live in soils for up to half a century, livestock veterinarian Alison Gunn said.
Wet weather can help the bacteria to multiply, and when soil dries out they can form spores.
Northern Victoria has experienced a wet summer followed by recent hot, dry days.
"I think the biggest thing you need to do is be aware that now is a risk period, and if you have any suddenly dead animals you should investigate it," Dr Gunn said.
Personal protective equipment is essential for anyone handling animals suspected to be infected with anthrax.
"There is a risk to humans," Dr Gunn said.
"Most commonly it causes an infection on the skin or it gets into a cut on the skin, and then you get blisters and quite a nasty infection."
In about 5 per cent of cases, people infected with anthrax can develop lung or gastrointestinal infections, she said.
Outbreaks of anthrax have previously been confirmed in the area.
In 2022, an anthrax outbreak occurred at a sheep property in the Murray River region.
Reports can be made to private veterinarians or to Agriculture Victoria's Emergency Animal Disease Hotline on 1800 675 000.
|
Roaring Fork schools consider new drug prevention strategies, experts weigh in
When school district staff gathered at Basalt High School (BHS) on Jan. 18, they taught families about fentanyl contamination and how to recognize the warning signs of drug addiction.
Interim Principal Megan Hartmann says they’re hosting these sessions because kids across the country are using serious substances, whether intentionally or otherwise.
“There's been increased evidence of especially teenagers using drugs like fentanyl, cocaine, meth, heroin, [and] different other opioids in schools,” Hartmann said. “And it's not just in our valley. It's in our state and it's across the nation.”
Roaring Fork School District (RFSD) staff told Aspen Public Radio in an email that their evidence comes from “disciplinary actions and qualitative data.”
And while reports show that illicit drug use among students is down overall compared to pre-pandemic levels, youth overdose deaths are spiking across the country. The National Institutes of Health say this increase is largely attributed to illicit fentanyl.
In response, RFSD has offered up a few potential strategies to address these problems — such as implementing a robust drug and alcohol education program.
They’ve also suggested increasing the number of school resource officers, closing campuses so that kids can’t leave during their free periods, or bringing drug-sniffing dogs to campus, which have all drawn skepticism.
Maggie Seldeen, the founder of High Rockies Reduction, works in western Colorado to limit the negative effects of drug use and was at the BHS meeting.
“I haven't seen anyone who's supportive of these punitive methods,” Seldeen said. “And if anything, people seem really upset. And that's all I've heard from person after person, community member after community member, parent after parent.”
She grew up in the Roaring Fork Valley and knows what can happen when communities shame people struggling with addiction, or when resources are unavailable.
Her parents were addicts, and her mom struggled with mental health issues until she died in 2006 from a heroin overdose.
Seldeen was just 15.
“So I had kind of a rocky road,” Seldeen said. “I really struggled here and didn't have anyone who believed my voice as a teenager.”
She received D.A.R.E. programming in school, a set of drug abstinence lessons that became popular in the U.S. in the ‘80s and ‘90s.
Seldeen remembers the lessons were sometimes led by police officers and used ineffective scare tactics.
Her family also taught her that the police could get them all in trouble, so she worries that bringing in more officers or drug-sniffing dogs could alienate the students who need the most help.
“And so not every student is going to be freaked out by a K-9 unit, but the students that are, [they] are the students who we're going to lose trust with,” Seldeen said. “We're going to lose connection with. We're potentially going to lose seeing them in school as much.”
Seldeen suggests more evidence-based practices.
Blueprints for Healthy Youth Development is an online registry of scientifically-backed interventions that promote healthy habits.
Dr. Karl G. Hill is the director of the Prevention Science Program at CU Boulder and helped develop this resource. In an interview with Aspen Public Radio, he said many problem behaviors like drug use and violence have a common set of root causes.
“If you can address those root causes, then you have a broader effect across many different outcomes,” Dr. Hill said.
He’s been studying prevention and youth development since the 1990s, and he assessed the Roaring Fork School District’s different ideas.
When it came to increasing the number of school resource officers, Dr. Hill said that studies have shown “the presence of a school resource officer had actually zero impact on school violence.”
He does not expect that it would have any impact on drugs, either, since school violence and drug use tend to rise and fall in tandem, and he doesn’t think closing campuses shows much promise either.
“The majority of adolescent crime happens between 3 to 5 p.m. in the afternoon after kids get out of school and before their parents come home from work,” Dr. Hill said.
He said he had not heard of any schools introducing drug-sniffing dogs, and was doubtful it could be a viable solution.
Many of the evidence-based programs that Dr. Hill recommends teach kids to manage their impulses, emotions, and make good decisions.
He added that programs with a specific emphasis on drug education are promising, as long as they are targeted at students, parents and teachers, providing educators with the tools they need to handle this important curriculum.
And since the transition from elementary to middle school is a peak time for drug onset, Dr. Hill recommends schools start their programming early and incorporate it into a bigger strategy that boosts social-emotional learning.
Seldeen says kids are really hungry for this kind of knowledge.
“Kids want to know about drugs, and if they have the science and information and the opportunities to practice saying ‘no’ and having these conversations, they're a lot less likely to experiment or use.”
Roaring Fork School District staff said whatever drug prevention plan they choose, community input and buy-in will be an important component.
The district’s interim superintendent is scheduled to discuss next steps at its Feb. 6 school board meeting.
|
<urn:uuid:474d6e54-fbc5-480c-83ed-71e8d229a00b>
|
{
"dump": "CC-MAIN-2024-10",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474361.75/warc/CC-MAIN-20240223053503-20240223083503-00076.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9698281288146973,
"pii_count": 0,
"score": 3.046875,
"token_count": 1192,
"url": "https://www.aspenpublicradio.org/education/2024-02-06/roaring-fork-schools-consider-new-drug-prevention-strategies-experts-weigh-in"
}
|
Roaring Fork schools consider new drug prevention strategies, experts weigh in
When school district staff gathered at Basalt High School (BHS) on Jan. 18, they taught families about fentanyl contamination and how to recognize the warning signs of drug addiction.
Interim Principal Megan Hartmann says they’re hosting these sessions because kids across the country are using pretty serious substances, both intentionally or otherwise.
“There's been increased evidence of especially teenagers using drugs like fentanyl, cocaine, meth, heroin, [and] different other opioids in schools,” Hartmann said. “And it's not just in our valley. It's in our state and it's across the nation.”
Roaring Fork School District (RFSD) staff told Aspen Public Radio in an email that their evidence comes from “disciplinary actions and qualitative data.”
And while reports show that illicit drug use among students is down overall compared to pre-pandemic levels, youth overdose deaths are spiking across the country. The National Institutes of Health say this increase is largely attributed to illicit fentanyl.
In response, RFSD has offered up a few potential strategies to address these problems — such as implementing a robust drug and alcohol education program.
They’ve also suggested increasing the number of school resource officers, closing campuses so that kids can’t leave during their free periods, or bringing drug-sniffing dogs to campus, which have all drawn skepticism.
Maggie Seldeen, the founder of High Rockies Reduction, works in western Colorado to limit the negative effects of drug use and was at the BHS meeting.
“I haven't seen anyone who's supportive of these punitive methods,” Seldeen said. “And if anything, people seem really upset. And that's all I've heard from person after person, community member after community member, parent after parent.”
She grew up in the Roaring Fork Valley and knows what can happen when communities shame people struggling with addiction, or when resources are unavailable.
Her parents were addicts, and her mom struggled with mental health issues until she died in 2006 from a heroin overdose.
Seldeen was just 15.
“So I had kind of a rocky road,” Seldeen said. “I really struggled here and didn't have anyone who believed my voice as a teenager.”
She received D.A.R.E. programming in school, a set of drug abstinence lessons that became popular in the U.S. in the ‘80s and ‘90s.
Seldeen remembers the lessons were sometimes led
|
by police officers and used ineffective scare tactics.
Her family also taught her that the police could get them all in trouble, so she worries that bringing in more officers or drug-sniffing dogs could alienate the students who need the most help.
“And so not every student is going to be freaked out by a K-9 unit, but the students that are, [they] are the students who we're going to lose trust with,” Seldeen said. “We're going to lose connection with. We're potentially going to lose seeing them in school as much.”
Seldeen suggests more evidence-based practices.
Blueprints for Healthy Youth Development is an online registry of scientifically-backed interventions that promote healthy habits.
Dr. Karl G. Hill is the director of the Prevention Science Program at CU Boulder and helped develop this resource. In an interview with Aspen Public Radio, he said many problem behaviors like drug use and violence have a common set of root causes.
“If you can address those root causes, then you have a broader effect across many different outcomes,” Dr. Hill said.
He’s been studying prevention and youth development since the 1990s, and he assessed the Roaring Fork School District’s different ideas.
When it came to increasing the number of school resource officers, Dr. Hill said that studies have shown “the presence of a school resource officer had actually zero impact on school violence.”
He does not expect that it would have any impact on drugs, either, since school violence and drug use tend to rise and fall in tandem, and he doesn’t think closing campuses shows much promise either.
“The majority of adolescent crime happens between 3 to 5 p.m. in the afternoon after kids get out of school and before their parents come home from work,” Dr. Hill said.
He said he had not heard of any schools introducing drug-sniffing dogs, and was doubtful it could be a viable solution.
Many of the evidence-based programs that Dr. Hill recommends teach kids to manage their impulses, emotions, and make good decisions.
He added that programs with a specific emphasis on drug education are promising, as long as they are targeted at students, parents and teachers, providing educators with the tools they need to handle this important curriculum.
And since the transition from elementary to middle school is a peak time for drug onset, Dr. Hill recommends schools start their programming early and incorporate it into a bigger strategy that boosts social-emotional learning.
Seldeen says kids are really hungry for this kind of knowledge.
“Kids want to know about drugs, and if they have the science and information and the opportunities to practice saying ‘no’ and having these conversations, they're a lot less likely to experiment or use.”
Roaring Fork School District staff said whatever drug prevention plan they choose, community input and buy-in will be an important component.
The district’s interim superintendent is scheduled to discuss next steps at its Feb. 6 school board meeting.
|
Reimagining Shakespeare through Latino and Indigenous lenses
The global movement to diversify the works of William Shakespeare is hitting the U.S. borderlands, with a new project, book and adaptations attempting to tell the writer's stories through Latino and Indigenous lenses.
The big picture: Shakespeare's work for centuries has catered to and been performed for white audiences, but a new movement is flipping that on its head.
- Scholars worldwide have in recent years examined how race and gender intersect in Shakespeare's work, resulting in diverse casting in productions and experimental adaptations of his most famous works.
Driving the news: Three Texas-based scholars who are part of a collective examining Shakespeare through a Chicano studies lens were recently awarded a $500,000 grant from the Mellon Foundation for their research project, "Shakespeare in the U.S.-Mexico Borderlands."
- The project seeks to make Shakespeare more accessible to border communities.
- Kathryn Vomero Santos of Trinity University, along with Katherine Gillen and Adrianna M. Santos of Texas A&M University-San Antonio, are behind the project, which is culminating with their upcoming book, "The Bard in the Borderlands: An Anthology of Shakespeare Appropriations en La Frontera."
The book, which is scheduled to publish in March, contains 12 plays that reenvision Shakespeare on the border.
- One adaptation in the book, "The Language of Flowers," has Romeo and Juliet ending up in a Mexican afterlife framed by Día de Los Muertos, Gillen says.
Zoom out: Shakespeare's Globe in London regularly stages productions with Black or South Asian actors in prominent roles.
- It sponsors "Anti-Racist Shakespeare," free webinars that bring together scholars and artists of color to examine Shakespeare's plays via the lens of social justice.
- In the U.S., classical theater companies are experimenting with more diverse Shakespeare productions.
- A bilingual audio adaptation of "Romeo and Juliet" — "Romeo y Julieta" — starring Lupita Nyong'o ran in 2021, and a Delaware production group last year reimagined "Twelfth Night, or What You Will" as a bilingual immigration story with Latino actors.
What they're saying: "We see our work aligned with a larger movement that has been embraced under the umbrella #ShakesRace," Kathryn Vomero Santos tells Axios.
- Adrianna Santos said it's common for students of color in K-12 to resist Shakespeare and see his work as unrelatable.
- "But when you offer them Shakespeare through the lens of the Borderlands, through multilingual adaptations...that teach Mexican American contexts and histories and culture, then it becomes a new life for them," Adrianna Santos adds.
One fun thing: Scholars say there is evidence that Shakespeare's audiences were more diverse than initially thought, and the playwright would have been exposed to travelers from around the world, including Africa and the Americas.
- In "Titus Andronicus," Aaron the Moor refuses to kill his biracial baby and gives what is believed to be one of the first Black power speeches in literature, according to Shakespeare scholar and Arizona State University English professor Ayanna Thompson.
Subscribe to Axios Latino to get vital news about Latinos and Latin America, delivered to your inbox on Tuesdays and Thursdays.
“The Nine Tailors is a 1934 mystery novel by the British writer Dorothy L. Sayers, her ninth featuring Lord Peter Wimsey. The story is set in the Lincolnshire Fens, and revolves around a group of bell-ringers at the local parish church. The book has been described as Sayers’ finest literary achievement.
The Nine Tailors of the book’s title are taken from the old saying “Nine Tailors Make a Man”, which Sayers quotes at the end of the novel. As explained by John Shand in his 1936 Spectator article The Bellringers’ Art, “‘Nine Tailors’ means the nine strokes which at the beginning of the toll for the dead announce to the villagers that a man is dead. A woman’s death is announced with ‘Six Tailors’. Hence the old saying … which might otherwise be construed as a slander on a worthy profession”.”
From: Thirty-Two Years of Local Self-Government 1855-1887 (1888), by Rowley W. C. Richardson:
“In this book I have endeavoured to describe Surbiton as it was before the passing of the Improvement Act in June 1855; to give a record of the work which has since been done, and to represent the condition of the district in June last. The results of the important step which was taken in establishing independent existence in 1855 are thus traced up to the completion of the thirty-two years of self-government…
…The practice of tolling the bell to announce the death of a parishioner is maintained at Christ Church–the knell for a man being three times three; for a boy, twice three; for a woman, three times two; and for a girl, twice two.
The expression, “Nine tailors make a man,” is supposed to have originated from the number of strokes given in ringing the death-knell. The following appears on this subject in “Folk Etymology,” by the Rev. A. Smythe Palmer:
“‘Tailors, nine make a man,’ said to be a corruption of ‘Nine tailers (itself corrupted from ‘tellers’) make it a man;’ i.e. nine counting strokes at the end of a knell proclaim the death of a male adult. An old homily for Trinity Sunday declares that at the death of a man three bells were to be rung as his knell, and two bells for a woman (Hampson, ‘Med. vi Kalend, 294’). It is observable that Taylor, the Water Poet, has a version of the phrase conformable to this, speaking of ‘the slander that three taylers are one man’ (‘Works,’ 1630, iii. 73).”
In Brewer’s “Dictionary of Phrase and Fable” it is stated that “an orphan lad, in 1742, applied to a fashionable London tailor for alms. There were nine journeymen in the establishment, each of whom contributed something to set the little orphan boy up with a fruit-barrow. The little merchant in time became rich, and adopted for his motto, ‘Nine tailors make a man.’”
More and more people are adding fish into their diets for the heart-healthy omega-3 fatty acids. But it may be time to skip the salmon and tuna, and reach for foods lower on the seafood chain like sardines and clams instead.
Shifting your seafood diet to incorporate mainly smaller fish and bivalves can up your intake of micronutrients that people are typically deficient in, including zinc, iron, vitamin B12 and omega-3 fatty acids, says Christopher Golden, professor of nutrition and planetary health at the Harvard T.H. Chan School of Public Health.
"Of all of the different equestrial and aquatic-based foods, small fish are the best source of nutrients for these types of things on a per-weight basis," Golden tells CNBC Make it. "So, although they are underutilized, they're really good at delivering those sorts of nutrients."
Bivalves, like mussels, are also a "really healthy choice," Golden notes, and they're great sources of zinc and calcium. Bivalves are "a type of mollusk, such as an oyster, that has its body inside two connected shells."
Here's what Golden recommends adding to your diet to replace foods like red meat and farmed salmon.
That being said, eating lower on the seafood chain is something you should consider for more than just nutritional value. There are other important reasons why it's a healthy decision to make, says Golden.
Smaller fish and bivalves may have lower levels of mercury
While Golden is still researching whether there's a significant difference between the mercury levels in smaller and larger fish, it is well established that when you eat foods that are higher on the seafood chain, "you're going to have a greater risk of mercury toxicity within the foods that you eat and mercury exposure," he says.
"Eating the smaller fish, you're more likely to have a lesser mercury load than some of the larger carnivorous fish," says Golden.
Bivalves, including clams and oysters, will also have much lower levels of mercury, especially in geographic regions with less pollution, he adds.
"Bivalves from healthy waters and small fish is really the direction to go," says Golden.
Eating lower on the seafood chain is better for the environment
"These types of foods are the best possible sources of animal-sourced food from a sustainability standpoint," Golden says.
Bivalves, specifically, are "nature-positive" because they don't require feeding to be farmed, which is better for the environment, says Golden. They also filter water and capture carbon, which can weed out pollution and contaminants, and provide "an ecosystem service," he adds.
Another way to support the environment is by adding aquatic plant foods like seaweed and kelp to your diet. "Seaweed is low in calories, is a good source of fiber, and also contains iodine, a mineral required to make thyroid hormones," according to an article by Harvard Health Publishing.
But it's important to keep in mind, "where something is produced and how it is produced," says Golden. "[It] is often so much more important than what the actual food is when it comes to seafood."
For the best protection, you should follow the Food and Drug Administration's guidelines for buying and preparing seafood safely.
DON'T MISS: Want to be smarter and more successful with your money, work & life? Sign up for our new newsletter!
Get CNBC's free Warren Buffett Guide to Investing, which distills the billionaire's No. 1 best piece of advice for regular investors, do's and don'ts, and three key investing principles into a clear and simple guidebook.
Do trees suck? You bet they do, and it’s time we do something about it, according to a group of conservative Utah lawmakers.
Claiming “overgrown” forests are guzzling Utah’s water resources dry, rural members are now calling for a major logging initiative as the best hope for saving the shrinking Great Salt Lake and Lake Powell, despite a lack of scientific evidence that tree removal would make a big difference.
Water conservation and efficiency are fine, but such measures are not enough to replenish Utah’s drought-depleted reservoirs and avert the ecological disaster unfolding at the Great Salt Lake, according to presentations Thursday before the Legislature’s “Yellow Cake Caucus,” a group of conservative lawmakers organized by Rep. Phil Lyman, R-Blanding.
Utah’s 5 million acres of forests are crowded with 100 to 200 trees per acre, about 10 times the densities in the 1800s, Randy Julander, a retired federal hydrologist, told the caucus gathering. And the trees on about a quarter of this land are standing dead because there isn’t enough water in the ground to sustain them, he added.
“The forests we have today have way too many trees,” said Julander, who led the Natural Resources Conservation Service’s Utah snow survey for years. “Too many trees, the wrong kinds of trees, and the wrong age class of trees on many of the forests that we see sitting around here. So forest management has a critical role in producing streamflow.”
Ecology professor Ben Abbott, however, cautioned against comparing today’s forests with those of the late 19th century.
“There was expansive wholesale over logging of our forests at that time,” Abbott said after viewing the caucus meeting on Zoom. “You look at some of these photos, and maybe there were 10 to 20 trees per acre out there. That was because all the trees that could be used for timber and fuel wood were cut down. It was extremely degraded forest ecosystems.”
Named for the uranium product used to generate nuclear power, the Yellow Cake Caucus was organized by Lyman last year with the goal of advancing the interests of rural Utah. It meets every Thursday, when a dessert of the same name is served.
Julander was the main witness in Lyman’s case that there are better ways out of Utah’s water crisis than forcing growers to use less water on alfalfa and other thirsty crops. Prompting Lyman to discuss forest thinning at the caucus’s first meeting of the session was a letter sent last week by Salt Lake County Council Member Dea Theodore to legislative leadership, demanding greater focus on restoring forest health to address Utah’s growing water crisis.
“The trees in our over-grown forests and other non-native, noxious trees along our rivers and streams are consuming trillions of gallons of water that would otherwise flow downstream to the Great Salt Lake. Many of our forests are already in desperate need of tree thinning due to high and extreme fire danger,” states Theodore’s Jan. 13 letter, which was cosigned by several Yellow Cake members and rural county commissioners.
“The thinning of overgrowth in the Great Salt Lake Basin would restore over 1.5 million acre-feet of water per year to the Great Salt Lake,” says the letter, which doesn’t cite sources for the claim, “enough to have the Lake on its way to restored water levels over the next 5 years.”
Coincidentally, the U.S. Forest Service on Thursday announced major investments in forest health arising from the Bipartisan Infrastructure Law, some of which target Utah forests for thinning, prescribed fire and other treatments aimed at reducing wildfire risk. Some $18 million will be spent on the central Wasatch and western Uinta mountains in the program’s first year, according to Intermountain Regional Forester Mary Farnsworth.
While many of Julander and Theodore’s assertions about poor forest health are beyond dispute, their proposed remedy for Utah’s water problem is not grounded in science, according to Abbott and other academic Utah scientists.
“There are some nuggets of truth in there,” Abbott said. “Is this [logging] a real solution for the Great Salt Lake? That’s where the evidence really falls apart. Water yield increases following either mechanical tree thinning or wildfire are not guaranteed. There has been over 100 years of research on this topic, using paired watershed studies.”
In such studies, researchers compare how watersheds with similar vegetation, aspect, soil type, elevation and other elements respond after a disturbance in which trees are killed, whether it’s through logging, insects or wildfire.
“It is not a simple response. You can see no change in water yields. You can see decreases in water yields observed in many areas affected by pine beetle outbreaks,” he said. “You can see shifts in when the water comes out in the watershed, often shifting earlier in the year when you don’t necessarily want it to be coming out.”
Another critical issue Abbott raised is water quality, which can be degraded after trees die or are removed.
“Let’s say that you do have large vegetation reduction in a watershed, enough that you see an increase in streamflow,” he said, “then almost always that’s accompanied by release of pollutants from the soil, including erosion, that create all of these water infrastructure problems downstream.”
A Brigham Young University professor, Abbott led the recent presentation by Utah scientists calling on the Legislature to take action this session to ensure more water reaches the Great Salt Lake to stave off its ecological collapse, which has already begun. The group argued Utahns, especially alfalfa growers who account for most of the state’s water use, must rein in waste so that more water is available in the environment to support vital ecosystems.
Abbott applauded Julander for pointing out the ecological role of wildfire in forest health. The poor condition of the West’s forests can be largely attributed to fire suppression, although overgrazing and past logging practices also played a role.
“Having more prescribed burning would have a lot of benefits for forest health. That’s been proposed by the research and management community for a long time,” Abbott said.
The rise of fire suppression in the early 20th century enabled conifers to displace millions of acres of aspen, a species that is not only resilient to fire, but also helps increase streamflows, Julander told the caucus. Without regularly occurring fires to wipe out pines and firs, these species have all but replaced aspen in many places to the detriment of overall forest health.
“I see trees, particularly conifers, as the problem,” Julander said. “In order to have a forest, you have to have trees. Trees are a wonderful thing. But when you look at it specifically in terms of water production, trees are the enemy, particularly conifers.”
This article is published through The Great Salt Lake Collaborative: A Solutions Journalism Initiative, a partnership of news, education and media organizations that aims to inform readers about the Great Salt Lake.
|
West Coast study to examine disease impacting honeybees
This spring, researchers from across the West Coast will begin surveying more than 1,500 honeybee hives in California, Oregon and Washington to determine why a bacterial disease is leading to a persistent decline of the pollinators.
Why it matters: Beekeepers have been aware of European foulbrood disease for decades. However, its rapid resurgence over the last few years has many worried about the disease's potential to harm commercial food production on bee-dependent crops throughout the country.
- That's according to Ramesh Sagili, an apiculturist and a professor of agriculture at Oregon State University. "We know the basics," he told Axios. "But what we don't know are the triggers."
How it works: European foulbrood disease infects honeybees in the larval stage by turning the eggs into brown mush.
- Bees infected with the bacterial disease can easily spread it to larvae, infecting the comb.
- For example, if a queen bee lays an egg in a contaminated cell, or an infected bee feeds a larva, that larva will be infected, too — essentially prohibiting the replenishment of the hive and leading to its demise.
Driving the news: In December, OSU received a $4.2 million grant from the U.S. Department of Agriculture to lead a study on what stress factors make bees more susceptible to the disease, including malnutrition and climate.
- Researchers from OSU, Washington State University, and the University of California, Davis will follow dozens of commercial beekeepers as they travel with their hives to pollinate crops throughout the U.S. — starting in February with almonds in California.
The project, led by Sagili, will span four years. The first step is finding evidence of foulbrood, which involves labeling each hive and collecting samples for genetic testing to determine if there's a new strain while monitoring each colony's population.
- Researchers will also note climate, weather patterns, and availability of nutrition — i.e. pollen and nectar — in hopes of figuring out how bees contract foulbrood in the first place.
The intrigue: Sagili believes they may have identified where the disease starts to show up in the pollination process.
- When commercial beekeepers return to their home states after helping to pollinate orchards in California, there's "good correlational data at this point where we see the hives that go to blueberries are the ones showing high incidence of disease," Sagili said.
- Because bees can't go out and pollinate in the cold, a bee's "immune system may be compromised because of nutritional stress," Sagili added, making it a target for European foulbrood.
The bottom line: As foulbrood spreads, hives become smaller and weaker, and the bees are unable to pollinate the millions of acres needed to sustain a region's food supply.
- There's a high probability of cascading impacts on several bee-dependent crops down the line (like carrots, clover, turnips, and more) — economic damages researchers hope to quantify soon.
|
Ever drift off to sleep and suddenly feel like you’re falling, forcing you to wake with a start? Some people say they are startled alert by a loud snapping noise or a blinding light coming from inside their head, while others describe their muscles twitching involuntarily from a sudden electric shock.
“Sleep starts usually involve one strong jerk that moves most of your body, with the arms and legs more likely to be affected. This can jolt you awake before you have the chance to fall asleep,” said sleep specialist Dr. Raj Dasgupta, an associate professor of clinical medicine at the University of Southern California’s Keck School of Medicine.
Chris Breitigan, a 29-year-old podcast producer from Huron, Ohio, says he’s sometimes awakened by a rather ghostly experience.
“I’ll be right on the verge of falling asleep, and it’s like someone tickles me,” he said. “It starts from my back and moves down through my legs. I startle and get a sort of jerk in my body.”
The experience may be accompanied by a rapid heartbeat, faster breathing, sweating or a vivid dream or hallucination, according to scientists.
Sleep starts, officially called “hypnic jerks,” are normal occurrences that can happen to men and women of any age and are typically nothing to worry about, Dasgupta said.
“It’s estimated that nearly 70% of the population experience sleep starts at some point,” he said. “Medically speaking, hypnic jerks are classified as a type of myoclonus, which is a category of rapid, involuntary muscle movements. A classic example of myoclonus are hiccups.”
No one knows precisely why the body twitches while falling asleep, but experts believe excessive caffeine intake, and physical or emotional stress may increase their frequency.
“They also may be facilitated by fatigue or sleep deprivation,” Dasgupta said. “However, most hypnic jerks occur essentially at random in healthy people.”
Breitigan’s sleep is typically disturbed after going out with friends on Taco Tuesday nights.
“I really don’t drink much,” he said. “But on Tuesdays, I go out with friends and we have some drinks with tacos. So for me it seems it’s triggered by alcohol because I don’t drink regularly.”
When to worry
There is no treatment for sleep starts, Dasgupta said, and they are generally harmless. However, it’s time to visit the doctor if you experience any of the following symptoms: multiple muscle jerks during the day, an injury caused by hypnic jerks, tongue or mouth biting while you sleep, or wetting the bed.
“Hypnic jerks can sometimes be confused with seizures,” Dasgupta said. “While they may seem similar, they have some key differences: Seizures are a serious occurrence that can be a result of an underlying condition.
“Hypnic jerks, on the other hand, are benign phenomena that aren’t tied to any health conditions or concerns,” he said. “Mainly they’re just annoying, especially if they keep preventing you from falling asleep.”
However, some people develop a fixation on these jolts from slumber, leading to increased anxiety about the disruptive experience, he added.
“This increased anxiety and fatigue increases the likelihood of experiencing these jerks, resulting in a vicious cycle of insomnia and sleep deprivation,” Dasgupta said.
Try these fixes
For anyone bothered by such events, Dasgupta has the following suggestions:
Reduce caffeine intake: Drinking less caffeine throughout the day can help improve your overall quality of sleep, Dasgupta said, especially if you avoid caffeine in the late afternoon and at night.
Avoid or reduce alcohol near bedtime: The same applies to alcohol, he said. Alcohol may make you sleepy but when the body is finished metabolizing it, you will wake, typically in the middle of the night. That will increase your fatigue, making you more vulnerable to hypnic jerks.
Try meditation and mindfulness before bed: Relaxing the body may ease that transition into sleep, making your muscles less likely to twitch, Dasgupta said.
“Also, one of the best ways to help yourself fall asleep is focusing on breathing. Most breathing exercises for sleeping usually involves slow and deep breaths,” he said.
Keep to a sleep routine: Dasgupta said the best sleep requires a set bedtime — even on weekends and holidays. It also helps to avoid bright screens at bedtime.
“The bright light of a TV, computer or smartphone can affect your sleep patterns and keep you alert when you should be getting sleepy,” he said. “Sleeping is something you’ve done your whole life, but the older you get, the harder it can get to fall sleep, so practice good sleep hygiene.”
|
Exposure to loud noises from headphones and commuting is leaving half of under-35s with poor hearing, a study has found.
Experts said the extent to which loud or prolonged exposure to noise is causing irreversible damage to young people’s ears was an “emerging crisis”.
Data from more than 1,000 people aged 16 to 35, collated between June and October 2023, found that 49 per cent had hearing below the level expected as normal, and that it was affecting their quality of life.
A quarter of under-35s had hearing loss while a further two in five had “probable” hearing loss, according to analysis of clinical tests by hearing health company eargym.
Just 40 per cent had no sign of hearing loss, with the remainder undetermined.
The test used was a “speech-in-noise” activity, which requires participants to identify three numbers against a background of noise, multiple times and at a range of volumes, giving them a score called a signal-to-noise ratio.
It was corroborated by a clinical questionnaire called the Amsterdam Inventory for Auditory Disability and Handicap (AIADH), which found that half of under-35s had difficulty hearing, particularly in the categories “intelligibility in noise” and “auditory localisation”.
This means hearing speech in noisy environments, such as following conversation against loud backgrounds and identifying lyrics in music, and being able to identify where a sound is coming from, such as the direction of a travelling car.
Amanda Philpott, a former local NHS chief executive, hearing-aid user and co-founder and chief executive of eargym, told the Telegraph that asking someone these types of questions about their hearing “can really give a clear clue about whether or not you’re starting to experience the early signs of hearing loss from wear and tear”.
‘Urban environment noise’ also a cause
Ms Philpott said the increase in hearing damage among young people is principally because of “unsafe listening”, but also “urban environment noise” from construction, roads and trains or tubes.
“We see headphone use is pretty much ubiquitous, and we tend not to take our headphones off. We tend to listen to music or podcasts a bit more loudly than we should, and we don’t take enough breaks,” she said. “We have around 18,000 hair cells in our cochlea and those are the bits that get damaged that we can’t regrow.”
She likened the damaging of hair cells to walking the same route across the lawn without giving the grass a break.
“If you walked around the path occasionally or just didn’t walk on the grass sometimes then the path would be less pronounced, and it’s a little bit like that for our hair cells. We pound them with noise and we don’t give them a break and chance to recover so they die,” she said.
Ms Philpott suggested people follow the 60/60 rule, listening at 60 per cent volume, for no more than 60 minutes at a time.
Several studies have linked hearing loss with neurodegenerative conditions like Alzheimer’s disease. Experts at the University of Oxford said it could be the biggest single risk factor for dementia.
Ms Philpott explained that hearing loss leads to social isolation, which can cause mild cognitive impairment and then dementia.
“We lose our high frequencies first, and so we lose the sibilance, the ‘s’ sounds at the beginning and end of words, and so it starts to sound like people are mumbling,” she said.
“It’s more difficult for me to distinguish what you’re saying against the environmental sounds in the background and so I have to concentrate, watch your lips move, and the listening effort and cognitive load of listening when the hardware is failing, tires me out, and so I stop going to those environments, I start to withdraw from social interactions, which affects my mental health and also therefore my cognitive health.”
If you, like me, are among the several billion people who received an mRNA Covid-19 vaccine, we have benefited from one of the greatest medical breakthroughs in human history. And we have many scientists to thank, including American physician and immunologist Drew Weissman, who shared this year’s Nobel Prize in Physiology or Medicine with his research partner Katalin Karikó. Last week, I had the honor of speaking to Weissman about how he envisions the future of mRNA vaccines and therapies.
For quick background, Weissman and Karikó spent years solving the incredibly complex problem of how to dampen the harmful effect of lab-made mRNAs on the immune system. In a landmark paper in 2005 in the journal Immunity, they reported that a substitution of one chemical base of the mRNA molecule for another modified base nearly eliminated the immune reaction. Several of their later papers showed that the modified mRNA also greatly increased protein production compared to unmodified mRNA.
Their breakthrough helped lay the foundation for the Covid-19 vaccines, which cleverly train our bodies to prepare to fight the virus without seeing the real thing. As the Nobel Assembly wrote in its announcement of this year’s award, “Through their groundbreaking findings, which have fundamentally changed our understanding of how mRNA interacts with our immune system, the laureates contributed to the unprecedented rate of vaccine development during one of the greatest threats to human health in modern times.”
Weissman is a quintessential scientist: humble, matter of fact, driven and not content to rest on his laurels. Rather, he’s laser-focused on working to alleviate devastating diseases “that torture the human race.” Our discussion revealed seven memorable insights:
1. Many of our new vaccines will be RNA-based, but not all.
“All of the more difficult diseases, RNA is leading the development, so for HIV, Hepatitis C, TB, malaria,” Weissman said. “RNA vaccines are now in clinical trials for all of those.”
For common diseases, it’s hard to replace an established, cheap technology that is easy to distribute around the world with something new. “The MMR vaccine, for example, is dollars to produce and works great. I don’t think that is ever going to be replaced with RNA or if it is, it will be a long time coming.”
2. Beyond vaccines, there’s tremendous momentum to develop mRNA therapies for noninfectious diseases, including genetic and autoimmune diseases and cancer.
There are many diseases in which the body does not produce the right protein or enzyme, and which may theoretically be corrected by mRNA being sent to the right organ or cell to replace the malfunctioning one with the right copy. Thanks to Weissman and Karikó’s discovery of how to enable the body to tolerate lab-made mRNA without a prohibitive immune reaction, the race is on to develop potentially curative therapies not possible until now.
3. The future of gene therapy depends on targeted delivery, which Weissman is working on.
By now, most people have heard of lipid nanoparticles, the fatty delivery vehicle that stabilizes fragile mRNA. The thing about LNPs is that they go to the liver, because the liver has blood vessels with larger pores that attract these nanoparticles. This is a great approach for liver diseases, but not as helpful for all the diseases that originate in other places outside the liver.
“We figured out how to target LNPs to other cells and tissues,” Weissman told me. “We had a paper in Science a few weeks ago where we targeted repopulating bone marrow stem cells with incredibly high efficiency approaching 100% for gene editing.”
He’s working with other researchers, including the Gates Foundation, to develop several novel approaches to curing sickle cell disease.
“The future is going to be, we can go to Africa and the entire world, give people a single injection of RNA LNPs that are targeted to bone marrow stem cells and correct the gene mutation for sickle cell and cure sickle cell with just a simple injection.”
He adds: “You can expand that to the thousands of other bone marrow and genetic diseases. We can also target brain, lung, heart, kidneys, spleen, adrenals, and we continue to expand that. We’re working on being able to deliver either therapeutic proteins or gene editing technology to a variety of diseased cells or tissues.”
4. Faster, easier and cheaper CAR T-cell therapy is a major challenge that Weissman is working on.
Back in 2010, the first cancer patients were treated with CAR T-cell therapy in a landmark trial at the University of Pennsylvania Perelman School of Medicine (where Weissman also has his lab). The first child ever given the therapy, Emily Whitehead, went into remission from leukemia and is still doing well today.
The therapy works by removing some of a patient’s own T cells, engineering them in a lab to hunt down their cancer, and then infusing the souped-up T cells back into their bodies to circulate in perpetuity as a living drug.
So far, there are five FDA-approved CAR T-cell therapies, all for hematologic malignancies like leukemia and lymphoma. To date, 27,000 patients worldwide have received approved CAR T-cell therapies, not including clinical trials. But the time, cost and complexity of engineering a patient’s own cells outside their body put the therapy out of reach for many.
One of the next horizons is to figure out how to engineer the cells inside a patient’s body, known as in vivo—a much more direct and efficient approach. Weissman is working on targeted T-cell delivery of RNA therapies.
“We’ve shown in our cardiac fibrosis paper that we can cure a mouse with a single treatment,” he said.
Clinical trials for in vivo CAR therapies are expected to start within a year or two, and the earliest FDA approval would be a few more years after that.
5. Figuring out how to make CAR T-cell therapies attack solid tumors—which make up 90% of adult cancers—is a significant unsolved problem.
“I suspect it will be a multistep process,” Weissman said, “because there are lots of issues with solid tumors. The environments are immunosuppressive,” so even if the cell therapy gets into the tumor, it gets turned off and can’t kill the cancer.
Different approaches under study include adding proteins or knocking out proteins to make the CAR Ts more functional, including with gene editing. “Many levels have to be addressed before we come up with something that works well.”
6. Weissman is developing new vaccines and the basic science underlying them.
His lab has a couple of HIV vaccines and universal influenza vaccines in clinical trials, as well as a pan-coronavirus vaccine that will go into clinical trials soon.
7. The time is coming closer for personalized cancer vaccines.
“There are already cancer vaccines in clinical trials that have shown efficacy for melanoma and pancreatic cancer,” Weissman said. “They are in phase 2 now, I’m sure they will move to phase 3 very quickly and they’re continuing to study many other types of cancer with personalized vaccines.”
Thanks in part to Weissman and Karikó’s breakthrough, there are close to 1,000 approaches in development for new vaccines and therapies using mRNA. (Disclosure: One such example is Capstan Therapeutics, a startup co-founded by Weissman and which has received an investment from my team at Leaps. Capstan is developing a platform to allow for off-the-shelf, targeted delivery of RNA to the desired cells or organs. It’s a broad enough platform to be applicable in cancer, genetic diseases, fibrosis and autoimmune diseases, like lupus.)
“I personally can’t even think of a therapeutic application of mRNA if their discovery hadn’t happened,” said Priya Karmali, Capstan’s chief technology officer. “Drew is a scientist to the core. What he really, really cares about is how one can translate the discoveries being done in his lab and others in the field into the future of human health.”
One day, for example, Weissman imagines a child receiving a single shot containing mRNA vaccines that protect against multiple diseases, all delivered with one lipid nanoparticle.
Karmali concludes: “He’s a visionary for thinking of the applications of this field beyond where we are today.”
Thank you to Kira Peikoff for additional research and reporting on this article.
Unusual bush tomato with ladder-like features discovered in Northern Territory national park
By Lee Robinson
During a nature walk along one of the Northern Territory's most spectacular lookouts, botanist Peter Jobson and his companions were stopped in their tracks by an unusual-looking plant.
The bush tomato was low-growing, and had dozens of unique right-angle spines below each flower.
"You can see 20 or 30 spines going up, and they look like the rungs of a ladder," Mr Jobson said.
Mr Jobson, who was formerly the chief botanist of the Northern Territory, was the lone Australian among a group of American scientists hiking through the Judbarra/Gregory National Park, and in the region for an unrelated research expedition.
Enjoying panoramic views of Victoria River and its towering red escarpments, he said a spontaneous decision to make a detour led to the discovery.
"We came across the escarpment walk and we went, 'Well, let's just see what's up there'," Mr Jobson said.
"If nothing else, it would be a nice view, and we'd be able to take some photos."
The plant turned out to be a new species of bush tomato.
Ladder-like spines excite researchers
Its ladder-like appearance inspired its scientific name, Solanum scalarium.
"Then we started thinking that we had climbed up a lot of steps to see it, and we continued up more steps to get up to the lookout at the top," Mr Jobson said.
"We looked for the Latin word for 'stairs' or 'ladder', and fortunately it's the same thing — it's 'scalarium'.
"We liked the play on words."
The researchers were thinking about yet another wordplay when recommending a common name for their new discovery.
"As biologists, we get to go and see lots of the countryside — it's part of our job, and we get very excited about it," Mr Jobson said.
"In our paper, we wanted to acknowledge that this was also climbing up to go into the national park."
They suggested using "Garrarnawun Bush Tomato" as the common name for the new species, in recognition of the Garrarnawun Lookout near where it was discovered.
In their paper, published in December in plant journal PhytoKeys, the authors said the naming served as a tool to highlight "the importance of building community around natural spaces".
"It's also about us introducing the general public back into the wilderness that we have, particularly in the Northern Territory, which is probably one of the best places to be isolated in such an easy way," Mr Jobson said.
Many species still undiscovered
The vast Judbarra/Gregory National Park, located 360 kilometres south of Darwin, is recognised as a botanically rich area, meaning there are potentially hundreds of species yet to be formally identified.
"A lot of people think that we know every plant in Australia, but we don't," Mr Jobson said.
It is too early for scientists to postulate the potential applications of the tomato, but Mr Jobson said related species of the plant had been used in Aboriginal ceremonies and as food in the Kimberley in WA.
"We guess that there's a potential for it to be eaten," he said.
"But the thing with tomatoes is you have two types: the ones you can eat and the ones that are highly toxic and will happily kill you.
"I have a very healthy respect for my liver … so until I can talk with the traditional owners … I'd rather not taste it."
He said it was also possible to take a sample into the lab, where it could be analysed for toxic compounds.
Bright future ahead for bush tomato
Mr Jobson said he was excited for further research to be conducted on the plant.
"What would be wonderful is if we were able to work out a management plan to try to find out how rare it is," he said.
"If it is very rare, we need to work out how to manage it so that it doesn't go extinct and it stays around.
"And I think it's smart that we ask the local tribe to see if they used it. If it is used, the scope is amazing — it spreads out like ripples in a pond."
A few years ago, I started asking lecture halls filled with students to raise their hands if they had run out of time on the SAT. In each room, nearly every hand went up. I was surprised, but I shouldn’t have been.
For decades, educators have seen speed as a marker of aptitude or mastery, forcing students to scramble to finish tests. But a race against the clock doesn’t measure knowledge or intelligence. It assesses the much narrower skill of how well students reason under stress. As a result, timed tests underestimate the capabilities of countless students.
New evidence shows that although smarter people are faster at solving easy problems, they’re actually slower to finish difficult ones. They’re well aware that haste makes waste, and they don’t want to sacrifice accuracy for speed. You wouldn’t want a surgeon who rushes through a craniectomy, or an accountant who dashes through your taxes. Even for the many jobs in which people are judged on speed, there’s no evidence that doing algebra under time pressure is useful preparation. Although it pays to be quick, it also pays to be determined, disciplined and dependable.
Strangely, though, the tests that define students’ grades and help determine their educational and professional fates are rarely designed for deliberation. They evaluate students as if they’re applying to join a bomb squad or appear on “Jeopardy.” Time pressure rewards students who think fast and shallow — and punishes those who think slow and deep.
One fall, one of our daughters was pleasantly surprised by her grade on a math midterm. Despite being the longest and hardest test of the semester, it was her highest score. At first we were puzzled: She hadn’t changed her study habits or made a quantum leap in understanding. Then we learned that it was the first test where she didn’t feel pressed for time. Her teacher had allowed more time per question than usual.
On math tests, one of the few skills in which boys consistently outperform girls is mental rotation — turning 3-D shapes in their minds. But gender differences vary dramatically based on time pressure. Across several dozen studies, the more time students had to finish tests, the smaller the female disadvantage became. Shifting from short time limits to no time limits — or even just allowing more than 30 seconds per question — was enough to cut the gender gap in half.
It’s well known that the “girls can’t do math” stereotype can cause female students to underperform on math tests. The fear of confirming the stereotype can lead to test anxiety, draining working memory and disrupting cognitive processing. What we’ve overlooked is that time pressure can exacerbate these effects. When girls are distracted by doubts about their abilities, it takes them longer to finish problems. Having to rush leaves them more prone to choosing suboptimal strategies — and to possibly making mistakes. Even if they’re not anxious, female students tend to work more methodically than male students. When they have more time, they can rethink their approaches and double-check their answers. They also become more comfortable making educated guesses.
I tried this out with our daughter on practice tests at home. She’s an honors math student, but when she was under time pressure, she made distracted mistakes like plugging in the wrong formula on relatively easy algebra problems. It was the math version of a typo, and we ended up coining a term for it: a matho. But when there was no time limit, she excelled on difficult algebra problems — and mental rotation, too.
Time pressure doesn’t just underrate the math abilities of girls and young women. Having to hurry can obscure the abilities of anyone who has reason to worry. That might include boys who are expected to underperform in reading, immigrants who are doubted on verbal aptitude and Black students who face a host of questions about their intellect. It also includes students with learning difficulties such as dyslexia and ADHD — or mental health conditions, physical disabilities and sensory impairments.
A common solution to this problem is for schools to offer extended time to accommodate students with disabilities. But lately, this has created an accommodations arms race as parents and students try to game the system to get diagnosed with a learning difficulty or health condition. Why not give everyone enough time to complete the test?
I heard from one counselor in the Northeast whose high school experimented with extended time for final exams. After allotting four hours instead of two, they were bombarded with complaints. Students who had previously met the requirements for special accommodations — and their parents — said they ran out of time. Why? The teachers wrote longer exams to fill the space.
This madness has to end. If a significant portion of the students run out of time, it means the test is too long or the time period is too short. That’s why, as soon as I read about this evidence, I started writing two-hour exams for the three-hour exam period allocated to my class. But many other educators still cling to the fraternity hazing excuse: I had to walk five miles barefoot up a snow-covered hill, so you should suffer too! Most teachers, though, say they’re preparing their students for the pressure they’ll face in standardized tests.
It’s a delicious twist of irony, then, that the lifeboat to rescue us from the tyranny of time pressure is being piloted by the folks behind the mother of all standardized tests. I learned recently that the College Board has redesigned the SAT to minimize time pressure.
Historically, the SAT gave students “too much to cover and not enough time to do it,” the College Board’s chief executive officer, David Coleman, told me. But developing a digital version gave them the opportunity to experiment. And the results were so impressive they decided to stick with them. Starting next year, the test is shorter overall, and most importantly, “on average, 97 percent of students complete all questions in a section with up to seven minutes to spare on each section,” Mr. Coleman said. “It’s time we stop confusing quick with smart.”
This could be game-changing for teachers as well as students. If the dominant standardized test no longer creates time pressure, there’s less need to use a ticking clock on classroom quizzes and exams. I don’t expect students to start looking forward to tests, but they should be less likely to dread them. That will give them a better chance at putting their best foot forward. It will also give them a more realistic preview of what it takes to excel in the future.
In school, timed tests teach kids that success is a sprint. But in life, success is a marathon. Wisdom is less about the speed of thought than the complexity of thinking. The students with the greatest potential aren’t always the ones who can rapidly spit out the right answers. They’re often the ones who take the time to ask the right questions.
Adam Grant is an organizational psychologist at the University of Pennsylvania’s Wharton School. He is the author of “Hidden Potential” and “Think Again,” and the host of the TED podcast “Re: Thinking.” This article originally appeared in The New York Times.
|
A Psalm for Desert-Like People
Psalm 74 is titled, “Plea for Help in Time of National Humiliation.” It is a desert-like psalm—a psalm of desert-like troubles. It is a psalm of barrenness and of inhospitable conditions. Verse 3 sums up the problem well: “Direct your steps to the perpetual ruins; the enemy has destroyed everything in the sanctuary.” Verse 19 speaks of the soul of God’s dove (God’s beloved people) being delivered “to the wild animals.”
How can a people survive in a desert? How can a people survive amidst desert-like conditions?
Ray Vander Laan writes, “Community is essential in the desert. Survival in the desert literally demands that its people care for one another. Even today, Bedouin will say that the unbelievable commitment to hospitality expressed among desert tribes exists in part because as they travel through the barren wilderness they need to depend on others for food, shelter, and especially water. So the code of hospitality is very strong.
“In the desert, guests and complete strangers are welcomed and receive the best food and water a family has. Families will serve the last bit of flour they have or defend a guest in their tent with their lives—even if they just met that guest. This code of hospitality is quite foreign to many people in the Western world where privacy, competition, and a spirit of self-sufficiency prevail.
“Perhaps it shouldn’t surprise us that God chose the desert to be the place where he would prepare his people to be his community of priests who would reveal him to the world. He knew that at times his people would be in the desert, but that far more often they would suffer the intense pain and suffering of life’s desert experiences. He would provide manna, shade, and water so that his people would not only survive their desert experiences but as a community—numerous as the sand of the seashore—would share what God had provided them with other people who find themselves in the desert.”
Because this psalm is a desert-like psalm—a psalm of desert-like troubles—the psalm puts an emphasis on community. It begins with a plea for God’s protection and provision for the community of Israel: “O God, why do you cast us off forever? Why does your anger smoke against the sheep of your pasture? Remember your congregation, which you acquired long ago, which you redeemed to be the tribe of your heritage. Remember Mount Zion where you came to dwell.”
And in verses 20-21 (near the end of the psalm), the psalmist pleads for the most vulnerable in the nation: “Have regard for your covenant, for the dark places of the land are full of the haunts of violence. Do not let the downtrodden be put to shame; let the poor and needy praise your name.”
The South African concept of Ubuntu shines forth in Psalm 74. Ubuntu, a Nguni Bantu term meaning “humanity,” is often translated as “I am because we are.” In a preface to Richard Stengel’s Mandela’s Way: Fifteen Lessons on Life, Love, and Courage, Nelson Mandela describes ubuntu as “the profound sense that we are human only through the humanity of others; that if we are to accomplish anything in this world, it will in equal measure be due to the work and achievements of others.”
May Psalm 74 remind us that we are desert-like people—people who are often facing desert-like circumstances along with others who are facing desert-like circumstances. And may Psalm 74 remind us to keep our hearts attuned to the whole community, practicing ubuntu care for one another, including the most vulnerable among us.
|
In the rapidly evolving landscape of globalisation, multinational corporations have witnessed their influence stretch across borders in previously unimaginable ways. Operating in diverse contexts has become a daily concern for corporate giants, and adapting to local realities in different countries is a crucial aspect of business strategy.
Despite decades of development and progress in the corporate sector, a profound divide persists between the representatives of developed and developing nations, particularly in the area of corporate leadership talent and its development. This division often results in senior leaders from developed countries being dispatched to lead projects or business units in developing nations, necessitating collaboration with local teams in contexts that are vastly different from their own experiences and understanding.
This carries a high risk of causing technically and functionally brilliant leaders to falter in developing countries, primarily because of their inability to contextualise the value of their contributions within unfamiliar circumstances. These leaders struggle to immerse themselves in local contexts, to comprehend issues from alternative perspectives, and to apply cultural intelligence to lead effectively in unfamiliar territory.
Cultural meaning systems: Ubuntu’s influence
A profound philosophy, Ubuntu is embedded as a societal value within sub-Saharan Africa. It shapes how individuals within a range of societies make sense of their surroundings. The philosophy has been observed to permeate the cultural meaning systems of employees within organisations operating in the region.
Multinational corporations often encounter Ubuntu’s cultural influence as being in stark contrast to westernised cultures. This dichotomy arises for two principal reasons: first, as leaders in multinationals compete on a global stage, they engage primarily with westernised cultures, often led by leaders from developed, westernised backgrounds. Second, leaders in southern Africa sometimes translate predominantly westernised thinking into their business practices, influenced by an education system steeped in western-infused management philosophies.
Historically, leadership in Southern Africa has been shaped by two distinct influences: indigenous African worldviews and imported western perspectives. The traditional and cultural leadership in tribal areas included chiefs at the apex of the hierarchy. Though informal, their authority was widely recognised within the tribe.
Western influence on leadership can be traced back to the colonisation of the region, first by the Netherlands and later by England. Much of African history has been narrated through the lens of colonisers who, driven by self-interest and a lack of cultural understanding, produced a one-sided and biased account of events.
This paradigm dominated the political landscape during the apartheid era and extended its reach into corporate South Africa and the public sector. The enduring impact of this legacy cannot be separated from the discourse on cross-cultural leadership in sub-Saharan Africa, as unresolved issues from the birth of a democratised South Africa in 1994 continue to inform regressive societal and organisational leadership dynamics.
Throughout history, beliefs have often been moulded by prejudice rather than concrete evidence. It is self-evident that societal values alone do not account for issues such as poverty and wealth. Colonialism's enduring legacy has played a substantial role in shaping societal structures and economic realities in the region. But stereotypes and preconceived notions persist, perpetuating, among others, the false idea that African poverty can be attributed to laziness, the suppression of individualism, and irrationality. This disregards the profound impact of colonisation.
Botswana’s model of success
Botswana, Africa’s longest continuous democracy, offers a compelling example of how indigenous culture can drive economic success. Seretse Khama’s leadership was instrumental in Botswana’s transition from English rule and South African apartheid influence in 1966. Botswana, now a sociopolitically stable and economically vibrant country, has embraced a parliamentary democracy built on a multiparty system rooted in the chieftaincy systems that have existed for centuries.
This system is heavily influenced by the underlying philosophy of Kgotla, characterised by principles of democracy, inclusiveness and open dialogue. Chiefs in Botswana continue to exert influence, serving as a check on inappropriate actions in government. Botswana’s example demonstrates that economic prosperity can be built upon indigenous culture, a stark contrast to the dominance of foreign models in many other African countries, especially within the realm of business culture.
Rethinking leadership through indigenous perspectives
The realm of leadership and management theory has been predominantly shaped by the writings of early 20th-century western scholars, deeply influenced by economics and classical sociology. This perspective centrally emphasises logic and a masculine archetype and minimises sensemaking, instinct and emotions.
Recent scholarship has shed light on the biases embedded in these theories. Human beings are not solely rational and transaction-oriented creatures; we are also guided by emotions and communal instincts. Recognising the significance of emotions in leadership can pave the way for more holistic, inclusive and emancipatory theories of management. Eastern philosophies have already demonstrated their positive impact in areas such as business process management and emotional intelligence.
Dr Reuel Khoza stands as one of the pioneering thought leaders who have operationalised the Ubuntu philosophy as a conceptual framework for interpersonal relationships and leadership. His book, Ubuntu Botho – African Humanism, aimed to reposition African ideas of communalism and humanism as fundamental to philosophies found on the continent, extending these principles into corporate life. Ubuntu, originating from the isiZulu language, encapsulates the aphorism, “Umuntu ngumuntu ngabantu”, which translates to “a person is a person because of or through other people”.
Desmond Tutu beautifully encapsulated this philosophy by emphasising that none of us enters the world fully formed; we become human through our interactions with other human beings.
Ubuntu’s ancient roots are traced to central Africa, where it emerged in the earliest societies. As different groups migrated across the continent, they carried the Ubuntu philosophy with them.
Today, Ubuntu philosophy lies at the heart of how organisations can evolve into more inclusive and purpose-driven entities. It finds expression in open innovation platforms, where individuals collaborate externally to navigate fast-changing business landscapes. This interconnectedness fosters a social organisational culture that promotes personal growth, communal achievement, and shared aspirations.
Expressing mutual values such as compassion, reciprocity, and humanity becomes the key to success in online communities and beyond.
The levels of Ubuntu
Johann Broodryk, the first person to receive a PhD in Ubuntu, published a seminal work on the Ubuntu philosophy and its application in modern business. He captured the essence of Ubuntu and explained how it can serve as a management philosophy in contemporary organisations.
Later, the principles of Ubuntu were further elucidated by Vuyisile Msila through his notion of the “5Ps of Ubuntu,” which encompass people-centeredness, permeable walls, partisanship, progeny and productivity.
Traditional western management systems have been guided by misapplied economic assumptions about human nature, often focusing on self-interest as the ultimate driver of behaviour. This self-interest may, in turn, lead employees to want to earn as much as possible while contributing as little as possible.
Organisational culture, a product of leadership style and decisions, systems, policy and process, influences employee engagement. This system needs to be explored to articulate the inherent cultural dynamics: values, leadership style, levels of engagement and behaviour, and the symbols influencing how things get done.
In the rapidly evolving landscape of globalisation, multinational corporations have witnessed their influence stretch across borders in previously unimaginable ways. Operating in diverse contexts has become a daily concern for corporate giants, and adapting to local realities in different countries is a crucial aspect of business strategy.
Despite decades of development and progress in the corporate sector, a profound divide persists between the representatives of developed and developing nations, particularly in the area of corporate leadership talent and its development. This division often results in senior leaders from developed countries being dispatched to lead projects or business units in developing nations, necessitating collaboration with local teams in contexts that are vastly different from their own experiences and understanding.
This creates a high risk that technically and functionally brilliant leaders will falter in developing countries, primarily because they cannot contextualise the value of their contributions within unfamiliar circumstances. Such leaders struggle to immerse themselves in local contexts, to comprehend issues from alternative perspectives and to apply cultural intelligence to lead effectively in unfamiliar territory.
Cultural meaning systems: Ubuntu’s influence
A profound philosophy, Ubuntu is embedded as a societal value within sub-Saharan Africa. It shapes how individuals within a range of societies make sense of their surroundings. The philosophy has been observed to permeate the cultural meaning systems of employees within organisations operating in the region.
Multinational corporations often encounter Ubuntu’s cultural influence as being in stark contrast to westernised cultures. This dichotomy arises for two principal reasons: first, as leaders in multinationals compete on a global stage, they engage primarily with westernised cultures, often led by leaders from developed, westernised backgrounds. Second, leaders in southern Africa sometimes translate predominantly westernised thinking into their business practices, influenced by an education system steeped in western-infused management philosophies.
Historically, leadership in Southern Africa has been shaped by two distinct influences: indigenous African worldviews and imported western perspectives. The traditional and cultural leadership in tribal areas included chiefs at the apex of the hierarchy. Though informal, their authority was widely recognised within the tribe.
Western influence on leadership can be traced back to the colonisation of the region, first by the Netherlands and later by England. Much of African history has been narrated through the lens of colonisers who, driven by self-interest and a lack of cultural understanding, produced a one-sided and biased account of events.
This paradigm dominated the political landscape during the apartheid era and extended its reach into corporate South Africa and the public sector. The enduring impact of this legacy cannot be separated from the discourse on cross-cultural leadership in sub-Saharan Africa, as unresolved issues from the birth of a democratised South Africa in 1994 continue to inform regressive societal and organisational leadership dynamics.
Throughout history, beliefs have often been moulded by prejudice rather than concrete evidence. It is self-evident that societal values alone do not account for issues such as poverty and wealth. Colonialism’s enduring legacy has played a substantial role in shaping societal structures and economic realities in the region. But stereotypes and preconceived notions persist, perpetuating among others the false idea that African poverty can be attributed to laziness, the suppression of individualism, and irrationality. This disregards the profound impact of colonisation.
Botswana’s model of success
Botswana, Africa’s longest continuous democracy, offers a compelling example of how indigenous culture can drive economic success. Seretse Khama’s leadership was instrumental in Botswana’s transition from English rule and South African apartheid influence in 1966. Botswana, now a sociopolitically stable and economically vibrant country, has embraced a parliamentary democracy built on a multiparty system rooted in the chieftaincy systems that have existed for centuries.
This system is heavily influenced by the underlying philosophy of Kgotla, characterised by principles of democracy, inclusiveness and open dialogue. Chiefs in Botswana continue to exert influence, serving as a check on inappropriate actions in government. Botswana’s example demonstrates that economic prosperity can be built upon indigenous culture, a stark contrast to the dominance of foreign models in many other African countries, especially within the realm of business culture.
Rethinking leadership through indigenous perspectives
The realm of leadership and management theory has been predominantly shaped by the writings of early 20th-century western scholars, deeply influenced by economics and classical sociology. This perspective centrally emphasises logic and a masculine archetype and minimises sensemaking, instinct and emotions.
Recent scholarship has shed light on the biases embedded in these theories. Human beings are not solely rational and transaction-oriented creatures; we are also guided by emotions and communal instincts. Recognising the significance of emotions in leadership can pave the way for more holistic, inclusive and emancipatory theories of management. Eastern philosophies have already demonstrated their positive impact in areas such as business process management and emotional intelligence.
Dr Reuel Khoza stands as one of the pioneering thought leaders who have operationalised the Ubuntu philosophy as a conceptual framework for interpersonal relationships and leadership. His book, Ubuntu Botho – African Humanism, aimed to reposition African ideas of communalism and humanism as fundamental to philosophies found on the continent, extending these principles into corporate life. Ubuntu, originating from the isiZulu language, encapsulates the aphorism, “Umuntu ngumuntu ngabantu”, which translates to “a person is a person because of or through other people”.
Desmond Tutu beautifully encapsulated this philosophy by emphasising that none of us enters the world fully formed; we become human through our interactions with other human beings.
Ubuntu’s ancient roots are traced to central Africa, where it emerged in the earliest societies. As different groups migrated across the continent, they carried the Ubuntu philosophy with them.
Today, Ubuntu philosophy lies at the heart of how organisations can evolve into more inclusive and purpose-driven entities. It finds expression in open innovation platforms, where individuals collaborate externally to navigate fast-changing business landscapes. This interconnectedness fosters a social organisational culture that promotes personal growth, communal achievement, and shared aspirations.
Expressing mutual values such as compassion, reciprocity, and humanity becomes the key to success in online communities and beyond.
The levels of Ubuntu
Johann Broodryk, the first person to receive a PhD in Ubuntu, published a seminal work on the Ubuntu philosophy and its application in modern business. He captured the essence of Ubuntu and explained how it can serve as a management philosophy in contemporary organisations.
Later, the principles of Ubuntu were further elucidated by Vuyisile Msila through his notion of the “5Ps of Ubuntu,” which encompass people-centeredness, permeable walls, partisanship, progeny and productivity.
Traditional western management systems have been guided by misapplied economic assumptions about human nature, often treating self-interest as the ultimate driver of behaviour. This assumption, in turn, may lead employees to want to earn as much as possible while contributing as little as possible.
In general, organisational culture, a product of leadership style and decisions, systems, policy and process, influences employee engagement. This system needs to be explored to articulate the inherent cultural dynamics: the values, leadership style, levels of engagement and behaviour, and the symbols that influence how things get done.
Post-colonial theoretical perspectives critically examine the far-reaching impacts of European colonial rule on various aspects of society. They challenge prevailing notions of power structures, amplify marginalised viewpoints and advocate for a more balanced representation of history. Understanding leadership and relationships in South Africa necessitates an examination of the country’s post-colonial history and each leader’s personal connection to it. This requires emotional intelligence, self-awareness, and the pursuit of personal mastery, as postulated by Peter Senge.
Personal mastery is a journey of deepening self-identity, emotional awareness, patience and objective reality perception. It entails a keen awareness of one’s mental models, the deeply ingrained assumptions and biases shaping one’s worldview. Management decisions often reflect organisational intentions, while individual paradigms inform how those messages are interpreted. Therefore, leaders must consciously scrutinise their beliefs and perceptions, recognising these factors’ profound influence on their behaviour and outcomes.
Navigating the legacy of colonialism
Colonisation of Africa commenced in the 15th century, with different European nations leaving their distinct marks on the continent. Africa became a strategic asset, coveted for its trading posts and abundant natural resources. However, this era was marked by an inherent power imbalance between colonists and indigenous populations, leading to enduring legacies of marginalisation and inequality. The process of decolonisation unfolded differently in various African countries, further shaping the post-colonial landscape.
Leaders in Africa and those operating within its context face a multitude of challenges, from economic and political instability to limited infrastructure and disparities in access to technology. Cultural diversity and the historical baggage of colonisation also exert a profound influence on leadership dynamics. The lingering perceptions of poor governance, corruption and overt racism, alongside an oversimplified belief that societal values determine poverty and wealth, complicate leadership efforts in the region.
Countries in southern Africa have demonstrated commendable transitions to democracy and boast admirable constitutions. However, little has changed in terms of economic empowerment and patterns of wealth accumulation.
While legislative measures and political changes have fostered a more diverse middle class, economic disparities remain entrenched. An increase in the registration of Black-owned businesses may deflect from the reality that many new businesses fail and the reality that there has been a limited rise in Black management control. Black women continue to face marginalisation, with low representation in business ownership and leadership positions. These challenges persist despite legislation aimed at promoting transformation.
Leadership is inherently context-specific, shaped by the environment in which it operates. African leaders and those operating within African contexts must navigate sometimes extraordinarily unstable economic and political landscapes while addressing a plethora of challenges. These challenges include the need for socially and environmentally responsible practices addressing long-standing poverty and inequality, limited education and healthcare infrastructure, disparities in technological access, cultural diversity and the enduring legacy of colonisation. These factors contribute to complex leadership dynamics marked by authoritarianism, over-reliance on positional authority, marginalisation, bias, and the perpetuation of “in-groups”.
Responsible leadership operates at four interconnected levels, starting with authentic self-leadership. It emphasises transformational, ethical, and servant leadership in relationships with others; values-based leadership within organisations; and a systems perspective that views the world of work as an interconnected value chain in the ecosystem. Responsible leadership extends beyond individual organisations to broader society and the world, embodying principles of stewardship. In this context, Ubuntu philosophy finds resonance, promoting values such as trust, cooperation, communication, information sharing, reciprocity and resilience.
Leaders operating within the sub-Saharan African context recognise the value of social capital, a concept gaining prominence in management theory. Social capital encompasses shared values, trust, cooperation, communication, information sharing, reciprocity and resilience within a group of individuals in an organisation. It fosters mutual benefit, but its impact can take on a darker dimension when relationships foster corruption and diminish the value of social capital, particularly in contexts characterised by political interference.
The development of social capital involves creating new connections, enhancing relational dynamics and recognising collective identity through shared language and meaning-making. In the South African context, collective identity has the potential to influence power dynamics and force organisations into disruptive change.
The influence of indigenous perspectives within societies where western-dominated organisational cultures operate places a responsibility on global leaders to remain mindful of potential tensions and to critically examine their own beliefs that shape decision-making and relationships. Ultimately, cohesive engagement requires the deliberate creation of organisational cultures that harness the collective strengths of all cultural paradigms, fostering awareness and sensemaking throughout the business architecture and relationships.
The path to inclusive leadership in sub-Saharan Africa is a multifaceted journey, where indigenous philosophies such as Ubuntu can play a pivotal role in reshaping organisational cultures and promoting holistic leadership. As the region continues to grapple with the legacies of colonisation and persistent challenges, responsible leadership that embraces diversity and shared values offers a promising avenue for progress. By recognising the interconnectedness of all individuals and communities, leaders can navigate the complex terrain of sub-Saharan Africa with empathy, compassion and a commitment to positive change. DM
Dr Natasha Winkler-Titus is a senior lecturer in Organisational Behaviour and Leadership at Stellenbosch Business School.
Missouri educators hope a new approach to reading will improve low literacy rates
Education leaders want Missouri teachers to lean into research-backed literacy instruction methods, known as the science of reading.
Missouri education leaders are pushing for a big change in the way children are taught to read. They’re leaning into something called the science of reading, a blanket term for research-backed teaching methods that have been gaining in popularity in recent years.
Multiple new laws are part of this push, including one that takes effect this week. At the same time, the state is in the middle of an effort to train elementary English teachers to completely rethink their approach in the classroom.
In a state where reading test scores have been declining for years, educators hope this method could be a solution to Missouri schools’ failure to teach all kids to read.
The research-backed approach
In the St. Louis area, you can see the science of reading in action in KIPP elementary schools. The charter network has been moving toward a research-backed curriculum in its schools for a few years, and about 70% of its English teachers are going through an intensive professional development course on these teaching methods.
Something seems to be working here. While overall test scores are low, KIPP Victory Academy students’ English language growth scores led the state in the 2020-21 school year, according to St. Louis University’s PRiME center. PRiME researchers say growth is an important measure, because while test scores often reflect the socioeconomic background of students, growth can tell you whether students are improving at school.
On a December Monday at Victory Academy in St. Louis’ West End neighborhood, first graders were getting fidgety in anticipation of recess, but they powered through for their teacher Allison Feldmann. A big textbook from the science of reading professional development course sat on a table at the back of her room; Feldmann is in the middle of the program.
She wore a headset that amplifies her voice as she led the kids through exercises — they were learning how letter sounds combine to make words.
“Say damp,” she instructed.
“Damp!” the class responded.
“Change ‘duh’ to ‘luh,’” she prompted, making the sounds for the letters.
“Lamp!” the class yelled in unison.
Feldmann asked the students to move their bodies as they broke down the sounds in words. As they replaced letters to form a new word, they folded their little arms in front of them, "Macarena" style.
This letter-sound exercise is meant to teach students a skill called phonemic awareness. Science of reading-based classes focus on it alongside phonics, reading fluency, vocabulary and comprehension. Research has shown readers need to have a good grasp of these concepts, and these terms have also recently been written into multiple Missouri education laws.
At December’s state board of education meeting, the stakes were clear during a presentation on the latest test results from the Nation’s Report Card. Education leaders looked at charts showing consistent declines in Missouri students' reading scores since 2015, with a big drop after the start of the pandemic.
Missouri’s Department of Elementary and Secondary Education officials are hoping a push toward research-backed literacy instruction will help.
The department is rolling out a big, statewide effort to put teachers through an intensive professional development program on the science of reading for all teachers, called LETRS. It’s the same course Victory Academy teacher Allison Feldmann is in the middle of.
The training is a time commitment for busy teachers; it can take more than 160 hours to complete over the course of two years. The state has funding for 15,000 kindergarten through fifth grade teachers to go through this training. So far, about 9,000 have at least started it.
On top of that, lawmakers enacted a series of reading instruction changes last session that are already in effect. One makes components of the science of reading a required topic for teachers to learn in college or other training institutions. Another says schools must offer an evidence-based reading program for elementary students.
Another law, which took effect this week, requires new reading testing for kindergarten through third grade at the beginning and end of the school year, to identify students who are behind or at risk for dyslexia. Those students’ parents will then be notified so the struggling students can be given intensive reading instruction.
Altogether, the laws and initiatives represent a big investment of time and money in the science of reading in Missouri.
“We want to ensure that when you hear conversations across the nation about states who have been successful with reading improvement, Missouri needs to be part of that conversation,” said Tracy Hinds, an assistant commissioner at the Department of Elementary and Secondary Education.
The science of reading training for teachers is a good start, but more will be needed to truly change how kids are taught, said Matt Burns, a professor of special education at the University of Missouri. Teachers will need support to implement effective instruction.
“If we want this to happen, we have to have school districts have people on site who could help coach teachers,” Burns said.
There’s another barrier to implementation; methods that aren’t backed by science are still popular in schools. Allison Feldmann remembers how she and her colleagues used to teach at KIPP Victory Academy.
“We weren't really teaching kids to listen to the sounds in words,” Feldmann said.
In about 3 out of 4 classrooms, teachers do the same, deemphasizing methods like sounding out words and instead telling students to use context clues or illustrations to guess a word they don’t know while reading.
“We had been teaching kids, ‘guess this letter and look at the picture,’ and now, because of the science of reading, we know that was definitely not what we were supposed to be doing,” said Angela Jackson, an elementary literacy curriculum manager at KIPP.
To truly improve reading in Missouri, Burns said other schools will have to make the same change. “I think we have to really make sure that some of these practices for which there isn't a research base aren't used in schools,” Burns said.
Feldmann said going through the intensive science of reading course has changed her perspective.
“I really enjoy it. It's just a very deep dive into the curriculums that I'm teaching and kind of the why behind everything,” Feldmann said. “It has changed my perspective a lot.”
Missouri education leaders hope more teachers will change their perspectives too.
Copyright 2023 St. Louis Public Radio. To see more, visit St. Louis Public Radio.
|
<urn:uuid:513b35e0-e8b3-42a6-a946-f83db336d3e3>
|
{
"dump": "CC-MAIN-2023-06",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500095.4/warc/CC-MAIN-20230204075436-20230204105436-00131.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9670431613922119,
"pii_count": 0,
"score": 3.34375,
"token_count": 1423,
"url": "https://www.kcur.org/education/2023-01-05/missouri-educators-hope-a-new-approach-to-reading-will-improve-low-literacy-rates"
}
|
Missouri educators hope a new approach to reading will improve low literacy rates
Education leaders want Missouri teachers to lean into research-backed literacy instruction methods, known as the science of reading.
Missouri education leaders are pushing for a big change in the way children are taught to read. They’re leaning into something called the science of reading, a blanket term for research-backed teaching methods that have been gaining in popularity in recent years.
Multiple new laws are part of this push, including one that takes effect this week. At the same time, the state is in the middle of an effort to train elementary English teachers to completely rethink their approach in the classroom.
In a state where reading test scores have been declining for years, educators hope this method could be a solution to Missouri schools’ failure to teach all kids to read.
The research-backed approach
In the St. Louis area, you can see the science of reading in action in KIPP elementary schools. The charter network has been moving toward a research-backed curriculum in its schools for a few years, and about 70% of its English teachers are going through an intensive professional development course on these teaching methods.
Something seems to be working here. While overall test scores are low, KIPP Victory Academy students’ English language growth scores led the state in the 2020-21 school year, according to St. Louis University’s PRiME center. PRiME researchers say growth is an important measure, because while test scores often reflect the socioeconomic background of students, growth can tell you whether students are improving at school.
On a December Monday at Victory Academy in St. Louis’ West End neighborhood, first graders were getting fidgety in anticipation of recess, but they powered through for their teacher Allison Feldmann. A big textbook from the science of reading professional development course sat on a table at the back of her room; Feldmann is in the middle of the program.
She wore a headset that amplifies her voice as she led the kids through exercises — they were learning how letter sounds combine to make words.
“Say damp,” she asked.
“Damp!” the class responded.
“Change ‘duh’ to ‘luh,’” she prompted, making the sounds for the letters.
“Lamp!” the class yelled in unison.
Feldmann asked the students to move their bodies as they broke down the sounds in words. As they replaced letters to form a new word, they folded their little arms in front of them, "Macarena"
|
style.
This letter sound exercise is meant to teach students a skill called phonemic awareness. science of reading-based classes focus on it alongside phonics, reading fluency, vocabulary and comprehension. Research has shown readers need to have a good grasp of these concepts, and these words have also recently been written into multiple Missouri education laws.
At December’s state board of education meeting, the stakes were clear during a presentation on the latest test results from the Nation’s Report Card. Education leaders looked at charts showing consistent declines in Missouri students' reading scores since 2015, with a big drop after the start of the pandemic.
Missouri’s Department of Elementary and Secondary Education officials are hoping a push toward research-backed literacy instruction will help.
The department is rolling out a big, statewide effort to put teachers through an intensive professional development program on the science of reading for all teachers, called LETRS. It’s the same course Victory Academy teacher Allison Feldmann is in the middle of.
The training is a time commitment for busy teachers; it can take more than 160 hours to complete over the course of two years. The state has funding for 15,000 kindergarten through fifth grade teachers to go through this training. So far, about 9,000 have at least started it.
On top of that, lawmakers enacted a series of reading instruction changes last session that are already in effect. One makes components of the science of reading a required topic for teachers to learn in college or other training institutions. Another says schools must offer an evidence-based reading program for elementary students.
Another law, which took effect this week, requires new reading testing for kindergarten through third grade at the beginning and end of the school year, to identify students who are behind or at risk for dyslexia. Those students’ parents will then be notified so the struggling students can be given intensive reading instruction.
Altogether, the laws and initiatives represent a big investment of time and money in the science of reading in Missouri.
“We want to ensure that when you hear conversations across the nation about states who have been successful with reading improvement, Missouri needs to be part of that conversation,” said Tracy Hinds, an assistant commissioner at the Department of Elementary and Secondary Education.
The science of reading training for teachers is a good start, but more will be needed to truly change how kids are taught, said Matt Burns, a professor of special education at the University of Missouri. Teachers will need support to implement effective instruction.
“If we want this to happen, we have to have school districts have people on site who could help coach teachers,” Burns said.
There’s another barrier to implementation: methods that aren’t backed by science are still popular in schools. Allison Feldmann remembers how she and her colleagues used to teach at KIPP Victory Academy.
“We weren't really teaching kids to listen to the sounds in words,” Feldmann said.
In about 3 out of 4 classrooms, teachers do the same, deemphasizing methods like sounding out words and instead telling students to use context clues or illustrations to guess a word they don’t know while reading.
“We had been teaching kids, ‘guess this letter and look at the picture,’ and now, because of the science of reading, we know that was definitely not what we were supposed to be doing,” said Angela Jackson, an elementary literacy curriculum manager at KIPP.
To truly improve reading in Missouri, Burns said other schools will have to make the same change. “I think we have to really make sure that some of these practices for which there isn't a research base aren't used in schools,” Burns said.
Feldmann said going through the intensive science of reading course has changed her perspective.
“I really enjoy it. It's just a very deep dive into the curriculums that I'm teaching and kind of the why behind everything,” Feldmann said. “It has changed my perspective a lot.”
Missouri education leaders hope more teachers will change their perspectives too.
Copyright 2023 St. Louis Public Radio. To see more, visit St. Louis Public Radio.
|
Lake Tahoe reaches clarity levels not seen since the 1980s
In 2022, Lake Tahoe was especially blue.
Over the last five months of 2022, scientists measuring the lake’s clarity could see down 80.6 feet. It was the clearest the lake has been since the 1980s, according to a report released by the U.C. Davis Tahoe Environmental Research Center.
“It’s encouraging to see that water clarity improved in 2022 compared to the year prior. Yet, a better annual average for a single year shouldn’t be taken as a signal that Lake Tahoe’s water clarity is now on a path to recovery,” League to Save Lake Tahoe CEO Darcie Goodman Collins said in a press release. “The data from 2022 tells a nuanced story and reminds us there is much more we need to understand about Lake Tahoe if we hope to keep – and restore – its blueness.”
Measuring Tahoe's clarity
Scientists regularly measure the lake’s clarity by lowering a large, white disk into the water, recording how deep the disk can go before it is no longer visible.
While Tahoe’s clarity peaked in late 2022, its average clarity over the year was just 71.7 feet, much lower than the averages that hovered closer to 100 feet in the 1960s and '70s.
In 2021, the lake’s clarity was just 61 feet.
Since the 1960s, agencies and groups from California and Nevada have worked to restore the lake’s clear water, with the goal of reaching an annual average clarity of 97.4 feet, a depth not seen in roughly half a century.
Zooplankton, invasive shrimp and dog treats
Clarity is affected by the concentration of tiny particles such as silt and clay that come from runoff, and phytoplankton. Other factors can impact it, but a report from the Tahoe Environmental Research Center concluded that the change in zooplankton was the largest contributor toward improving the lake’s clarity.
Zooplankton – microscopic animals – eat the phytoplankton, and in late 2022, there was a significant spike in the number of zooplankton in Tahoe.
Two notable zooplankton returned this past fall and winter – daphnia and bosmina. They had largely disappeared from Tahoe after invasive mysis shrimp were introduced in the 1960s. The shrimp were so abundant in the lake that a California nonprofit group was studying ways to harvest the shrimp and turn them into dog treats.
In 2021, the mysis shrimp population crashed unexpectedly, and a year later, daphnia and bosmina were thriving.
Scientists aren’t sure, but clarity levels could continue to improve this year, returning to levels seen in the 1970s if daphnia and bosmina continue to thrive – despite the large amounts of runoff expected this spring and summer.
And Tahoe fishermen can rejoice – Kokanee salmon, which feed on daphnia, are expected to be larger this season with their bountiful food supply.
But that improvement would likely be short-lived. Mysis shrimp populations are expected to rebound, and, as they consume bosmina and daphnia, clarity would once again decline.
Amy Alonzo covers the outdoors, recreation and environment for Nevada and Lake Tahoe. Reach her <email-pii>.
|
<urn:uuid:e421f91f-bf5e-4ec1-b5b6-dc2a4dfe7528>
|
{
"dump": "CC-MAIN-2023-23",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224649518.12/warc/CC-MAIN-20230604061300-20230604091300-00687.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9576290249824524,
"pii_count": 1,
"score": 2.96875,
"token_count": 733,
"url": "https://www.rgj.com/story/news/2023/04/13/lake-tahoe-reached-levels-not-seen-since-1980s-in-2022/70109386007/"
}
|
Lake Tahoe reaches clarity levels not seen since the 1980s
In 2022, Lake Tahoe was especially blue.
Over the last five months of 2022, scientists measuring the lake’s clarity could see down 80.6 feet. It was the clearest the lake has been since the 1980s, according to a report released by the U.C. Davis Tahoe Environmental Research Center.
“It’s encouraging to see that water clarity improved in 2022 compared to the year prior. Yet, a better annual average for a single year shouldn’t be taken as a signal that Lake Tahoe’s water clarity is now on a path to recovery,” League to Save Lake Tahoe CEO Darcie Goodman Collins said in a press release. “The data from 2022 tells a nuanced story and reminds us there is much more we need to understand about Lake Tahoe if we hope to keep – and restore – its blueness.”
Measuring Tahoe's clarity
Scientists regularly measure the lake’s clarity by lowering a large, white disk into the water, recording how deep the disk can go before it is no longer visible.
While Tahoe’s clarity peaked in late 2022, its average clarity over the year was just 71.7 feet; much lower than the averages that hovered closer to 100 feet in the 1960s and '70s.
In 2021, the lake’s clarity was just 61 feet.
Since the 1960s, agencies and groups from California and Nevada have worked to restore the lake’s clear water, with the goal of reaching an annual average clarity of 97.4 feet, a depth not seen in roughly half a century.
Zooplankton, invasive shrimp and dog treats
Clarity is affected by the concentration of tiny particles such as silt and clay that come from runoff, and phytoplankton. Other factors can impact it, but a report from the Tahoe Environmental Research Center concluded that the change in zooplankton was the largest contributor toward improving the lake’s clarity.
Zooplankton – microscopic animals – eat the phytoplankton, and in late 2022, there was a significant spike in the number of zooplankton in Tahoe.
Two notable zooplankton returned this past fall and winter – daphnia and bosmina. They had largely disappeared from Tahoe after invasive m
|
ysis shrimp were introduced in the 1960s. The shrimp were so abundant in the lake that a California nonprofit group was studying ways to harvest the shrimp and turn them into dog treats.
In 2021, the mysis shrimp population crashed unexpectedly, and a year later, daphnia and bosmina were thriving.
Scientists aren’t sure, but clarity levels could continue to improve this year, returning to levels seen in the 1970s if daphnia and bosmina continue to thrive – despite the large amounts of runoff expected this spring and summer.
And Tahoe fishermen can rejoice – Kokanee salmon, which feed on daphnia, are expected to be larger this season with their bountiful food supply.
But that improvement would likely be short-lived. Mysis shrimp populations are expected to rebound, and, as they consume bosmina and daphnia, clarity would once again decline.
Amy Alonzo covers the outdoors, recreation and environment for Nevada and Lake Tahoe. Reach her <email-pii>.
|
Dan Frechtling, CEO, Boltive.
How does data privacy, a subject that got very little attention 15 years ago, lay claim to a week-long commemoration from January 24 to 28?
What we now recognize as Data Privacy Week began as Data Privacy Day in 2007. The day grew into a week in 2021, when the U.S. National Cybersecurity Alliance (NCA) expanded it to spread awareness about online privacy.
This recognition did not happen overnight. Rather, it culminated from a cadence of data tracking and data protection incidents over the past 20 years.
The Data Tracking Movement
First, let us don our web archaeology hats and examine the rise of data tracking. Why have marketers gathered more and more personal information about individuals?
There are many reasons for this. A primary cause was the redistribution of content on the Web. For example, at the dawn of the consumer internet, people used to browse new cars by visiting auto pages on Yahoo, MSN and AOL. Carmakers placed ads on these sites.
But by 2002, search engines, led by Google, became the starting point for Web surfers. Instead of audiences flocking to the content hubs—as they did in the print world—Web traffic became fragmented. Advertisers needed to work with ad networks and ad tech to aggregate “auto intenders” across many publishers.
Cookies made this possible. In 1994, the first cookie, or unique ID, was born at Netscape when engineer Lou Montulli wanted to remember visitors anonymously and privately. Without cookies, browsers tended to “forget” signed-in users, preferred language and items in shopping carts.
Then the cookie became a “monster.” It turned into a tracker in 1995 as Doubleclick, an internet advertising pioneer, repurposed cookies to follow users from one site to another. With cookies, you did not just see who visited smaller auto sites today but also who visited them in the past 30 days. So, you could find auto intenders in many places and advertise to them.
From these beginnings came more targeting innovations over 20 years. Interest-based ads (IBA) began tracking a user across sites to serve ads to that browser’s interests. By placing an individual in interest categories, advertisers could send relevant messages.
In 2007, the iPhone launch introduced the multi-device era. Then cross-app advertising became the mobile app equivalent of IBA, tracking users across nonaffiliated apps. In 2014, device graphs connected desktop, mobile and, later, connected TV (CTV). All these nodes made it very profitable to retarget users. Retargeting recycles data collected on one site to show an ad to users on other sites or apps.
The Data Protection Movement
A second trend began blowing in the opposite direction. Data protection intensified like a drumbeat over the past 10 years. For the sake of space, we will concentrate on U.S. developments.
In the last 10 years, the number of reported U.S. data breaches has grown from 419 in 2011 to 1,862 in 2021, the last full year of reporting. That is a compound annual growth rate of 16%.
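The 16% figure is a standard compound annual growth rate (CAGR) calculation; a few lines of Python, using only the counts quoted above, confirm the arithmetic:

```python
# Verify the compound annual growth rate (CAGR) implied by the
# breach counts cited above: 419 in 2011, 1,862 in 2021 (10 years).
start, end, years = 419, 1862, 10

# CAGR = (end / start) ** (1 / years) - 1
cagr = (end / start) ** (1 / years) - 1
print(f"CAGR: {cagr:.1%}")  # roughly 16%, matching the figure in the text
```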
Among these numbers were high-profile mishaps. There was a series of attacks, allegedly out of Russia, carried out against Yahoo from 2013 to 2016 in which attackers breached 3 billion user accounts worldwide. There was also the Equifax breach in 2017, when Chinese hackers gained access to the sensitive data of 147 million Americans.
Beyond these numbers, other events shattered complacency about privacy. The 2013 leak of classified information about NSA surveillance by Edward Snowden eroded trust in our government, and the 2018 Cambridge Analytica manipulation of users’ sensitive data on Facebook eroded trust in Big Tech.
On the one hand, tracking transformed into hyper-targeting, the over-collection of data and the proliferation of third parties. On the other hand, data protection became a security and privacy issue after alarming incidents changed consumer attitudes. The data tracking and data protection movements collided in 2018.
The General Data Protection Regulation (GDPR) lowered the boom on business practices in mid-2018. GDPR is still regarded as the most comprehensive privacy and security law in the world. It has created a regulatory wave unlike any set of rules before.
Why 2023 Will Be A Banner Year For U.S. Data Privacy
GDPR spawned similar laws in Brazil (LGPD), Japan (APPI) and China (PIPL). We are witnessing the fastest simultaneous change in global regulation in history.
This year, five years after GDPR took effect, we are seeing a shift in the U.S. California’s CPRA amendment took effect on January 1, along with Virginia’s VCDPA. Colorado’s CPA and Connecticut’s CTDPA will be in force on July 1. Utah’s UCPA will go into effect on December 31. Four more states have active bills. In addition, the FTC is engaged in privacy rulemaking, and the American Data Privacy and Protection Act (ADPPA) sits in the House of Representatives.
With this perspective, Data Privacy Week is a short ceremony. The other 51 weeks of the year, regulators continue to conduct investigations and prosecute non-compliant companies.
Coming out of this week, companies in 2023 will have a choice: They may continue down the path of business as usual. However, both GDPR enforcement and CCPA enforcement trends indicate smaller enterprises are increasingly caught in regulatory sweeps.
A better path, in the best interests of customers as well as regulators, is to implement privacy practices that ensure user data is only shared with user consent. These small steps are accelerated by software that automates self-audits for businesses.
It is time to call it a day on commemorating privacy for only one week in January. Instead, it should be a week-in, week-out pursuit. And customers will thank us.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?
|
Environmental advocates, regulators at odds over Vermont's pesticide regulations
For the first time in more than 30 years, Vermont is updating its regulations on pesticide use in the state.
After working on the update for years, regulators at the Agency of Agriculture, Food and Markets have prepared a set of draft rules. But a coalition of environmental advocates say the proposed regulations don’t go far enough to protect Vermont’s environment in light of climate change.
Lawmakers are expected to vote on the new rules this Thursday, Jan. 5.
Vermont Public’s Jenn Jarecki spoke with climate and environment reporter Abagael Giles about the draft regulations. Their conversation below has been edited and condensed for clarity.
Jenn Jarecki: So to get started, how are pesticides regulated in Vermont and what do the rules look like right now?
Abagael Giles: Yeah, so Vermont law defines pesticides as “economic poisons,” which I think is a good place to start.
These are toxic chemicals that our government — and a lot of people in farming — see as necessary for industry.
These rules govern everything from chemicals like atrazine and glyphosate, used in corn fields, to insecticides used on lawns and golf courses, as well as the herbicides and insecticides used for invasive species control (and there is overlap there).
Basically, pesticide sellers have to register their products with the state before they can sell them here. The state sorts them into categories based on how toxic they are. Federal, but also state law, decides a lot of that.
The most toxic pesticides can only be applied by someone with a special certification. And people who apply them have to report to the state how much they use, where they apply them and when.
Mostly these rules deal with all the red tape. Not what pesticides fall into what pot, but how Vermont treats them once they’re there. Who gets to apply them, use them, sell them and what kind of permit, if any, you need to do that. But also who you have to tell when you use them.
So what are the key changes on the table here?
Big picture, these updates are happening to bring Vermont’s regulations up to date with the EPA’s; the federal agency made changes to its rules a few years ago. Vermont has to have regulations that are at least as strict as the federal government’s regulations — though Vermont can also choose to regulate pesticides more aggressively.
Big changes include a requirement that people who apply commercial pesticides notify all landowners about pesticide use on their property.
It also creates a process for towns and nonprofits to apply for a permit to spray for invasive species — think Japanese knotweed — on land that doesn’t belong to them, or in public rights of way. There is a new permit for spraying for adult mosquitos.
And there are also some stricter rules about how companies that use pesticides store them. And a rule that landlords and homeowner associations notify condo residents when they use pesticides there.
What do we know right now about how pesticides are used in Vermont? And is that changing?
Yeah, so this is complicated. Because of the way the current regulations work, some of the pesticides applied on farms — cornfields are a big place where this happens — don’t get reported in the state’s data. That’s because only the chemicals regulators have agreed are the most hazardous have to be reported by private applicators.
For example, atrazine, which is a commonly used weed killer, is in this category. But glyphosate, the big ingredient in Roundup, is also used a lot by Vermont farms. You only have to report that one to the state’s database if you’re a commercial applicator — someone who gets paid to apply it.
But Vermont farmers are contending with new and more prolific pests every year. It’s a problem that’s made worse by climate change. As we see fewer prolonged freeze periods and our winters warm, it makes it easier for pests that couldn’t survive here in the past to get a foothold.
Environmental advocates are worried that climate change will lead to farms using more pesticides unless we create new regulations that push for alternative solutions.
Also, bees are in decline here and globally. A lot of science has shown that pesticides are part of the problem.
What are the biggest objections from the environmental community? What are they asking for?
Yeah, so several environmental groups including Conservation Law Foundation, the Lake Champlain Committee and Rural Vermont signed a letter to lawmakers and the Agency of Agriculture calling for the lawmakers to reject this new set of rules for a couple big reasons.
They say the state should require that people who get a permit to apply the most toxic pesticides also present a plan for how they’ll use less over time and do environmental monitoring.
They feel like the state didn’t present enough scientific evidence to support its decision not to require this as part of every permit it issues to people using regulated pesticides, not just some.
And the environmental groups say without this, the agency’s claims that these new regulations will be a boon for the environment and for climate change just don’t pencil out.
State regulators point out that the current pesticide rules don’t require this sort of tracking or plan as part of any permit. So they see what’s proposed as a big step forward. And they say doing what advocates call for would be tricky from a regulatory standpoint.
Advocates also wanted this bill to include explicit protections for native pollinators.
The current 1991 regulations primarily protect honeybees — and the proposed new regulations would expand protections for honeybees, but not for native bees.
Advocates also want to see more public involvement in decisions about pesticide regulations.
So yeah, the advocates are calling on lawmakers to reject this new rule and for the Agency of Agriculture to delay the process until lawmakers can be more explicit about what they’re looking for in an update. They say the rule as written doesn’t go far enough beyond what’s federally required.
Now, pesticides have come up in the Statehouse a lot in the last few years, especially around the decline of bees and concerns about water quality. Are there any places where this rule deals with policy that lawmakers are still figuring out?
Yeah, there’s growing evidence that PFAS can be spread when people apply pesticides.
These are a class of toxic chemicals that never break down in the natural environment. They’ve been linked to cancer. And this past summer, the EPA proposed designating them as hazardous substances under its superfund program, which is a big deal.
The agency has said that even tiny amounts of these so-called forever chemicals are super toxic for humans. But to the frustration of regulators in a lot of states, the EPA is still working on a plan for what to do about them.
PFAS are often used for waterproofing, and they tend to show up in the containers pesticides are stored in.
This proposed set of regulations doesn’t mention PFAS. Some advocates found that disappointing and say it should be a priority for lawmakers in the coming sessions.
Also last year, lawmakers debated a ban on seeds coated with neonicotinoid pesticides.
This insecticide is a neurologic toxin that is bad for bees. Right now, it’s used by most farmers preventatively to protect seeds and seedlings.
The ban was softened last year after testimony from farmers who said even when they try, it’s hard to find seeds in Vermont that aren’t treated. Lawmakers asked the Agency of Agriculture to adopt best management practices for using neonicotinoid treated seeds instead, and those are still in the works.
More from Vermont Public: Vt.’s housing health & safety system didn’t protect farmworkers, so they created their own program | En Vermont, las regulaciones de alojamiento y seguridad no protegían a los trabajadores agrícolas, entonces crearon su propio programa
Beekeepers in the state have been pushing for stricter regulations for some time. A committee of experts told lawmakers a few years ago that Vermont should be tracking all pesticide use, not just the most dangerous ones. And we should be setting clear targets for reducing our dependence on them.
These are things legislators could revisit and something environmental advocates really wanted these regulations to address.
What’s the next step in the process and how can Vermonters still weigh in?
So again, these regulations were written by regulators at the Agency of Agriculture. Next, a small group of lawmakers from both the House and Senate will get to vote on whether the rules are consistent with Vermont law.
That vote happens Thursday in Montpelier, before the Legislative Committee on Administrative Rules.
Now the committee has already had quite a bit of back and forth with the Agency of Agriculture over this rule. So if they vote not to approve it, Vermont law technically allows the agency to adopt the regulations anyway. But the agency says they’d rather not have to.
Still, lawmakers can refer the issue to the committees in both chambers who deal with environmental regulations and agriculture and tell them they want to see some bills that call for updates. What they tell those committees could have a big impact on future policy.
People who want to weigh in can reach out to the lawmakers that sit on the Legislative Committee on Administrative Rules, or to their local representatives.
|
<urn:uuid:6493e22b-4991-496c-a0e3-b3a93d673729>
|
{
"dump": "CC-MAIN-2023-06",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499967.46/warc/CC-MAIN-20230202070522-20230202100522-00292.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9492064714431763,
"pii_count": 0,
"score": 2.640625,
"token_count": 1982,
"url": "https://www.vermontpublic.org/local-news/2023-01-03/environmental-advocates-regulators-at-odds-over-vermonts-pesticide-regulations"
}
|
Environmental advocates, regulators at odds over Vermont's pesticide regulations
For the first time in more than 30 years, Vermont is updating its regulations on pesticide use in the state.
After working on the update for years, regulators at the Agency of Agriculture, Food and Markets have prepared a set of draft rules. But a coalition of environmental advocates say the proposed regulations don’t go far enough to protect Vermont’s environment in light of climate change.
Lawmakers are expected to vote on the new rules this Thursday, Jan. 5.
Vermont Public’s Jenn Jarecki spoke with climate and environment reporter Abagael Giles about the draft regulations. Their conversation below has been edited and condensed for clarity.
Jenn Jarecki: So to get started, how are pesticides regulated in Vermont and what do the rules look like right now?
Abagael Giles: Yeah, so Vermont law defines pesticides as “economic poisons,” which I think is a good place to start.
These are toxic chemicals that our government — and a lot of people in farming — see as necessary for industry.
These rules govern everything from chemicals like atrazine and glyphosate, used in corn fields, to insecticides used on lawns and golf courses, as well as the herbicides and insecticides used for invasive species control (and there is overlap there).
Basically, pesticide sellers have to register their products with the state before they can sell them here. The state sorts them into categories based on how toxic they are. Federal, but also state law, decides a lot of that.
The most toxic pesticides can only be applied by someone with a special certification. And people who apply them have to report to the state how much they use, where they apply them and when.
Mostly these rules deal with all the red tape. Not which pesticides fall into which pot, but how Vermont treats them once they're there. Who gets to apply them, use them, sell them and what kind of permit, if any, you need to do that. But also who you have to tell when you use them.
So what are the key changes on the table here?
Big picture, these updates are happening to bring Vermont’s regulations up to date with the EPA’s; the federal agency made changes to its rules a few years ago. Vermont has to have regulations that are at least as strict as the federal government’s regulations — though Vermont can also choose to regulate pesticides more aggressively.
Big changes include a requirement that people who apply commercial pesticides notify all landowners about pesticide use on their property.
It also creates a process for towns and nonprofits to apply for a permit to spray for invasive species — think Japanese knotweed — on land that doesn’t belong to them, or in public right of ways. There is a new permit for spraying for adult mosquitos.
And there are also some stricter rules about how companies that use pesticides store them. And a rule that landlords and homeowner associations notify condo residents when they use pesticides there.
What do we know right now about how pesticides are used in Vermont? And is that changing?
Yeah, so this is complicated. Because of the way the current regulations work, some of the pesticides applied on farms — cornfields are a big place where this happens — don’t get reported in the state’s data. That’s because only the chemicals regulators have agreed are the most hazardous have to be reported by private applicators.
For example, atrazine, which is a commonly used weed killer, is in this category. But glyphosate, the big ingredient in Roundup, is also used a lot by Vermont farms. You only have to report that one to the state’s database if you’re a commercial applicator — someone who gets paid to apply it.
But Vermont farmers are contending with new and more prolific pests every year. It’s a problem that’s made worse by climate change. As we see fewer prolonged freeze periods and our winters warm, it makes it easier for pests that couldn’t survive here in the past to get a foothold.
Environmental advocates are worried that climate change will lead to farms using more pesticides unless we create new regulations that push for alternative solutions.
Also, bees are in decline here and globally. A lot of science has shown that pesticides are part of the problem.
What are the biggest objections from the environmental community? What are they asking for?
Yeah, so several environmental groups including Conservation Law Foundation, the Lake Champlain Committee and Rural Vermont signed a letter to lawmakers and the Agency of Agriculture calling for the lawmakers to reject this new set of rules for a couple big reasons.
They say the state should require that people who get a permit to apply the most toxic pesticides also present a plan for how they’ll use less over time and do environmental monitoring.
They feel like the state didn’t present enough scientific evidence to support its decision not to require this as part of every permit it issues to people using regulated pesticides, not just some.
And the environmental groups say without this, the agency’s claims that these new regulations will be a boon for the environment and for climate change just don’t pencil out.
State regulators point out that the current pesticide rules don’t require this sort of tracking or plan as part of any permit. So they see what’s proposed as a big step forward. And they say doing what advocates call for would be tricky from a regulatory standpoint.
Advocates also wanted this bill to include explicit protections for native pollinators.
The current 1991 regulations primarily protect honeybees — and the proposed new regulations would expand protections for honeybees, but not for native bees.
Advocates also want to see more public involvement in decisions about pesticide regulations.
So yeah, the advocates are calling on lawmakers to reject this new rule and for the Agency of Agriculture to delay the process until lawmakers can be more explicit about what they’re looking for in an update. They say the rule as written doesn’t go far enough beyond what’s federally required.
Now, pesticides have come up in the Statehouse a lot in the last few years, especially around the decline of bees and concerns about water quality. Are there any places where this rule deals with policy that lawmakers are still figuring out?
Yeah, there’s growing evidence that PFAS can be spread when people apply pesticides.
These are a class of toxic chemicals that never break down in the natural environment. They’ve been linked to cancer. And this past summer, the EPA proposed designating them as hazardous substances under its superfund program, which is a big deal.
The agency has said that even tiny amounts of these so-called forever chemicals are super toxic for humans. But to the frustration of regulators in a lot of states, the EPA is still working on a plan for what to do about them.
PFAS are often used for waterproofing, and they tend to show up in the containers pesticides are stored in.
This proposed set of regulations doesn’t mention PFAS. Some advocates found that disappointing and say it should be a priority for lawmakers in the coming sessions.
Also last year, lawmakers debated a ban on seeds coated with neonicotinoid pesticides.
This insecticide is a neurologic toxin that is bad for bees. Right now, it’s used by most farmers preventatively to protect seeds and seedlings.
The ban was softened last year after testimony from farmers who said even when they try, it’s hard to find seeds in Vermont that aren’t treated. Lawmakers asked the Agency of Agriculture to adopt best management practices for using neonicotinoid treated seeds instead, and those are still in the works.
Beekeepers in the state have been pushing for stricter regulations for some time. A committee of experts told lawmakers a few years ago that Vermont should be tracking all pesticide use, not just the most dangerous ones. And we should be setting clear targets for reducing our dependence on them.
These are things legislators could revisit and something environmental advocates really wanted these regulations to address.
What’s the next step in the process and how can Vermonters still weigh in?
So again, these regulations were written by regulators at the Agency of Agriculture. Next, a small group of lawmakers from both the House and Senate will get to vote on whether the rules are consistent with Vermont law.
That vote happens Thursday in Montpelier, before the Legislative Committee on Administrative Rules.
Now the committee has already had quite a bit of back and forth with the Agency of Agriculture over this rule. So if they vote not to approve it, Vermont law technically allows the agency to adopt the regulations anyway. But the agency says they’d rather not have to.
Still, lawmakers can refer the issue to the committees in both chambers who deal with environmental regulations and agriculture and tell them they want to see some bills that call for updates. What they tell those committees could have a big impact on future policy.
People who want to weigh in can reach out to the lawmakers that sit on the Legislative Committee on Administrative Rules, or to their local representatives.
Against all odds, this Black NJ lawyer helped ignite the civil rights movement | Stile
Oliver Randolph, the lone Black delegate to the New Jersey Constitutional Convention, had reason to worry after arriving more than an hour late to the Aug. 13, 1947, deliberations inside the Rutgers University gym.
Before his arrival, opposition had been building all morning to his bold plans to break Jim Crow’s chokehold over the Garden State's public schools and its National Guard.
Some colleagues, while sympathetic to his overall goal, were concerned that his approach was an overreach or redundant. The pushback threatened to defeat Randolph's long effort to forge a progressive and equitable blueprint for New Jersey’s growing Black population. He needed to act quickly.
After thanking the delegates for waiting — a train accident near Elizabeth delayed him from reaching New Brunswick at the session's start — the lawyer and former legislator went on the offensive, relying on a tactic he had used in political battles as a prominent Republican operative in Essex County in the 1920s.
He shamed them.
“The fact is that those discriminated against belong to a race, as you all know, of which a great many young men gave their blood and laid down their lives for this great cause of democracy," Randolph said, referring to the large numbers of Black servicemen who had fought in World War II. “I can’t believe that this convention — the convention which is taking such marvelous strides to present a product that will be a model — I can’t believe that we will allow discrimination in our public schools.”
Haggling continued over the next few days, but Randolph’s fierce appeal to the delegates' duty and their place in history clearly had turned the tide. By the end of the week, the convention ratified Randolph’s bold bid in a lopsided vote.
The action would prove to be a major, historic triumph. Three months later, New Jersey voters ratified a modern constitution featuring a streamlined judicial system, a powerful new governor’s role and Randolph’s proposal, formally enshrined as Article 1, Paragraph 5:
"No person shall be denied the enjoyment of any civil or military right, nor be discriminated against in the exercise of any civil or military right, nor be segregated in the militia or in the public schools, because of religious principles, race, color, ancestry or national origin."
New Jersey, a state derided by African American leaders as the "Georgia of the North," was now stamped with distinction as the first in the nation that unequivocally outlawed segregation in both its schools and its National Guard.
Virtually overnight, the Garden State went from a backwater to a progressive exemplar of social justice. Since then, only two other states — Connecticut and Hawaii — have taken similar steps to include anti-segregation clauses in their constitutions, according to John Dinan, a professor of politics and international relations at Wake Forest University, who is an expert on state constitutions.
It was a remarkable feat: A lone Black man determinedly and successfully pursued racial equity roughly a decade before the civil rights movement.
“It took the force of will of someone like Randolph to recognize that the state was not living up to its possibilities," Elise Boddie, a former law professor at Rutgers University-Newark and a civil rights activist, said in an interview last year.
But more than seven decades later, New Jersey is far from realizing those possibilities. New Jersey public schools remain segregated in largely segregated communities, a reflection of the "white flight" suburbanization that took root after the war.
A lawsuit seeking to crack the segregated patterns has stalled in the face of resistance from the state's liberal governor, Phil Murphy. And the Legislature, controlled by Democrats — moderate-to-socially liberal Democrats, not the Jim Crow segregationists of Randolph's time — has shown little inclination to embrace the experimental aims of the lawsuit.
Still, much of the work Randolph presumably dreamed of doing remains undone. New Jersey's record on civil rights, equal rights and segregation has many boxes yet to be ticked if Randolph's vision is to be realized. His efforts in 1947 were only a starting point.
'Separate but equal' upended
But Randolph's achievement was remarkable in other ways.
In arguing for his provision, Randolph successfully attacked the pernicious "separate but equal" doctrine — the result of the U.S. Supreme Court's Plessy v. Ferguson ruling — that permitted segregation to flourish in New Jersey schools, despite a 60-year-old law banning the practice. Randolph's effort at Rutgers prevailed — seven years before the lawyers for the NAACP prevailed at the U.S. Supreme Court in the landmark Brown v. Board of Education decision outlawing segregated public schools across the country.
In a 1997 documentary about the making of the New Jersey Constitution, Clement Price, a Rutgers University-Newark professor and city historian, asserted that the Randolph-drafted provision encouraged the Supreme Court to take its historic leap.
“The court could point to New Jersey as an example of a state that voluntarily desegregated its schools, and it did so without racial … havoc," said Price, who died in 2014.
The new anti-segregation clause also ratcheted up pressure on then-President Harry S. Truman to abolish segregation in the U.S. armed forces the next year.
After New Jersey’s decision to integrate its Guard units in February 1948, other states demanded a waiver from Truman’s Defense Department policy to do the same. Truman eventually issued his order five months later.
Little-known in New Jersey history
Still, except among a small community of historians, law professors, journalists and staff at the Newark Public Library who maintain Randolph's papers and associated ephemera, and who work to sustain his memory, Randolph remains a ghost of New Jersey history.
His achievements, sadly, are rarely celebrated.
That may be because Randolph, who died four years after the convention, was not one to boast of his success, said Bernard K. Freamon, a former Seton Hall Law professor who wrote an extensive Rutgers Law Journal article in 2004 about the New Jersey Constitution's anti-segregation clause.
And in a few years, the bus boycotts and protests and the violent tumult of the civil rights movement would quickly dominate the nation’s television screens and its consciousness. By then, New Jersey’s 1947 breakthrough, Randolph's greatest achievement, was already a footnote.
But perhaps the biggest reason we don't better remember Randolph as a giant of Garden State history may be because New Jersey failed to fulfill the spirit and purpose of Article 1, Paragraph 5. Today, the breakthrough clause no longer stands as a source of pride, but can be seen as a benchmark of disappointment.
More than 75 years after its enactment, New Jersey's public schools are ranked as the sixth-most-segregated in the country, according to a 2017 UCLA study.
Although the new state constitution ensured that all public schools would open their doors to anyone, regardless of race, it proved to be no match for the demographic transformation that shaped the suburbanization of post-World War II New Jersey.
And many of those policies slammed the doors on aspiring minorities.
The sweeping G.I. Bill, enacted to help World War II veterans prosper after the war with a range of benefits, including access to higher education and, importantly, guaranteed mortgages, was largely denied to some 1.2 million Black veterans.
The discriminatory “redlining” practice of denying home loans in low-income neighborhoods prevented many Black and minority families from moving into the new neighborhoods. Exclusionary zoning in the suburbs also undermined integration.
The result: Most people of color congregated in urban, low-income areas and sent their children to crumbling, underperforming schools while most whites prospered in the new suburbs and in public schools that became the envy of the nation. A state residency law, requiring students to attend school in the towns where they live, deepened the segregated divide.
In 2023, there's still more to be done
Statewide school enrollment data from 2015 to 2020 paint the portrait of today's status quo, de facto segregation. Nearly half of New Jersey's Black students attended schools that were at least 90% non-white, while nearly 70% of white students attended schools that were three-fourths white, according to the state Department of Education.
In 2018, a coalition of education and civil rights activists sued New Jersey, arguing that the racial divide violated the Randolph-authored clause in the state constitution and that the state Education Department failed in its duty to fix it.
Activists want the New Jersey Supreme Court to strike down the state law requiring students to live in their schools' municipalities and instead let students transfer to neighboring districts or attend “magnet” schools that specialize in a specific discipline or skill.
The 2018 coalition had hoped that a new, progressive governor, Phil Murphy, would be an enthusiastic ally, but just the opposite happened. Murphy has been mum.
After early settlement discussions collapsed, the state Attorney General’s Office has actively fought the lawsuit; the case remains stalled in Superior Court.
During a hearing last year, Deputy Attorney General Christopher Weber argued that the statistics were insufficient to prove that the state is responsible. He argued that a more “holistic” study of the public school system needed to be conducted before the court could order such dramatic change.
Education advocates "refuse to explore how or why the state defendants may be liable or what the [education] commissioner could or should have done,’’ Weber said.
The aggressive opposition from the Murphy administration has disillusioned many of the activists, given that Murphy was swept into office promising to reform New Jersey into a "stronger, fairer" state.
“I'm perplexed by the Murphy administration's aggressive opposition to this lawsuit, and as a governor who is not only a Democrat, but is a self-styled progressive,” said Boddie, the former Rutgers law professor. “The failure to see that there is an opportunity to do something about it. I can't explain it.”
Tackling New Jersey segregation
Not long after the convention began that June, Randolph began introducing a series of proposals addressing racial discrimination.
And by early August, Randolph had crafted a proposal to ban segregation in the state militia. While most of his colleagues were sympathetic to his aim, some raised concerns that by giving the militia — operating as the New Jersey National Guard — special status, the absence of similar language addressing other areas, like the courts or employment, might be seen as permitting discrimination. Others felt that an anti-discrimination clause in the Bill of Rights would cover everything. That group argued Randolph's language was needlessly redundant.
Randolph forcefully replied that the broad New Jersey Bill of Rights language would not suffice, and that it would only allow the status quo of segregated units to persist. Strong, declarative language, focusing solely on the militia, was needed, he said.
Randolph also issued an implied warning: Failure to clearly desegregate Black soldiers could plant the seeds of social unrest, even violence.
“I just don’t know whether you realize just what mental status it creates among those who are segregated, or whether you want to continue that mental status which breeds hatred, which breeds a great deal of danger in that members of a whole class of citizens begin to think that they are segregated on account of race, color or something else," Randolph warned.
First the Guard, then the schools
The 1947 Constitutional Convention approved Randolph's measure to desegregate New Jersey's militia by a vote of 45-26. Emboldened, Randolph came back two days later with an equally ambitious plan: a provision with similar language to end segregation in public schools.
This time, Randolph had hard evidence to support his argument. In 1881, New Jersey outlawed school segregation, he told his fellow delegates at the convention. But the law was rarely enforced.
At the time of the convention, about 60 school districts maintained segregated, “separate but equal” schools, particularly in rural South Jersey, but also in affluent bastions like Princeton and Englewood. It was a deeply rooted custom.
“We have a very peculiar situation in our state," Randolph said in a floor speech. “In spite of that law, I dare say every delegate knows that we have separation on account of race.”
Yet this time Randolph was rebuffed — his measure was referred back to committee. Freamon, the former Seton Hall law professor, suggests that the issue of school segregation lacked the urgency of providing fair treatment to returning GIs, including Black soldiers, and new recruits.
Instead of retreating, Randolph took an even bolder step. He followed up with a new proposal calling for desegregating the schools and the militia in the same clause.
As he did earlier, he argued that broad anti-discrimination language in the Bill of Rights would fail to uproot the racist custom. But this time, he had won the backing of Col. George H. Walton, an influential Camden County delegate, who served as Gov. Alfred E. Driscoll’s representative in the delegation.
The committee reported the plan to the full convention, where it was met with considerable resistance, some steeped in pragmatism, some in caution, and in one case in sarcastic disdain.
John F. Schenck, a Flemington businessman and a descendant of the Frelinghuysen political dynasty, argued that New Jersey's Bill of Rights would adequately serve the purpose of preventing future discrimination. But then, to make his point, he mocked Randolph’s new proposal as an overreach.
"Why not include the right to swim in public pools?" Schenck asked mockingly.
Schenck went on: "I see this is a controversy in New Jersey concerning this point," he said. "Or the right to use the boardwalk at the seashore and the right to sit where you please at the motion pictures?”
Marie Katzenbach, a Rutgers trustee and a leading New Jersey civic figure — and one of only five women delegates dispatched to the 1947 constitutional convention — supported Randolph’s aims. Still, she feared that singling out schools and militia could open doors to other areas of government and civic life that could well be exempted from the protections. She believed that the 1881 law barring school segregation needed to be more aggressively enforced.
"The answer lies in … giving the commissioner of education the power to enforce the law, now so manifestly lacking," said Katzenbach, whose son, Nicholas, would later serve as U.S. attorney general during the heyday of the civil rights movement in the 1960s.
Then Myra C. Hacker, a delegate from Bergen County, suggested that discrimination “is a problem of education and rejuvenation of the spirit that can only be solved by the people themselves, by having the right intellectual, moral and social perspective.”
In other words, Hacker argued that discrimination couldn't be eliminated by legal mandate.
Yet there was evidence that the tide was turning in Randolph's favor.
Judge Francis Stanger, a Cumberland County delegate, stressed his support for the broad Bill of Rights language, but suggested that if Randolph’s latest amendment was cleared by committee, he would reluctantly support it.
Walton, the delegate who collaborated with Randolph to revise the amendment, took to the floor to vigorously defend the plan.
Finally, Randolph made one last appeal. He urged the delegates to "insert a real clause in our constitution which cannot possibly be misconstrued by the courts."
Minutes later, the convention approved the measure, 50-18.
The transcripts of the convention do not offer much insight on how Randolph pulled off such a major triumph that day, but Freamon believes that his conduct, "virtuosity" and ability to persuade the all-white delegates to suspend their own beliefs and peer into the future helped galvanize support.
And the trauma of the war had given Black people a new moral platform to make their case for long-denied enfranchisement in the country they fought for.
“One of his main arguments was that you have all these African Americans coming back from the war, and they weren't in the mood to put up with segregation anymore," Freamon said. “They had risked their lives in World War II and fought for the country, and then to have to come back to a segregated National Guard unit or to send their kids to a segregated school? What's up with that?"
More than 75 years later, many parents are asking a similar question.
Charlie Stile is a veteran New Jersey political columnist.
|
<urn:uuid:e896af21-e395-4c12-95e4-feed18546cca>
|
{
"dump": "CC-MAIN-2023-23",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224649293.44/warc/CC-MAIN-20230603133129-20230603163129-00784.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9669415950775146,
"pii_count": 0,
"score": 2.84375,
"token_count": 3618,
"url": "https://www.northjersey.com/story/news/columnists/charles-stile/2023/02/17/nj-lawyer-oliver-randolph-paved-path-in-civil-rights-era/9874054002/"
}
|
Against all odds, this Black NJ lawyer helped ignite the civil rights movement | Stile
Oliver Randolph, the lone Black delegate to the New Jersey Constitutional Convention, had reason to worry after arriving more than an hour late to the Aug. 13, 1947, deliberations inside the Rutgers University gym.
Before his arrival, opposition had been building all morning to his bold plans to break Jim Crow’s chokehold over the Garden State's public schools and its National Guard.
Some colleagues, while sympathetic to his overall goal, were concerned that his approach was an overreach or redundant. The pushback threatened to defeat Randolph's long effort to forge a progressive and equitable blueprint for New Jersey’s growing Black population. He needed to act quickly.
After thanking the delegates for waiting — a train accident near Elizabeth delayed him from reaching New Brunswick at the session's start — the lawyer and former legislator went on the offensive, relying on a tactic he had used in political battles as a prominent Republican operative in Essex County in the 1920s.
He shamed them.
“The fact is that those discriminated against belong to a race, as you all know, of which a great many young men gave their blood and laid down their lives for this great cause of democracy," Randolph said, referring to the large numbers of Black servicemen who had fought in World War II. “I can’t believe that this convention — the convention which is taking such marvelous strides to present a product that will be a model — I can’t believe that we will allow discrimination in our public schools.”
Haggling continued over the next few days, but Randolph’s fierce appeal to the delegates' duty and their place in history clearly had turned the tide. By the end of the week, the convention ratified Randolph’s bold bid in a lopsided vote.
Stile:Who was Oliver Randolph, New Jersey's civil rights pioneer?
The action would prove to be a major, historic triumph. Three months later, New Jersey voters ratified a modern constitution featuring a streamlined judicial system, a powerful new governor’s role and Randolph’s proposal, formally enshrined as Article 1, Paragraph 5:
"No person shall be denied the enjoyment of any civil or military right, nor be discriminated against in the exercise of any civil or military right, nor be segregated in the militia or in the public schools, because of religious principles, race, color, ancestry or national origin."
New Jersey, a state derided
|
by African American leaders as the "Georgia of the North," was now stamped with distinction as the first in the nation that unequivocally outlawed segregation in both its schools and its National Guard.
Virtually overnight, the Garden State went from a backwater to a progressive exemplar of social justice. Since then, only two other states — Connecticut and Hawaii — have taken similar steps to include anti-segregation clauses in their constitutions, according to John Dinan, a professor of politics and international relations at Wake Forest University, who is an expert on state constitutions.
It was a remarkable feat: A lone Black man determinedly and successfully pursued racial equity roughly a decade before the civil rights movement.
“It took the force of will of someone like Randolph to recognize that the state was not living up to its possibilities," Elise Boddie, a former law professor at Rutgers University-Newark and a civil rights activist, said in an interview last year.
But more than seven decades later, New Jersey is far from realizing those possibilities. Since then, New Jersey public schools remain segregated in largely segregated communities, a reflection of the "white flight" suburbanization that took root after the war.
A lawsuit seeking to crack the segregated patterns has stalled in the face of resistance from the state's liberal governor, Phil Murphy. And the Legislature, controlled by Democrats — moderate-to-socially liberal Democrats, not the Jim Crow segregationists of Randolph's time — has shown little inclination to embrace the experimental aims of the lawsuit.
Still, much of the work we can suppose Randolph dreamed of doing is still undone. New Jersey's record on civil rights, equal rights and segregation has many boxes still to be ticked if Randolph's vision is to be realized. His efforts in 1947 were only a starting point.
'Separate but equal' upended
But Randolph's achievement was remarkable in other ways.
In arguing for his provision, Randolph successfully attacked the pernicious "separate but equal" doctrine — the result of the U.S. Supreme Court's Plessy v. Ferguson ruling — that permitted segregation to flourish in New Jersey schools, despite a 60-year-old law banning the practice. Randolph's effort at Rutgers prevailed — seven years before the lawyers for the NAACP prevailed at the U.S. Supreme Court in the landmark Brown v. Board of Education decision outlawing segregated public schools across the country.
In a 1997 documentary about the making of the New Jersey Constitution, Clement Price, a Rutgers University-Newark professor and city historian, asserted that the Randolph-drafted provision encouraged the Supreme Court to take its historic leap.
“The court could point to New Jersey as an example of a state that voluntarily desegregated its schools, and it did so without racial … havoc," said Price, who died in 2014.
More:Ruby Bridges tells her story of trailblazing desegregation in children's book
The new anti-segregation clause also ratcheted up pressure on then-President Harry S. Truman to abolish segregation in the U.S. armed forces the next year.
After New Jersey’s decision to integrate its Guard units in February 1948, other states demanded a waiver from Truman’s Defense Department policy to do the same. Truman eventually issued his order five months later.
Little-known in New Jersey history
Still, except among a small community of historians, law professors, journalists and staff at the Newark Public Library who maintain Randolph's papers and associated ephemera, and who work to sustain his memory, Randolph remains a ghost of New Jersey history.
His achievements, sadly, are rarely celebrated.
That may be because Randolph, who died four years after the convention, was not one to boast of his success, said Bernard K. Freamon, a former Seton Hall Law professor who wrote an extensive Rutgers Law Journal article in 2004 about the New Jersey Constitution's anti-segregation clause.
And in a few years, the bus boycotts and protests and the violent tumult of the civil rights movement would quickly dominate the nation’s television screens and its consciousness. By then, New Jersey’s 1947 breakthrough, Randolph's greatest achievement, was already a footnote.
But perhaps the biggest reason we don't better remember Randolph as a giant of Garden State history may be because New Jersey failed to fulfill the spirit and purpose of Article 1, Paragraph 5. Today, the breakthrough clause no longer stands as a source of pride, but can be seen as a benchmark of disappointment.
More than 75 years after its enactment, New Jersey's public schools are ranked as the sixth-most-segregated in the country, according to a 2017 UCLA study.
Although the new state constitution ensured that all public schools would open their doors to anyone, regardless of race, it proved to be no match for the demographic transformation that shaped the suburbanization of post-World War II New Jersey. Many of the policies driving that transformation slammed the doors on aspiring minorities.
The sweeping G.I. Bill, enacted to help World War II veterans prosper after the war with a range of benefits, including access to higher education and, importantly, guaranteed mortgages, was largely denied to some 1.2 million Black veterans.
The discriminatory “redlining” practice of denying home loans in low-income neighborhoods prevented many Black and minority families from moving into the new neighborhoods. Exclusionary zoning in the suburbs also undermined integration.
The result: Most people of color congregated in urban, low-income areas and sent their children to crumbling, underperforming schools while most whites prospered in the new suburbs and in public schools that became the envy of the nation. A state residency law, requiring students to attend school in the towns where they live, deepened the segregated divide.
In 2023, there's still more to be done
Statewide school enrollment data from 2015 to 2020 paint the portrait of today's status quo, de facto segregation. Nearly half of New Jersey's Black students attend schools that are at least 90% non-white, while nearly 70% of white students attend schools that are three-fourths white, according to the state Department of Education.
In 2018, a coalition of education and civil rights activists sued New Jersey, arguing that the racial divide violated the Randolph-authored clause in the state constitution and that the state Education Department failed in its duty to fix it.
Activists want the New Jersey Supreme Court to strike down the state law requiring students to live in their schools' municipalities and instead let students transfer to neighboring districts or attend “magnet” schools that specialize in a specific discipline or skill.
The 2018 coalition had hoped that a new, progressive governor, Phil Murphy, would be an enthusiastic ally, but just the opposite happened. Murphy has been mum.
After early settlement discussions collapsed, the state Attorney General’s Office has actively fought the lawsuit; the case remains stalled in Superior Court.
During a hearing last year, Deputy Attorney General Christopher Weber argued that the statistics were insufficient to prove that the state is responsible. He argued that a more “holistic” study of the public school system needed to be conducted before the court could order such dramatic change.
Education advocates "refuse to explore how or why the state defendants may be liable or what the [education] commissioner could or should have done,’’ Weber said.
The aggressive opposition from the Murphy administration has disillusioned many of the activists, who had expected more from a governor swept into office promising to reform New Jersey into a “stronger, fairer” state.
“I'm perplexed by the Murphy administration's aggressive opposition to this lawsuit, and as a governor who is not only a Democrat, but is a self-styled progressive,” said Boddie, the former Rutgers law professor. “The failure to see that there is an opportunity to do something about it. I can't explain it.”
Tackling New Jersey segregation
Not long after the convention began that June, Randolph began introducing a series of proposals addressing racial discrimination.
And by early August, Randolph had crafted a proposal to ban segregation in the state militia. While most of his colleagues were sympathetic to his aim, some raised concerns that by giving the militia — operating as the New Jersey National Guard — special status, the absence of similar language addressing other areas, like the courts or employment, might be seen as permitting discrimination. Others felt that an anti-discrimination clause in the Bill of Rights would cover everything. That group argued Randolph's language was needlessly redundant.
Randolph forcefully replied that the broad New Jersey Bill of Rights language would not suffice, and that it would only allow the status quo of segregated units to persist. Strong, declarative language, focusing solely on the militia, was needed, he said.
Randolph also issued an implied warning: Failure to clearly desegregate Black soldiers could plant the seeds of social unrest, even violence.
“I just don’t know whether you realize just what mental status it creates among those who are segregated, or whether you want to continue that mental status which breeds hatred, which breeds a great deal of danger in that members of a whole class of citizens begin to think that they are segregated on account of race, color or something else," Randolph warned.
First the Guard, then the schools
The 1947 Constitutional Convention approved Randolph's measure to desegregate New Jersey's militia by a vote of 45-26. Emboldened, Randolph came back two days later with an equally ambitious plan: a provision with similar language to end segregation in public schools.
This time, Randolph had hard evidence to support his argument. In 1881, New Jersey outlawed school segregation, he told his fellow delegates at the convention. But the law was rarely enforced.
At the time of the convention, about 60 school districts maintained segregated, “separate but equal” schools, particularly in rural South Jersey, but also in affluent bastions like Princeton and Englewood. It was a deeply rooted custom.
“We have a very peculiar situation in our state," Randolph said in a floor speech. “In spite of that law, I dare say every delegate knows that we have separation on account of race.”
Yet this time Randolph was rebuffed — his measure was referred back to committee. Freamon, the former Seton Hall law professor, suggests that the issue of school segregation lacked the urgency of providing fair treatment to returning GIs, including Black soldiers, and new recruits.
Instead of retreating, Randolph took an even bolder step. He followed up with a new proposal calling for desegregating the schools and the militia in the same clause.
As he did earlier, he argued that broad anti-discrimination language in the Bill of Rights would fail to uproot the racist custom. But this time, he had won the backing of Col. George H. Walton, an influential Camden County delegate, who served as Gov. Alfred E. Driscoll’s representative in the delegation.
The committee reported the plan to the full convention, where it met with considerable resistance, some steeped in pragmatism, some in caution, and in one case in sarcastic disdain.
John F. Schenck, a Flemington businessman and a descendant of the Frelinghuysen political dynasty, argued that New Jersey's Bill of Rights would adequately serve the purpose of preventing future discrimination. But then, to make his point, he mocked Randolph’s new proposal as an overreach.
Perhaps, Schenck argued, “Why not include the right to swim in public pools?"
Schenck went on: "I see this is a controversy in New Jersey concerning this point," he said. "Or the right to use the boardwalk at the seashore and the right to sit where you please at the motion pictures?”
Marie Katzenbach, a Rutgers trustee and a leading New Jersey civic figure — and one of only five women delegates dispatched to the 1947 constitutional convention — supported Randolph’s aims. Still, she feared that singling out schools and the militia could open doors to other areas of government and civic life that could well be exempted from the protections. She believed that the 1881 law barring school segregation needed to be more aggressively enforced.
"The answer lies in … giving the commissioner of education the power to enforce the law, now so manifestly lacking," said Katzenbach, whose son, Nicholas, would later serve as U.S. attorney general during the heyday of the civil rights movement in the 1960s.
Then Myra C. Hacker, a delegate from Bergen County, suggested that discrimination “is a problem of education and rejuvenation of the spirit that can only be solved by the people themselves, by having the right intellectual, moral and social perspective.”
In other words, Hacker argued that discrimination couldn't be eliminated by legal mandate.
Yet there was evidence that the tide was turning in Randolph's favor.
Judge Francis Stanger, a Cumberland County delegate, stressed his support for the broad Bill of Rights language, but suggested that if Randolph’s latest amendment was cleared by committee, he would reluctantly support it.
Walton, the delegate who collaborated with Randolph to revise the amendment, took to the floor to vigorously defend the plan.
Finally, Randolph made one last appeal. He urged the delegates to "insert a real clause in our constitution which cannot possibly be misconstrued by the courts."
Minutes later, the convention approved the measure, 50-18.
The transcripts of the convention do not offer much insight on how Randolph pulled off such a major triumph that day, but Freamon believes that his conduct, "virtuosity" and ability to persuade the all-white delegates to suspend their own beliefs and peer into the future helped galvanize support.
And the trauma of the war had given Black people a new moral platform to make their case for long-denied enfranchisement in the country they fought for.
“One of his main arguments was that you have all these African Americans coming back from the war, and they weren't in the mood to put up with segregation anymore," Freamon said. “They had risked their lives in World War II and fought for the country, and then to have to come back to a segregated National Guard unit or to send their kids to a segregated school? What's up with that?"
More than 75 years later, many parents are asking a similar question.
Charlie Stile is a veteran New Jersey political columnist.
|
New findings on the "Doomsday Glacier"
The Thwaites Glacier in Antarctica is known as the "doomsday glacier." That's because if it melts, it could raise sea levels by up to 10 feet. Now, new research is changing our understanding of it.
- Plus, Tesla's role in making electric vehicle chargers more available nationwide.
Guests: Axios' Andrew Freedman and Joann Muller.
Credits: Axios Today is produced by Niala Boodhoo, Alexandra Botti, Naomi Shavin, Fonda Mwangi and Alex Sugiura. Music is composed by Evan Viola. You can reach us at [email protected]. You can text questions, comments and story ideas to Niala as a text or voice memo to 202-918-4893.
- Thwaites Glacier findings give reasons to worry about Antarctic ice melt
- Drastic emissions cuts needed to avert multi-century sea level rise, study finds
- Tesla opening up its chargers could spur electric car adoption
NIALA: Good morning! Welcome to Axios Today!
It’s Thursday, February 16th.
I’m Niala Boodhoo.
Here’s what we’re covering today: making electric vehicle chargers more available nationwide. But first, new findings on the so-called Doomsday Glacier. That’s today’s One Big Thing.
New findings on the "Doomsday Glacier"
NIALA: The Thwaites Glacier is at the top of the list of glaciers that keep polar scientists up at night, says Axios’ Senior Climate and Energy Reporter Andrew Freedman. That’s because if it melts, it could raise sea levels by up to 10 feet. Now, new research is expanding – and complicating - our understanding of this glacier. So how to make sense of it all? Andrew’s here with us for that.
Hey - so Andrew - first - why is Thwaites called “the doomsday glacier”?
ANDREW FREEDMAN: Yeah, so scientists are trying to push back against that term, but it's really very, very true. It's the glacier that could generate the most sea level rise the fastest, at least of the ones that we understand relatively well.
NIALA: So there were two studies just published yesterday about Thwaites. What are the most important things we need to know from those studies?
ANDREW: So those studies were the result of a five-year, $50 million research campaign, which sent scientists, putting robots, underneath the ice. It was some really cool, ambitious work that was done. And, uh, the really important things to know about these two different studies are that the melting that is going on there is complicated. There is a reason for greater concern. The water that's coming into contact with the bottom of the ice sheet is not as warm as we expected, but the glacier is still retreating incredibly fast.
NIALA: So is there anything that can be done to slow or stop the melting of this glacier? Is that even a goal at this point?
ANDREW: There are some studies that show that we have already triggered irreversible loss of the West Antarctic ice sheet. I think most scientists, however, say that what we emit in the next several decades will determine the course of sea level rise for the next several centuries, and that includes West Antarctica. The decisions that we make in the next two decades, especially, uh, may push glaciers like Thwaites over the edge or may contain them.
NIALA: Andrew, you also wrote about another study out this week that had to do with climate change and ice sheet melt, and that study found that human caused global warming must be limited to 1.5 degrees Celsius or less in order to avoid a multi-century melting of the ice sheets and a rise in sea levels. Is that goal realistic at this point?
ANDREW: No, that goal is not realistic at this point, you know, we're already at 1.2 C and we're going right now, uh, if all global pledges are met, we would go to 2.4 C. So it, it's not realistic, however, It does illustrate the point that there is so much sensitivity in what we might imagine to be these vast ice sheets that are immovable and we can't imagine that we can have such a big effect on them, but in reality, even relatively low amounts of warming compared to what is possible, would significantly disrupt, both Greenland and Antarctica.
NIALA: Andrew, I feel like you and I often talk about how difficult climate news is to digest, especially when it seems so alarming. So with that in mind, it was especially interesting for me to read that one climate scientist you talked to said they viewed these papers on Thwaites Glacier with a sense of optimism.
ANDREW: The reason there's some optimism about this glacier is because scientists were looking at all of these nightmare scenarios. We literally didn't know if there was gonna be a large collapse of a portion or all of this glacier, tomorrow. And now we go in there and they find this data, and the data doesn't say, oh, it's gonna happen ASAP. It says there's reasons for concern, there's reasons for alarm. We need more studies done and more observations, but it just shows how science works and that, it kind of leads to additional questions, and additional worries. Overall, you know, maybe we'll get a better sense of what the true range is here and it won't be quite as scary.
NIALA: Andrew Freedman is senior climate and energy reporter. Thanks Andrew.
ANDREW: Thanks for having me.
NIALA: In a moment, making EVs a more attractive option for potential buyers.
Tesla’s role in making electric vehicle chargers more available nationwide
NIALA: Welcome back to Axios Today. I'm Niala Boodhoo.
The White House announced yesterday that Tesla will open part of its charging network to other brands of electric vehicles for the first time ever. By the end of 2024, at least 7,500 Tesla chargers will be available for all EVs across the U.S., including along highway corridors, making long-distance travel more accessible.
Axios’ Joann Muller has been covering this, coincidentally from a cross-country trip in an EV. Hey, Joann.
JOANN MULLER: Hi, Niala.
NIALA: So Joann, the Biden administration, has been pushing for a shift away from gasoline powered cars. How does this charging deal with Tesla help that goal?
JOANN: Well look, a lot of people have been reluctant to buy EVs because even though they're gonna charge mostly at home, they're worried about that rare road trip that they wanna have. So it could have a very big impact. So, you know, the chargers are going in and the government is providing the funding for some of it, private companies are spending as well. So I think over the next, uh, two to five years, we're going to see a big explosion in the amount of charging plugs available for EVs.
NIALA: And you know this first hand because you just drove from Michigan to Florida, not in a Tesla. How did that go?
JOANN: Yeah, we drove from, from Michigan to Washington, D.C. and then down to Florida. And we charged mostly at Fast chargers that are owned by, uh, Electrify America and EVgo. And they're, they're similar to Tesla Chargers, but there's just not enough of them across the U.S. yet for everyone to feel comfortable. We didn't have any problems. We didn't have any giant range anxiety. My husband set out first before me. He was a little worried about the cold temperatures draining the battery. So he kept the heat down in the car and just used the seat heater to keep himself warm. You know, we, we found that we were okay, but once you start having a lot of people driving electric vehicles, we're going to need a lot more chargers.
NIALA: And for non Tesla EVs is having Tesla chargers at their disposal actually helpful because how does this work with charging ports? Are they universal?
JOANN: The Tesla network uses a different type of plug, a different connector than everyone else in the industry. What it means is you're going to need an adapter, to charge your car if it's not a Tesla. Now, right now you can buy one of these adapters, it costs like $150. I opted not to buy one because I'm betting on the other networks being sufficient to get me where I need to go.
NIALA: So by 2030, the White House is hoping EVs make up at least half of new car sales given the number of charging stations that are currently available and the rate at which it's growing. How feasible is that?
JOANN: S&P Global Mobility, which studies this space really carefully, they say we're gonna need at least eight times as many chargers as we have today, both Tesla and the non Tesla chargers. But I also think that the White House's goal of seeing 50% of all new car sales being electric by 2030 is rather aggressive. I'm not sure we're going to get there. But really it's a problem, you have to scale the infrastructure at the same time that you grow the sales of EVs. And those things really need to be lined up and potentially the chargers need to get there first in order for people to feel comfortable.
NIALA: Joann Muller covers the future of transportation for Axios. Thanks, Joann.
JOANN: Thank you, Niala.
NIALA: One last thing before we go: I want to take a second to say thank you so much to all of you for writing in your comments, your reactions, your story ideas, today and every day.
Including this, a text from Kara in Philadelphia, in response to our story yesterday about diabetes drugs being used for weight loss.
Kara wrote, “as someone who has a chronic illness related to weight, being on the drug is imperative for my health. On one podcast I listened to, a celebrity says her doctor gives ozempic out like candy. It frustrates me so much that wealthier people have easier access to the drug because they can afford out of pocket costs while I had to jump through so many hoops to get insurance to cover it. Even with it being medically necessary. Also, the side effects are no joke.”
Thanks Kara, and remember you can send me a text too – at (202) 918-4893.
I’m Niala Boodhoo - thanks for listening - stay safe and we’ll see you back here tomorrow morning.
|
The end of affirmative action in university admissions has been prophesied since 2003, when the Supreme Court issued its decision in Grutter v. Bollinger. In the majority opinion, Justice Sandra Day O'Connor wrote that "25 years from now, the use of racial preferences will no longer be necessary to further the interest approved today." That reckoning has now arrived, and five years earlier than predicted: In June, the Supreme Court ruled 6–3 that public universities must stop favoring certain applicants, and disfavoring others, based on their race or ethnicity.
"Eliminating racial discrimination means eliminating all of it," Chief Justice John Roberts declared, writing for the majority in Students for Fair Admissions v. President and Fellows of Harvard College. "In other words, the student must be treated based on his or her experiences as an individual—not on the basis of race."
For everyone who values fairness, individuality, and nondiscrimination, this decision could not have come soon enough. The perniciousness of the admissions system was on full display, thanks to the details of the case. The plaintiff—an advocacy organization that filed suits against Harvard and the University of North Carolina at Chapel Hill (UNC)—persuasively demonstrated that race-based admissions schemes systematically disadvantaged Asian-American students. UNC, for instance, admitted more than 80 percent of its black applicants but less than 70 percent of its white and Asian applicants. (Reason Foundation, the nonprofit that publishes this magazine, submitted an amicus brief in support of the plaintiff.)
At Harvard, discriminatory practices were overt and began with recruitment. Admissions officials would send letters of interest to black and Hispanic high schoolers who received a score of 1100 or more on the SAT. Asian Americans were ignored unless they received at least a 1350. During the actual admissions process, students were sorted into "deciles"—10 levels of academic performance. Asian Americans in the top decile were less likely to get in than black students in the fourth decile.
The plaintiff also submitted evidence that Harvard admissions officers tended to give Asian Americans negative scores on the personality rating, a wholly subjective criterion. Favoritism also extended to white applicants from what Harvard describes as "sparse country": rural states with historically low enrollment numbers. The result was that applicants were judged not solely on the merits of their individual achievements but on immutable characteristics like their race and place of origin.
These schemes, according to the Supreme Court, violated federal law and, in UNC's case, the 14th Amendment's Equal Protection Clause. "Many universities have for too long wrongly concluded that the touchstone of an individual's identity is not challenges bested, skills built, or lessons learned, but the color of their skin," wrote Roberts. "This Nation's constitutional history does not tolerate that choice."
Title VI of the 1964 Civil Rights Act prohibits entities that receive federal funding from practicing racial discrimination. But affirmative action—a scheme to benefit racial minorities in hiring, contracting, and school admissions—was viewed as an exception; the idea was to practice discrimination on behalf of historically marginalized groups in order to make amends for past wrongs.
In 2003, a pair of Supreme Court rulings involving the University of Michigan—Gratz v. Bollinger and the aforementioned Grutter—upended that justification. In Gratz, the Court held 6–3 that Michigan's undergraduate admissions program went too far in its consideration of race. The university used a point system, with 100 points guaranteeing admission; belonging to an underrepresented minority group was worth 20 points, while a perfect SAT score was worth only 12 points.
In Grutter, however, the Court permitted Michigan's law school to consider race as one factor among many in admissions decisions, on the grounds that a racially diverse student body was a "compelling interest" of the state. While the decision preserved affirmative action in some form—for perhaps 25 years, per O'Connor's time limit—it forced higher education administrators to change their reasoning: Henceforth, they would have to defend race-based admissions as diversity enhancement programs.
Whether affirmative action actually promotes diversity is up for debate, of course. Schools that engage in racial gerrymandering may succeed in making their campuses more diverse in the most superficial sense without doing anything to improve intellectual, political, socioeconomic, or geographic diversity. No one in a position to defend Harvard's admissions system ever argued that the school needed more conservative or libertarian representation; in practice, the institution's position was simply that it needed fewer Asians.
At a time when the Supreme Court is often accused of being out of touch and counter-majoritarian, it's worth mentioning that Students for Fair Admissions undeniably reflects the will of the people. Race-based admissions systems are opposed by 69 percent of poll respondents, including 58 percent of Democrats, according to The New York Times. Voters in California, a deep-blue state, banned affirmative action twice—in 1996 and again, for good measure, in 2020. Faced with this reality, many defenders of affirmative action are trying to change the subject.
Rep. Alexandria Ocasio-Cortez (D–N.Y.), for instance, complained that the Supreme Court had ignored a more serious example of unfairness in higher education. "If SCOTUS was serious about their ludicrous 'colorblindness' claims," she wrote on Twitter, "they would have abolished legacy admissions, aka affirmative action for the privileged." Other progressive Democrats, such as Reps. Cori Bush (D–Mo.) and Jamaal Bowman (D–N.Y.), made similar observations.
It should go without saying, but the justices declined to adjudicate legacy admissions because this issue was not before them. That said, legislators do not need to wait for the Court; they can and should abolish the practice within public institutions. The widespread practice of granting preferential treatment to the scions of alumni is unfair and has no place at taxpayer-funded colleges and universities.
The fact that legacy admissions still exist is not a reason to maintain affirmative action; eliminating explicit racial discrimination is a noble goal in and of itself. But to any naysayers who disdain the Supreme Court's ruling because they think legacy admissions should face the same fate: Your terms are acceptable.
|
A previous version of this article misstated the amount of carbon dioxide U.S. geothermal plants could capture. It is 12.8 million tons, not 12.8 billion tons. The article has been corrected.
The problem with direct air capture, however, has been that it takes energy — a lot of energy. Carbon dioxide only makes up 0.04 percent of ambient air, making the process of its extraction chemically and energy intensive. According to the U.N. Intergovernmental Panel on Climate Change, by 2100 the world needs to remove between 100 and 1,000 billion tons of carbon dioxide from the air to meet its most ambitious climate goals — or between 10 and 100 times China’s annual emissions. But if the energy powering that comes from fossil fuels, direct air capture starts to look less like a time machine than an accelerator: a way to emit even more CO2.
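A quick back-of-envelope check of those figures — assuming China's annual CO2 emissions are roughly 11 billion tons, a number that is not given in the article:

```python
# Sanity check: the IPCC's cumulative removal target by 2100,
# expressed as multiples of China's annual emissions.
# Assumption (not from the article): China emits ~11 billion tons CO2/year.
china_annual_gt = 11      # billion tons CO2 per year, approximate

target_low_gt = 100       # cumulative removal by 2100, low end of IPCC range
target_high_gt = 1000     # cumulative removal by 2100, high end

low_multiple = target_low_gt / china_annual_gt    # roughly 9x
high_multiple = target_high_gt / china_annual_gt  # roughly 91x
print(round(low_multiple), round(high_multiple))
```

The result lands near the article's "between 10 and 100 times" framing, which is consistent given the rounded inputs.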
Now, however, a company is working to combine direct air capture with a relatively untapped source of energy: Heat from Earth’s crust. Fervo Energy, a geothermal company headquartered in Houston, announced on Thursday that it will design and engineer the first purpose-built geothermal and direct air capture plant. With the help of a grant from the Chan Zuckerberg Initiative, the company hopes to have a pilot facility online in 3 to 5 years.
If it works, it will be a way to produce carbon-free electricity, while reducing CO2 in the atmosphere at the same time. In short, a win-win for the climate.
“You have to have your energy from a carbon-free source” for direct air capture to make sense, said Timothy Latimer, the CEO of Fervo Energy. “Geothermal is a great match.”
The reason is heat from Earth’s core. The center of the planet, despite Jules Verne’s imaginings, is a molten core with temperatures reaching up to 11,000 degrees Fahrenheit. (According to the Energy Department, just 0.2 percent of Earth’s heat content could supply enough power for all of humanity for 2 million years — if only we could access it.)
Geothermal wells don’t, of course, get anywhere close to Earth’s core, but a geothermal well drilled just 1 to 2 miles into hot rocks below the surface can reach temperatures of up to 1,000 degrees. Water is pumped into the well, heated and returned to the surface, where it can be converted into steam and electricity.
Even after generating electricity, most geothermal plants have a lot of waste heat — often clocking in around 212 degrees. And conveniently, that happens to be the exact temperature needed to pull carbon dioxide out of an air filter and bury it underground.
Hélène Pilorgé, a research associate at the University of Pennsylvania who studies carbon dioxide removal, says that one of the main ways to pull CO2 out of the air is known as the “solid sorbent” method. Big fans draw air into a box with an air filter; the air filter is then heated to around 212 degrees to remove the CO2 for burial. That high temperature “fits well with the energy that geothermal can provide,” Pilorgé said.
Other renewables, like solar and wind, aren’t natural fits. Solar and wind can produce electricity, but they don’t produce high heat easily. (This is also why it’s hard to make things like steel without fossil fuels.) The energy needed for direct air capture, Pilorgé says, is about 80 percent heat and only 20 percent electricity.
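To make the 80/20 split concrete, here is an illustrative energy budget for capturing one ton of CO2. The total of 2,000 kWh per ton is an assumed round figure, not from the article; only the heat/electricity split is sourced.

```python
# Hypothetical per-ton energy budget for direct air capture.
# Assumption (not from the article): ~2,000 kWh total per ton of CO2.
# The 80% heat / 20% electricity split is the figure Pilorgé cites.
total_kwh_per_ton = 2000
heat_fraction = 0.80
electricity_fraction = 0.20

heat_kwh = total_kwh_per_ton * heat_fraction                 # low-grade heat, ~212 °F
electricity_kwh = total_kwh_per_ton * electricity_fraction   # fans, pumps, compression
print(heat_kwh, electricity_kwh)
```

Under these assumptions, most of the demand is low-grade heat of exactly the kind a geothermal plant throws off as waste, which is the article's central point.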
According to one study co-authored by Pilorgé, if air capture were combined with all of the geothermal plants currently in the United States, the country could suck up around 12.8 million tons of carbon dioxide every year.
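For scale, that 12.8 million tons can be compared against total annual US CO2 emissions — assumed here to be roughly 5 billion tons, a figure not stated in the article:

```python
# How much of annual US CO2 emissions the cited study's figure represents.
# Assumption (not from the article): US emits ~5 billion tons CO2/year.
us_annual_tons = 5_000_000_000
geothermal_capture_tons = 12_800_000   # from the study cited above

share = geothermal_capture_tons / us_annual_tons
print(f"{share:.2%}")  # prints 0.26%
```

In other words, existing geothermal capacity alone would offset only a small fraction of a percent of US emissions — useful, but a complement to cutting emissions rather than a substitute.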
One company already uses geothermal to power its direct air capture: Climeworks, which has one carbon capture plant in Iceland and is building another. Iceland, with its hundreds of volcanoes and hot springs, is already a hot spot for geothermal — 85 percent of homes are currently heated by Earth itself. Climeworks built its carbon capture machinery on top of the already existing Hellisheidi geothermal plant.
By contrast, Latimer says, Fervo Energy will be able to experiment with how to build geothermal energy in a way tailor-made for capturing CO2. “It’s a totally unexplored place,” he said. “What would you do differently in the design of a geothermal power plant if you knew you were pairing it with a direct air capture facility?”
The company plans to design and engineer the plant at one of its demonstration sites — possibly in northwestern Nevada, central Utah, or another spot yet to be announced. It will likely be in the Western United States, where geothermal heat is more easily accessible.
There are still many details to work out. It’s not enough to simply take carbon dioxide out of the air — ideally, it also needs to be stored deep underground, often in porous stones filled with salty water. Pilorgé says that one of the challenges will be placing geothermal plants in areas where such aquifers are readily available.
One potential criticism of the project is that it focuses on drawing carbon out of the air instead of preventing it from getting there in the first place. After all, geothermal energy could play a key role in shifting the electricity grid over to renewables. Unlike wind and solar, a geothermal plant can be on all of the time, producing electricity even when the wind isn’t blowing or the sun isn’t shining.
Latimer says that there is a possibility that a single geothermal plant could have multiple uses. It could produce electricity when needed and suck CO2 out of the atmosphere when wind and solar are dominating the grid. But right now, he added, it’s hard to link geothermal to the electricity grid thanks to long waits to get connected.
There is also much more funding available for direct air capture than there is for geothermal alone. The Energy Department is offering up to $74 million for demonstration projects of new geothermal technologies but a whopping $3.5 billion to establish regional hubs for direct air capture. Geothermal has often been called the “forgotten renewable” — useful, but not as sexy or appealing as solar or wind.
“What we have here is a really compelling way to produce round-the-clock carbon-free electricity and heat,” Latimer said. “The question is what society prioritizes and what policy incentives are put in place.”
Still, if the combination of geothermal and direct air capture works, there will be a kind of poetic symmetry to it. The burning of fossil fuels — pieces of plants and animals crushed under high temperatures and pressure in Earth’s crust for millions of years — has sent planet-warming gases spewing into the atmosphere. It’s only fitting that high temperatures from the crust can pull it back out again.
|
(Bloomberg) — The fast-moving fire that ripped through Maui’s historic town of Lahaina killed more than 90 people, making it the deadliest US wildfire in more than a century. The cause of the blaze is still under investigation, but power equipment in the area is coming under increasing scrutiny.
Across the nation, wildfires are growing in intensity and frequency as climate change sparks prolonged droughts. The initial cause can vary — a spark from downed electric lines, a lightning strike or a cigarette butt tossed out a car window — but the result is the same: Once vegetation dries out, it can easily ignite.
Here’s a look at recent major US wildfires.
Camp Fire
In November 2018, flames leveled the California town of Paradise, killing more than 80 people and destroying more than 18,000 structures. It was the state’s deadliest and most-destructive fire, according to the California Department of Forestry and Fire Protection, known as Cal Fire.
The fire was blamed on power lines operated by the state’s largest utility, PG&E Corp. The company eventually filed for bankruptcy in 2019, facing $30 billion in liabilities from several devastating wildfires, and in 2020 it pleaded guilty to more than 80 counts of involuntary manslaughter for its role in starting the Camp Fire blaze.
Read More: Why Hawaii’s Power Lines Are Suspect in Maui Fire: QuickTake
August Complex Fire
The August Complex fire, the biggest ever in California, blazed for nearly three months in late 2020 and destroyed more than 1 million acres before it was fully corralled. The blazes were ignited by lightning, according to the US Forest Service.
The fire raged through Mendocino, Humboldt, Trinity, Glenn, Lake and Colusa counties, becoming the second-largest fire in US history, according to the Western Fire Chiefs Association.
Read More: Why California Fire Threats Cut Lights for Millions: QuickTake
Marshall Fire
More than 1,000 homes burned near Boulder, Colorado, just a few days after Christmas in 2021 after extremely strong winds quickly spread flames in the drought-parched land. It was the most destructive fire in the state’s history.
A sheriff’s report found that two blazes eventually merged to scorch about 6,000 acres — the first spread from scrap wood and tree branches set on fire at a home, while the second was most likely caused by hot particles from a power line.
Xcel Energy Inc. has said that it strongly disagrees with any suggestion that its power lines caused the second ignition, calling the report’s analyses “flawed” and conclusions “incorrect.”
Read More: Wildfire Expert on What Fed Deadly Flames in Hawaii: Q&A
Hermits Peak and Calf Canyon
The Hermits Peak and Calf Canyon blaze scorched more than 340,000 acres in New Mexico during the late spring and early summer of 2022. The wildfire was the largest in the state’s history and was caused by smoldering debris left over from a controlled burn in January, forest service officials said.
Randy Moore, the chief of the US Forest Service, apologized for his agency’s role in accidentally triggering the flames, saying that the events leading to the fire were “nearly unheard of until recently in the century-plus of experience the Forest Service has in working on these landscapes.”
“Climate change is leading to conditions on the ground we have never encountered,” Moore said last year.
–With assistance from Mark Chediak and Shiyin Chen.
More stories like this are available on bloomberg.com
©2023 Bloomberg L.P.
|
<urn:uuid:191bad39-786a-4707-911b-a4afa0770a78>
|
{
"dump": "CC-MAIN-2023-40",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510516.56/warc/CC-MAIN-20230929122500-20230929152500-00563.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9505967497825623,
"pii_count": 0,
"score": 2.5625,
"token_count": 833,
"url": "https://ec2-15-188-152-128.eu-west-3.compute.amazonaws.com/2023/08/15/what-sparks-us-wildfires-power-lines-burning-trash-and-lightning/"
}
|
The fast-moving fire that ripped through Maui’s historic town of Lahaina killed more than 90 people, making it the deadliest US wildfire in more than a century. The cause of the blaze is still under investigation, but power equipment in the area is coming under increasing scrutiny.
(Bloomberg) — The fast-moving fire that ripped through Maui’s historic town of Lahaina killed more than 90 people, making it the deadliest US wildfire in more than a century. The cause of the blaze is still under investigation, but power equipment in the area is coming under increasing scrutiny.
Across the nation, wildfires are growing in intensity and frequency as climate change sparks prolonged droughts. The initial cause can vary — a spark from downed electric lines, a lightning strike or a cigarette butt tossed out a car window — but the result is the same: Once vegetation dries out, it can easily ignite.
Here’s a look at recent major US wildfires.
Camp Fire
In November 2018, flames leveled the California town of Paradise, killing more than 80 people and destroying more than 18,000 structures. It was the state’s deadliest and most-destructive fire, according to the California Department of Forestry and Fire Protection, known as Cal Fire.
The fire was blamed on power lines operated by the state’s largest utility, PG&E Corp. The company eventually filed for bankruptcy in 2019, facing $30 billion in liabilities from several devastating wildfires, and in 2020 it pleaded guilty to more than 80 counts of involuntary manslaughter for its role in starting the Camp Fire blaze.
Read More: Why Hawaii’s Power Lines Are Suspect in Maui Fire: QuickTake
August Complex Fire
The August Complex fire, the biggest ever in California, blazed for nearly three months in late 2020 and destroyed more than 1 million acres before it was fully corralled. The blazes were ignited by lightning, according to the US Forest Service.
The fire raged through Mendocino, Humboldt, Trinity, Glenn, Lake and Colusa counties, becoming the second-largest fire in US history, according to the Western Fire Chiefs Association.
Read More: Why California Fire Threats Cut Lights for Millions: QuickTake
Marshall Fire
More than 1,000 homes burned near Boulder, Colorado, just a few days after Christmas in 2021 after extremely strong winds quickly spread flames in the drought-parched land. It was the most destructive fire in the state’s history.
A sheriff’s report found that two blazes — the first of which spread from scrap wood and tree branches set on fire at a home, while the most likely cause for the second was hot particles from a power line — eventually merged to scorch about 6,000 acres.
Xcel Energy Inc. has said that it strongly disagrees with any suggestion that its power lines caused the second ignition, calling the report’s analyses “flawed” and conclusions “incorrect.”
Read More: Wildfire Expert on What Fed Deadly Flames in Hawaii: Q&A
Hermits Peak and Calf Canyon
The Hermits Peak and Calf Canyon blaze scorched more than 340,000 acres in New Mexico during the late spring and early summer of 2022. The wildfire was the largest in the state’s history and was caused by smoldering debris left over from a controlled burn in January, forest service officials said.
Randy Moore, the chief of the US Forest Service, apologized for his agency’s role in accidentally triggering the flames, saying that the events leading to the fire were “nearly unheard of until recently in the century-plus of experience the Forest Service has in working on these landscapes.”
“Climate change is leading to conditions on the ground we have never encountered,” Moore said last year.
–With assistance from Mark Chediak and Shiyin Chen.
More stories like this are available on bloomberg.com
©2023 Bloomberg L.P.
The Fourth of July was the hottest day on Earth in as many as 125,000 years—breaking a record set the day before—as the return of the El Niño weather pattern collides with soaring temperatures at the start of summer, researchers say.
The global average temperature Tuesday reached 17.18 degrees Celsius (62.92 degrees Fahrenheit), data from the University of Maine’s Climate Change Institute showed, the hottest ever recorded on any day of any year.
The same record was broken the day before, when July 3 temperatures reached 17.01 degrees Celsius (62.62 degrees Fahrenheit), higher than the previous hottest-day record of 16.92 degrees Celsius held by a tie of two dates, July 24, 2022, and August 14, 2016, according to the University of Maine and the National Centers for Environmental Prediction.
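The Celsius and Fahrenheit figures above can be checked against each other with the standard conversion, °F = °C × 9/5 + 32 — a quick sketch:

```python
def c_to_f(celsius: float) -> float:
    """Convert a temperature from degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

# The two record values cited in the article.
for c in (17.18, 17.01):
    print(f"{c:.2f} °C = {c_to_f(c):.2f} °F")  # matches the 62.92 / 62.62 cited
```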
The exact modeling system used to estimate Tuesday’s temperature has only been used since 1979, but scientists are able to estimate average temperatures going back tens of thousands of years by using instrument-based global temperature records, tree rings and ice cores, climate scientist Paulo Ceppi told the Washington Post.
The Washington Post’s heat index forecast said as many as 57 million people across the United States were exposed to dangerous heat on Tuesday.
Excessive heat warnings and advisories remain in effect throughout Florida, parts of the Pacific Northwest, southern Texas and other areas Wednesday, the National Weather Service reports, though a weeks-long heat wave in Texas and other southern states did ultimately subside.
Global temperatures have been on the rise for years due to human-caused climate change, but Tuesday’s scorching temperatures were also driven by the first El Niño weather pattern since 2018-19. The United Nations’ World Meteorological Organization warned Tuesday that billions of people will be impacted as El Niño brings warmer sea surface temperatures and triggers extreme heat both in the ocean and on land. An El Niño pattern weakens trade winds and pushes warm water toward the west coast of the Americas, the National Ocean Service says, causing areas in the northern U.S. and Canada to be drier and warmer than usual. The U.S. Gulf Coast and Southeast often see wetter weather than usual and increased flooding. El Niño is expected to increase global temperatures for the next 9 to 12 months, the WMO predicted.
What To Watch For
More broken records. The return of the El Niño weather pattern for the first time in four years means more extreme weather and a quickening of global warming, which would bring more record-breaking heat in the coming year. A WMO report released in May predicted there is a 98% likelihood that at least one of the next five years—and the five-year period as a whole—will be warmest on record.
“Early warnings and anticipatory action . . . are vital to save lives and livelihoods,” WMO Secretary-General Professor Petteri Taalas said.
2,000 ram skulls found in ancient pharaoh's temple in one of Egypt's oldest cities
More than 2,000 decomposed ram skulls unearthed in a newly discovered warehouse in the temple of the ancient pharaoh Ramses II will help archeologists uncover more details about one of the oldest Egyptian cities and its leader.
In an announcement Saturday, Egypt’s Ministry of Tourism and Antiquities said a mission led by New York University’s Institute for the Study of the Ancient World discovered the skulls while excavating near the northern precinct of the temple in Abydos, about 270 miles south of Cairo.
The sacred city of Abydos is one of the oldest and most important sites of ancient Egypt, according to the World Monuments Fund, which works to conserve key structures in partnership with the Egyptian antiquities ministry and the American Research Center in Egypt.
"With its valuable inscriptions and numerous funerary monuments, Abydos has perhaps contributed more than any other site in Egypt to our present understanding of the history of state formation, linguistic development, and architecture in Ancient Egypt," the World Monuments Fund said.
The large number of embellished rams, among other discoveries, could provide insight into the life and history of this ancient temple, which was first discovered more than 150 years ago, according to the statement.
Archeologists found the remains of a number of animals, including goats, dogs, cows, deer and an ostrich, perched beside the rams’ heads, according to the statement.
Dr. Sameh Iskander, head of the mission, told CBS News that rams' heads were "obviously offerings that were made to the temple of Ramses during the Ptolemaic period, which shows even 1,000 years after Ramses II, that he was still revered."
Some of the heads were still mummified, while "others could have been mummified, but the wrappings or the covers of mummifications were not there anymore," Iskander told CBS.
The team also found parts of statues, hills, remains of ancient trees, clothing and leather shoes.
Rams were known to be worshiped at an “unprecedented level” in Abydos during Ramses II’s reign, but the discovery suggests the animals remained significant because skulls were left there a millennium after the pharaoh’s rule, the statement said.
The architecture of the newly discovered building is “distinctive and unique” and will contribute to researchers’ understanding of the city, the statement said.
Who was Ramses II?
According to the Rosicrucian Egyptian Museum, Ramses II, commonly known as “Ramses the Great," is one of the most famous Egyptian pharaohs. Before his death in 1213 BC, ancient Egyptians knew Ramses, who ruled Egypt for about 70 years, as a fierce warrior and keeper of harmony and balance.
Camille Fine is a trending visual producer on USA TODAY's NOW team.
By MIKE SCHNEIDER and MORGAN LEE
SANTA FE, N.M. (AP) — During the 2020 census, Native American leaders across the U.S. invested time and resources to make sure their members were tallied during the head count, which determines political power and federal funding.
But the detailed data sets from the 2020 census they will receive this month are more limited and less accurate than they were in the previous census — and it isn’t because the COVID-19 pandemic severely limited outreach efforts.
Rather, it’s due to new privacy methods implemented by the U.S. Census Bureau in order to protect the confidentiality of participants, one of which introduces intentional errors, or “noise,” to the data.
At stake is the availability and accuracy of data helping tribal leaders make decisions about where to locate grocery stores or schools and estimate future population growth. Census numbers determine funding for social programs, education, roads and elderly care for tribes that have been historically undercounted.
“It was never clearly articulated to them by the Census Bureau that this would be the case, that they wouldn’t receive the level of data that they received from the previous census,” New Mexico State Demographer Robert Rhatigan said. “In those tribal conversations it was never made clear that the data would not be available, or that it would be so noisy in these smaller areas.”
In fact, more than 80% of tribes in the U.S. won’t receive the full suite of detailed demographic data from the 2020 census at tribal-area levels they had in the 2010 census because of the changes, according to a report released in August by the Center for Indian Country Development, which is part of the Federal Reserve Bank of Minneapolis.
Many leaders in Indian Country are unaware they are going to get fewer tables when the detailed data sets are released Sept. 21, said Brandi Liberty, a consultant who helps tribes get federal and state grants.
“It’s going to be difficult for a lot of tribes when they need the data,” said Liberty, a member of the Iowa Tribe of Kansas and Nebraska.
The 2020 census put the American Indian and Alaska Native alone population at 3.7 million people; it was 9.6 million for those who identified as American Indian and Alaska Native in combination with another race. The Census Bureau provides detailed data for 1,200 American Indian and Alaska Native tribes and villages.
The privacy changes to the detailed census data “will harm the ability of self-governing tribes to meet the needs of their citizens,” the Federal Reserve report said.
The Census Bureau told The Associated Press that it doesn’t comment on outside reports but acknowledged the number of tables for tribes in 2020 were reduced from 2010 because of the privacy concerns.
The privacy changes arrive during heightened sensitivities about who controls data from Indian Country.
“The concept of tribal data sovereignty and just data sovereignty in general has been kind of elevated. In a sense, this is their data,” Rhatigan said. “You can say that it’s a problem for the smaller tribal communities that won’t even get the detailed age data. It’s possible that the bigger problem comes from the tribes that do receive the data. Nobody knows … how inaccurate those data are.”
That’s because the privacy method, known as “differential privacy,” uses algorithms that introduce intentional errors into the data, adding or subtracting people from the actual count in order to obscure the identity of any given participant in a particular area.
The Census Bureau has said the differential privacy algorithms are needed because, without them, the growth of easily available third-party data combined with modern computing could allow hackers to piece together the identities of participants in its censuses and surveys in violation of the law.
Differential privacy’s impact on accuracy is greatest when population totals are broken down by race, age and sex, making it harder to understand demographic changes in individual tribal areas, the Federal Reserve report said.
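The Census Bureau’s production system (its “TopDown” algorithm) is far more elaborate than anything shown here, but the core idea of calibrated noise injection can be sketched with the textbook Laplace mechanism. Everything in this sketch — the ε value, the counts, the rounding step — is illustrative only, not the Bureau’s actual implementation:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def noisy_count(true_count: int, epsilon: float) -> int:
    """Textbook Laplace mechanism for a count query (sensitivity 1).

    Smaller epsilon means a stronger privacy guarantee but larger
    typical error in the published number.
    """
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    # Post-process to a plausible published value: a non-negative integer.
    return max(0, round(true_count + noise))

# The same absolute noise distorts a small tribal-area count far more,
# in relative terms, than a large statewide count — which is why small
# tribes bear the brunt of the accuracy loss.
for true in (45, 50_000):
    print(true, "->", noisy_count(true, epsilon=0.5))
```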
Also complicating the availability of detailed tribal census data are new population thresholds by the Census Bureau. The thresholds determine how much data tribes, or racial or ethnic groups, get for a particular area.
In 2010, in order to protect people’s identities, a tribe or a racial or ethnic group in any particular geography like a county needed at least 100 people to get all 71 available data tables. In 2020, “dynamic population thresholds” are being used, with the size of the tribe or racial or ethnic group in a location determining how many data tables they get.
At the national or state level, the roughly 40% of tribes with fewer than 500 people across the U.S. will receive only country- or state-wide population totals, keeping them from getting the more detailed data they got in 2010. At the tribal-area level, 80% of tribes will receive only population totals instead of breakdowns of age data reported by sex, according to the Federal Reserve report.
In New Mexico, for instance, only the Navajo Nation — the tribe with the largest reservation, extending into Arizona and Utah — will receive the full suite of data with almost two dozen age categories by sex. Sixteen of the state’s 22 populated tribal areas are likely to receive limited data sets breaking down populations into only four age groups per sex. Two Native American pueblos will receive no age breakdowns at all, Rhatigan said.
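The report does not publish the exact 2020 cutoffs, so the tier boundaries below are hypothetical placeholders, but they illustrate the mechanical difference between the two rules: a single flat 100-person gate in 2010 versus size-dependent tiers in 2020.

```python
def tables_2010(population: int) -> int:
    # Old rule: a flat 100-person threshold unlocked all 71 data tables.
    return 71 if population >= 100 else 0

def tables_2020(population: int) -> int:
    # New "dynamic population thresholds" -- tier boundaries here are
    # invented for illustration; the real cutoffs are not in the report.
    if population >= 7_000:
        return 71   # full suite, e.g. ~2 dozen age categories by sex
    if population >= 1_000:
        return 20   # limited breakdowns (a few age groups per sex)
    if population >= 500:
        return 4    # very coarse breakdowns
    return 1        # population total only

for pop in (90, 450, 2_500, 50_000):
    print(pop, tables_2010(pop), tables_2020(pop))
```

Under a scheme like this, a mid-sized tribe that cleared the old 100-person gate can now land in a tier with far fewer tables than it received in 2010.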
American Indian or Alaska Native people on reservations were among the most undercounted populations in the 2020 census, with an estimated 5.6% of residents missed, according to an evaluation by the Census Bureau.
The COVID-19 pandemic severely limited the outreach efforts many tribal communities had planned. Many tribes closed their borders in an effort to stop the virus’ spread, severely restricting the ability to get a head count. Plus, the digital divide in some tribal communities made responding difficult during the first census in which participants were encouraged to answer census questions online.
It might have been worse. The Census Bureau earlier contemplated eliminating detailed tribal tables altogether, said James Tucker, a voting rights attorney for the Native American Rights Fund.
“It could have been really bad,” said Tucker, who is a former chair of a Census Bureau advisory committee. “But they took it to heart to make the data as accurate as possible while balancing that against the privacy concerns.”
Earlier this year, people who pay attention to food insecurity in New Hampshire noticed something alarming: The rate of families reporting insufficient food access shot up by more than 10%. By this spring, estimates showed that more than half of Granite State households with children didn’t have enough to eat.
And in the most recent Census pulse survey last month, nearly 200,000 children in the state lived in households that lacked sufficient food. That’s up by nearly 40,000 additional children since January.
While this rise in hunger is troubling, it’s not a complete shock to those who work on the issue. Advocates in the field say many of New Hampshire’s most vulnerable households are trapped in a double bind these days: Continued inflation is pushing the prices of essentials like groceries ever higher, at the same time that several pandemic-era safety net programs have been rolled back. The result? A sharp increase in the number of families in the state who report trouble putting enough food on their table week to week.
Advocates say the rise in food insecurity illustrates the precarious financial state of many families here and the important role that pandemic-era assistance played in improving the quality of life for many.
“The only thing that changed with the pandemic is that we created programs that made it easier for people to survive with the cost of living the way it is,” said Jessica Gorhan, social impact and development director of the non-profit New Hampshire Hunger Solutions.
A rollback of pandemic-era aid
During the COVID-19 pandemic, the federal government made it easier for people to qualify for a range of support programs. Most notably, it increased monthly allotments for the Supplemental Nutrition Assistance Program, referred to as SNAP or food stamps, giving low-income people more money to spend on groceries.
The federal government did not end the national public health emergency until May of 2023, but it cut back the additional SNAP support months earlier, issuing the last pandemic allotment in February. The impact in New Hampshire can be seen in the numbers: the share of families in the state reporting insufficient food leapt from 44% to 54% of respondents between February and April, according to Census pulse data. That’s about 50,000 more households struggling to put enough food on the table.
These rates leveled-off a little but have stayed continually high through the summer.
Gorhan said the expansion of SNAP and other similar aid programs during the pandemic brought about a long-awaited policy change that made food more accessible — even though they did not address the larger structural issues behind food insecurity.
In part due to this additional money, New Hampshire saw lower rates of food insecurity during the pandemic, according to the New Hampshire Fiscal Policy Institute. But now those who work to fight hunger in the state say government policy has not kept up with the reality many residents are still facing, where inflation-related food costs linked to the pandemic persist, but the tools designed to help people stay afloat have suddenly gone away.
Eileen Liponis, who leads the New Hampshire Food Bank, said the cut in the additional funding put a significant strain on food-insecure people. Local advocates estimated the average individual in the state saw a $108 cut and the average family saw a $206 cut in their monthly issuance.
Liponis said demand for the charitable food system has been higher since March, with the Food Bank hosting more frequent mobile food pantries for more households.
“Before COVID, we would do maybe six or 10 [mobile food pantries] a year and they would be targeted and probably have enough for 250 families,” she said. “Now we're doing one a week on average [for] 400 families.”
Alice Farry, a new mom in Nashua, is one of those Granite Staters who has been turning to her local food pantry to make ends meet. Alice works full-time at McDonald’s, earning $15.50 an hour, and her boyfriend works 20 hours a week for the same wage, so they narrowly miss the qualifications to receive SNAP benefits. Still, even as a two-income household and some additional support, they struggle to afford enough nutritious food and anything other than “the most basic of basic stuff.”
She says some of the food bank offerings help a little, but her own dietary restrictions and the limited options result in a lot of the food she receives going to waste.
“The fruits and vegetables are great, but they don't particularly last very long, so if you don't eat them in like a day or two then they go bad,” she said. “The dry goods and the canned goods: Those things are very helpful, but there's just only so much that you can do.”
Hunger advocates say that the charitable food system is not designed to be a long-term solution to food insecurity. Instead, they say programs like SNAP offer a more sustainable alternative since they allow families more consistency and choice.
Gorhan sees the reduction of SNAP allotments as a significant step backwards. “They’ve reversed something that's actually fixed a problem,” she said.
Keeping students fed through the summer
Many food-insecure families with school-aged children rely on programs like free and reduced-price school meals during the academic year, so summer vacation presents a particular challenge for them.
During the pandemic, the federal government provided additional money to help feed school children during the summer months, referred to as pandemic EBT or P-EBT. Beginning next summer, this program – now renamed summer EBT – will become permanent, with qualifying families receiving an extra $40 per month for each child. But this year, the program remains opt-in for each state, and New Hampshire likely won’t be able to distribute the additional money until the end of the year.
The Summer Food Service Program steps in to address the need for food when school is not in session by providing free and nutritious meals for students. These food distribution sites are federally funded by the U.S. Department of Agriculture and administered on the state-level. During the pandemic, any community – regardless of its food insecurity rates – could offer these sites. But now, only areas that meet USDA criteria based on poverty levels are eligible for reimbursement for their summer meals programs.
Laura Milliken, executive director of New Hampshire Hunger solutions, says this prevents food-insecure families who live in wealthier areas from easily getting to sites.
“If there are poor kids living in those [wealthier] communities, they've got to drive longer distances to get to a program,” she said.
Meanwhile, the demand for meals at these sites is growing. Southern New Hampshire Services, which sponsors sites across Hillsborough and Rockingham counties, said it had served 4,000 more meals by mid-summer compared to the same time last year.
The sites are often more than just a place to get food, serving as a community gathering spot for families. On a recent weekday, Abukar Salat brought his children to pick up food at the Elmwood Park site in Manchester, where between 50% and 60% of children qualify for free and reduced meals.
“It's not the food. It's a lot of people out here that care about each other and kids,” he said.
Esperanza Rivera distributed food at the Hunt’s Pool site in Manchester this summer, where many families went to the splash pad to cool off. She said it is important anti-hunger resources come to places like this to make food as accessible as possible. She said the stigma of accepting something free prevented many people, especially parents, from taking meals from her at the summer’s start, but her daily presence made people more comfortable.
Rivera said she does not agree fully with the USDA’s guidelines for the program, which she said are not conducive to feeding as many hungry people as it should. Earlier in the pandemic, sites like Hunt’s pool were granted a “non-congregant waiver,” which meant it could operate on a grab-and-go model. Now, the department dictates that meal recipients in cities like Manchester eat at their distribution site, which Rivera said could discourage some.
“Some of them are already embarrassed for whatever reason when they shouldn't feel embarrassed,” she said. “But I don't think [the USDA] should put restrictions on those people.”
The sites are designed to feed only children, but Rivera will give anyone, including adults, a meal, no questions asked. When she does this, she doesn’t mark them down, so Southern New Hampshire Services — which oversees the program here — does not receive reimbursement for meals from the USDA.
“There’s so many elderly that are vulnerable, and there's so many other people that are vulnerable that are not children and teens,” she said. “So it's like, how can you pick and choose?”
The New Hampshire Food Bank oversees sites in the North Country, where rates of childhood food insecurity are some of the highest in the state, but barriers to food access there look different than in more urban areas. The USDA allowed the non-congregant waiver to remain in place for rural communities this summer, which local anti-hunger advocates say has been helpful in reducing the stigma around food insecurity.
“It's a lot easier for somebody to take food and not have to be embarrassed,” said Travis Shearer, who handed out meals at Groveton Elementary School this summer.
SNAP outreach a challenge
One of the largest barriers to food access in the state is low-enrollment in the programs that do exist, said Milliken. As of 2021, only 50% of people in New Hampshire who qualify for SNAP and only 44% who qualify for WIC, a program for new mothers and infants, were enrolled. Local advocates say this is in part because the state has lacked a SNAP outreach program since 2017. But advocates are hopeful those numbers will improve soon, since the New Hampshire Food Bank was contracted by the state in May to revive the program.
Another aspect of food insecurity is not just access to enough food but to healthy food. The state offers two programs, Granite State Market Match and Double Up Food Bucks, for SNAP participants designed to offer more nutritious options by doubling the amount of money they can spend on locally grown fruits and vegetables. Still advocates say very few people are even aware that these programs exist and often assume places like farmers markets are too expensive.
Morgan Morani, community engagement programs manager for Seacoast Eat Local, estimated that as low as 1% of SNAP participants partake in Granite State Market Match or Double Up Food Bucks.
“There’s not adequate communication [with] individuals who actually receive SNAP that these benefits are available to them and where they can actually use it,” she said.
Liponis says the Food Bank plans to raise awareness about these additional benefits in its outreach program, which it is still in the process of fully planning and implementing.
Milliken said New Hampshire Hunger Solutions and its partners are advocating to change legislation to more closely resemble the pandemic-era support programs, but in the meantime she emphasized the importance of making sure people know the resources that are out there.
“All of us in the state need to think about what are the resources available? Are we using them efficiently?” Milliken said. “How can we get the word out so that nobody needs to go hungry?”
This story is a production of the New England News Collaborative. It was originally published by New Hampshire Public Radio.
Earlier this year, people who pay attention to food insecurity in New Hampshire noticed something alarming: The rate of families reporting insufficient food access shot up by more than 10%. By this spring, estimates showed that more than half of Granite State households with children didn’t have enough to eat.
And in the most recent Census pulse survey last month, nearly 200,000 children in the state lived in households that lacked sufficient food. That’s up by nearly 40,000 additional children since January.
While this rise in hunger is troubling, it’s not a complete shock to those who work on the issue. Advocates in the field say many of New Hampshire’s most vulnerable households are trapped in a double bind these days: Continued inflation is pushing the prices of essentials like groceries ever higher, at the same time that several pandemic-era safety net programs have been rolled back. The result? A sharp increase in the number of families in the state who report trouble putting enough food on their table week to week.
Advocates say the rise in food insecurity illustrates the tentative financial state of many families here and the important role that pandemic-era assistance played in improving the quality of life for many.
“The only thing that changed with the pandemic is that we created programs that made it easier for people to survive with the cost of living the way it is,” said Jessica Gorhan, social impact and development director of the non-profit New Hampshire Hunger Solutions.
A rollback of pandemic-era aid
During the COVID-19 pandemic, the federal government made it easier for people to qualify for a range of support programs. Most notably, it increased monthly allotments for the Supplemental Nutrition Assistance Program, referred to as SNAP or food stamps, giving low-income people more money to spend on groceries.
The federal government did not end the national public health emergency until May of 2023, but it cut back the additional SNAP support months earlier, issuing the last pandemic allotment in February. The impact in New Hampshire can be seen in the numbers: Families in the state reporting insufficient food leapt from 44% of respondents to 54% between February and April, according to Census pulse data. That’s about 50,000 more households struggling to put enough food on the table.
These rates leveled off somewhat but have stayed consistently high through the summer.
Gorhan said the expansion of SNAP and other similar aid programs during the pandemic brought about a long-awaited policy change that made food more accessible — even though those programs did not address the larger structural issues behind food insecurity.
In part due to this additional money, New Hampshire saw lower rates of food insecurity during the pandemic, according to the New Hampshire Fiscal Policy Institute. But now those who work to fight hunger in the state say government policy has not kept up with the reality many residents are still facing, where inflation-related food costs linked to the pandemic persist, but the tools designed to help people stay afloat have suddenly gone away.
Eileen Liponis, who leads the New Hampshire Food Bank, said the cut in the additional funding put a significant strain on food-insecure people. Local advocates estimated the average individual in the state saw a $108 cut and the average family saw a $206 cut in their monthly issuance.
Liponis said demand for the charitable food system has been higher since March, with the Food Bank hosting more frequent mobile food pantries for more households.
“Before COVID, we would do maybe six or 10 [mobile food pantries] a year and they would be targeted and probably have enough for 250 families,” she said. “Now we're doing one a week on average [for] 400 families.”
Alice Farry, a new mom in Nashua, is one of those Granite Staters who has been turning to her local food pantry to make ends meet. Alice works full-time at McDonald’s, earning $15.50 an hour, and her boyfriend works 20 hours a week for the same wage, so they narrowly miss the qualifications to receive SNAP benefits. Still, even as a two-income household with some additional support, they struggle to afford enough nutritious food and anything other than “the most basic of basic stuff.”
She says some of the food bank offerings help a little, but her own dietary restrictions and the limited options result in a lot of the food she receives going to waste.
“The fruits and vegetables are great, but they don't particularly last very long, so if you don't eat them in like a day or two then they go bad,” she said. “The dry goods and the canned goods: Those things are very helpful, but there's just only so much that you can do.”
Hunger advocates say that the charitable food system is not designed to be a long-term solution to food insecurity. Instead, they say programs like SNAP offer a more sustainable alternative since they allow families more consistency and choice.
Gorhan sees the reduction of SNAP allotments as a significant step backwards. “They’ve reversed something that's actually fixed a problem,” she said.
Keeping students fed through the summer
Many food-insecure families with school-aged children rely on programs like free and reduced-price school meals during the academic year, so summer vacation presents a particular challenge for them.
During the pandemic, the federal government provided additional money to help feed school children during the summer months, referred to as pandemic EBT or P-EBT. Beginning next summer, this program – now renamed summer EBT – will become permanent, with qualifying families receiving an extra $40 per month for each child. But this year, the program remains opt-in for each state, and New Hampshire likely won’t be able to distribute the additional money until the end of the year.
The Summer Food Service Program steps in to address the need for food when school is not in session by providing free and nutritious meals for students. These food distribution sites are federally funded by the U.S. Department of Agriculture and administered at the state level. During the pandemic, any community – regardless of its food insecurity rates – could offer these sites. But now, only areas that meet USDA criteria based on poverty levels are eligible for reimbursement for their summer meals programs.
Laura Milliken, executive director of New Hampshire Hunger Solutions, says this prevents food-insecure families who live in wealthier areas from easily reaching a site.
“If there are poor kids living in those [wealthier] communities, they've got to drive longer distances to get to a program,” she said.
Meanwhile, the demand for meals at these sites is growing. Southern New Hampshire Services, which sponsors sites across Hillsborough and Rockingham counties, said it had served 4,000 more meals by mid-summer compared to the same time last year.
The sites are often more than just a place to get food, serving as a community gathering spot for families. On a recent weekday, Abukar Salat brought his children to pick up food at the Elmwood Park site in Manchester, where between 50% and 60% of children qualify for free and reduced meals.
“It's not the food. It's a lot of people out here that care about each other and kids,” he said.
Esperanza Rivera distributed food at the Hunt’s Pool site in Manchester this summer, where many families went to the splash pad to cool off. She said it is important that anti-hunger resources come to places like this to make food as accessible as possible. She said the stigma of accepting something free prevented many people, especially parents, from taking meals from her at the summer’s start, but her daily presence made people more comfortable.
Rivera said she does not fully agree with the USDA’s guidelines for the program, which she said are not conducive to feeding as many hungry people as the program could. Earlier in the pandemic, sites like Hunt’s Pool were granted a “non-congregant waiver,” which meant they could operate on a grab-and-go model. Now, the department dictates that meal recipients in cities like Manchester eat at their distribution site, which Rivera said could discourage some.
“Some of them are already embarrassed for whatever reason when they shouldn't feel embarrassed,” she said. “But I don't think [the USDA] should put restrictions on those people.”
The sites are designed to feed only children, but Rivera will give anyone, including adults, a meal, no questions asked. When she does this, she doesn’t mark them down, so Southern New Hampshire Services — which oversees the program here — does not receive reimbursement for meals from the USDA.
“There’s so many elderly that are vulnerable, and there's so many other people that are vulnerable that are not children and teens,” she said. “So it's like, how can you pick and choose?”
The New Hampshire Food Bank oversees sites in the North Country, where rates of childhood food insecurity are some of the highest in the state, but barriers to food access there look different than in more urban areas. The USDA allowed the non-congregant waiver to remain in place for rural communities this summer, which local anti-hunger advocates say has been helpful in reducing the stigma around food insecurity.
“It's a lot easier for somebody to take food and not have to be embarrassed,” said Travis Shearer, who handed out meals at Groveton Elementary School this summer.
SNAP outreach a challenge
One of the largest barriers to food access in the state is low enrollment in the programs that do exist, said Milliken. As of 2021, only 50% of people in New Hampshire who qualify for SNAP and only 44% who qualify for WIC, a program for new mothers and infants, were enrolled. Local advocates say this is in part because the state has lacked a SNAP outreach program since 2017. But advocates are hopeful those numbers will improve soon, since the New Hampshire Food Bank was contracted by the state in May to revive the program.
Another aspect of food insecurity is access not just to enough food but to healthy food. The state offers two programs for SNAP participants, Granite State Market Match and Double Up Food Bucks, designed to offer more nutritious options by doubling the amount of money they can spend on locally grown fruits and vegetables. Still, advocates say very few people are even aware that these programs exist, and many assume places like farmers markets are too expensive.
Morgan Morani, community engagement programs manager for Seacoast Eat Local, estimated that as few as 1% of SNAP participants partake in Granite State Market Match or Double Up Food Bucks.
“There’s not adequate communication [with] individuals who actually receive SNAP that these benefits are available to them and where they can actually use it,” she said.
Liponis says the Food Bank plans to raise awareness about these additional benefits through its outreach program, which is still being planned and implemented.
Milliken said New Hampshire Hunger Solutions and its partners are advocating to change legislation to more closely resemble the pandemic-era support programs, but in the meantime she emphasized the importance of making sure people know the resources that are out there.
“All of us in the state need to think about what are the resources available? Are we using them efficiently?” Milliken said. “How can we get the word out so that nobody needs to go hungry?”
This story is a production of the New England News Collaborative. It was originally published by New Hampshire Public Radio.
When Florida Gov. Ron DeSantis (R) blocked the first draft of an Advanced Placement African American studies course, he insisted he did not want to eliminate Black history, but only to control it. It might seem that his campaign has succeeded: The College Board announced a new watered-down curriculum that transformed resistance figures such as Frederick Douglass into “Black Conservatives,” even as it insisted the changes had nothing to do with political blowback.
Yet history tells us these efforts to use Black history to teach a heroic story of White America probably won’t succeed in the long run. For more than a century, conservatives have tried to use history classes to infuse students with, in the words of one 1920s activist, “a patriotic and unswerving loyalty to our United States.” They’ve done this by insisting on a curriculum that twists and distorts the United States’ racial history, turning centuries of struggle and oppression into patriotic tales of American heroism. They fear that a more accurate narrative might diminish students’ love of country.
The results have been poor. It has always been hard enough for students to learn basic facts about America’s past. By making it unacceptable to teach the truth of America’s racial history — even when the historical facts are unambiguous — conservatives have managed to ensure that students learn even less.
As historians like Bethany Bell have pointed out, White Southerners tried to control history textbooks in the former Confederacy beginning almost as soon as the Civil War ended. In the 1920s, White conservatives in organizations such as the American Legion and the Ku Klux Klan expanded this campaign across the entire country. Leaders of both groups were aghast at the content of modern history textbooks, such as David Muzzey’s “An American History” (1917), which emphasized the importance of the first importation of enslaved Africans in 1619 and the “Horrors of the Slave Trade.” Mortified conservatives accused Muzzey of focusing too much on the centrality of conflict in America’s past. They charged him with fomenting “class hatred” and racial division.
States such as Wisconsin, Oregon, New York and New Jersey passed laws or considered bills to ban books like Muzzey’s. The 1923 New Jersey bill stated that no textbook could be adopted that “belittles … noted American patriots” such as the slave-owning Thomas Jefferson or George Washington. If a book “questions the worthiness of their motives” it was not fit for students. In the eyes of conservatives in organizations such as the Daughters of the American Revolution, American Legion and Ku Klux Klan, such questioning was anathema, a treasonous attack on America’s uniquely heroic past.
The right didn’t only try to block and remove objectionable textbooks. They also produced one of their own, one that they hoped would “portray,” in the words of one ambitious conservative, “in colorful outline the heroic incidents” of U.S. history. Leaders of the American Legion contracted with Charles F. Horne, an English professor at City College of New York, to write this kind of American history textbook, one that aimed to inculcate the proper patriotism in students.
Horne had an ambitious vision. His book endeavored to tell a heroic story of America that could unify White people north and south. It included Black history and difficult topics such as slavery and genocide — but in a skewed way. The “horrors” of the slave ships were acknowledged as “unspeakable,” but overall, Horne wrote, slavery was a benefit for Africans and a burden for White leaders. As he put it, “The blight of slavery fell less upon their race than on their masters.”
To politicians hoping to capitalize on conservative angst about modern history textbooks, Horne’s book was a smashing success. In the words of Oregon’s Klan-backed Gov. Walter Pierce, “It is the finest history of early America that we have ever had.”
Other readers, however, disagreed. One critic called it nothing but the “old moth-eaten, discredited, and dangerous” history, “the placing of the seal of unquestioning historical approval upon the thoughts and deeds of one’s ancestors.”
Horne’s book didn’t only horrify liberals. Even its backers in the American Legion noted that the book was plagued with “misleading statements … [and] inaccuracies.” In 1925, the Legion withdrew its support and the book languished as a conservative never-ran.
The gap in assessments of Horne’s book stemmed from a divergence of goals. Its champions weren’t concerned with providing the most accurate account of the past. Instead, they wanted a version that would leave students with an uncritical love of the United States.
But as they continued to push for history education that might achieve this goal, they did not recognize that their efforts were only leaving students confused about the past. The evidence showed that students did not retain much from their history classes. One 1915 survey from Texas, for example, showed that students could barely identify the key terms of U.S. history. At the low end, students knew about one in five key terms; even the best students still only identified just under half. Similarly, a 1922 survey from Indiana found a slightly better range of scores, but the top grade was still just under 60 percent. By the 1960s, the average score on a national history test administered by the Cooperative Test Service was only about 55 percent.
Moreover, conservative efforts to minimize the ugly, racist chapters of America’s past seemed also to minimize student learning. Questions about racial topics proved more challenging for students to answer. When scholar Dale Whittington combined the results from multiple student surveys between 1933 and 1944, she found that 81 percent of top students could identify Alexander Hamilton and 60 percent of “A” students could handle questions about the Bill of Rights. But only 46 percent of the highest achieving students knew about the Emancipation Proclamation.
Certainly, there were many reasons for students’ lack of knowledge of school history. But conservative efforts to inject a stunted version of Black history into classrooms played a role — even when they didn’t affect curriculum. As one survey of teachers found in 1941, teachers tended to shy away from any topic considered politically divisive, even if there was no controversy among scholars. As one California teacher told the pollsters, “Controversial subjects are dynamite to teachers.”
History education remained skewed with regard to topics of race and racism throughout most of the 20th century. In the 1940s, textbooks by Harold Rugg were burned in bonfires after critics charged they taught children too much about conflict and racial animosity. By the 1970s, conservative activists in organizations such as the Heritage Foundation led boycotts of textbooks that included Black militant writers such as George Jackson and Eldridge Cleaver.
This conservative pressure encouraged publishers to shy away from addressing painful racial topics. And students’ comprehension of the past has continued to reflect it. In recent surveys, only 8 percent of high school seniors could identify slavery as the cause of the Civil War, and only 10 percent of eighth-graders could explain the reasons for the Confederacy’s defeat.
Despite the signs that their version of history education has left students ill-informed about the past, conservatives keep pushing for it. In the waning days of the Trump administration, the 1776 Commission, a hastily assembled group headed by Larry Arnn, the president of Hillsdale College, released a report that hoped to convince students that their country has always been “an example to be admired and emulated by nations of the world that wish to steer their government toward greater liberty and justice.”
Like its 20th-century predecessors, the 1776 Commission’s report included a deeply distorted version of Black history. It conceded that racial slavery was a “brutal, humiliating fact.” But happily, just as we oppose slavery now, the report argued — in stark contradiction to historical facts — the Founding Fathers “condemned it then.” It was only the Founders’ heroism that “started the new nation on a path that would lead to the end of slavery.”
These objectively false statements, like DeSantis’s grandstanding, are not merely blocking students from learning Black history. They are a brash grab for control of it, one that negates the true story of struggle and inequality and turns it into something that proves the fundamental heroism of White leaders. But the data about historical comprehension indicates that this patently saccharine version of the past probably will do little more than confuse students and leave them with a poor understanding of the past.
|
<urn:uuid:b7632f3b-f22e-4704-99ac-2a7cbec48bc3>
|
{
"dump": "CC-MAIN-2023-14",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948951.4/warc/CC-MAIN-20230329054547-20230329084547-00377.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.967149555683136,
"pii_count": 0,
"score": 2.765625,
"token_count": 1852,
"url": "https://www.washingtonpost.com/made-by-history/2023/02/07/desantis-history-education/?tid=pm_opinions_pop"
}
|
When Florida Gov. Ron DeSantis (R) blocked the first draft of an Advanced Placement African American studies course, he insisted he did not want to eliminate Black history, but only to control it. It might seem that his campaign has succeeded: The College Board announced a new watered-down curriculum that transformed resistance figures such as Frederick Douglass into “Black Conservatives,” even as they insisted the changes had nothing to do with political blowback.
Yet history tells us these efforts to use Black history to teach a heroic story of White America probably won’t succeed in the long run. For more than a century, conservatives have tried to use history classes to infuse students with, in the words of one 1920s activist, “a patriotic and unswerving loyalty to our United States.” They’ve done this by insisting on a curriculum that twists and distorts the United States’ racial history, turning centuries of struggle and oppression into patriotic tales of American heroism. They fear that a more accurate narrative might diminish students’ love of country.
The results have been poor. It has always been hard enough for students to learn basic facts about America’s past. By making it unacceptable to teach the truth of America’s racial history — even when the historical facts are unambiguous — conservatives have managed to ensure that students learn even less.
As historians like Bethany Bell have pointed out, White Southerners tried to control history textbooks in the former Confederacy beginning almost as soon as the Civil War ended. In the 1920s, White conservatives in organizations such as the American Legion and the Ku Klux Klan expanded this campaign across the entire country. Leaders of both groups were aghast at the content of modern history textbooks, such as David Muzzey’s “An American History” (1917), which emphasized the importance of the first importation of enslaved Africans in 1619 and the “Horrors of the Slave Trade.” Mortified conservatives accused Muzzey of focusing too much on the centrality of conflict in America’s past. They charged him with fomenting “class hatred” and racial division.
States such as Wisconsin, Oregon, New York and New Jersey passed laws or considered bills to ban books like Muzzey’s. The 1923 New Jersey bill stated that no textbook could be adopted that “belittles … noted American patriots” such as the slave-owning Thomas Jefferson or George Washington. If a book “questions the worthiness of their motives” it was not fit for students. In the eyes of conservatives in organizations such as the Daughters of the American Revolution, American Legion and Ku Klux Klan, such questioning was anathema, a treasonous attack on America’s uniquely heroic past.
The right didn’t only try to block and remove objectionable textbooks. They also produced one of their own, one that they hoped would “portray,” in the words of one ambitious conservative, “in colorful outline the heroic incidents” of U.S. history. Leaders of the American Legion contracted with Charles F. Horne, an English professor at City College of New York, to write this kind of American history textbook, one that aimed to inculcate the proper patriotism in students.
Horne had an ambitious vision. His book endeavored to tell a heroic story of America that could unify White people north and south. It included Black history and difficult topics such as slavery and genocide — but in a skewed way. The “horrors” of the slave ships were acknowledged as “unspeakable,” but overall, Horne wrote, slavery was a benefit for Africans and a burden for White leaders. As he put it, “The blight of slavery fell less upon their race than on their masters.”
To politicians hoping to capitalize on conservative angst about modern history textbooks, Horne’s book was a smashing success. In the words of Oregon’s Klan-backed Gov. Walter Pierce, “It is the finest history of early America that we have ever had.”
Other readers, however, disagreed. One critic called it nothing but the “old moth-eaten, discredited, and dangerous” history, “the placing of the seal of unquestioning historical approval upon the thoughts and deeds of one’s ancestors.”
Horne’s book didn’t only horrify liberals. Even its backers in the American Legion noted that the book was plagued with “misleading statements … [and] inaccuracies.” In 1925, the Legion withdrew its support and the book languished as a conservative never-ran.
The gap in assessments of Horne’s book stemmed from a divergence of goals. Its champions weren’t concerned with providing the most accurate account of the past. Instead, they wanted a version that would leave students with an uncritical love of the United States.
But as they continued to push for history education that might achieve this goal, they did not recognize that their efforts were only leaving students confused about the past. The evidence showed that students did not retain much from their history classes. One 1915 survey from Texas, for example, showed that students could barely identify the key terms of U.S. history. At the low end, students knew about one in five key terms; even the best students still only identified just under half. Similarly, a 1922 survey from Indiana found a slightly better range of scores, but the top grade was still just under 60 percent. By the 1960s, the average score on a national history test administered by the Cooperative Test Service was only about 55 percent.
Moreover, conservative efforts to minimize the ugly, racist chapters of America’s past seemed also to minimize student learning. Questions about racial topics proved more challenging for students to answer. When scholar Dale Whittington combined the results from multiple student surveys between 1933 and 1944, she found that 81 percent of top students could identify Alexander Hamilton and 60 percent of “A” students could handle questions about the Bill of Rights. But only 46 percent of the highest achieving students knew about the Emancipation Proclamation.
Certainly, there were many reasons for students’ lack of knowledge of school history. But conservative efforts to inject a stunted version of Black history into classrooms played a role — even when they didn’t affect curriculum. As one survey of teachers found in 1941, teachers tended to shy away from any topic considered politically divisive, even if there was no controversy among scholars. As one California teacher told the pollsters, “Controversial subjects are dynamite to teachers.”
History education remained skewed with regard to topics of race and racism throughout most of the 20th century. In the 1940s, textbooks by Harold Rugg were burned in bonfires after critics charged they taught children too much about conflict and racial animosity. By the 1970s, conservative activists in organizations such as the Heritage Foundation led boycotts of textbooks that included Black militant writers such as George Jackson and Eldridge Cleaver.
This conservative pressure encouraged publishers to shy away from addressing painful racial topics. And students’ comprehension about the past has continued to reflect it. In recent surveys, only 8 percent of high school seniors could identify slavery as the cause of the Civil War, and only 10 percent of eighth-graders could explain the reasons for the Confederacy’s defeat.
Despite the signs that their version of history education has left students ill-informed about the past, conservatives keep pushing for it. In the waning days of the Trump administration, the 1776 Commission, a hastily assembled group headed by Larry Arnn, the president of Hillsdale College, released a report that hoped to convince students that their country has always been “an example to be admired and emulated by nations of the world that wish to steer their government toward greater liberty and justice.”
Like its 20th-century predecessors, the 1776 Commission’s report included a deeply distorted version of Black history. It conceded that racial slavery was a “brutal, humiliating fact.” But happily, just as we oppose slavery now, the report argued — in stark contradiction to historical facts — the Founding Fathers “condemned it then.” It was only the Founders’ heroism that “started the new nation on a path that would lead to the end of slavery.”
These objectively false statements, like DeSantis’s grandstanding, are not merely blocking students from learning Black history. They are a brash grab for control of it, one that negates the true story of struggle and inequality and turns it into something that proves the fundamental heroism of White leaders. But the data about historical comprehension indicates that this patently saccharine version of the past probably will do little more than confuse students and leave them with a poor understanding of the past.
|
Strong leaders with unwavering voices are required for major changes in our society
The U.S. Capitol on Dec. 18, 2019 in Washington, D.C. (Samuel Corum/Getty Images).
In times of a major crisis, discord, division or a crossroads situation where a country or society finds itself at the precipice of what will define its future, a committed leader with a commanding voice emerges.
Who will be that leader — that strong, dedicated and unifying voice to lead us to a solution to end senseless mass gun violence?
History is our greatest teacher.
When a society faces crisis situations that can either define its future direction or could destroy it, a leader emerges.
At critical crossroads in American history, Abraham Lincoln during the Civil War or Franklin Delano Roosevelt during the Great Depression come to mind.
But it need not only be a president or powerful person who carves the right path and direction at critical inflection points of a society in crisis.
It could be a caring and committed citizen determined to bring about needed and meaningful change to make things better.
Below are a few examples of familiar leaders that mobilized a concerned public and brought about meaningful change.
At the turn of the 20th century, it was Susan B. Anthony, Elizabeth Cady Stanton, Ida B. Wells, Mary Church Terrell and others who worked tirelessly in the Suffragist Movement to secure voting rights for American women.
Later in the 1970s, it was Gloria Steinem and Dorothy Pitman Hughes who took up the cause of getting equality for women in the workplace.
It was Sojourner Truth, and a network of abolitionists — white and Black — who risked their lives to free thousands of slaves through the Underground Railroad.
It was Martin Luther King Jr. and a host of freedom fighters who picked up the baton to lead and run yet another leg in the relay race to secure equal rights and protections for Blacks and poor whites.
Many other gains in American society have been won in the face of opposing political organizations, views, efforts and movements. But good outcomes emerged. At the end of the day, many of those efforts improved the quality of life we enjoy today.
Public mobilization has been a necessary component of those achievements.
Why would stopping senseless mass gun violence be any different?
America has been known as a leader, trailblazer and conqueror in so many areas. But she is a laggard when it comes to controlling gun violence. Why can’t we stop the wanton killing and carnage of unsuspecting citizens who are simply going about the activities in their daily lives?
The recent shooting in a Louisville, Kentucky, bank shows once again that no place is safe from wanton mass gun violence. Instead of the emergence of a sustained call for change and course corrections, the loudest voice seems to be one of resignation. There are many things that can be done.
Imagine where we would be as a society if resignation and acceptance were the dominant voice when it came to ending unhealthy, oppressive and abusive conditions.
Would we have ever achieved safe spaces and places for women, children, the poor, the disabled and many other vulnerable groups among us? While there is still work to be done, imagine where we would be if we assumed an attitude of acceptance of wretched conditions.
Tackling the causes of mass gun violence — uncontrolled access to AR-15s and other assault weapons, the ongoing infusion and proliferation of guns and mental illness — will require the same steadfast commitment and vigilance.
Where is the leader or leaders? Where are the coalitions of caring citizens and organizations to bring about the needed policies and regulations to stop mass gun violence?
Most major changes or movements begin small with one or a few people. There are often bursts of activities or protests. We have seen it following many of the horrendous school shootings. Some safety measures have been put in place as a result.
But mass shootings are occurring more frequently in all sorts of places. What will it take to stop this deadly acceleration?
Will it take mass boycotts of public places where merchants feel an economic loss?
The COVID-19 pandemic taught us the economic and social costs of the fear of gathering in public places. Must we get to the place where we alter our normal routines and activities for fear of being gunned down?
Such scenes are not as farfetched as one might think if we continue to witness mass gun violence on almost a weekly basis. Can you name a venue where there hasn’t been a mass shooting?
It seems clear that elected officials at the state level or in Congress seem unable or unwilling to take the necessary steps that have been implemented in other countries to abate or stop gun violence.
Perhaps we should not be surprised. But we certainly should not continue to count on a change to come without the public demanding it.
In the meantime, there is a lot that local governmental entities and community organizations can do.
Recently, the public and a few legislators came together at the state house in Tennessee to protest gun violence. Could it be the start of something big?
Stopping gun violence in America is possible. Will strong committed leaders and caring citizens please stand up and apply pressure until a real change comes at the national level?
Such sustained pressure has worked before.
Our stories may be republished online or in print under Creative Commons license CC BY-NC-ND 4.0. We ask that you edit only for style or to shorten, provide proper attribution and link to our web site. Please see our republishing guidelines for use of photos and graphics.
|
During WWI, Missouri's white, middle-class Home Guard filled in for the National Guard
The Great War depleted the states’ National Guard troops, sending them overseas. Missouri was one of the states that backfilled the domestic duties with unpaid volunteers.
During World War I, Missouri was among a handful of states that implemented a volunteer, unpaid version of the National Guard.
A new book from a Missouri University of Science and Technology historian examines the little-known Home Guard.
The United States didn’t have much of a standing army when it entered World War I. So every state’s National Guard was sent overseas. That left no one available to handle the domestic duties normally assigned to the guard.
Gov. Frederick Gardner responded by invoking a state law that allowed him to establish a Home Guard.
“There is no one single reason why men joined the Home Guard,” said Petra DeWitt, author of "The Missouri Home Guard: Protecting the Home Front during the Great War" and an associate professor of history and political science at Missouri S&T. “For some it was simply patriotism, the willingness to do one’s best to help in the war effort.”
The men who joined were too old for the draft or exempt from it. They found meaning in helping with the war effort through the Home Guard while also garnering respect from the community.
Since the members were unpaid and had to purchase their own uniforms and supplies, they were predominantly upper-middle-class professionals.
“The poor schmuck who was working 14 hours in the factory didn’t have that money, didn’t have that time. And it’s also an expression of masculinity. These men that sit at the desk, they feel kind of weak,” DeWitt said. “Putting on a uniform and drilling and marching through their community made them feel strong.”
The Home Guard was an unarmed force tasked with protecting infrastructure from potential enemy activities and maintaining law and order during labor activism common at the time.
The Missouri Home Guard had broad support across the state, especially in urban areas including Kansas City and St. Louis, DeWitt said. But there were pockets of resistance.
“In the Ozarks there were a dozen counties where there was no support for them, where there was no desire to set up Home Guard companies,” DeWitt said. “Because those were the areas where there was general opposition to government involvement in daily lives.”
DeWitt’s research also showed there was a barrier for Black men joining the Home Guard.
“In newspaper reports, I found that the insistence by African Americans to demonstrate their patriotism through membership in the Home Guard reflected their and the NAACP’s activism of the time period,” said DeWitt. “At the same time, the pushback by white leaders in the Missouri Council of Defense and the state’s adjutant general also reflected the persistence of white opposition to racial equality.”
Missouri women also attempted to support the war efforts and Home Guard but were met with resistance.
While the Home Guard hasn’t been activated since then, the law authorizing the force remains on the books, and the governor could activate the force again.
"The Missouri Home Guard: Protecting the Home Front during the Great War" was published this month by the University of Missouri Press.
Copyright 2022 St. Louis Public Radio. To see more, visit St. Louis Public Radio.
|
A familiar scene is unfolding in Florida as officials tally up the damages from the latest tropical storm. Though feared to be worse, Nicole made landfall on Nov. 10 as a Category 1 storm; it still caused damage in the tens of billions of dollars across Florida and North Carolina. Nicole was the first tropical storm to hit the U.S. in November in almost 40 years. But just two months earlier, Category 4 Hurricane Ian tore through Florida, causing 137 deaths.
Climate change is making hurricanes more frequent and more destructive. Those of us in prison have little agency over how to prepare. In some cases, prisoners can be the proverbial “sitting ducks” in the path of hurricanes, as their relocation inland or to a safer locale depends solely on correctional administrators.
Every incarcerated person interviewed for this article in my South Carolina facility declined to give his name, out of concern that authorities will frown on them for talking to media. Few doubted the reality of climate change, and many felt it is already too late to reverse its effects.
During preparations for Hurricane Ian, in September, one prison maintenance worker told me: “Well, they got us taping windows and puttin’ away anything that the wind might pick up and hurl. There ain’t really a lot you can do if they don’t wanna send the guys upstate,” he said, referring to prisons in the western parts of the Carolinas.
Most facilities, especially in the South Carolina low country, anxiously watch a hurricane’s progress. Typically the storms make landfall in Florida and travel up Interstate 95, as if on a return trip north after a Sunshine State vacation.
But not always. In recent years, hurricanes making landfall in North Carolina have caused heavy damage to prisons. Hurricane Irene’s effects on Pamlico Correctional Institution, near North Carolina’s Pamlico Sound, are one example.
In 2018, a jail in North Carolina near Elizabethtown was affected by a hurricane. According to one man I spoke with who was incarcerated there, prisoners were rushed to evacuate. Most were unable to pack what meager possessions they had been able to maintain in jail.
“We was wading in knee high water jus’ to get outta the unit. It was scary in the dark,” he recalled. “Man, I had enough time to grab my photos and Bible, but nothing else.”
As Hurricane Harvey battered Gulf states in 2017, some Texas prisons were evacuated, but over 8,000 people across four prisons were left in the storm’s path, according to reporting by The Nation. Some people waited out the storm in flooded prisons without sufficient food or water. Ahead of the monster Hurricane Florence, in 2018, South Carolina Gov. Henry McMaster issued a mandatory evacuation order along the coast. But as The New Yorker detailed, several South Carolina state prisons did not evacuate, despite lying within the mandatory evacuation zone.
It can be scary for prisoners here with family members in the low country, coastal areas and even in Florida. They are naturally worried but helpless to do anything while behind bars. “What can I do?” asked one inmate who just started a 15-year sentence. “I ain’t there, so I can’t help my people. I gotta pray that God will do what I can’t.”
Some incarcerated people feel the preparations facilities do make for natural disasters like hurricanes are too much. When storms loom, rules might prohibit outdoor activities, visitations and meals in dining halls. If generators are needed, cooking stops and meals become cold cuts and cereal. Outside medical appointments get canceled too. Inmates in Carolina prisons are just now beginning to see an easing of lockdown restrictions from the COVID-19 pandemic, so when these storm preparations are put in place, it feels to some like a return to the isolation of the pandemic days.
“By the time ol’ Ian gets to us, he’ll be nothing more than a little wind and rain,” said one old-timer ahead of the September 2022 hurricane. He feels the prison system tends to have a knee-jerk reaction to such storms.
Whatever the case, for prison populations in the path of increasingly dangerous storms, the choice about how to respond will remain in someone else’s hands.
Disclaimer: The views in this article are those of the author. Prison Journalism Project has verified the writer’s identity and basic facts such as the names of institutions mentioned.
|
<urn:uuid:e8963c84-cc39-4294-b94d-8c953d899d7d>
|
{
"dump": "CC-MAIN-2023-40",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506421.14/warc/CC-MAIN-20230922170343-20230922200343-00754.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9568870663642883,
"pii_count": 0,
"score": 2.65625,
"token_count": 941,
"url": "https://prisonjournalismproject.org/2023/02/15/how-do-you-prepare-for-hurricane-prison/"
}
|
A familiar scene is unfolding in Florida as officials tally up the damages from the latest tropical storm. Though feared to be worse, Nicole made landfall on Nov. 10 as a Category 1 storm, though it still caused damage in the tens of billions of dollars across Florida and North Carolina. Nicole is the first tropical storm to hit the U.S. in November in almost 40 years. But just two months earlier, Category 4 Hurricane Ian tore through Florida, causing 137 deaths.
Climate change is making hurricanes more frequent and more destructive. Those of us in prison have little agency over how to prepare. In some cases, prisoners can be the proverbial “sitting ducks” in the path of hurricanes, as their relocation inland or to a safer locale depends solely on correctional administrators.
Every incarcerated person interviewed for this article in my South Carolina facility declined to give his name, out of concern that authorities will frown on them for talking to media. Few doubted the reality of climate change, and many felt it is already too late to reverse its effects.
During preparations for Hurricane Ian, in September, one prison maintenance worker told me: “Well, they got us taping windows and puttin’ away anything that the wind might pick up and hurl. There ain’t really a lot you can do if they don’t wanna send the guys upstate,” he said, referring to prisons in the western parts of the Carolinas.
Most facilities, especially in the South Carolina low country, anxiously watch a hurricane’s progress. Typically the storms make landfall in Florida and travel up Interstate 95, as if on a return trip north after a Sunshine State vacation.
But not always. In recent years, hurricanes making landfall in North Carolina have caused heavy damage to prisons. Hurricane Irene’s effects on Pamlico Correctional Institution, near North Carolina’s Pamlico Sound, are one example.
In 2018, a jail in North Carolina near Elizabethtown was affected by a hurricane. According to one man I spoke with who was incarcerated there, prisoners were rushed to evacuate. Most were unable to pack what meager possessions they had been able to maintain in jail.
“We was wading in knee high water jus’ to get outta the unit. It was scary in the dark,” he recalled. “Man, I had enough time to grab my photos and Bible, but nothing else.”
As Hurricane Harvey battered Gulf states in 2017, some Texas prisons were evacuated, but over 8,000 people across four prisons were left in the storm’s path, according to reporting by The Nation. Some people waited out the storm in flooded prisons without sufficient food or water. Ahead of the monster Hurricane Florence, in 2018, South Carolina Gov. Henry McMaster issued a mandatory evacuation order along the coast. But as The New Yorker detailed, several South Carolina state prisons did not evacuate, despite lying within the mandatory evacuation zone.
It can be scary for prisoners here with family members in the low country, coastal areas and even in Florida. They are naturally worried but helpless to do anything while behind bars. “What can I do?” asked one inmate who just started a 15-year sentence. “I ain’t there, so I can’t help my people. I gotta pray that God will do what I can’t.”
Some incarcerated people feel the preparations facilities do make for natural disasters like hurricanes are too much. When storms loom, rules might prohibit outdoor activities, visitations and meals in dining halls. If generators are needed, cooking stops and meals become cold cuts and cereal. Outside medical appointments get canceled too. Inmates in Carolina prisons are just now beginning to see an easing of lockdown restrictions from the COVID-19 pandemic, so when these storm preparations are put in place, it feels to some like a return to the isolation of the pandemic days.
“By the time ol’ Ian gets to us, he’ll be nothing more than a little wind and rain,” said one old-timer ahead of the September 2022 hurricane. He feels the prison system tends to have a knee-jerk reaction to such storms.
Whatever the case, for prison populations in the path of increasingly dangerous storms, the choice about how to respond will remain in someone else’s hands.
Disclaimer: The views in this article are those of the author. Prison Journalism Project has verified the writer’s identity and basic facts such as the names of institutions mentioned.
It's the biofuel option that you and your car can use, and scientists hope it could soon be coming to a bowser near you.
- Algae can be used for human consumption as well as a fuel source
- Unlike other biofuel crops, algae doesn't compete for space with food farming
- A Queensland sugar mill is planning to use its wastewater to grow algae for biofuel, with hopes it will one day be available to the public at the bowser
While you might recognise algae growing in your fish tank or floating on a lake, it can be transformed into a human food source that can also power your car.
And unlike some other biofuels, growing algae doesn't compete for space with food crops.
"We see agricultural land being used to produce crops specifically for fuel – sugar-type crops or starch-producing crops," Queensland University of Technology biochemistry associate professor Mark Harrison said.
"That's food that could be going into people's mouths instead of fuel tanks."
While eating algae is becoming more popular, such as nori seaweed or algal oil in dietary supplements, Dr Harrison has no doubt algae will also feature at the fuel bowser.
"We will get to a point where you will go to your local petrol bowser and you will fill up your vehicle with biodiesel produced from algae," he said.
With demand for fossil fuels including coal, oil and gas predicted to peak by the end of the decade, the International Energy Agency is asking countries to embrace clean energy technologies.
The first step is to grow the algae.
In an unassuming paddock next to a sugarcane mill in Queensland's Bundaberg region, the sickly sweet and slightly sour smell of crushing cane hangs heavily in the air.
The site at Isis Central Sugar Mill, 300km north of Brisbane, will soon be home to ponds growing algae fed by the mill's wastewater.
"If it works out properly, we should be able to do it at a viable proposition so it assists the growers to be able to put biodiesel into their farm tractors," chief operating officer Craig Wood said.
"This will reduce the amount of greenhouse gases that the whole industry uses in this area."
The term algae refers to a large group of photosynthetic organisms that use light, water and carbon dioxide to grow.
Mr Wood said the mill would harvest the carbon dioxide created when the mill burnt fibre left over from crushing cane to make electricity and use the nutrients in the wastewater to feed the algae.
Timeline and viability
Dr Harrison said using algae to filter wastewater had many positive flow-on effects.
"It potentially allows that water to be reused in a process within the facility," he said.
Water that did run off would be cleaner, reducing the volume of nutrients such as nitrogen and phosphorus reaching rivers and the ocean where they can cause harmful algae blooms.
But Dr Harrison said scaling up to make algal production a viable business or diversification opportunity would be the biggest challenge.
"There's a lot of technical know-how … but the test is going to be how those processes perform through scale-up," he said.
"[The timelines] will primarily be driven, I think, by the capacity to reach sufficient scale where cost of production comes down to a point where it's competitive with existing fuels in the market."
Dr Harrison said it was part of decarbonising the transport system.
"The transformation of algae into biofuels and the integration of those biofuels into our existing liquid fuel network is one of those key opportunities," he said.
"It's not just about how the fuels are produced, it's also about the regulatory and policy environment surrounding that sector."
Fuel as food
Back at the mill, Mr Wood is hopeful the ponds will be under construction by the end of the year.
"Hopefully by next crushing we'll have a good product coming out," he said.
"We've just got to make sure the feasibility is right, but all our prospects at the moment are looking towards it being a good, viable project."
Mr Wood said the mill may also look at using the algae as a human food source down the track.
"The chances of being able to do that are quite good," he said.
"It would be a good diversification for the algae so we're not sustained on one particular type of product.
"There is no reason why we couldn't go to proteins – that will help with food for everybody, everyone can be eating a green hamburger," he laughed.
|
<urn:uuid:40170850-f509-4106-9fd9-103be5d6e363>
|
{
"dump": "CC-MAIN-2023-50",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100290.24/warc/CC-MAIN-20231201151933-20231201181933-00693.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9628258347511292,
"pii_count": 0,
"score": 2.875,
"token_count": 937,
"url": "https://www.abc.net.au/news/2023-11-13/sugar-mill-using-wastewater-to-grow-algae-for-biofuel-bundaberg/103083672"
}
|
It's the biofuel option that you and your car can use, and scientists hope it could soon be coming to a bowser near you.
- Algae can be used for human consumption as well as a fuel source
- Unlike other biofuel crops, algae doesn't compete for space with food farming
- A Queensland sugar mill is planning to use its wastewater to grow algae for biofuel, with hopes it will one day be available to the public at the bowser
While you might recognise algae growing in your fish tank or floating on a lake, it can be transformed into a human food source that can also power your car.
And unlike some other biofuels, growing algae doesn't compete for space with food crops.
"We see agricultural land being used to produce crops specifically for fuel – sugar-type crops or starch-producing crops," Queensland University of Technology biochemistry associate professor Mark Harrison said.
"That's food that could be going into people's mouths instead of fuel tanks."
While eating algae is becoming more popular, such as nori seaweed or algal oil in dietary supplements, Dr Harrison has no doubt algae will also feature at the fuel bowser.
"We will get to a point where you will go to your local petrol bowser and you will fill up your vehicle with biodiesel produced from algae," he said.
With demand for fossils fuels including coal, oil and gas predicted to peak by the end of the decade, the International Energy Agency is asking countries to embrace clean energy technologies.
The first step is to grow the algae.
In an unassuming paddock next to a sugarcane mill in Queensland's Bundaberg region, the sickly sweet and slightly sour smell of crushing cane hangs heavily in the air.
The site at Isis Central Sugar Mill, 300km north of Brisbane, will soon be home to ponds growing algae fed by the mill's wastewater.
"If it works out properly, we should be able to do it at a viable proposition so it assists the growers to be able to put biodiesel into their farm tractors," chief operating officer Craig Wood said.
"This will reduce the amount of greenhouse gases that the whole industry uses in this area."
The term algae refers to a large group of photosynthetic organisms that use light, water and carbon dioxide to grow.
Mr Wood said the mill would harvest the carbon dioxide created when the mill burnt fibre left over from crushing cane to make electricity and use the nutrients in the wastewater to feed the algae.
Timeline and viability
Dr Harrison said using
|
algae to filter wastewater had many positive flow-on effects.
"It potentially allows that water to be reused in a process within the facility," he said.
Water that did run off would be cleaner, reducing the volume of nutrients such as nitrogen and phosphorus reaching rivers and the ocean where they can cause harmful algae blooms.
But Dr Harrison said scaling up to make algal production a viable business or diversification opportunity would be the biggest challenge.
"There's a lot of technical know-how … but the test is going to be how those processes perform through scale-up," he said.
"[The timelines] will primarily be driven, I think, by the capacity to reach sufficient scale where cost of production comes down to a point where it's competitive with existing fuels in the market."
Dr Harrison said it was part of decarbonising the transport system.
"The transformation of algae into biofuels and the integration of those biofuels into our existing liquid fuel network is one of those key opportunities," he said.
"It's not just about how the fuels are produced, it's also about the regulatory and policy environment surrounding that sector."
Fuel as food
Back at the mill, Mr Wood is hopeful the ponds will be under construction by the end of the year.
"Hopefully by next crushing we'll have a good product coming out," he said.
"We've just got to make sure the feasibility is right, but all our prospects at the moment are looking towards it being a good, viable project."
Mr Wood said the mill may also look at using the algae as a human food source down the track.
"The chances of being able to do that are quite good," he said.
"It would be a good diversification for the algae so we're not sustained on one particular type of product.
"There is no reason why we couldn't go to proteins – that will help with food for everybody, everyone can be eating a green hamburger," he laughed.
|
The Zika virus, a mosquito-borne flavivirus, has been a significant public health threat in recent years. While it has low mortality rates, the virus is known to cause symptoms like red eyes, rashes, headache, muscle and joint pain lasting for two to seven days. But one of the most concerning aspects of Zika is its potential impact on pregnant women and their unborn babies.
The virus also results in congenital Zika syndrome, leading to severe birth defects such as microcephaly, where babies are born with abnormally small heads due to incomplete brain development. This threat to maternal and child health raised alarms worldwide, prompting health authorities to intensify efforts to investigate and understand Zika’s implications on public health.
Now, a study has looked at the risks of Zika infection in pregnant women who have previously contracted dengue and harbour pre-existing dengue antibodies. Its findings reveal that these mothers have a higher chance of giving birth to babies with severe birth defects than mothers who lack dengue antibodies.
In order to test out their hypothesis, the research team first infected non-human primates with the dengue virus in order to build immunity. Subsequently, the dengue-immune females were bred and exposed to the Zika virus early in gestation. A control group received Zika infection during pregnancy without prior dengue exposure.
The differences in the outcomes were stark: foetuses from dengue-immune mothers infected with Zika displayed severe microcephaly — a birth defect where a baby's head is smaller than expected. They exhibited head circumferences approximately 40% smaller than healthy foetuses throughout pregnancy, while also showing extensive brain abnormalities, including a thinner cerebral cortex, neuronal destruction, haemorrhaging, inflammation, and calcium accumulation — signatures of severe Zika-related damage.
The control group, however, showed mild to no birth defects, underscoring the role of dengue antibodies in Zika infection outcomes during pregnancy.
These findings may explain why Zika infection outcomes in human pregnancy seem to vary dramatically. The study also opens the door to reducing Zika-related birth defects by devising strategies that target harmful maternal antibodies and prevent them from reaching the foetus. Most importantly, it comes at an opportune time for India, with the country experiencing an upward trend in Zika infections lately.
Heightened Zika Surveillance in Maharashtra
The confirmation of the seventh Zika virus case in Maharashtra, specifically in Pune, over the weekend has prompted the state's health department to issue stringent directives for increased surveillance.
The most recent case emerged in Pratik Nagar in Pune’s Yerwada, involving a 64-year-old woman who began exhibiting symptoms on November 5. Subsequently, the individual received a confirmed Zika virus diagnosis at Jehangir Hospital in Pune on November 9. What is encouraging is that the patient has since recovered, and investigations indicate that her five close contacts have displayed no symptoms. Notably, the patient had previously travelled to Kerala in October.
Meanwhile, comprehensive monitoring within a 5-kilometre radius of the Zika patients’ locations, specifically targeting pregnant women, has been ordered. The emphasis extends beyond childbirth, necessitating sustained follow-ups to ensure continued vigilance and care.
Dr Pratapsingh Sarnikar, the joint director of health, highlighted that Maharashtra has recorded a total of ten Zika cases within the past three years, with no significant reports of widespread transmission or severe complications. However, the imperative for heightened surveillance remains particularly critical concerning pregnant women, owing to the potential risks associated with congenital Zika syndrome.
This development in Maharashtra comes in the wake of rising Zika cases in neighbouring Karnataka, intensifying concerns across the region. Both states are taking proactive measures to combat the spread of Zika.
For weather, science, space, and COVID-19 updates on the go, download The Weather Channel App (on Android and iOS store). It's free!
For Bobby Schaeffer, observing the weather in Kotzebue is a crucial daily task. In the continuously changing Arctic, the ocean freezes later in recent years, exposing coastal communities to storms. And the warming waters affect seals and birds that locals rely on for subsistence.
Schaeffer is one of the Inupiaq experts who contributed to almost 10,000 observations about the changing environment in a research paper published in May in the journal Arctic Science. The research project aimed to continue tracking Arctic change, support the documentation of Indigenous knowledge and nurture the next generation of Indigenous scholars, said Donna Hauser, the paper’s primary author. The next step is finding more practical applications for the observations database.
“Arctic Alaskans have been stewarding and monitoring the environment since time immemorial,” Hauser said. “Those voices, I think, in science have been underrepresented. ... So how do we center Indigenous perspectives and observations, particularly ones that have been neglected for a long time period?”
The research paper is based on the data gathered between 2006 and 2021, combining earlier databases with observations from the Alaska Arctic Observatory and Knowledge Hub — the partnership between the University of Alaska Fairbanks researchers and Iñupiaq observers.
During the latest step, between 2016 and 2021, observers from Kotzebue, Point Hope, Wainwright, Utqiaġvik and Kaktovik provided regular reports about sea ice, wildlife, weather and coastal waters and collected data on water salinity and temperature.
In the paper, physical measurements — such as water temperature and salinity — were paired with narrative observations to show what that oceanographic data means for the locals’ food security, travel safety and infrastructure, Hauser said.
“That’s where you get the context of what does it mean that it’s been so hot,” she said. “By pairing those with some community observations, we get the broader, long-term, holistic perspective that comes from the Indigenous knowledge of the place and being reliant on the land.”
Bobby Schaeffer joined the Knowledge Hub as an observer in 2018 but he has been in tune with the environmental changes around him long before that.
“I’ve always been observant because my father was very, very alert to the changes. He was always talking about the changes that he had witnessed during his lifetime,” Schaeffer said. “He had really big concerns about how they will affect our ability to hunt and be successful.”
Historically, Inupiaq peoples relied on their observations and stories to pass down their knowledge about the environment, Schaeffer said.
“The way to remember was through language and therefore discussing the changes with others and then talking about the changes to their children,” Schaeffer said.
In recent years, paying attention to the changes in the Arctic has become even more important because “the weather changed tremendously,” Schaeffer said. In the Kotzebue area, he said the seasons are shifting, with spring coming a month earlier and fall arriving much later than in September. The changes are speeding up permafrost melt, increasing the number of coastal storms and affecting travel and food security in the Arctic.
“We talk about the weather all the time. It drives everything: It drives stronger storms. It affects the landscape and especially on the coast, where natural erosion is occurring,” he said.
In 2019, the Kotzebue area saw extremely diminished sea ice and an uncharacteristically early breakup, which allowed bearded seal hunters to start hunting earlier that year, Hauser said. In July of the same year, the water in Kotzebue Sound was so hot that when the hunters would normally be fishing for king crabs, the crabs moved to deeper, cooler waters.
This past year, Kotzebue and other Arctic communities were battered by storms, including Typhoon Merbok in September 2022, which hit Western Alaska towns south of Kotzebue the hardest but also affected Kotzebue and even Utqiaġvik.
“We’ve never seen water this high before. People’s homes got washed away,” Schaeffer said. “People’s lives were in danger.”
The importance of recording these changes is clear to Schaeffer.
“You see all this happening but the policymakers are slow to react. If we get past that tipping point, we are at a point of no return,” Schaeffer said. “The reason I’m really concerned is, I have children and grandchildren and great-grandchildren.”
In the process of collecting data, scientists worked with the observers to understand what changes they are the most concerned about, Roberta Tuurraq Glenn said. Glenn is a Knowledge Hub project coordinator and co-author of the paper who previously published a story map that communicates the effects of climate change from the perspective of local observers.
To address the priorities and needs of the local observers, several projects based on the observations came about — including the whaling trail mapping project in Utqiaġvik and the Ocean, Snow and Sea Ice Monitoring project in Kotzebue.
“People are concerned about Kotzebue Sound, the sea ice conditions and how the ocean chemistry is going to affect the sea ice conditions and their hunting for that year,” Glenn said.
Overall, the Knowledge Hub researchers and observers are now looking for more ways to connect their observations with people and organizations that can utilize them in scientific and decision-making contexts. For example, they are looking into how detailed observations of sea ice conditions can inform the National Weather Service and its sea ice data tools, Glenn said.
“Our database of 10,000 observations — that’s a sitting resource that no one has used or been able to use or has thought to use,” Glenn said. “There is a database of observations here that you can just scroll through, and you can get a sense of what’s going on, of what people care about, of what people are observing, and what those observations mean for community equity and safety.”
Focusing on practical applications of the data also reflects the ability of the Indigenous communities to adapt to the changes around them, Glenn said.
“We’re still hunting animals, and we’re still able to carry out our traditional subsistence lifestyles,” Glenn said. “It’s really important for the outside world to know that we are still here, and we’re still thriving.”
|
<urn:uuid:910c7d19-1b00-454e-a1b6-044da3f0dc97>
|
{
"dump": "CC-MAIN-2023-40",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511000.99/warc/CC-MAIN-20231002132844-20231002162844-00732.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.952997624874115,
"pii_count": 0,
"score": 2.90625,
"token_count": 1377,
"url": "https://www.adn.com/alaska-news/science/2023/08/06/indigenous-knowledge-holders-share-thousands-of-observations-on-the-changing-arctic-in-a-new-study/"
}
|
For Bobby Schaeffer, observing the weather in Kotzebue is a crucial daily task. In the continuously changing Arctic, the ocean freezes later in recent years, exposing coastal communities to storms. And the warming waters affect seals and birds that locals rely on for subsistence.
Schaeffer is one of the Inupiaq experts who contributed to almost 10,000 observations about the changing environment in a research paper published in May in the journal Arctic Science. The research project was aimed to continue tracking Arctic change, support documenting Indigenous knowledge and nurture the next generation of Indigenous scholars, said the primary author of the paper Donna Hauser. The next step is finding more practical applications for the observations’ database.
“Arctic Alaskans have been stewarding and monitoring the environment since time immemorial,” Hauser said. “Those voices, I think, in science have been underrepresented. ... So how do we center Indigenous perspectives and observations, particularly ones that have been neglected for a long time period?”
The research paper is based on the data gathered between 2006 and 2021, combining earlier databases with observations from the Alaska Arctic Observatory and Knowledge Hub — the partnership between the University of Alaska Fairbanks researchers and Iñupiaq observers.
During the latest step, between 2016 and 2021, observers from Kotzebue, Point Hope, Wainwright, Utqiaġvik and Kaktovik provided regular reports about sea ice, wildlife, weather and coastal waters and collected data on water salinity and temperature.
In the paper, physical measurements — such as water temperature and salinity — were paired with narrative observations to show what that oceanographic data means for the locals’ food security, travel safety and infrastructure, Hauser said.
“That’s where you get the context of what does it mean that it’s been so hot,” she said. “By pairing those with some community observations, we get the broader, long-term, holistic perspective that comes from the Indigenous knowledge of the place and being reliant on the land.”
Bobby Schaeffer joined the Knowledge Hub as an observer in 2018 but he has been in tune with the environmental changes around him long before that.
“I’ve always been observant because my father was very, very alert to the changes. He was always talking about the changes that he had witnessed during his lifetime” Schaeffer said. “He had really big concerns about how they will
|
affect our ability to hunt and be successful.”
Historically, Inupiaq peoples relied on their observations and stories to pass down their knowledge about the environment, Schaeffer said.
“The way to remember was through language and therefore discussing the changes with others and then talking about the changes to their children,” Schaeffer said.
In recent years, paying attention to the changes in the Arctic has become even more important because “the weather changed tremendously,” Schaeffer said. In the Kotzebue area, he said the seasons are shifting, with spring coming a month earlier and fall arriving much later than in September. The changes are speeding up permafrost melts, increasing the number of coastal storms and affecting travel and food security in the Arctic.
“We talk about the weather all the time. It drives everything: It drives stronger storms. It affects the landscape and especially on the coast, where natural erosion is occurring,” he said.
In 2019, the Kotzebue area saw extremely diminishing sea ice and uncharacteristically early breakup, which allowed bearded seal hunters to start hunting earlier that year, Hauser said. In July of the same year, the water in Kotzebue Sound was so hot that when the hunters would normally be fishing for king crabs, the crabs moved to deeper cooler waters.
This past year, Kotzebue and other Arctic communities were battered by storms, including Typhoon Merbok in September 2022, which hit Western Alaska towns south of Kotzebue the hardest but also affected Kotzebue and even Utqiaġvik.
“We’ve never seen water this high before. People’s homes got washed away,” Schaeffer said. “People’s lives were in danger.”
The importance of recording these changes is clear to Schaeffer.
“You see all this happening but the policymakers are slow to react. If we get past that tipping point, we are at a point of no return,” Schaeffer said. “The reason I’m really concerned is, I have children and grandchildren and great-grandchildren.”
In the process of collecting data, scientists worked with the observers to understand what changes they are the most concerned about, Roberta Tuurraq Glenn said. Glenn is a Knowledge Hub project coordinator and co-author of the paper who previously published a story map that communicates the effects of climate change from the perspective of local observers.
To address the priorities and needs of the local observers, several projects based on the observations came about — including the whaling trail mapping project in Utqiaġvik and Ocean, Snow and Sea Ice Monitoring project in Kotzebue.
“People are concerned about Kotzebue Sound, the sea ice conditions and how the ocean chemistry is going to affect the sea ice conditions and their hunting for that year,” Glenn said.
Overall, the Knowledge Hub researchers and observers are now looking for more ways to connect their observations with people and organizations that can use them in scientific and decision-making contexts. For example, they are looking into how detailed observations of sea ice conditions can inform the National Weather Service and its sea ice data tools, Glenn said.
“Our database of 10,000 observations — that’s a sitting resource that no one has used or been able to use or has thought to use,” Glenn said. “There is a database of observations here that you can just scroll through, and you can get a sense of what’s going on, of what people care about, of what people are observing, and what those observations mean for community equity and safety.”
Focusing on practical applications of the data also reflects the ability of the Indigenous communities to adapt to the changes around them, Glenn said.
“We’re still hunting animals, and we’re still able to carry out our traditional subsistence lifestyles,” Glenn said. “It’s really important for the outside world to know that we are still here, and we’re still thriving.”
|
The US government has received over 350 new reports of what the US government terms “unidentified aerial phenomenon,” commonly known as UFOs, since March of 2021 – roughly half of which are so far unexplained, according to a report from the Office of the Director of National Intelligence released on Thursday.
According to the report, the Pentagon office responsible for tracking and studying the sightings has preliminarily identified 163 of the reports as “balloon or balloon-entities.” A handful of other reports have been attributed to drones, birds, weather events or airborne debris like plastic bags.
But “initial characterization does not mean positively resolved or unidentified,” the report cautioned. And the remaining 171 reported sightings of UAPs or UFOs continue to be unexplained by the US government.
“Some of these uncharacterized UAP appear to have demonstrated unusual flight characteristics or performance capabilities and require further analysis,” the report found.
In short, the intelligence community and the Pentagon still appear to have no explanation for at least some of a series of mysterious flying objects that have been seen moving through restricted military airspace over the last several decades. The majority of the new reports came from US Navy and Air Force pilots and operators “who witnessed UAP during the course of their operational duties and reported the events,” according to the report.
Although the report warned that UAP “pose a safety of flight and collision hazard to air assets” that could require pilots to “adjust flight patterns,” the report stated that there have been no reported collisions between US aircraft and UAP to date.
The Defense Department, under pressure from Congress to investigate so-called UFO or UAP sightings, has actively encouraged pilots and other personnel to report unexplained sightings. The intelligence community released its first report on the matter in 2021.
That report examined 144 reports of UAPs, only one of which investigators were able to explain by the end of the study. Investigators found no evidence that the sightings represented either extraterrestrial life or a major technological advancement by a foreign adversary like Russia or China, but acknowledged that either remains a possible explanation.
Congress in its year-end defense spending bill then required the Pentagon and the intelligence community to study and report on the matter.
The Thursday report showed a dramatic increase in reported incidents since the 2021 report was issued, an increase that investigators attribute in part to “a better understanding of the possible threats that UAP may represent, either as safety of flight hazards or as potential adversary collection platforms” and in part due to “reduced stigma surrounding UAP reporting.”
Although some of the 366 newly identified reports cover incidents that occurred in the 17 years prior to March of 2021, 250 of the recorded sightings have taken place since that date.
The Thursday report acknowledged the ongoing possibility that the sightings may represent a foreign intelligence-collection platform, but investigators do not appear to have amassed any evidence to support that conclusion.
“UAP events continue to occur in restricted or sensitive airspace, highlighting possible concerns for safety of flight or adversary collection activity,” the report said. “We continue to assess that this may result from a collection bias due to the number of active aircraft and sensors, combined with focused attention and guidance to report anomalies.”
The Pentagon and the intelligence community “will continue to investigate any evidence of possible foreign government involvement in UAP events,” the report said.
California Democratic Rep. Adam Schiff, the former House Intelligence Committee chair, welcomed the release of the report.
“I appreciate the effort undertaken by the ODNI to study and characterize unidentified aerial phenomena reports, and their commitment to ensuring transparency by releasing an unclassified summary to the American public. … Unidentified aerial phenomena remain a national security matter, and I will continue to support thorough investigations of all UAP reports and oversight by the Congress.”
|
Column: In the desert, history blooms as Arizona tops records with a fifth female governor
The state, which has a proud history of going its own way, boasts another, more salutary distinction: When Democrat Katie Hobbs was inaugurated Thursday, it marked the formal installation of Arizona’s fifth female governor, a number that easily surpasses that of any other state.
(Hobbs took the oath of office Monday as required by the state constitution. The public ceremony was delayed for observance of the New Year holiday.)
Kansas and New Hampshire, the runners-up to Arizona, have had three women serve as governor. Oregon will match that number on Monday when Democrat Tina Kotek is sworn in.
A handful of states have had two. Nineteen, including cutting-edge California, have never had a female chief executive.
So what is it about Arizona? Is it something in the water? Or the lack of it?
The answer, it seems, lies in some combination of the state’s frontier history, the rascality of two sordid politicians and, perhaps above all, an unusual line of succession that has made the secretary of state next up when the governor leaves — or is pushed out — early.
Paradoxically, a dash of sexism may have also helped elevate women to the state’s highest office.
From its beginning as a state, Arizona has been less wed than others to traditional gender roles. Unlike in the snooty East, with its fixed ways and straitened norms, it was not unusual to find women holding positions of authority in the less-settled West.
“In large part you’re talking about a cowboy culture where women literally held down the fort, or held down the ranch,” said Stacy Pearson, a Democratic strategist in Phoenix.
Indeed, Arizona’s admission to the union was delayed until 1912 due to its push for women’s suffrage. The 19th Amendment, guaranteeing women the right to vote, was not ratified until eight years later.
Still, accustomed as Arizona voters may have been to powerful women, it was not until 1988 that the state got its first female governor, Democrat Rose Mofford. Because Arizona has no lieutenant governor, Mofford, the secretary of state, became governor when Republican Evan Mecham was impeached and convicted of obstructing justice and misusing state funds.
That established a pattern of female secretaries of state moving into the top job. (That same year, Arizona voters overwhelmingly passed a ballot measure cleaning up language in the state constitution to make it clear women were eligible to serve as chief executive.)
In 1997, Secretary of State Jane Hull became governor when fellow Republican Fife Symington stepped down after being convicted of fraud. Hull won a full term in 1998 and became one of the “Fabulous Five” — the women who were elected in Arizona that year as governor, secretary of state, attorney general, treasurer and superintendent of public instruction.
It was the first and only time that a state has been run by an all-female slate of top officeholders.
Arizona’s Democratic Atty. Gen. Janet Napolitano was elected its third female governor in 2002, and Secretary of State Jan Brewer, a Republican, became the fourth in 2009 when Napolitano resigned to lead the Department of Homeland Security under President Obama.
Each of those women struck a blow for gender equality. But some less-than-broad-minded thinking may have also helped contribute to Arizona’s record number of female governors.
The word “secretary” in secretary of state suggests an administrative job that many women — either by choice or lack of options — had been successfully holding down for decades. Many voters, consciously or not, were probably predisposed to support a female running for that office, said campaign strategist Chuck Coughlin, a Republican-turned-independent who helped elect Brewer secretary of state and reelect her as governor.
Serving as secretary of state put Mofford, Hull and Brewer in place to become chief executive when the job opened up.
The post also served as a springboard for Hobbs, the overseer of Arizona’s elections, who defeated Republican Kari Lake in November after the TV personality and Trump wannabe turned off voters by shamelessly parroting his election lies.
On the same ballot, voters also approved a measure creating the job of lieutenant governor, to be elected on a joint ticket with Arizona’s governor starting in 2026. Going forward, that underling will take over if a governor leaves office early.
By now, though, it seems women don’t need the beneficial line of succession that made Brewer and the others governor. The Hobbs-Lake matchup demonstrated that.
“I think Arizona voters have simply gotten used to having a woman chief executive,” said Napolitano, who headed the University of California system for nearly seven years before stepping down in 2020 to teach public policy at UC Berkeley.
She wondered when California will break its “gubernatorial gender barrier.”
Good question. Maybe 2026?
Get the latest from Mark Z. Barabak
Focusing on politics out West, from the Golden Gate to the U.S. Capitol.
|
Climate change and overuse are depleting the Colorado River and threatening the future of the American Southwest. Federal scientists estimate that the river has declined 20 percent over the last century, and they predict it will drop further if the region continues to warm.
Demand for Colorado River water has not dropped to match the dwindling supplies: Cities and suburbs that rely on the river have grown exponentially. Water-hungry industries and a desert region that grows and ships food all over the world continue to expand.
The 20th century was a time for building dams and diverting water across the region to power its growth. Now, water managers and some residents of the Southwest are re-examining our relationship with water to figure out how we can continue to live — and thrive — here with less.
Listen to Parched, a podcast about people who rely on the river that shaped the West – and have ideas to save it.
What if we moved water from the Mississippi River?
As the Colorado River gets overused, replenishing its dwindling water supplies could be as straightforward as piping in water from other U.S. river basins.
This idea of moving water from places where it's considered plentiful to dry regions that need it isn’t new. In the 1960s, a billionaire California industrialist pitched his idea for the North American Power and Water Alliance, which imagined a continent-sized network of water pipelines in the spirit of the U.S. highway system.
There are lots of different versions of this idea proposed to prop up the Colorado River. These include tapping the Missouri River, a tributary of the Columbia River in the Pacific Northwest or the Bear River in Utah.
But moving water — or floodwater — from the Mississippi River is the idea that has gained the most public attention. It has been touted as a possible solution to accommodate the Colorado River drought by people like comedian Bill Maher and Republican Arizona state legislator Tim Dunn.
The federal government studied some of the water-moving proposals in 2012 and concluded that the two main drawbacks are cost and time.
The U.S. Bureau of Reclamation estimated it would cost at least $14 billion and take 30 years to build the infrastructure to bring Mississippi River water to the Colorado River Basin. Much of that cost would come from the infrastructure itself, from acquiring the land and rights-of-way to building the pipelines, pumps and other equipment to move water across the country.
But Edie Zagona, a water resources engineer at the University of Colorado, said that was just a guess. Given that it’s been over a decade since the agency released its estimate, Zagona said building a pipeline would likely cost at least $30 billion.
In 2021, Arizona lawmakers, including Dunn, approved a resolution to ask the federal government to resume its formal inquiry into the idea.
Even proponents of this idea acknowledge it would take a long time and couldn’t resolve the short-term possibility that the country’s two largest reservoirs may drop so much they wouldn’t be able to deliver Colorado River water to Arizona and California.
What if we removed salt from seawater?
Humans thirsty for freshwater have always had their eyes on the ocean.
For thousands of years, people have worked to find ways to remove that salt and keep the fresh water. The favored method today is to use membranes that trap salty particles and let the water pass through. With the help of lots of energy to push the water through the membranes, it’s the basic operation behind most modern-day desalination projects worldwide.
The U.S. desalination boom that started in the 1970s has grown to include projects in nearly 36 states. California has embraced it the most. Researchers say desalination could eventually provide as much as half of that state’s potable water, and some people in the regional water planning community hope it could lead California cities to stop using Colorado River water in the future.
But desalination has major downsides.
The first is economic: Desalination plants are expensive to build and maintain, and they need a lot of electricity to power the pumps that push seawater through filters. Ratepayers are usually on the hook for those costs, which disproportionately affect low-income households.
The other major drawback is desalination’s waste problem: The primary desalination byproduct is super salty waste brine that has to be disposed of. Governments and companies that operate desalination plants say this brine can be discharged safely, but environmental groups say the salty plume can kill marine life and disrupt sensitive ecosystems.
However, some proponents say smaller desalination plants and ones built with certain features can offer promising environmental compromises.
In Dana Point, California, the Doheny Ocean Desalination Plant will be built to clean up ocean water for the South Coast Water District. According to plans, the plant will have a smaller footprint and will get 15 percent of its power from a new solar energy project. To prevent the pumps from sucking in and killing marine life, the pipes that draw in seawater will be built under a sandy seabed. Dana Point will also send its salty brine back into the ocean where an already existing wastewater treatment plant does the same.
What if we banned grass lawns?
For ideas on how to cut municipal water use, cities along the Colorado River often look to Las Vegas.
Sin City drew people to this area of the Mojave Desert with the promise of year-round sunny weather and green lawns made of Kentucky bluegrass. It was an oasis in the middle of the desert — except it was unsustainable.
In the late 1980s, when several planned communities were being built, the population of Clark County, home to Las Vegas, was 700,000. Today, that number is closer to 2.3 million. All of those people need water, which comes almost exclusively from the Colorado River supplies stored in Lake Mead.
By the 1990s, the regional water authority in Las Vegas began paying homeowners and businesses to tear out their grass lawns and replace them with native trees, rocks, cacti and artificial grass. They even created a team of water cops to monitor use throughout the city, which includes educating people and handing out fines. But as Lake Mead has dropped to near-emergency levels, Nevada has had to take more drastic measures.
By 2027, Nevadans won’t be able to use Colorado River water to keep decorative grass alive, according to a state law passed in 2021.
Other cities along the Colorado River are paying attention.
By November 2022, 30 water agencies that supply water from Denver to San Diego committed to removing 30 percent of non-functional grass. But the agreement doesn’t include anything binding. Nevada is still the only state that has banned watering ornamental grass lawns with Colorado River water.
Still, some cities across the Colorado River Basin are realizing that they can do more to save water at a time when every drop counts.
But Vegas’ experience shows the longer those cities wait, the more expensive it’ll be if they do eventually decide to outlaw lawns or require other landscape changes. Since the 1990s, when Nevada first started its cash-for-grass program, the water authority has paid people and businesses close to $300 million to rip out grass.
What if we recycled all of our water?
As freshwater gets scarcer in the Southwest, some people are looking to recycle the water we use in our homes and on farms.
Climate change and overuse are depleting the Colorado River and threatening the future of the American Southwest. Federal scientists estimate that the river has declined 20 percent over the last century, and they predict it will drop further if the region continues to warm.
Demand for Colorado River water has not dropped to match the dwindling supplies: Cities and suburbs that rely on the river have grown exponentially. Water-hungry industries and a desert region that grows and ships food all over the world continue to expand.
The 20th century was a time for building dams and diverting water across the region to power its growth. Now, water managers and some residents of the Southwest are re-examining our relationship with water to figure out how we can continue to live — and thrive — here with less.
Listen to Parched, a podcast about people who rely on the river that shaped the West – and have ideas to save it.
What if we moved water from the Mississippi River?
As the Colorado River gets overused, replenishing its dwindling water supplies could be as straightforward as piping in water from other U.S. river basins.
This idea of moving water from places where it's considered plentiful to dry regions that need it isn’t new. In the 1960s, a billionaire California industrialist pitched his idea for the North American Power and Water Alliance, which imagined a continent-sized network of water pipelines in the spirit of the U.S. highway system.
There are lots of different versions of this idea proposed to prop up the Colorado River. These include tapping the Missouri River, a tributary of the Columbia River in the Pacific Northwest, or the Bear River in Utah.
But moving water — or floodwater — from the Mississippi River is the idea that has gained the most public attention. It has been touted as a possible answer to the Colorado River drought by people like comedian Bill Maher and Republican Arizona state legislator Tim Dunn.
The federal government studied some of the water-moving proposals in 2012 and concluded that the two main drawbacks are cost and time.
The U.S. Bureau of Reclamation estimated it would cost at least $14 billion and take 30 years to build the infrastructure to bring Mississippi River water to the Colorado River Basin. Much of that cost would come from the infrastructure itself, from acquiring the land and rights-of-way to building the pipelines, pumps and other equipment to move water across the country.
But Edie Zagona, a water resources engineer at the University of Colorado, said that was just a guess. Given that it’s been over a decade since the agency released its estimate, Zagona said building a pipeline would likely cost at least $30 billion.
In 2021, Arizona lawmakers, including Dunn, approved a resolution to ask the federal government to resume its formal inquiry into the idea.
Even proponents of this idea acknowledge it would take a long time and couldn’t resolve the short-term possibility that the country’s two largest reservoirs may drop so much they wouldn’t be able to deliver Colorado River water to Arizona and California.
What if we removed salt from seawater?
Humans thirsty for freshwater have always had their eyes on the ocean.
For thousands of years, people have worked to find ways to remove that salt and keep the fresh water. The favored method today is to use membranes that trap salty particles and let the water pass through. With the help of lots of energy to push the water through the membranes, it’s the basic operation behind most modern-day desalination projects worldwide.
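Why desalination "needs lots of energy" follows from basic thermodynamics: the pumps must overcome seawater's osmotic pressure. A back-of-the-envelope sketch, assuming seawater is roughly 0.6 mol/L NaCl and using the van 't Hoff relation (both simplifications), gives the theoretical floor:

```python
# Rough thermodynamic floor for reverse-osmosis desalination.
# Assumptions: seawater modeled as 0.6 mol/L NaCl (van 't Hoff factor 2),
# 25 degrees C, and we count only the work done against osmotic pressure.
R = 0.083145    # gas constant, L*bar/(mol*K)
T = 298.15      # temperature, K
molarity = 0.6  # mol/L NaCl, approximate for ~35 g/L seawater
i = 2           # NaCl dissociates into two ions

osmotic_pressure_bar = i * molarity * R * T   # roughly 30 bar
joules_per_m3 = osmotic_pressure_bar * 1e5    # 1 bar = 1e5 Pa; work = P*V for 1 cubic meter
kwh_per_m3 = joules_per_m3 / 3.6e6            # roughly 0.8 kWh per cubic meter

print(f"osmotic pressure ~ {osmotic_pressure_bar:.0f} bar")
print(f"theoretical minimum ~ {kwh_per_m3:.2f} kWh per cubic meter")
```

Real plants run well above this floor, since recovery, friction and pretreatment all add energy on top of the thermodynamic minimum.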
The U.S. desalination boom that started in the 1970s has grown to include projects in some 36 states. California has embraced it the most. Researchers say desalination could eventually provide as much as half of that state’s potable water, and some people in the regional water planning community hope it could lead California cities to stop using Colorado River water in the future.
But desalination has major downsides.
The first is economic: Desalination plants are expensive to build and maintain, and they need a lot of electricity to power the pumps that push seawater through filters. Ratepayers are usually on the hook for those costs, which disproportionately affect low-income households.
The other major drawback is desalination’s waste problem: The primary desalination byproduct is super salty waste brine that has to be disposed of. Governments and companies that operate desalination plants say this brine can be discharged safely, but environmental groups say the salty plume can kill marine life and disrupt sensitive ecosystems.
However, some proponents say smaller desalination plants and ones built with certain features can offer promising environmental compromises.
In Dana Point, California, the Doheny Ocean Desalination Plant will be built to clean up ocean water for the South Coast Water District. According to plans, the plant will have a smaller footprint and will get 15 percent of its power from a new solar energy project. To prevent the pumps from sucking in and killing marine life, the pipes that draw in seawater will be built under a sandy seabed. Dana Point will also send its salty brine back into the ocean where an already existing wastewater treatment plant does the same.
What if we banned grass lawns?
For ideas on how to cut municipal water use, cities along the Colorado River often look to Las Vegas.
Sin City drew people to this area of the Mojave Desert with the promise of year-round sunny weather and green lawns made of Kentucky bluegrass. It was an oasis in the middle of the desert — except it was unsustainable.
In the late 1980s, when several planned communities were being built, the population of Clark County, which includes Las Vegas, was 700,000. Today, it is closer to 2.3 million. All of those people need water, which comes almost exclusively from Colorado River supplies stored in Lake Mead.
By the 1990s, the regional water authority in Las Vegas began paying homeowners and businesses to tear out their grass lawns and replace them with native trees, rocks, cacti and artificial grass. They even created a team of water cops to monitor use throughout the city, which includes educating people and handing out fines. But as Lake Mead has dropped to near-emergency levels, Nevada has had to take more drastic measures.
By 2027, Nevadans won’t be able to use Colorado River water to keep decorative grass alive, according to a state law passed in 2021.
Other cities along the Colorado River are paying attention.
By November 2022, 30 water agencies that supply water from Denver to San Diego committed to removing 30 percent of non-functional grass. But the agreement doesn’t include anything binding. Nevada is still the only state that has banned watering ornamental grass lawns with Colorado River water.
Still, some cities across the Colorado River Basin are realizing that they can do more to save water at a time when every drop counts.
But Vegas’ experience shows the longer those cities wait, the more expensive it’ll be if they do eventually decide to outlaw lawns or require other landscape changes. Since the 1990s, when Nevada first started its cash-for-grass program, the water authority has paid people and businesses close to $300 million to rip out grass.
What if we recycled all of our water?
As freshwater gets scarcer in the Southwest, some people are looking to recycle the water we use in our homes and on farms.
Water recycling, or reuse, is a process that cleans wastewater so that it’s either pristine enough to drink or so it’s at least clean enough to water grass and plants outside.
Just 25 years ago, a federal agency deemed wastewater reuse a “solution of last resort” for our drinking water supplies. In Los Angeles in 2000, public opposition to the idea of drinking wastewater helped shut down a plant that the Los Angeles Department of Water and Power had spent $55 million developing.
But since then, at least 15 water utilities in Colorado River states — including in Colorado’s Aurora and Castle Rock — have built facilities that clean their own wastewater to safe drinking standards, and they are sending that water back to taps in people’s homes.
More cities could embrace potable reuse in the years ahead because states are individually passing rules for how the water is cleaned and enacting safety standards for delivering it back to taps.
While only a handful of water utilities in the Colorado River region are actively cleaning wastewater to drinking standards, hundreds of them are recycling wastewater for non-potable uses like outdoor watering, including on golf courses.
Universally, non-potable recycled water comes through pipes and sprinklers that are painted purple to indicate that the water is unsafe to drink. Not every city can afford to lay down the pipes required to deliver non-potable recycled water, nor find customers to pay for the infrastructure, so there are still hundreds of water utilities that are not recycling their water.
In most of the Colorado River region, individual homeowners and multi-family buildings can set up their own reuse systems. Unlike at the citywide level, they can’t reuse all of their wastewater, but they can separate laundry water and bathwater from toilet and kitchen water. The minimally contaminated water is known as greywater, and homeowners can rig up a way to reuse it outside or clean it through a commercial unit to flush toilets or top off pools.
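The household split described above amounts to a simple classification: laundry and bath water is greywater that can be reused; toilet and kitchen water is not. A minimal sketch, with illustrative fixture names (local plumbing codes define the actual categories):

```python
# Greywater vs. blackwater, per the split the article describes.
# Fixture names here are illustrative assumptions, not code definitions.
GREYWATER = {"laundry", "bathtub", "shower", "bathroom_sink"}
BLACKWATER = {"toilet", "kitchen_sink", "dishwasher"}

def can_reuse_onsite(fixture: str) -> bool:
    """True if a fixture's drain water may feed outdoor irrigation
    or, after treatment in a commercial unit, toilets and pools."""
    if fixture in GREYWATER:
        return True
    if fixture in BLACKWATER:
        return False
    raise ValueError(f"unclassified fixture: {fixture}")

print(can_reuse_onsite("laundry"))  # True
print(can_reuse_onsite("toilet"))   # False
```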
What if we increased the price of water?
Normally when things get scarcer, they get more expensive. But that’s not true with water. Even as climate change makes it less available, we’re not paying a premium price for water.
“Even when you write a check to the local utility, it's not for water. It's for the cost of service,” said Robert Glennon, a water law expert at the University of Arizona. “It's for the cost of the utility to pump the water, treat the water, deliver the water. There is no charge for the commodity of water. None.”
However, there’s evidence that raising the price of water can make people use less. We would have more incentive to conserve or reuse existing water resources if customers had to pay more for a lush lawn.
Tucson, Ariz., experimented with this in the mid-1970s. The city was vastly overusing its groundwater resources and didn’t have access to the Colorado River. It got so bad that the local utility couldn’t guarantee that it could fight fires or deliver water to customers who lived in the hills around town. The sinking water table was also making the ground move, threatening home foundations, sewer pipes, gas lines and anything else buried underground.
This led some members of the City Council in 1976 to raise the price of water. They wanted to encourage conservation and pay for infrastructure to get new water sources.
A few years later, economists at the University of Arizona studied the effects of the change in water rates. They found that people in the city used, on average, 45 fewer gallons of water per day. That means people used about a quarter less water than they had before the rates went up.
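Those two figures together imply a baseline: if 45 gallons per day was "about a quarter" of prior use, Tucson residents were using roughly 180 gallons per person per day before the rate hike. A quick consistency check (the baseline is inferred from the article's numbers, not reported directly):

```python
savings_gpd = 45        # reported average savings, gallons per day
fraction_saved = 0.25   # "about a quarter less"

# Back out the implied usage before and after the rate increase.
implied_baseline = savings_gpd / fraction_saved   # gallons/day before the hike
usage_after = implied_baseline - savings_gpd      # gallons/day after

print(implied_baseline)  # 180.0
print(usage_after)       # 135.0
```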
But the city councilors who had ushered in the changes paid a political price: three were recalled, and a fourth resigned.
“I suspect that what happened to us has scared a lot of people,” said Margot Garcia, who was one of the city officials who was recalled. She surmised that many elected officials are afraid of raising water rates.
Utilities willing to raise water rates need to consider how that might impact low-income residents. One option is to offer subsidies, the way many places do with electricity bills. Another is to keep base rates very low to meet essential water needs. For pricing to effectively influence people’s water use, economists say the prices need to go up precipitously from that base rate to discourage discretionary things like lawns and pools.
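The structure economists describe, a cheap base allotment with steep jumps above it, is usually implemented as increasing block rates. A sketch with made-up tiers and prices (these are illustrative assumptions, not any utility's actual schedule):

```python
# Hypothetical increasing-block water rates. Tier sizes and prices
# are invented for illustration only.
TIERS = [
    (3_000, 0.002),          # first 3,000 gal/month: essential use, near-free
    (7_000, 0.008),          # next 7,000 gal: typical indoor use
    (float("inf"), 0.025),   # everything above: lawns, pools, discretionary use
]

def monthly_bill(gallons: float) -> float:
    """Charge each block of usage at its own rate, cheapest first."""
    bill, remaining = 0.0, gallons
    for block_size, rate in TIERS:
        used = min(remaining, block_size)
        bill += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return round(bill, 2)

print(monthly_bill(3_000))   # essential use only: $6.00
print(monthly_bill(25_000))  # heavy outdoor use: $437.00
```

The point of the steep top tier is that a household's first gallons stay affordable while a lush lawn becomes conspicuously expensive.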
Pricing can be a solution not just for cities but for all users of the Colorado River. The biggest user of all — the Imperial Valley farming district in California — is thinking of raising water rates for the first time in decades to nudge farmers to conserve and to raise funds that can be used to modernize irrigation systems to make them more water efficient.
What if we paid farmers to not use their Colorado River water?
Most of the Colorado River’s water is used for agriculture. Farmers and ranchers control some of the basin's oldest and most protected water rights. These two points underscore why some policymakers have endorsed programs that pay farmers and ranchers to keep their water in the river instead of using it for irrigation.
The potential benefits could be substantial, depending on how many farmers volunteer to participate. Enough water could be saved to promote healthy Colorado River flows and preserve water levels in key reservoirs while avoiding mandatory cutbacks in urban and suburban areas.
One key to the success of such a program, advocates say, is ensuring it can pay farmers and ranchers the true value of their water — meaning the payout would need to take into account the financial consequences of getting rid of livestock or letting fields go fallow.
There’s been a lot of interest in this idea recently. The federal government is now allocating millions, and possibly billions of dollars, to programs that pay farmers to pause the use of their water temporarily.
Farmers downstream of Lake Mead can already get paid to keep their water in the reservoir. But many farmers in other areas are reluctant to endorse the idea. This is one reason a proposal for such a program in Colorado and other upper-basin states has failed to gain traction.
Some agricultural producers are also suspicious that these programs have attracted the interest of Wall Street firms that are buying up farmland and water rights. They worry efforts that end up turning water itself into a profitable commodity — instead of the food it grows — could be a long-term threat to farms and surrounding communities.
What if we farmed differently?
A lot of the food eaten in the U.S. comes from farms and ranches in the Southwestern desert. Cantaloupe, lettuce, almonds, dates, cherries and many other types of food are all grown in Arizona and California, which often don’t get more than two inches of precipitation a year.
The water that nourishes these crops comes from the Colorado River through an extensive maze of aqueducts and irrigation ditches that create an agricultural oasis in the desert.
These areas are home to many large-scale farms because they have a lot of sun, and the temperature is steady and ideal for growing crops — all of which add up to a long growing season. Desert farmers, including ones who use Colorado River water, can also control when and how much water their crops get.
Many farmers in the region water their crops by flooding the fields. It’s an irrigation practice that goes back thousands of years and is particularly effective in ensuring the plants get adequate moisture. But it’s not the most efficient form of irrigation, and it uses a lot of water.
In response, some farmers are looking at ways to better use what little rain falls on the area. To do that, they’re employing technology that can determine the best time of day to water crops and reduce evaporation. Other farmers are going one step further and using drip irrigation, which distributes water directly to the crops.
Another way farmers could reduce their water use would be to switch the crops they grow.
One of the Southwest’s most profitable and widely grown crops is alfalfa, and it is notoriously thirsty. Humans don’t eat this member of the pea family directly, but dairy and beef cows do, and they eat a lot of it.
The crop is shipped to feed cattle all over the U.S.; it’s even exported to feed cows in Saudi Arabia, which has prohibited growing the hay domestically to preserve its own water resources.
Some farmers are experimenting with growing other crops that don’t use as much water, including durum wheat, cotton and certain produce. Tire companies like Bridgestone have been investing in growing guayule, a desert shrub that could create a domestic supply of rubber.
Reporting contributed by Michael Elizabeth Sakas
- The U.S. has the highest maternal mortality rate among developed countries, and seven states have instituted abortion bans while keeping postpartum Medicaid coverage limited to 60 days.
- These states are Arkansas, Idaho, Mississippi, Missouri, South Dakota, Texas and Wisconsin.
- Such a combination could have a devastating effect on maternal mortality rates, experts have told Newsweek.
- States with restrictive abortion bans and laws generally have poorer health outcomes and weaker social safety net programs.
- Studies suggest that 52 percent of pregnancy-related deaths can occur up to a year after birth.
Many U.S. states have for the past year been seeking to extend the healthcare coverage new mothers receive under Medicaid to 12 months after the birth of their child, to protect them during a difficult and vulnerable period.
Some of those states, such as Florida and Georgia, have also legislated to limit or ban abortion following the Supreme Court’s overturning of Roe v. Wade last summer, a shift that experts who spoke to Newsweek have linked to the push to expand postpartum Medicaid coverage; many others have extended postpartum Medicaid without imposing new limits on abortion.
However, there are seven states—Arkansas, Idaho, Mississippi, Missouri, South Dakota, Texas and Wisconsin—that have instituted abortion bans while keeping postpartum Medicaid coverage limited to 60 days.
This combination, experts and campaigners believe, will have a devastating effect on maternal mortality rates, and could come to demonstrate why other states have already moved to increase their coverage.
“There is strong evidence that states with restrictive abortion bans and laws have higher maternal mortality rates,” Cat Duffy, a policy analyst at the National Health Law Program (NHLP), told Newsweek.
A 2022 study by the University of Colorado, Boulder, estimated that maternal mortality rises by 24 percent in the year after a total abortion ban takes effect. A similar 2021 study by Tulane University suggested a 7 percent increase in areas with greater restrictions on abortion access.
As a result of the ban, there would be a greater number of women carrying babies to term, with the risks that go with pregnancy and childbirth—and limited healthcare provision in the months after a child is born makes the period even more dangerous.
“We know that carrying a pregnancy to term has more risks than abortion care,” said Dr. Smita Carroll, an OBGYN in New Mexico and a Physicians for Reproductive Health fellow.
Without extended Medicaid coverage, new mothers, particularly those from poorer backgrounds who might struggle to afford medical bills, are less likely to seek care, and health conditions can go untreated.
“The truth of it is, is that when people don’t have access to affordable healthcare, they aren’t seeing doctors—which means they aren’t getting the care that they need, whether it be preventative or not,” said Blair Wallace, a policy and advocacy strategist at the American Civil Liberties Union in Texas.
Why 60 Days of Medicaid Isn’t Always Enough
Even at the most restrictive end of Medicaid coverage, new mothers still have at least 60 days from the birth of their child to seek medical help. However, this is arguably not long enough: a 2020 report by the Commonwealth Fund found 52 percent of pregnancy-related deaths can occur up to a year after birth.
“This potential loss of coverage is important because continuity of health care coverage is critical to catching and treating the health conditions and emergencies that are in part to blame for the United States’ high maternal mortality and morbidity rates,” said Amy Chen, a senior attorney at NHLP.
Carroll said that maternal mortality “is largely preventable,” and maintaining Medicaid coverage for a year allows doctors such as herself to reduce the chance of pregnancy-related deaths, but also treat other medical conditions, such as postpartum depression, diabetes and heart conditions, that can crop up during and after pregnancy.
“Having any barrier to coverage in the postpartum setting is a direct barrier for accessing necessary care,” Carroll told Newsweek.
To compound the issue, “generally it is true that states with the most restrictive abortion laws also tend to have the weakest social safety net programs and poorer health outcomes,” Duffy said, citing a 2022 report by the Center for American Progress, which notes: “States hostile to abortion also have some of the highest rates of uninsured and comprise the majority of those still refusing to expand Medicaid.”
‘In Texas, I Barely Have Reproductive Rights’
For instance, Texas already ranks eighth worst for maternal mortality rates in the U.S., according to the latest figures from World Population Review. Kaiser Family Foundation analysis of federal data shows that half of all births in the state are covered by Medicaid.
“Currently, in Texas, I barely have reproductive rights,” Wallace said. “We are certainly facing a maternal mortality and generally a pregnancy care crisis.”
Wallace said that when she had a child herself, she had to hand over thousands of dollars to get into the hospital to deliver, despite having pregnancy insurance. Without Medicaid supplementing this, she says she would not have been able to afford that care.
“In Texas, there is a long history of the state not putting the money where they say their values are,” she argues. “They say that they value mothers and children, and yet our maternal mortality data does not reflect that.”
A Texas Health and Human Services Commission spokesperson noted that “lawmakers have submitted bills this legislative session to extend postpartum Medicaid coverage to a full year.” As of March 16, it had been left pending in committee. Newsweek reached out to the governor’s office via email for comment on Friday.
Texas proposed extending postpartum Medicaid coverage to just six months in 2021, but was denied approval by the federal government in 2022. While the state government said this would still have had “long-term benefits of improving continuity of care,” the Texas Tribune reported that it had been rejected as the language of the bill appeared to tacitly exclude women who had abortions.
Responding to the proposal in 2022, Jamila Taylor, then-director of health care reform at The Century Foundation think-tank, wrote that while an extension was a good thing, 12-month postpartum coverage was “the ideal and the most evidence-based approach” to avoiding negative health outcomes.
Legislation to extend postpartum coverage to 12 months was introduced in Idaho in February and in Mississippi and South Dakota in March, while Wisconsin’s health department proposed a 30-day extension in 2022. A bill to extend Medicaid coverage to a year passed the Missouri Senate on March 3, but faces wording issues similar to those of Texas’s bill.
However, with abortion bans already in place, the delay in implementing Medicaid extensions poses an immediate problem for pregnant women in the states. Newsweek reached out to the governor’s office and relevant authorities of the six other states, excluding Texas, for comment via email on Saturday.
Similarly, there are other states, such as Utah and Arizona, that have implemented limits on abortion without yet increasing postpartum Medicaid. Arizona submitted an initial application to extend in 2022, while a 2023 legislative effort in Utah was struck down.
Both Chen and Wallace note that maternal mortality rates are disproportionately higher among black and indigenous women. Wallace said: “Black women are two times more likely to die during childbirth than me, as a white woman—that is despicable; there is just no reason for that.” In fact, the Commonwealth Fund found that the rate among black women was 2.5 times higher than white women, and more than double for indigenous women.
According to Chen, there is even disparity in the coverage individual mothers may receive in states with 60-day postpartum Medicaid.
“The technical coverage time is the end of the month in which the 60th day following the end of the pregnancy falls,” she said. “So if your pregnancy ends early in the month it can be closer to 3 months, but if your pregnancy ends later in the month it’s closer to 2 months.”
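Chen's rule, that coverage ends at the close of the month containing the 60th day after the pregnancy ends, is easy to see with two dates. A minimal sketch (the dates are hypothetical):

```python
import calendar
from datetime import date, timedelta

def coverage_end(pregnancy_end: date) -> date:
    """Last covered day: the end of the month in which the 60th day
    after the end of the pregnancy falls."""
    day_60 = pregnancy_end + timedelta(days=60)
    last_day = calendar.monthrange(day_60.year, day_60.month)[1]
    return day_60.replace(day=last_day)

# Pregnancy ending early in the month: coverage runs nearly 3 months.
early = coverage_end(date(2023, 3, 2))   # 2023-05-31, 90 days later
# Pregnancy ending late in the same month: barely 2 months.
late = coverage_end(date(2023, 3, 30))   # 2023-05-31, only 62 days later
print(early, late)
```

Both mothers lose coverage on the same calendar day, which is exactly the disparity Chen describes: the earlier the pregnancy ends within a month, the more postpartum coverage the mother effectively gets.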
A Dangerous Time
For states that have instituted anti-abortion laws, to avoid their maternal mortality rates worsening, extending Medicaid coverage provides healthcare workers with a better shot of identifying and treating conditions new mothers may have.
“Having a child is an incredibly transformative period of time,” Carroll said, adding that extended Medicaid “really does help in terms of providing vital care at a really important time of their life.”
Since 2022, Kentucky, Louisiana and Oklahoma have increased post-partum Medicaid coverage to 12 months while instituting full abortion bans. Louisiana has the worst maternal mortality rate of all U.S. states, according to the 2023 figures.
It is as-yet unclear what effect these two interventions will have on maternal mortality rates; due to the coronavirus pandemic, official statistics for 2021 were only released on Thursday.
The Political Dimension
While these states may be viewed as looking to prevent unnecessary deaths as they uphold their anti-abortion stance, Wallace believes it to be cynical.
“Polling goes hand-in-hand with policy, and I think that a lot of these states are realizing that some of the laws they put into effect are wildly unpopular,” she told Newsweek. According to a February Gallup poll, the proportion of Americans dissatisfied with abortion laws has risen significantly, to 46 percent, since 2021.
“In some regards, Republicans are saying, ‘well gosh, what can we do to make us look better in this light?’” Wallace added.
As of February, 47 percent of Texans believe abortion laws should be less strict, according to figures by the University of Texas. In neighboring New Mexico—which has no limits on abortion and 12-month postpartum coverage—since the overturning of Roe v. Wade, Carroll says there has been a “large increase” in the number of patients coming to the state from elsewhere, including Texas, seeking abortions.
There are a raft of other states—such as Virginia, California and Hawaii, to name just a few—which have made similar increases in coverage in the past year without changing their abortion laws.
While these changes may not be explicitly linked to the Supreme Court’s ruling on abortion, the U.S. has the highest maternal mortality rate among developed countries, of 21 deaths per 100,000 people, according to UNICEF data, making the issue pressing for all states. As studies show, abortion laws are likely to push the nationwide rate higher.
“I would like to think that it’s people just seizing on an opportunity, but also: it’s time,” Wallace said. “There’s just no good reason that our country should have people in these terrible situations where they’re able to go bankrupt… for giving birth.”
The post These Seven States Are Sleepwalking Into a Maternal Mortality Crisis appeared first on Newsweek.
- The U.S. has the highest maternal mortality rate among developed countries, and seven states have instituted abortion bans while keeping postpartum Medicaid coverage limited to 60 days.
- These states are Arkansas, Idaho, Mississippi, Missouri, South Dakota, Texas and Wisconsin.
- Such a combination could have a devastating effect on maternal mortality rates, experts have told Newsweek.
- States with restrictive abortion bans and laws generally have poorer health outcomes and weaker social safety net programs.
- Studies suggest that 52 percent of pregnancy-related deaths occur up to a year after birth.
Many U.S. states have for the past year been seeking to extend the healthcare coverage new mothers receive under Medicaid to 12 months after the birth of their child, to protect them during a difficult and vulnerable period.
Some of those states, such as Florida and Georgia, have also legislated to limit or ban abortions following the overturning of Roe v. Wade by the Supreme Court last summer—which experts who spoke to Newsweek have linked to expanding postpartum Medicaid coverage—and many have increased postpartum Medicaid without new limits on abortion being imposed.
However, there are seven states—Arkansas, Idaho, Mississippi, Missouri, South Dakota, Texas and Wisconsin—that have instituted abortion bans while keeping postpartum Medicaid coverage limited to 60 days.
This combination, experts and campaigners believe, will have a devastating effect on maternal mortality rates, and could come to demonstrate why other states have already moved to increase their coverage.
“There is strong evidence that states with restrictive abortion bans and laws have higher maternal mortality rates,” Cat Duffy, a policy analyst at the National Health Law Program (NHLP), told Newsweek.
A 2022 study by the University of Colorado, Boulder, found that a year after a total abortion ban is implemented, the maternal mortality rate is estimated to rise by 24 percent. A similar study by Tulane University in 2021 suggested a 7 percent increase in areas with greater restrictions on abortion access.
As a result of the ban, there would be a greater number of women carrying babies to term, with the risks that go with pregnancy and childbirth—and limited healthcare provision in the months after a child is born makes the period even more dangerous.
“We know that carrying a pregnancy to term has more risks than abortion care,” said Dr. Smita Carroll, an OBGYN in New Mexico and a Physicians for Reproductive Health fellow.
Without extended Medicaid coverage, new mothers, particularly those from poorer backgrounds who might struggle to afford medical bills, are less likely to seek care, and health conditions can go untreated.
“The truth of it is, is that when people don’t have access to affordable healthcare, they aren’t seeing doctors—which means they aren’t getting the care that they need, whether it be preventative or not,” said Blair Wallace, a policy and advocacy strategist at the American Civil Liberties Union in Texas.
Why 60 Days of Medicaid Isn’t Always Enough
Even at the most restrictive end of Medicaid coverage, new mothers still have at least 60 days from the birth of their child to seek medical help. However, this is arguably not long enough: a 2020 report by the Commonwealth Fund found that 52 percent of pregnancy-related deaths occur up to a year after birth.
“This potential loss of coverage is important because continuity of health care coverage is critical to catching and treating the health conditions and emergencies that are in part to blame for the United States’ high maternal mortality and morbidity rates,” said Amy Chen, a senior attorney at NHLP.
Carroll said that maternal mortality “is largely preventable,” and maintaining Medicaid coverage for a year allows doctors such as herself to reduce the chance of pregnancy-related deaths, but also treat other medical conditions, such as postpartum depression, diabetes and heart conditions, that can crop up during and after pregnancy.
“Having any barrier to coverage in the postpartum setting is a direct barrier for accessing necessary care,” Carroll told Newsweek.
To compound the issue, “generally it is true that states with the most restrictive abortion laws also tend to have the weakest social safety net programs and poorer health outcomes,” Duffy said, citing a 2022 report by the Center for American Progress, which notes: “States hostile to abortion also have some of the highest rates of uninsured and comprise the majority of those still refusing to expand Medicaid.”
‘In Texas, I Barely Have Reproductive Rights’
For instance, Texas already ranks eighth worst for maternal mortality rates in the U.S., according to the latest figures from World Population Review. Kaiser Family Foundation analysis of federal data shows that half of all births in the state are covered by Medicaid.
“Currently, in Texas, I barely have reproductive rights,” Wallace said. “We are certainly facing a maternal mortality and generally a pregnancy care crisis.”
Wallace said that when she had a child herself, she had to hand over thousands of dollars to get into the hospital to deliver, despite having pregnancy insurance. Without Medicaid supplementing this, she says she would not have been able to afford that care.
“In Texas, there is a long history of the state not putting the money where they say their values are,” she argues. “They say that they value mothers and children, and yet our maternal mortality data does not reflect that.”
A Texas Health and Human Services Commission spokesperson noted that “lawmakers have submitted bills this legislative session to extend postpartum Medicaid coverage to a full year.” As of March 16, that legislation remained pending in committee. Newsweek reached out to the governor’s office via email for comment on Friday.
Texas proposed extending postpartum Medicaid coverage to just six months in 2021, but was denied approval by the federal government in 2022. While the state government said this would still have had “long-term benefits of improving continuity of care,” the Texas Tribune reported that it had been rejected as the language of the bill appeared to tacitly exclude women who had abortions.
Responding to the proposal in 2022, Jamila Taylor, then-director of health care reform at The Century Foundation think-tank, wrote that while an extension was a good thing, 12-month postpartum coverage was “the ideal and the most evidence-based approach” to avoiding negative health outcomes.
State legislation to extend postpartum coverage to 12 months was introduced in Idaho in February, and in Mississippi and South Dakota in March, while Wisconsin’s health department proposed a 30-day extension in 2022. A bill to extend Medicaid to a year passed the Missouri senate on March 3, but faces wording issues similar to those of Texas’s bill.
However, with abortion bans already in place, the delay in implementing Medicaid extensions poses an immediate problem for pregnant women in the states. Newsweek reached out to the governor’s office and relevant authorities of the six other states, excluding Texas, for comment via email on Saturday.
Similarly, there are other states, such as Utah and Arizona, that have implemented limits on abortion without yet increasing postpartum Medicaid. Arizona submitted an initial application to extend in 2022, while a 2023 legislative effort in Utah was struck down.
Both Chen and Wallace note that maternal mortality rates are disproportionately higher among black and indigenous women. Wallace said: “Black women are two times more likely to die during childbirth than me, as a white woman—that is despicable; there is just no reason for that.” In fact, the Commonwealth Fund found that the rate among black women was 2.5 times higher than white women, and more than double for indigenous women.
According to Chen, there is even disparity in the coverage individual mothers may receive in states with 60-day postpartum Medicaid.
“The technical coverage time is the end of the month in which the 60th day following the end of the pregnancy falls,” she said. “So if your pregnancy ends early in the month it can be closer to 3 months, but if your pregnancy ends later in the month it’s closer to 2 months.”
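The month-boundary rule Chen describes is concrete enough to sketch in code. The short Python snippet below is illustrative only (the function name and example dates are hypothetical, not from the article); it shows why effective coverage can range from roughly two to three months depending on when in the month a pregnancy ends:

```python
from datetime import date, timedelta
import calendar

def medicaid_coverage_end(pregnancy_end: date) -> date:
    """Last covered day under the 60-day rule Chen describes:
    coverage runs through the end of the calendar month in which
    the 60th day after the end of the pregnancy falls."""
    day_60 = pregnancy_end + timedelta(days=60)
    last_day = calendar.monthrange(day_60.year, day_60.month)[1]
    return date(day_60.year, day_60.month, last_day)

# Pregnancy ends early in the month: the 60th day lands on May 1,
# so coverage runs through May 31 -- about three months in total.
print(medicaid_coverage_end(date(2023, 3, 2)))   # 2023-05-31

# Pregnancy ends late in the month: the 60th day lands on May 29,
# and coverage still ends May 31 -- closer to two months in total.
print(medicaid_coverage_end(date(2023, 3, 30)))  # 2023-05-31
```

Both example pregnancies lose coverage on the same calendar day, yet the first mother gets 90 days of postpartum coverage and the second only 62, which is the disparity Chen points to.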
A Dangerous Time
For states that have instituted anti-abortion laws, extending Medicaid coverage offers a way to keep maternal mortality rates from worsening: it gives healthcare workers a better shot at identifying and treating conditions new mothers may have.
“Having a child is an incredibly transformative period of time,” Carroll said, adding that extended Medicaid “really does help in terms of providing vital care at a really important time of their life.”
Since 2022, Kentucky, Louisiana and Oklahoma have increased postpartum Medicaid coverage to 12 months while instituting full abortion bans. Louisiana has the worst maternal mortality rate of all U.S. states, according to 2023 figures.
It is as yet unclear what effect these two interventions will have on maternal mortality rates; due to the coronavirus pandemic, official statistics for 2021 were only released on Thursday.
The Political Dimension
While these states may be viewed as looking to prevent unnecessary deaths as they uphold their anti-abortion stance, Wallace believes it to be cynical.
“Polling goes hand-in-hand with policy, and I think that a lot of these states are realizing that some of the laws they put into effect are wildly unpopular,” she told Newsweek. According to a February Gallup poll, the proportion of Americans dissatisfied with abortion laws has risen significantly, to 46 percent, since 2021.
“In some regards, Republicans are saying, ‘well gosh, what can we do to make us look better in this light?’” Wallace added.
As of February, 47 percent of Texans believe abortion laws should be less strict, according to polling from the University of Texas. In neighboring New Mexico—which has no limits on abortion and 12-month postpartum coverage—Carroll says there has been a “large increase” since the overturning of Roe v. Wade in the number of patients coming to the state from elsewhere, including Texas, seeking abortions.
A raft of other states—such as Virginia, California and Hawaii, to name just a few—have made similar increases in coverage in the past year without changing their abortion laws.
While these changes may not be explicitly linked to the Supreme Court’s ruling on abortion, the U.S. has the highest maternal mortality rate among developed countries, at 21 deaths per 100,000 live births, according to UNICEF data, making the issue pressing for all states. As studies suggest, restrictive abortion laws are likely to push the nationwide rate higher.
“I would like to think that it’s people just seizing on an opportunity, but also: it’s time,” Wallace said. “There’s just no good reason that our country should have people in these terrible situations where they’re able to go bankrupt… for giving birth.”
The post These Seven States Are Sleepwalking Into a Maternal Mortality Crisis appeared first on Newsweek.
By Daniel Trotta
(Reuters) – High levels of inflation and increased costs of living are contributing to worldwide hunger, according to a survey of 16 countries commissioned by the humanitarian group World Vision International.
The survey, which comes ahead of World Food Day on Monday, found that 59% of parents surveyed were very concerned about child hunger and malnutrition in their families, with 46% worried about finding the money to buy food.
It also found that 37% of parents said their children fail to receive proper nutrition each day and 21% said their children have gone hungry in the last month.
The percentage of children going to bed hungry rises to 38% in low-income countries. In the United States, 18% of respondents said a child has gone to bed hungry in their home.
“Hunger is a global problem, and it isn’t limited to any one country or part of the globe,” Andrew Morley, the president of World Vision International, said in a statement.
The survey, conducted by Ipsos, polled more than 14,000 people of all income levels.
Among respondents who said their children went to sleep hungry, 46% cited inflation and the cost of living as the main reason. The next two most common reasons cited were low household income (39%) and not enough government focus on ending hunger (25%).
Surging prices have affected economies around the globe due to factors including pandemic-related disruptions to global supply chains and the effects of Russia’s war in Ukraine. Persistently high inflation was the biggest economic concern of economists this year, according to Reuters polls.
The 16 countries included Australia, Bangladesh, Brazil, Britain, Canada, Germany, Japan, Mexico, Peru, the Philippines, South Korea and the United States, where Ipsos surveyed about 1,000 adults in each nation. In Chad, the Democratic Republic of the Congo, Iraq and Malawi, it surveyed about 500 people in each country.
Inflation and increased cost of living was the most-cited cause of child hunger in 11 of the 16 countries, reaching a high of 70% in Bangladesh. But wealthy countries such as Canada (68%), Australia (66%) and Britain (66%) also cited higher prices as the main cause.
(This story has been corrected to fix the spelling of Andrew Morley in paragraph 5)
(Reporting by Daniel Trotta; Editing by Edwina Gibbs)
The 15 hottest days, in the world's hottest month
By the end of the week, it is likely that 15 days just this month will have breached an unprecedented global temperature threshold — a clarion wakeup call in the form of extreme weather.
Why it matters: Nearly every facet of the climate system is flashing red this summer, from record-low sea ice extent in Antarctica to hot tub-like ocean waters surrounding South Florida, and all-time high temperature records set in multiple countries on at least three continents.
- And all this is occurring as human-caused emissions of greenhouse gases from the burning of fossil fuels and other sources continue to increase, despite the existence of ever-cheaper technologies to generate electricity and power certain modes of transportation.
Zoom in: Already this month, 14 days have recorded surface air temperatures greater than 17°C (62.6°F) — spikes that have not been seen for roughly 125,000 years.
- In fact, Wednesday marked the 17th straight day with global temperatures hotter than any prior days on record.
Several more records are all but certain to fall in the coming weeks:
- July will be the hottest month on Earth since instrument records began in the 19th century.
- The milestone for the hottest summer worldwide is in jeopardy, according to the European Centre for Medium-Range Weather Forecasts (ECMWF).
- NOAA and the ECMWF have said additional heat waves are likely in coming weeks, particularly if the dominant weather pattern — featuring multiple, stuck heat domes around the hemisphere — is not disrupted.
By the numbers: The heat domes worldwide have been noteworthy for their duration, expansiveness and severity. This is the case even in places famous for hot weather, where people would normally shrug off a hot stretch.
- The overnight minimum temperature Wednesday morning in Phoenix was a sweltering 97°F, an all-time high for that location.
- On Friday, the city is forecast to have its record 20th straight day with a high temperature of 110°F or greater.
- Austin, Texas, has had 10 straight days with a high temperature of 105°F or greater, an unprecedented streak.
Threat level: This heat is not a mere inconvenience or something to power through.
- It is downright deadly, particularly for vulnerable populations including the elderly, young children, those with chronic illnesses and people without access to air conditioning.
- Early figures on heat-related deaths in parts of the U.S., for example, are starting to emerge, and in coming weeks it is likely these will climb.
- A recent study in the journal Nature Medicine found severe heat waves in Europe last summer killed as many as 61,000 people.
The intrigue: The 17°C demarcation line is not a hard climate boundary beyond which ice sheets will melt and the oceans rise inexorably. Rather, it represents yet another warning — a "Stop, turn back" sign on the march to a more treacherous and less familiar planet.
- It is also the product of a modern but not 100% precise method of carefully estimating the planet's temperature on a daily basis.
- It involves computer model reanalysis, in which data from ships, buoys, surface weather stations, satellites and other sources are crunched and calculated in near-real-time.
What they're saying: Forget about the overused expression, "the new normal," to describe our current climate moment. As Friederike Otto, a climate scientist at Imperial College London, told Axios, "We’re nowhere near a normal."
- "Whenever we stop burning fossil fuels we can begin to figure out what 'normal' means again," Otto said.
- Former NASA climate scientist James Hansen, who rose to prominence by warning the Senate and the American people about the dangers of climate change in the sizzling American summer of 1988, told The Guardian: "We are headed wittingly into the new reality – we knew it was coming."
- He termed humanity "damned fools," saying: "We have to taste it to believe it."
The bottom line: Perhaps this is our taste of the new type of extreme heat, as we transition into even more unstable times. Because this is both one of the hottest summers of our lives, and one of the coolest of the rest of our lives.
Almost 40% of jobs around the world could be affected by the rise of artificial intelligence (AI), a trend that is likely to deepen inequality, according to the International Monetary Fund. In a Sunday blog post, IMF chief Kristalina Georgieva called for governments to establish social safety nets and offer retraining programs to counter the impact of AI. “In most scenarios, AI will likely worsen overall inequality, a troubling trend that policymakers must proactively address to prevent the technology from further stoking social tensions,” she wrote ahead of the annual meeting of the World Economic Forum (WEF) in Davos, Switzerland, where the topic is set to be high on the agenda.

The ski resort town of Davos was already bedecked with AI advertisements and branding as the summit got underway Monday. Sam Altman, chief executive of ChatGPT-maker OpenAI, and his biggest backer — Microsoft CEO Satya Nadella — will speak at the event later this week as part of a program that includes a debate Tuesday on “Generative AI: Steam Engine of the Fourth Industrial Revolution?”

As AI continues to be adopted by more workers and businesses, it’s expected to both help and hurt the human workforce, Georgieva noted in her blog. Echoing previous warnings from other experts, she said the effects were expected to be felt more deeply in advanced economies than emerging markets, partly because white-collar workers are seen to be more at risk than manual laborers. In more developed economies, for example, as much as 60% of jobs could be impacted by AI. Approximately half of those may benefit from how AI promotes higher productivity, she said. “For the other half, AI applications may execute key tasks currently performed by humans, which could lower labor demand, leading to lower wages and reduced hiring,” wrote Georgieva, citing the IMF’s analysis. “In the most extreme cases, some of these jobs may disappear.”

In emerging markets and lower-income nations, 40% and 26% of jobs, respectively, are expected to be affected by AI. Emerging markets refer to places such as India and Brazil with sustained economic growth, while low-income countries refer to developing economies with per capita income falling within a certain level, such as Burundi and Sierra Leone. “Many of these countries don’t have the infrastructure or skilled workforces to harness the benefits of AI, raising the risk that over time the technology could worsen inequality,” noted Georgieva. She warned that the use of AI could increase the chances of social unrest, particularly if younger, less experienced workers seized on the technology to boost their output while more senior workers struggle to keep up.

AI became a hot topic at the WEF in Davos last year as ChatGPT took the world by storm. The chatbot sensation, which is powered by generative AI, sparked conversations about how it could change the way people work around the world, thanks to its ability to write essays, speeches, poems and more. Since then, upgrades to the technology have expanded the use of AI chatbots and systems, making them more mainstream and spurring massive investment. Some tech firms have already pointed directly to AI as a reason they are rethinking staffing levels.

While workplaces may shift, widespread adoption of AI could ultimately increase labor productivity and boost global GDP by 7% annually over a 10-year period, according to a March 2023 estimate by Goldman Sachs economists. Georgieva, in her blog post, also cited opportunities to boost output and incomes around the world with the use of AI. “AI will transform the global economy,” she wrote. “Let’s make sure it benefits humanity.”

Rob North in Davos contributed reporting.
Teenagers in Ireland are among the best in the world at reading, according to tests taken by 15-year-olds in 81 countries and regions.
The Pisa tests, run by the Organisation for Economic Co-operation and Development (OECD) every three years, also show Irish teens perform significantly better than average in maths and science.
Overall, 15-year-olds in Ireland score highest for reading (second, up from eighth in 2018) and above average for maths (11th, up from 21st) and science (12th, up from 22nd).
In reading only Singapore ranked higher than Ireland, followed by Japan, Korea, Taiwan and Estonia.
However, the maths performance of Irish students has slipped significantly since the last study in 2018. Science improved and reading remained broadly the same.
The fact that many other EU and OECD countries saw declines across these areas resulted in Ireland climbing up the rankings.
The Pisa tests, which were delayed from 2021 due to the pandemic, were taken by 600,000 15-year-olds across jurisdictions that included 37 developed OECD countries and 26 EU member states. In Ireland, more than 5,500 students in 170 schools participated.
Ireland’s broadly positive performance comes despite challenges such as disruption caused by Covid-19 and will also likely be seen as a vote of confidence in highly contested Junior Cycle reforms.
A note of caution, however, has been sounded by statisticians who say the sample number of students in Ireland who completed standardised tests in October and November last year was under the minimum threshold.
This likely led to more high achievers sitting the tests in Ireland compared to previous years, which, in turn, may have slightly inflated our results.
One source of concern is that Ireland has significantly fewer high-achieving students in maths and science compared to other developed countries.
This trend has been notable since 2012 and raises questions over whether our most talented students are being challenged enough.
On a positive note, Ireland has a smaller proportion of low-achieving pupils compared to most other countries.
This, say researchers, is likely to be the result of a focus on tackling literacy and numeracy in recent years, as well as investment in disadvantaged or Deis schools.
The variation in performance between schools in Ireland is also lower than many other countries – a sign Irish secondary schools are more equitable compared to other developed countries.
When broken down by gender, the results show girls in Ireland significantly outperform boys in reading. However, boys outperform girls in maths, while there is no sign of a gender gap in science.
Minister for Education Norma Foley welcomed the findings as “extremely positive news for Ireland”.
“We have retained our place among a small set of high achieving countries at a time where particular strain was put on school communities globally due to Covid-19. We have also ensured that the number of low achieving students remains among the lowest in the 81 countries tested,” she said.
Ms Foley added that although the pandemic presented unprecedented challenges, the resilience shown by schools during that time was to be commended.
The global findings show students in east Asian countries such as Singapore, China, Japan, Korea, Taiwan and Hong Kong dominate the rankings and outperform most other countries in reading, maths and science.
In maths, Singapore was top, followed by Japan, China and Taiwan, while in science Singapore was also top, followed by Japan, China and Taiwan.
A breakdown of the scores shows these countries have significantly more high achievers than in Ireland.
Department of Education officials say there is scope to make progress on this in Ireland through developments in curriculum, teaching and policy. It has established a working group to develop a policy on students who are exceptionally able or gifted.
On the positive side, officials believe the fact that we have fewer low achievers than average is a sign of the success of a range of initiatives, such as the Deis system of supports for schools in more deprived areas, as well as literacy and numeracy strategies.
The decline in maths performance among Irish students will also raise eyebrows, even if Ireland climbed up the rankings due to poorer scores for students in comparator countries. Officials say policymakers can learn from these results and put measures in place to help to further develop our students’ critical thinking, for example.
The increase in science scores for Irish students comes on the back of changes to the Junior Cycle science curriculum that have proved controversial among some teachers. Officials, however, believe we are seeing the effects of the changes in the system and will continue to see this develop over time.
Try a Pisa question:
This is an example of the kind of problem-solving and critical thinking skills that Pisa aims to measure among 15-year-olds on a global scale. Can you solve it?
Unlike traditional assessments, Pisa seeks to assess not just students’ ability to reproduce learned material but also their capacity to apply knowledge creatively in unexpected scenarios, think critically across disciplines and demonstrate effective learning strategies.
While some critics argue that Pisa tests are unfair as they present students with unfamiliar problems, the OECD argues that life is full of unforeseen challenges. In the real world, it says, people must solve problems that they have not anticipated; it is not just about remembering lessons in a classroom.
Note: the answer to the triangular pattern question is 40.0%.
The fast-approaching El Niño transition has sent a chill down the spine of climate scientists worldwide. And for a good reason, too — most models forecast a dangerous level of warming that most nations, if not all, simply aren't prepared for.
The latest report by the World Meteorological Organisation (WMO) outlines that natural disaster-related events have cost economies upwards of $4 trillion since 1970. However, even this seems like a drop in the ocean compared to the looming price tag that El Niño threatens to impose on the world economy.
A recent study has estimated that as climate change exacerbates future El Niño events, global economies could pay an excruciating $84 trillion by the end of the 21st century, even if the much-needed and elusive carbon reduction goals have been achieved.
We are well aware of how high-maintenance El Niño events are, with the two costliest cycles in the last 60 years setting the world economy back by about $4-5 trillion each. Furthermore, about 56% of all countries had to endure slowed economic development even five years after the event had passed, showing how deep the scars linger.
"We know El Niño is costly. We are showing that these costs are far greater than we understood them to be previously," explains Justin Mankin, one of the study's researchers.
To stress their point, they remark that the average income in Peru would've been $1,246 higher if the 1998 El Niño event had not occurred. They even note that while tropical and poorer countries endure the brunt of the El Niño-Southern Oscillation (ENSO) phases, even developed superpowers such as the US aren’t spared from dipping economic growth during major cycles.
"There are places like Peru, Ecuador and Indonesia that are experiencing losses that are far larger than the global number," remarks Chris Callahan, another study author. "This includes some of the most vulnerable people on the planet, some of the people that don't have the kind of adaptive capacity that places like the United States and Europe have."
However, the jury is still out on whether these revised estimates will match real numbers, as climate models still need ways to accurately project how global warming pairs up with El Niño. But one thing is crystal clear: major economic losses await, no matter what.
Stoking warming ocean temperatures
Ghastly economics aside, El Niño still very much remains a weather event affecting entire ecosystems around the globe. As we witness catastrophic climate-frenzied ocean warming, experts reckon there is no question the El Niño transition will only add to the worry.
Our oceans are instrumental in keeping our world from getting too hot by soaking up an astronomical amount of heat from the Sun. However, the extra heat that El Niño will supply could cascade into severe atmospheric tantrums, experts worry.
Since the last El Niño, global ocean temperatures have spiked by about 0.04°C, and the prospect of a strong El Niño phase this time could have disastrous consequences not just for weather unpredictability, but also for marine life living in the world's oceans.
"More than 70% of our planet is covered by the ocean. It plays an enormous role in the climate system. We all know that our climate is warming – but I imagine that most people first think of warmer air temperatures. In fact, our oceans have been soaking up much of this extra heat, keeping the atmosphere relatively cool," explains ESA's lead ocean scientist, Craig Donlon.
"This has come at a cost, and we are now seeing the temperature of our oceans at their hottest since records began."
For weather, science, space, and COVID-19 updates on the go, download The Weather Channel App (on Android and iOS store). It's free!
Wildlife saved from extinction as the Endangered Species Act turns 50
What a country chooses to save is what a country chooses to say about itself.
— Mollie Beattie, U.S. Fish and Wildlife Service Director (1993-1996)
On Dec. 28, 1973, President Richard Nixon signed the world’s first Endangered Species Act, a revolutionary piece of legislation whose value has only grown in the half-century since its implementation. The legislation came about after a century of abuse of wildlife.
From the late 1800s, market hunting, poaching, game hogs and using feathers for women’s fashion had brought many species to the edge of extinction:
- The more than 60 million bison that had dominated the Great Plains for thousands of years had been reduced to 541 by 1889.
- The passenger pigeon, more numerous than any other bird in the world, had been reduced from more than 3 billion birds to extinction by 1914.
- The pesticide DDT decimated our nation’s symbol, reducing bald eagle numbers to a relict 417 nesting pairs in the lower 48 states by 1963.
- The largest North American land bird, the California condor, became extinct in the wild in 1987, and was reduced to 27 individuals sent to captive-breeding sites.
This large-scale loss of wildlife was the context for this new legislation that set out to list, protect and restore endangered and threatened species across the U.S. If the law had a motto, it would be “no more passenger pigeon-style extinctions!”
And over the last five decades, the ESA has been remarkably successful in stabilizing populations of species at risk — preventing the type of wide-scale extinction that occurred in previous centuries and conserving the habitats upon which these species, and the humans who appreciate them, depend.
Perhaps the poster child for this success is the American bald eagle. Nearly wiped out in the 20th century by hunting and toxins, it was finally taken off the endangered species list in 2007, when its population had recovered sufficiently.
Closer to home, the Delmarva Peninsula fox squirrel was highly endangered. Its habitat was reduced to Maryland by the early 1900s, and finally to only four Eastern Shore counties by 1967. Over the next decades the species was actively restored and translocated, and is recovered and delisted. It can be found on most of the Eastern Shore, and also in Virginia and Delaware.
But my favorite story is the black-footed ferret. Although North America’s only native ferret species once numbered between 500,000 and 1 million, in 1979 the presumed last member of the species died in captivity.
It was reminiscent of the last passenger pigeon, Martha, who died alone in the Cincinnati Zoo in 1914.
But this time there was a different ending.
A black-footed ferret was discovered in 1981 by a rancher’s dog near Meeteetse, Wyo., which launched the Black-Footed Ferret Recovery Program. Today more than 300 ferrets live in the wild scattered throughout their former range, still endangered but, happily, not extinct. The original rediscovered ferret is proudly on display at the National Conservation Training Center in Shepherdstown, W.Va.
If extinction is forever, then endangered means there is still time. We won’t always get a second chance as we did with the black-footed ferret. But thanks to this far-sighted legislation, we can at least hope that 50 years from now our children’s children enjoy the rich species diversity we are blessed with.
Mark Madison teaches environmental history, environmental ethics and environmental policy at Shepherd University.
For the first time in 20 years, nine cases of locally acquired malaria have occurred this summer in the United States: seven in Florida, one in Texas, and one in Maryland. The feverish illness is caused by infection through the bites of mosquitoes carrying protozoan parasites.
The parasites were introduced to the Americas via colonization and the transatlantic slave trade and adapted to local mosquito species. "By 1850 malaria had become established in practically every settlement from New England westward to the Columbia River valley and from the southernmost part of Florida to the inland valleys of California," wrote the Tulane University parasitologist Ernest C. Faust in 1951. The U.S. Census Bureau reported that in 1850, malaria was responsible for 45.7 out of every 1,000 deaths nationally and 7.8 percent of deaths in the South.
The Communicable Disease Center declared malaria eradicated in this country in 1951. Today that entity, now known as the Centers for Disease Control and Prevention, typically reports around 2,000 cases of malaria annually contracted by travelers returning to the U.S. from abroad. The World Health Organization estimates that there were nearly 250 million cases and 620,000 deaths from malaria in 2021, concentrated in sub-Saharan Africa.
Back in 2014, in my article "Let's Play God," I asked, "Wouldn't it be great if scientists could genetically engineer mosquitoes to be immune to the malaria parasite, thus protecting people from that disease?" Nearly 10 years later, a team of biotechnologists associated with the University of California, Irvine; the University of California, Berkeley; and Johns Hopkins University report that they have achieved just that.
According to the researchers' July article in the Proceedings of the National Academy of Sciences, a malaria-carrying mosquito species received genes coding for anti-malaria proteins, combined with a gene drive designed to spread those genes quickly through wild mosquito populations that mate with the engineered insects. Normally, a gene has a 50–50 chance of being inherited, but the gene drive system increased the chance of inheriting the anti-malaria genes to upward of 99 percent. Applying an epidemiological model, the researchers calculated that releasing the bioengineered mosquitoes to interbreed with wild ones would reduce the incidence of human malaria infections by more than 90 percent within three months.
Unfortunately, various Luddite activist groups seeking a global moratorium on gene drives have managed to tie up research and deployment with red tape, using United Nations Convention on Biological Diversity (CBD) procedures. A CBD technical committee is supposed to issue a risk assessment report on the technology in 2026, while hundreds of millions already know the real and present risks of life without it.
|
Yale astronomer helps find two supermassive black holes at closest distance yet
Yale professor and astronomer, Meg Urry, was involved in the discovery of a pair of active supermassive black holes at the center of a nearby galaxy.
Courtesy of ALMA
A paper in The Astrophysical Journal Letters on Jan. 1 outlines the discovery of two active supermassive black holes at the center of a galaxy about 500 million light years away — which is close on the galactic scale.
The galaxy, UGC 4211, was formed by the merger of two different galaxies, each of which had a supermassive black hole — black holes with masses hundreds to billions of times greater than that of our Sun — at its center. The two black holes then gravitated inwards and are now 750 light years away from each other, which is the closest separation of two supermassive black holes that we have conclusive evidence for. The black holes are also “active,” which means that they are currently feeding on the surrounding stellar material and growing. Such active supermassive black holes at the centers of galaxies are called active galactic nuclei, or AGN.
“We know that over the billions of years that the universe has been evolving, galaxies have merged and continue to merge,” Meg Urry, an author of the paper, a professor of physics and astronomy and the director of the Yale Center for Astronomy and Astrophysics, said. “And, we also know that most galaxies, at least above a certain mass, have a supermassive black hole at their center. So, one of the questions that is not really solved yet is what happens to the two black holes in two galaxies when they merge?”
According to Urry, the most plausible theory is that after the galaxy merger, the two black holes slowly sink to the center, form a binary — a pair of gravitationally bound objects in orbit around a common center of mass — and then eventually merge, emitting enormous amounts of gravitational waves in the process. Initially, the black holes sink due to friction with galactic gas and dust. Once the black holes get close to the center, however, the density of galactic material decreases and the drag from friction becomes negligible. If they get close enough, they can lose energy and merge by emitting gravitational waves, but it remains unclear how the black holes bridge the gap between the friction-dominated and gravitational-wave-dominated stages.
Such a merger takes millions of years to happen — the two black holes from this study are not expected to merge for at least another 200 million years. Based on this information, we should expect to see more instances of black holes orbiting each other. However, Urry said that such observations have been surprisingly few, which makes this discovery particularly significant.
Michael Koss, an astrophysicist at Eureka Scientific and the principal investigator of this research project, said that the discovery was a culmination of a ten-year endeavor to find AGNs in merged galaxies.
“The weird thing about doing astronomy is that you get a time machine but you only get to see one moment,” Koss said. “The entire timescale of these mergers is over a billion years. We don’t get to watch it play out for a particular system. So, either we can look at a bunch of systems or we can run simulations. And, when I was postdoc in Hawaii, there were these ideas based on simulations of galaxies that as two galaxies come close together, their black holes grow very quickly — become AGNs. So, if those simulations are true, then we should look for AGNs in mergers.”
Thus, Koss began looking for AGNs in galaxies that appeared to have been in mergers, and his team’s finding now provides observational evidence for the simulations.
For this study, Yale supplied telescope time at the Keck Observatory in Hawaii. First, Urry explained, the team used the near-infrared imager NIRC2 to survey the centers of galaxies that looked “disturbed,” which is an indicator of a merger. A large fraction of the observed galaxies had dual sources in the center — an exciting finding, since the sources could potentially be two active black holes — but the results were not definitive. The team therefore followed the survey with the near-infrared spectrograph OSIRIS to confirm that in this particular galaxy, the sources were indeed a pair of gravitationally bound active black holes.
OSIRIS proved two things: first, that the two sources had the same redshift and thus are at the same distance from us; and second, that the two sources are not the same object distorted by gravitational lensing — the bending of light around mass — since the spectra of the two sources were not identical. In this case, distortion from gravitational lensing could be caused by any cloud of gas and dust between the observing point and the source.
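At distances this modest, redshift and distance are linked by Hubble's law: recession velocity v ≈ H₀d, and z ≈ v/c for z ≪ 1. As a rough sanity check, using a conventional round value of H₀ = 70 km/s/Mpc (an assumption for illustration, not a figure from the paper):

```python
C_KM_S = 299_792.458   # speed of light, km/s
H0 = 70.0              # assumed Hubble constant, km/s/Mpc
MLY_PER_MPC = 3.262    # million light years per megaparsec

def redshift_from_distance(d_mly):
    """Approximate redshift of a galaxy d_mly million light years away,
    valid only in the low-redshift (z << 1) regime."""
    d_mpc = d_mly / MLY_PER_MPC
    return H0 * d_mpc / C_KM_S

# The article's quoted ~500 million light years corresponds to z ~ 0.036.
z = redshift_from_distance(500)
```

Finding that both sources sit at this same small redshift is what places them in the same galaxy rather than along a chance line of sight.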
Keck was not the only telescope used for the research, however. The Very Large Telescope, the Atacama Large Millimeter/submillimeter Array and the Hubble Space Telescope all observed the same black hole pair in multiple wavelengths, which means that, unlike past discoveries, this one is highly unlikely to be a false positive.
Locating a supermassive black hole pair in such a close proximity suggests that pairs like this could be quite common in the universe, giving hope that the merging of two supermassive black holes in other systems could be observed and that scientists could detect the gravitational waves emitted from these events.
Since 2015, when the Laser Interferometer Gravitational Wave Observatory detected the first gravitational wave, many such detections have been made, but they have all been gravitational waves from mergers of either stellar mass black holes — which are much smaller than their supermassive counterparts — or other astronomical objects, like neutron stars. Chiara M. F. Mingarelli, an assistant professor at the University of Connecticut and a gravitational-wave astrophysicist, explained that this is because detectors like LIGO can only pick up high-frequency gravitational waves, not the low-frequency ones that would be released when two supermassive black holes collide. These low-frequency waves would give rise to a gravitational wave background.
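The frequency-band point can be made concrete with Kepler's third law: a circular binary emits gravitational waves at twice its orbital frequency. The masses below are illustrative assumptions (two 10⁸ solar-mass black holes; the article does not give the actual masses), but any plausible values put a binary separated by 750 light years many orders of magnitude below LIGO's roughly 10 Hz–kHz band:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
LIGHT_YEAR = 9.461e15  # metres

def gw_frequency(total_mass_kg, separation_m):
    """Dominant gravitational-wave frequency (twice the Keplerian
    orbital frequency) of a circular binary."""
    f_orb = math.sqrt(G * total_mass_kg / separation_m**3) / (2 * math.pi)
    return 2 * f_orb

# Illustrative: two 1e8 solar-mass black holes, 750 light years apart.
# The result is far below even pulsar-timing-array (nanohertz) bands,
# let alone LIGO's.
f = gw_frequency(2 * 1e8 * M_SUN, 750 * LIGHT_YEAR)
```

Only once such a pair has spiraled in to a far smaller separation does its signal climb into the nanohertz band that Mingarelli's gravitational wave background searches target.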
Mingarelli’s current project involves trying to detect this background. Thus, she finds the dual black hole discovery particularly interesting because it has significant implications for the rate of supermassive black hole mergers and the intensity of gravitational wave background.
“This black hole pair is so nearby that either we got really lucky and found it because we are extremely lucky, or we found it because there are lots of them,” Mingarelli said. “We won’t know until we make more such observations, but this particular galaxy could be crucial to understanding the population statistics of black hole mergers and the gravitational wave background.”
The paper was published in Volume 942 of the Astrophysical Journal Letters.
|
Tribes in Shasta, Siskiyou teach students about culture, history of 'resiliency and hope'
Indigenous communities are partnering with a network of educators and writers to teach North State students about their culture and history.
Theirs and similar collaborations offer teachers curriculum they can use to give children and youths a holistic history about their geographic region, an area seldom mentioned in state curriculum, according to educators.
The result is a more complete story of the past and present that includes the "richness of our culture" told in Indigenous people's voices, said Kayla Super, spokesperson for the Quartz Valley Reservation, west of Fort Jones. "I also feel it will give our Tribal Member students a sense of identity and (a) better understanding of what it means to be Indigenous" today, "instead of hearing about Native American history (that) seemed to stop in the 1900s."
This winter, the reservation ― which includes Karuk, Klamath and Shasta peoples ― and the Northern California Writing Project will launch an online resource with lesson plans and other curriculum for kindergarten to high school teachers, and college faculty.
'Voices are still here and deserve to be amplified'
It's the first step in a larger project called "Rebuilding the Humanities at the Quartz Valley Indian Reservation." Future plans include annual culture camps at the Siskiyou County reservation, and an educational website open to everyone, said Project Coordinator Jasmine Corona Alcazar and the Northern California Writing Project, a network of teachers, researchers and writers based in Chico.
Planners invited tribal and community members, and high school and community college students to help create “a comprehensive history of the reservation,” Corona Alcazar said. Topics covered include the Land Back Movement that seeks to place seized land under Indigenous and cooperative governance. Activities include an online scavenger hunt and digital storytelling.
Related: ‘America’s dark past’: Emotional scars linger from Indian boarding schools
They hope to roll out the project's online components at the end of February on the reservation's website: qvir.com.
“This website is a testament that their (Indigenous people's) voices are still here and deserve to be amplified and heard,” Corona Alcazar said. It’s a “jumping off point for their amazing story of utter human perseverance and resilience to keep tight to the traditions and knowledge of what makes a community ― thrive in the face of a history that has tried to erase (it).”
Plans also include outreach through the reservation's annual Cultural Camp, offered in July.
"It’s a week long event where cultural knowledge is shared with our youth, families of all ages and surrounding community members," Super said. Attendees learn traditional cooking from salmon to acorns, songs and language; and how to weave baskets and make cultural regalia, drums and cards. Visitors can camp overnight. The reservation serves meals on site.
Tribal history part of Shasta County curriculum
The Quartz Valley Reservation’s project is one of several collaborations between North State Tribes and educational institutions seeking to celebrate Indigenous people’s culture and contributions, and to report historic events accurately.
In 2017, former Shasta Lake mayor and tribal elder Rod Lindsay reached out to the Shasta County Office of Education, according to Director of School and District Support Kelly Rizzi and Learning Community for Native Success Coordinator Cindy Hogue of the Dawnom Wintu Band. The result was the American Indian Advisory, a collaboration of tribal leaders and elders from four local tribes, and representatives from Native American agencies, SCOE and nonprofits.
The consortium works on an ongoing basis to give SCOE feedback on curriculum about tribal history for students in grades three, four, five and eight. Each grade level has a team of teachers who work with a cultural consultant from each of the four tribes: Winnemem Wintu, Wintu Tribe of Northern California, Pit River and Redding Rancheria.
Resulting lessons fit state standards, but remain true to tribal culture and history, Hogue and Rizzi said. They're taught in addition ― not as replacements ― to state curriculum already in place, which seldom touches on Shasta County history.
More: Life experiences inspire this North State educator to uplift Native students
Feedback from students, parents and faculty has been positive, Hogue and Rizzi said, because lessons cover local history. Teachers requested local history curriculum for years prior to the introduction of these lessons in their classrooms, they said. Students connect with history when they're learning about the places and communities in which they live.
The North State’s story is one of “resiliency and hope," Rizzi said. "The tribes that are involved in this curriculum have been here since time immemorial and they are still thriving here today.”
Jessica Skropanic is a features reporter for the Record Searchlight/USA Today Network. She covers science, arts, social issues and entertainment stories. Follow her on Twitter @RS_JSkropanic and on Facebook. Join Jessica in the Get Out! Nor Cal recreation Facebook group. To support and sustain this work, please subscribe today. Thank you.