A curiosity of the innovation ecosystem is that while inventors are celebrated for their uniquely important contributions, they’re also underrepresented. Inventors are rarely speakers at conferences about intellectual property and technology. They do not hold leadership positions at the U.S. Patent and Trademark Office. A lone inventor sits on the committee tasked with reviewing the agency’s operations and making recommendations to the Director. Search Google for inventors, and you will notice that nearly all of the images are from the past.
The lack of widespread attention paid to contemporary inventors has produced interesting consequences—namely, a lack of inventors. It’s difficult to envision yourself doing something you have little awareness of and no context for. There is no degree program you can obtain that proves you’re an inventor. You have to be willing to give yourself that title. In my experience, most people are reluctant to do that. Even professionals—people who use their creativity to solve problems, file patent applications, and design new products for a living—struggle to call themselves inventors.
This is partly because the concept itself isn’t relatable. When you think of an inventor, who comes to mind? If you’re like most people, you probably think of American legends like Benjamin Franklin, Albert Einstein, Thomas Edison, or Nikola Tesla. When contemporary inventors compare themselves to these larger-than-life figures, they feel their accomplishments fall short—way short. They don’t perceive that they’ve earned the right to call themselves an inventor. This is very unfortunate.
Because inventors possess so many of the same qualities that leaders do, we need people to readily embrace their inventiveness. Inventors view the world through a can-do lens. They’re empathetic, curious, visionary, and prone to action.
To be fair, organizations like the National Inventors Hall of Fame have been shining a light on the impact of inventors for decades. Some companies have too: Intel has honored one of its employees as “Inventor of the Year” since at least 2019. The AAAS-Lemelson Invention Ambassador program—which honored contemporary inventors and gave them a platform to share their work—is another notable exception.
Broadly speaking, though, modern-day inventors are not particularly well-known.
What Makes Someone An Inventor?
Inventors are proud when the USPTO issues them a patent. They should be: The patenting process is an investment. Typically, working with a registered patent practitioner to get a utility patent issued—in which the parameters of what you own are defined by negotiating with your patent examiner—is expensive and time-consuming. But invention cannot be measured by patenting alone, because filing intellectual property on an invention isn’t required. Plenty of inventions go unpatented.
Making it easier for people to identify as inventors (before they get a patent, and whether or not they ever do) is a complex challenge. Contemporary inventors often go by other names, including engineer, scientist, entrepreneur, innovator, and designer. (While Steve Jobs, Jeff Bezos, Jack Ma, and Elon Musk are named inventors on patents, they’re much better known for being businessmen.) There’s also the fact that secrecy is part of inventing. If you share your invention publicly before filing intellectual property, you could lose your rights to obtain a patent. Because inventors fear having their inventions stolen, they’re often wary of discussing their work with others.
There are some less-than-positive connotations attached to the word, too. The image of an eccentric scientist toiling away in obscurity, like Doc Brown in the Back to the Future franchise, doesn’t connote respect. For years, Forbes contributor Stephen Key encouraged inventors to refer to themselves professionally as product developers because of how inventors are portrayed in popular culture.
One way to make it easier for people to see themselves as inventors is to make visible and celebrate the inventors among us right now, who exist in every imaginable setting. Progress is being made in this regard. Last month, USA Today named inventor Dasia Taylor as its Woman of the Year from Iowa. Taylor is the founder and CEO of VariegateHealth, a medical device company. Recently, Cisco launched a social media campaign featuring women to showcase how the company is cultivating the next generation of inventors. (It, along with 50 other companies, is a signee of the U.S. Intellectual Property Alliance’s Diversity Pledge.)
The organization leading the way in honoring and celebrating the importance of inventors is without a doubt the National Academy of Inventors, the non-profit founded by University of South Florida neuroscientist Paul Sanberg in 2010. At USF, Sanberg is a leader, entrepreneur, and inventor. He believes universities should be assessed on how they impact society at large and change the world, in addition to typical metrics—which requires prioritizing invention.
How The National Academy Of Inventors Shines The Light On Inventors
Some of our greatest inventions emerge from the work taking place at universities. In academia, however, success is generally tied to publishing papers, not patenting and commercializing inventions. Knowing how to identify what’s patentable, file an invention disclosure, and work with a patent attorney isn’t part of the training that academics receive. There’s also the lingering judgment that commercialization is a form of selling out that makes you less of a “real” academic.
Unless the leadership of a university places an emphasis on translating research into economic development—which necessitates working with industry, starting companies, and obtaining intellectual property—invention really isn’t part of the institution’s core mission, explained Dr. Sanberg in a Zoom interview. He founded the NAI in part to change that. He believes it’s critical that academic scientists (especially young ones) learn about intellectual property, understand the patenting process, and are rewarded for participating in innovation so they can take their science to the next level. In other words, it’s critical that scientists view themselves as—and are celebrated for being—inventors.
“It’s like everything else: You do better science when you know a lot more, including what innovation is, who the companies in your field are, and what they’re doing. For example, if you're an engineer, what's manufacturing like in the community? How can you enhance that?” he explained.
His efforts to shine a light on inventors have been successful. Today, the NAI is a prestigious organization that includes 4,600 members and is affiliated with more than 300 academic institutions around the world. Becoming an NAI fellow is the highest recognition that an academic inventor can receive. At its annual gathering, new fellows are rock stars.
To provide a forum for exploring the relationship between academics and invention, the NAI publishes the journal Technology & Innovation four times a year. In collaboration with the Intellectual Property Owners Association, it publishes an annual list of the Top 100 universities obtaining U.S. patents. All of these efforts, which treat inventors with the respect they deserve, help us understand who contemporary inventors are and the importance of their work on an ongoing basis. Predictably, NAI member organizations have followed suit in recognizing inventors on their campuses. After recently becoming an NAI member, for example, Tufts University held an event honoring inventors on its campus.
The Next Generation Of Inventors
Within the innovation ecosystem, there is a lot of energy directed toward raising awareness of intellectual property. But devoid of its human origins, intellectual property just isn’t that interesting. People don’t need to be made aware of intellectual property; they need to be convinced it’s actually for them. The best way of making this happen is with storytelling that centers the individual creator—the more imaginative, the better. When people understand who contemporary inventors are and what motivates them, they will apply these insights to their own lives.
The good news is, there are encouraging trends among the next generation. Invention education programs, which are growing nationwide, are making it easier for young people to understand themselves as inventors and envision leading an inventive life. Who we think of when we hear the word inventor is changing too. There are teenage inventors doing remarkable work who are highly visible as leaders, including Forbes 30 Under 30 honoree Gitanjali Rao, Neha Shukla, and Samaira Mehta. Since TIME named Rao as its first “Kid of the Year” in 2020, the inventor has used her platform to teach innovation workshops to more than 74,000 students across 44 countries.
This is a step in the right direction. It’s impossible to overstate the value of invention. Inventors are heroes whose names we should know, but don’t. To get more people to participate in the innovation ecosystem, the stories of contemporary inventors, in all their glory, nuance, and failure, must be told.
|
<urn:uuid:92ec4c11-a264-4b61-b5c1-1179df286917>
|
{
"dump": "CC-MAIN-2023-50",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100583.13/warc/CC-MAIN-20231206031946-20231206061946-00819.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9442546367645264,
"pii_count": 0,
"score": 2.875,
"token_count": 1915,
"url": "https://www.forbes.com/sites/madeleinekey/2023/04/18/inventors-dont-receive-the-respect-they-deserve-thats-changing/"
}
|
- Potential applications of artificial intelligence continue to expand as more people adopt the technology
- AI usage is particularly prominent in finance, digital spaces (like social media, ecommerce, and e-marketing) and even healthcare
- For investors who want to put AI to work for them, Q.ai’s AI-backed Investment Kits could be just the thing
As the use cases for artificial intelligence grow, it’s inevitable that we’ll discover more ways it can improve our lives. And the space has plenty of oomph: The global AI software market is expected to reach $22.6 billion by 2025.
With AI’s popularity on the rise, we thought we’d explore a few especially promising applications of artificial intelligence.
What is AI?
AI, or artificial intelligence, is a complex topic with many layers. At its core, a “true” AI is a machine that can simulate human intelligence, behaviors and even emotions.
While no machine has reached that level, modern AIs can complete moderately complex tasks like:
- Solving problems and making decisions using data inputs
- Recognizing and interpreting visual information
- Recognizing, interpreting and responding to written and verbal language
In other words, artificial intelligence is software that’s programmed to “think” intelligently. Typically, AI models are trained using enormous amounts of information that help them “learn.” Advanced AIs can then process new data and draw unique, intelligent conclusions based on the presented information.
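To make that train-then-infer pattern concrete, here is a minimal sketch in Python using scikit-learn. Everything in it is hypothetical and chosen only for illustration: the toy spam-filter features, the handful of labeled examples, and the new message. Real systems learn from vastly larger datasets and richer features.

```python
# Minimal sketch of the "train on labeled examples, then infer on new data" loop.
# The toy dataset and features are hypothetical; real models use far more data.
from sklearn.linear_model import LogisticRegression

# Each row: [number of links, count of the word "free", exclamation marks]
X_train = [
    [0, 0, 1],  # ordinary message
    [5, 3, 4],  # spammy message
    [1, 0, 0],  # ordinary message
    [8, 5, 6],  # spammy message
]
y_train = [0, 1, 0, 1]  # 0 = not spam, 1 = spam

model = LogisticRegression()
model.fit(X_train, y_train)        # the model "learns" from labeled history

new_message = [[6, 4, 5]]          # an unseen data point
print(model.predict(new_message))  # the model draws a conclusion, likely [1]
```

The shape of the workflow, fit on labeled history and then predict on unseen input, stays the same whether the task is spam filtering, image recognition, or language processing.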
Modern applications of artificial intelligence
Artificial intelligence is a rapidly growing field that has sprawled into dozens of industries. Companies and individuals use AI to perform repetitive tasks, analyze information and optimize other programs.
Here are just some of the applications of artificial intelligence contributing to our technological advancement.
The release of ChatGPT gave the world a taste of what future chatbots could look like. ChatGPT interacts with users in a conversational way to answer questions and even challenge certain ideas.
But ChatGPT is an advanced, experimental iteration of a previous technology: AI chatbots. Thousands of companies have adopted AI-based chatbots to provide 24/7 customer support and resolve quick issues. As AI continues to evolve, it’s likely that chatbots’ language processing will grow more sophisticated.
Perhaps surprisingly, AI has also risen to prominence in agriculture. Computer vision and machine learning have produced apps that can identify soil deficiencies and provide planting recommendations.
AI also informs “precision agriculture,” whereby farmers use AI to:
- Analyze weather patterns to predict forecasts and planting schedules
- Determine the best crops to grow
- Address pest attacks
- Measure soil conductivity and pH
Plus, the combination of AI and robotics helps farmers harvest crops faster and more efficiently than human laborers.
The ecommerce industry has capitalized on AI in a major way. Companies use AI to predict trends, analyze performance, assist with inventory management and more.
AI’s ability to track usage patterns and verify information has also made it a powerful tool in the fight against credit card fraud and fake online reviews.
Further, AI forms the basis of “recommendation engines” that show shoppers products based on their browsing history and preferences. And of course, virtual assistants and chatbots show up here, too.
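As an illustration of the recommendation-engine idea above, the sketch below scores catalog items by how many tags they share with a shopper's browsing history. The catalog, tags, and history are invented for the example, and the overlap rule is a stand-in: production engines typically rely on collaborative filtering or learned embeddings rather than hand-written tags.

```python
# Toy recommendation engine: rank catalog items by how many tags they share
# with the items a shopper has already browsed.
catalog = {
    "running shoes":  {"sports", "footwear"},
    "yoga mat":       {"sports", "fitness"},
    "trail backpack": {"outdoors", "sports"},
    "coffee maker":   {"kitchen"},
}

browsing_history = ["running shoes", "yoga mat"]  # hypothetical shopper

# Tags the shopper has already shown interest in
interests = set().union(*(catalog[item] for item in browsing_history))

def score(item):
    return len(catalog[item] & interests)

recommendations = sorted(
    (item for item in catalog if item not in browsing_history),
    key=score,
    reverse=True,
)
print(recommendations)  # "trail backpack" ranks above "coffee maker"
```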
While education is still dominated by human personnel, artificial intelligence helps boost educators’ potential. Often, AI is used to facilitate automation in repetitive and data-heavy tasks, like:
- Grading homework
- Scheduling meetings
- Managing multiple online courses at once
- Sending personalized communications to students
- Creating or digitizing lectures and study guides
Yet again, chatbot-style AIs pop up – this time, to quickly answer routine questions and allow educators to spend more time on complex tasks.
The field of finance has leaned heavily into the use of AI at every level.
Customers can take advantage of AI to get information about their banking and investment accounts.
Banks and credit card firms rely on AI to detect changes in transaction patterns to catch fraud in action.
Lenders use artificial intelligence to predict and assess borrowers’ risk levels and make lending decisions.
Venture capital firms adopt AI to generate customized insights and financial risk management decisions.
And of course, robo-advisors and financial management services have leaned into AI to automate trading.
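As a rough sketch of the transaction-pattern idea mentioned above, the snippet below flags a charge that sits far outside a cardholder's recent spending, measured in standard deviations. The history, amounts, and threshold are made up for illustration; real fraud systems combine many signals (merchant, location, timing) and trained models rather than a single statistic.

```python
# Toy fraud check: flag a transaction if it deviates sharply from the
# cardholder's typical spending, measured as a z-score.
from statistics import mean, stdev

recent_amounts = [12.50, 48.00, 9.99, 23.75, 31.20, 18.40]  # hypothetical history

def looks_suspicious(amount, history, threshold=3.0):
    mu, sigma = mean(history), stdev(history)
    z = abs(amount - mu) / sigma
    return z > threshold

print(looks_suspicious(25.00, recent_amounts))    # False: in line with habits
print(looks_suspicious(1400.00, recent_amounts))  # True: sharp deviation
```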
As artificial intelligence has grown more accurate, it’s made its debut in the medical field as well. On the less interesting side, AI helps administrators process data, schedule meetings, organize files and transcribe medical notes.
For more eye-popping illustrations of applications of artificial intelligence, consider how robots rely on AI to automate surgeries. Machine-led surgeries are more precise and less invasive, have a smaller margin for error and can run 24/7.
AI can assist in medical diagnoses by tracking health using wearable devices and indicating problems before patients are aware. Some programs have also adopted AI to help interpret body scans (like MRIs) to detect harmful growths with greater speed and accuracy.
Pharmaceutical companies even use AIs to analyze historical and modern data to discover new potential drugs.
Another common application of artificial intelligence can be found in companies’ marketing teams. AI’s ability to rapidly analyze data is useful for teams that need to generate and act on insights quickly. AI is used to:
- Generate campaign reports
- Improve customer engagement
- Personalize messages
- Deliver online retargeting campaigns
- Pivot advertising methodology mid-campaign based on new insights
Chatbots also fall into this category, as language processing plays a key role in analyzing and producing marketing campaigns.
Editing programs like Grammarly also make the cut, as AI can analyze grammar, vocabulary and sentence construction to keep brands on-message.
Social media makes another excellent use case for artificial intelligence. Firms like Meta and Twitter use AI to analyze massive amounts of data and generate actionable insights. Many companies also use AI to cultivate their social media brand.
In particular, AI can:
- Track user behavior to inform marketing and advertisement tactics
- Monitor comments to suggest new posts and accounts to follow
- Determine what’s currently trending
- Help generate targeted content based on demographic and behavioral data
- Combat cyberbullying and harmful or illegal content
In-home applications of artificial intelligence
Consumers also make frequent use of artificial intelligence.
Aside from testing out ChatGPT, you can find artificial intelligence in the automated driving programs used by Tesla, Audi, Volvo and others.
And whether you knew it or not, your email account likely uses AI to filter out spam and illicit content.
Your smart devices also use AI for facial recognition programs that log into devices and authenticate transactions.
Domestic robots, such as automated vacuums and lawn mowers, may also rely on AI to avoid obstacles and learn the best season or time of day to work.
And of course, Siri, Amazon Alexa and Google Assistant, as well as a variety of advanced home security systems, rely on artificial intelligence.
Don’t forget about AI in investing
We already mentioned that a major application of artificial intelligence is helping investors automate their accounts and make smarter decisions.
Here at Q.ai, we put that theory into practice.
Our Investment Kits rely on AI data analysis to select, balance and rebalance investments and manage risk between each Kit in your portfolio.
And for investors who activate Portfolio Protection, our AI works even harder to adjust for potential risks based on predictions about future market behavior.
With Q.ai’s artificial intelligence, you can invest smarter, not harder – and enjoy the financial rewards that come your way.
Download Q.ai today for access to AI-powered investment strategies.
|
<urn:uuid:897594e5-59b5-465f-afd2-69256d01b351>
|
{
"dump": "CC-MAIN-2024-10",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474412.46/warc/CC-MAIN-20240223121413-20240223151413-00182.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9212977886199951,
"pii_count": 0,
"score": 2.71875,
"token_count": 1624,
"url": "https://www.forbes.com/sites/qai/2023/01/06/applications-of-artificial-intelligence/?sh=6bf7bbe63be4"
}
|
It is difficult to imagine that just one hundred years ago horses were still the primary means of transportation. For some presidents, horses were not just a necessity but also a part of their image. Before photographs, the military presidents, especially, were often portrayed in paintings on horseback.
Numerous portraits of George Washington in his role as general during the American Revolution depict him on a horse. Andrew Jackson’s equestrian statue is the centerpiece of Lafayette Park, across the street from the White House. Other presidents known for their military exploits include William Henry Harrison, who rode on horseback to his inauguration.
Horses that belonged to the presidents often achieved fame in their own right. The public was interested in knowing what horses and what style of carriage the president had. Zachary Taylor’s horse from the Mexican War, Old Whitey, accompanied him to Washington and enjoyed a pampered retirement on the White House grounds.
Ulysses S. Grant, well known for his interest in horses, visited the White House stables daily. Cincinnati, Jeff Davis, and Egypt were three of the horses that had served with him during the Civil War and also traveled to the capital with the president. Newspapers and magazines fueled the public’s fascination with the president’s horses and carriages. An article from an 1887 issue of the Magazine of American History detailed the equestrian interests of the presidents and passed judgment on their abilities as horsemen.
In 1902 Theodore Roosevelt and his horse Bleistein were featured in a full-page spread in the Washington Times, with photographs of the president and the jumper fearlessly going over a course at Chevy Chase.
Some of the earlier presidents were interested in horses for sport. George Washington, an avid foxhunter and a founding member of the Alexandria Jockey Club, was admired for his horsemanship. Thomas Jefferson also frequently attended horse races at the National Race Course, which was established at the new capital even before the White House was completed. Andrew Jackson was passionately involved in horse racing and even kept some racehorses at the White House stables for a time.
Later presidents were more interested in pleasure riding. Theodore Roosevelt and his family frequently went out riding together. William Howard Taft was the first president to make the transition to automobiles, although he and many presidents after him kept horses for exercise.
It was common for residents of Washington, D.C., to see the president riding down the street. Even after the White House stables were demolished, Warren G. Harding and Calvin Coolidge took up horseback riding in an effort to escape the pressures of the Oval Office.
President Reagan riding horses with Queen Elizabeth II during a visit to Windsor Castle in 1982. | Credit: Courtesy White House.
While John Kennedy was in office, a temporary stable was erected on the South Lawn for family pony Macaroni as well as Tex, who was given to the Kennedy children by Lyndon Johnson. Ronald Reagan and First Lady Nancy Reagan enjoyed horseback riding at their ranch in California.
Happy Presidents’ Day.
Source: This article was originally published in White House History Number 19.
» Nelson and Blueskin: The First Horses of the United States, Horse Nation, Horsing Around the World
» Theodore Roosevelt’s Bleistein, Presidential Pet Museum
» Riding the Storm with Queen Elizabeth, Santa Barbara–Style, A Little-Known Tale of the Queen’s Historic Visit to the Western White House in 1983
Featured Image: ‘Washington Rallying the Americans at the Battle of Princeton’ by William Tylee Ranney, depicting Washington astride Blueskin. Princeton University Art Museum. Public domain.
|
<urn:uuid:f390feae-7351-46b4-a4f8-2443b876681a>
|
{
"dump": "CC-MAIN-2023-14",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943749.68/warc/CC-MAIN-20230322020215-20230322050215-00106.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9741169810295105,
"pii_count": 0,
"score": 3.8125,
"token_count": 788,
"url": "https://tuesdayshorse.wordpress.com/2023/02/20/presidential-horses/"
}
|
The idea of eating bugs has, once again, found its way back into the news. NPR recently released a piece that suggested the pushback against eating bugs is founded on a baseless conspiracy theory that elites want the population to consume bugs. However, the evidence seems to suggest that elites do, in fact, want the population to consume the likes of grasshoppers and fly larvae.
"Including insects in human food has been an emerging," NPR wrote, "but still marginal, idea among climate scientists and food security experts. In countries where insects have not been a part of the diet, it's an idea that has long been met with hesitancy and occasional ridicule.
"In recent years, however," they considered, "this aversion has fused with an amorphous and shapeshifting conspiracy theory in which a shadowy global elite conspires to control the world's population. For those who espouse the theory, eating bugs isn't just a matter of disgust, or questioning the impacts of climate change. It's framed as a matter of individual freedom and government control."
Though bug-eating is not a new development, world leaders have certainly encouraged the notion that those in the West ought to consider consuming bugs. President Joe Biden and Canadian Prime Minister Justin Trudeau suggested that there would be food shortages in early 2022, following Russia’s invasion of Ukraine.
Call it a coincidence, but just two months later, it was reported that primary school children in Wales were possibly going to be fed crickets and mealworms, as a scientist suggested that this would set the tone for an eco-friendly living environment. It was just a few days after this report that the Toronto Sun suggested that consuming crickets could effectively combat food shortages.
In late June 2022, Aspire Food Group announced that it had set out to produce 9,000 metric tonnes of crickets every year for “human and pet consumption,” which would amount to two billion crickets. Just one month after Aspire Food Group mentioned its cricket goals, food company Actually Foods listed “organic cricket flour” as one of its primary ingredients.
It is not difficult to see how Biden and Trudeau’s mention of food shortages appears to have ignited serious efforts to introduce bugs as a leading dietary element. There is nothing about this series of events that suggests conspiracy; these are just plain facts. It is also important to consider that bug-eating has not been a grassroots effort, kicked off by small-town companies attempting to do good in their community. Bug consumption has been embraced and pushed by the largest organizations in the world, including the World Economic Forum (WEF).
However, our so-called food shortages have only been one of the many justifications for introducing bugs into food. Another common justification of those in the bug-eating industry is how superior bugs are in protein compared to what we currently eat. The Vancouver Sun published a piece in 2016 that covered how Enterra intended to “replace unsustainable fish meal and soy as sources of protein and fat with bugs grown on waste food.” The piece also mentioned that bugs would be a hard sell to “hikers and snowboarders.”
CNN reported that the average American, in the not-too-distant future, would maybe “toast bread with cricket flour, drink a protein smoothie made from locust powder, and eat scrambled eggs (made extra-creamy with the fat from mopane caterpillars) with a side of mealworm bacon.”
The piece continued by noting that this hypothetical meal would provide “four times the iron, more than three times the protein and more key vitamins and minerals than the bread, smoothie, eggs and bacon you eat today - all while saving the planet.” What better way to solidify the fact that bug-eating is being pushed by elites than the World Economic Forum’s suggestion in 2021, in the midst of the COVID-19 pandemic, that everyone should embrace eating bugs as the global population grows, apparently leaving few alternatives to consuming them?
But for NPR, the anti-bug-eating sentiment is basically just racist. They link not wanting to eat bugs with colonialist sentiments.
"There was very much an idea that you are what you eat back then. And so the Europeans felt they needed European foods," Julie Lesnik, an associate professor of biological anthropology at Wayne State University in Detroit told NPR. "There is very much a worry that if you ate the Indigenous foods, you would become a savage."
"Conservative media influencers continue to tap into this sentiment today," NPR declared, before writing that "Lesnik sees a throughline between the early colonizers and the conservative outrage today."
"The easiest punching bag ... is to pick on something that looks uncivilized," Lesnik said.
But of course it could just be that people just don't want to eat bugs.
|
<urn:uuid:f49b5d3a-e49f-4744-89d8-7c65f1ba5338>
|
{
"dump": "CC-MAIN-2024-10",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474715.58/warc/CC-MAIN-20240228112121-20240228142121-00456.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9782639741897583,
"pii_count": 0,
"score": 2.6875,
"token_count": 1008,
"url": "https://humanevents.com/2023/04/03/npr-calls-the-backlash-over-not-wanting-to-eat-bugs-a-conspiracy"
}
|
The climate change real estate bubble risks billions
A climate housing bubble threatens to erode real estate prices in much of the U.S. in the coming years, posing particular challenges for low-income residents, a new study finds.
Why it matters: With more severe and frequent extreme weather events, the resilience of homeowners and communities is on the line.
- How lenders, insurance companies and others incorporate escalating flood risks into property prices is a key question facing at-risk communities.
Zoom in: The study, published Thursday in Nature Climate Change, finds that nationally, properties are currently overvalued by a combined $121 billion to $237 billion when compared to their actual flood risk.
- The current prices mask the true danger that these properties are exposed to, because of factors such as outdated FEMA flood maps, incentives in the National Flood Insurance Program and home buyers who lack climate change information.
- The paper is the result of a collaboration between experts at the Environmental Defense Fund, First Street Foundation, Resources for the Future, the Federal Reserve and two universities.
- Scientists relied on First Street’s updated modeling that simulates rainfall-induced, or pluvial flooding, as well as coastal flood events.
Between the lines: The authors found that right now, 14.6 million properties face at least a 1% annual probability of flooding, putting them in the so-called 100-year flood zone.
- However, the number of properties facing that level of risk is expected to increase by 11% in a mid-range emissions scenario, with average annual losses spiking by at least 26% by 2050.
- In dollar terms, the areas with the greatest property overvaluations are along the coasts, where there is overlap between rising seas, fewer flood disclosure laws, and a high number of residents who may not view climate change as a near-term threat.
- Much of the overvaluation comes from vulnerable properties located outside of FEMA's 100-year flood zone.
- Once the higher flood risks become evident, homeowners will lose equity in their property, which is a particular threat to lower-income homeowners.
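The “1% annual probability” and “100-year flood zone” labels above describe the same risk, and that risk compounds over the life of a mortgage. The short calculation below is an illustration added here, not a figure from the study; it assumes the annual probability stays constant and is independent from year to year, which real flood risk is not.

```python
# Chance of at least one flood over a 30-year mortgage, given a constant,
# independent 1% annual flood probability (the "100-year flood" threshold).
annual_probability = 0.01
years = 30

p_at_least_one = 1 - (1 - annual_probability) ** years
print(f"{p_at_least_one:.0%}")  # roughly 26%
```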
The big picture: The pattern of the total overvaluation of at-risk properties in the Lower 48 states reveals hot spots of risk.
- Specifically, coastal areas show high amounts of overvaluation.
- Spikes also show up in West Virginia and other parts of Appalachia.
- In Texas, it is clear that the biggest cities, including Houston and Dallas, have a significant amount of overvaluation.
- Florida tops the list, accounting for about $50.2 billion in overvaluation relative to the actual threat, the study found.
What they're saying: "There is a significant amount of 'unknown' flood risk across the country based solely on the differences in the publicly available federal flood maps and the reality of actual flood risk," Jeremy Porter, head of climate implications at First Street Foundation, said in a statement.
|
(CNN) An iceberg nearly the size of Greater London broke off the Brunt Ice Shelf in Antarctica on Sunday, according to the British Antarctic Survey.
Scientists first discovered significant cracks in the ice shelf a decade ago, but in the last two years there have been two major breaks. The BAS Halley Research Station is located on the Brunt Ice Shelf and glaciologists say the research station is safe.
The iceberg is around 600 square miles, or 1,550 square kilometers. The researchers say the event was expected and is not a result of climate change.
"This calving event has been expected and is part of the natural behavior of the Brunt Ice Shelf. It is not linked to climate change. Our science and operational teams continue to monitor the ice shelf in real-time to ensure it is safe, and to maintain the delivery of the science we undertake at Halley," Professor Dominic Hodgson a BAS glaciologist said in a news release.
The calving comes amid record-low sea ice extent in Antarctica, where it is summer.
"While the decline in Antarctic sea ice extent is always steep at this time of year, it has been unusually rapid this year," scientists at the National Snow and Ice Data Center reported in early January, "and at the end of December, Antarctic sea ice extent stood at the lowest in the 45-year satellite record."
Researchers at the data center say the low sea ice has been due in part to a large band of warmer-than-normal air temperatures, which climbed to 2 degrees Celsius above average over the Ross Sea in November and December. Strong winds have also hastened the sea ice decline, they reported.
Recent data shows the sea ice has not since recovered, suggesting the continent could end the summer with a new record on the books for the second year in a row.
Antarctica has experienced a roller-coaster of sea ice extent over the past couple of decades, swinging wildly from record highs to record lows. Unlike the Arctic, where scientists say climate change is accelerating its impacts, Antarctica's sea ice extent is highly variable.
"There's a link between what's going on in Antarctica and the general warming trend around the rest of the world, but it's different from what we see in mountain glaciers and what we see in the Arctic," Ted Scambos, a glaciologist at the University of Colorado Boulder and lead scientist at the National Snow and Ice Data Center, previously told CNN.
Satellite data that stretches back to 1978 shows that the region was still producing record-high sea ice extent as recently as 2014 and 2015. Then it suddenly plunged in 2016 and has stayed lower than average since.
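As a quick sanity check on the figures above, the sketch below converts the reported 600 square miles to square kilometers and compares the result with Greater London. The Greater London area of roughly 1,572 square kilometers is an outside reference figure, not something stated in the article.
```python
# Quick unit check on the iceberg's reported size. The Greater London area
# (~1,572 km^2) is an assumed outside figure used only for comparison.

SQ_MI_TO_KM2 = 2.589988

iceberg_sq_mi = 600
iceberg_km2 = iceberg_sq_mi * SQ_MI_TO_KM2
print(f"{iceberg_sq_mi} sq mi is about {iceberg_km2:,.0f} sq km")  # ~1,554, close to the quoted 1,550

GREATER_LONDON_KM2 = 1_572  # approximate outside figure, for comparison only
print(f"Iceberg vs. Greater London: {iceberg_km2 / GREATER_LONDON_KM2:.0%}")  # ~99%
```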
|
An approach that’s worked in colleges and workplaces can be brought into K-12 schools with great effect on girls’ persistence in STEM.
Women comprise only 28% of the STEM workforce in the United States. And a recent survey by MetLife found that women in STEM were nearly twice as likely as women in other industries to say they are considering leaving the workforce, citing burnout, being passed over for promotions, not being paid equally, and a lack of purposeful and meaningful work.
New data from the Department of Education’s College Scorecard show that STEM majors still vastly outpace liberal arts and humanities majors in terms of future earnings, with all but five of the top 100 programs in STEM fields. Several two-year associate degrees in STEM fields lead to significantly higher median earnings for graduates than over half of the four-year degrees included in the study. With jobs in STEM fields expected to grow twice as fast as those in non-STEM fields, there's a window of opportunity for young women to step into this rapidly growing, financially lucrative sector that is shaping our future.
Yet it’s been shown that girls and young women lose interest in math and science as they move through their school years, even though NAEP testing data consistently show no measurable difference in science aptitude between fourth grade boys and girls. I’ve written extensively about the importance of a sense of belonging to persistence in STEM. On this International Day of the Girl, I want to focus on relevance as a key to unlocking women’s brilliance and impact in STEM.
Studies focusing on college and the workforce have consistently shown that when women understand the impact of STEM on improving the world, they’re more likely to persist in STEM classes, majors, and fields. According to new research from the University of Wisconsin–Madison, simply asking college students to explain in writing how the scientific concept they’re studying applies either to their own life or to helping others led more people, especially those under-represented in STEM, to stay in the field. Judith Harackiewicz, the professor who studies motivation and whose lab found these results, thinks these short prompts tap into a powerful source of motivation: relevance.
A recent study by Girls Who Code in conjunction with Logitech found that an overwhelming majority of women (92%) said the ability to make a meaningful contribution to society is a primary factor in their career progression. Delphine Donné, General Manager, Creativity & Productivity at Logitech, told me it was “eye-opening” to see the “importance of inspiring women of the role they can have and understanding the impact of their work.”
These insights from post-secondary education offer a timely lesson for K-12. A study released by the Bill & Melinda Gates Foundation earlier this year found that when asked “what K-12 math education should ideally be like,” survey respondents were most likely to say “relevant to the real world” and “useful.” The problem, the authors noted, is that math education is perceived as unengaging, outdated, and disconnected from the real world, causing students to be uninterested in the subject. The solution they identified is to “make math education more relevant and engaging so that more students will succeed in math and, thus, later in life.” In focus groups, parents and teachers defined relevant as “teaching math through the prism of real world and societal examples” and “drawing connections between content and students’ lives outside the classroom.”
A clear take-away for teachers wanting to keep more girls engaged in STEM, Donné underscored, is to emphasize the impact of what you can do in STEM fields and that it isn’t “technical or boring.” Shannon Richardson, a high school science teacher in Brooklyn, NY, reinforced the same theme when she told me that what draws people most to STEM fields, in addition to curiosity and a desire to create, is a “desire to solve the world’s problems.” And Leuna, a South Asian woman from New York, shared that she persisted in STEM because “my whole family was very prone to diseases. I watched my grandfather die of lung cancer, my grandmother suffering from severe diabetes for years and my mother be diagnosed and healed from breast cancer. . . Therefore my dream of becoming a healthcare personnel was birthed, so that I could someday provide aid to those who desperately need it.”
International Day of the Girl challenges us to center girls and invest in their leadership. As Mattie Kahn’s new book, “Young and Restless: The Girls Who Sparked America’s Revolutions”, reminds us, girls have always been changing the world. Drawing the connections between STEM and the real world opens up more paths for them to do it.
|
While people often associate deaths at festivals with drug overdoses, there's another even more dangerous element that experts say often gets overlooked.
Sydney is expected to swelter above 40 degrees Celsius today as thousands are expected to flock to the sold-out Epik festival at Sydney Olympic Park.
ABC Sydney meteorologist Tom Saunders predicts temperatures will reach the mid-40s in Sydney's west.
"The summer solstice is also approaching, meaning the sun's angle is near its highest point. This leads to direct heating of your skin which can make it feel 8 degrees warmer, even over cool surfaces like grass and water," he said.
Two men in their 20s died from suspected overdoses after attending the Knockout Festival on September 30, which was held in the same arena that will host Epik.
The investigation into what occurred is still ongoing.
As the pill-testing debate rages on, several doctors say the conversation is swamped by political discourse and are concerned partygoers don't understand the risks in mixing certain drugs, how potency can turn lethal, and why extreme heat is deadly.
What is serotonin toxicity?
Serotonin is a chemical naturally found in the brain and body that carries messages between nerve cells which, among many other functions, regulates mood and wellbeing.
Serotonin deficiency can lead to depression, which is why many anti-depressants work on serotonin receptors.
When the body is flooded with too much of this compound, the result can be serious.
"Mild to moderate serotonin toxicity can cause nervousness, tremors, increased heart rate, excitable muscles, lack of coordination and sweating," Darren Roberts, medical director of NSW Poisons Information Centre, said.
Severe serotonin toxicity can result in symptoms including agitation, rigid muscles, seizures, high temperatures, cardiac arrhythmias and collapse.
"If left untreated, or if treatment is delayed, it can lead to multi-organ damage and death," Dr Roberts said.
A particularly potent MDMA capsule, or the combination of ingesting it while taking certain anti-depressants, can cause this to occur.
NSW Health in September warned of high-dose "Gucci" pills on the market, containing more than 400mg of MDMA — four times the average amount.
David Caldicott, clinical lead of CanTEST in Canberra, said the quality of drugs in Australia varies dramatically.
He cited results from his trial, where nearly half of all tests return unexpected substances, or a completely different drug from what the person believed they had bought.
He said there's also a common perception that the drugs won't be strong, leading people to double or triple drop at a time.
"If you double drop that [Gucci pill], you will be fortunate if you're able to get to a [health professional], before something terrible happens," he said.
"So double dropping a 300-milligram tablet of MDMA is a sure-fire, absolute certain way of getting your way to hospital, at best."
Heat causes death
Professor of clinical pharmacology at the University of Sydney, Nicholas Buckley, said it was more helpful to think about festival drug overdoses in terms of a stimulant toxicity problem.
"In terms of say, what people are dying from at festivals, you can get the same effects from cocaine, which has no serotonin effects, or from methamphetamine, which has very little serotonin effects," he said.
"So it's really a stimulant toxicity problem."
But this is not what ultimately causes death, he said.
"Basically, people die from overheating," he said.
"MDMA stops a body clearing, or shedding heat effectively.
"So if people heat up to around 43 degrees, so not a huge jump from their normal 37, then they die.
"Your body is designed not to work at that temperature. Blood starts clotting, all sorts of stuff goes wrong."
He said hot temperatures had an impact on how the effects of serotonin syndrome could play out.
"Festivals are dangerous places for overheating, especially on 36 degree days.
"If they're suffering from the serotonin syndrome in the middle of a hot field in the 40-degree day, their outcomes will be considerably different than if they're suffering from serotonin syndrome, in the middle of a paddock with temperatures of 15 degrees."
Mixing prescription and other drugs
Dr Caldicott said young people don't often discuss with their doctor mixing prescription medication with illegal substances.
Anti-psychotics, for example, inhibit sweating and increase the risk of overheating.
Monoamine oxidase inhibitors, or MAOIs, prevent the breakdown of serotonin in the brain.
"If you combine that with a serotonin-releaser, like MDMA, then you can have a nasty interaction," he said.
Dr Caldicott said pill-testing could facilitate conversations with a health expert these people would otherwise never have.
"I think that's a terribly important thing to acknowledge when you talk to young people about their drug use, is about why they're using it and what benefits they perceive it to give them.
"And I think the fact that [MDMA] is an SSRI [selective serotonin reuptake inhibitors] mechanism explains a lot about why it's so popular in a generation of young people who have never been as anxious or stressed out about their situation in the world."
How to stay safe
The easiest way to avoid any harm from drugs is, obviously, not taking them.
Staying cool at festivals is also key.
Regular dance breaks, and water, may sound simple, but when a person is taking stimulants, their brain may not be aware this is exactly what they need, Professor Buckley said.
"Most people, if they're not full of drugs, don't do a whole lot of exercise on a really hot day, or in a hot environment where they overheat."
Professor Buckley pointed out that if one pill is of very low potency, another could be extremely high strength, so it's never safe to assume its strength.
"It's generally just very poor quality control," he said.
"If you're selling MDMA, and you're a good maker of tablets, you want the drug to be very smoothly mixed.
"And for every dose to be 60 to 100 milligrams, then you have happy customers.
"But if you just have a backyard operation where you get bored of mixing, then you end up with 400 milligram tablets and tablets with nothing in them."
|
It’s not every day that effective fungus-killing compounds are discovered, so researchers in Germany knew their recent find needed a special name. Identifying and testing three natural compounds that proved lethal to fungi, they were so impressed they’ve named the chemicals after actor Keanu Reeves, a nod to how he eliminates villains in movies such as “John Wick” and “The Matrix.”
The potential treatment comes at a time when fungi are becoming more and more resistant to known antifungals, according to study coauthor Sebastian Götze, a researcher with Germany’s Leibniz Institute for Natural Product Research and Infection Biology. The newly named compounds — molecules commonly found in bacteria, called lipopeptides — proved effective not only against fungi that attack plants; researchers found they are also an effective treatment for human fungal infections.
The study was published recently in the Journal of the American Chemical Society.
“The lipopeptides kill so efficiently that we named them after Keanu Reeves because he, too, is extremely deadly in his roles,” Götze said in a statement.
“We have a crisis in anti-infectives. … Many human-pathogenic fungi are now resistant to antimycotics (antifungals) — partly because they are used in large quantities in agricultural fields.”
Called “keanumycins,” the newly found antimicrobial compounds are a natural byproduct of the bacteria Pseudomonas typically found in soil and water. Researchers came across the compounds when studying Pseudomonas for their effectiveness against predatory amoebas.
Scientists have known that “many of these bacterial species (Pseudomonas) are very toxic to amoebae, which feed on bacteria,” said lead study author Pierre Stallforth, head of the department of paleobiotechnology at the Leibniz Institute, in a statement. Stallforth and his fellow researchers wanted to explore the bacteria’s effectiveness against fungi, which have a cell structure similar to that of amoebas, according to the study.
What keanumycins can do
Researchers initially tested keanumycins A, B and C on a hydrangea that had been infected with Botrytis cinerea, a plant pest better known as the trigger for gray mold rot. The fungus commonly infects certain fruits and vegetables and causes collateral damage to harvests.
The compounds are biodegradable, according to the study, and could provide an environmentally friendly alternative to pesticides in efforts to protect the food supply.
Further testing also showed that the keanumycins are effective against Candida albicans, a natural yeast that’s typically found in the human microbiome but can overgrow and turn into a severe infection.
Fungal infections have been a hot topic recently due to HBO’s “The Last of Us,” and as the show suggests, the conditions can be difficult to treat but not impossible. (HBO, like CNN, is a unit of Warner Bros. Discovery.) Testing has shown that the keanumycins are not especially harmful or toxic to human cells, a problem often seen in the development of antifungal treatments, since fungal cells share similar properties with animal cells.
“This study documents another exciting means by which microbes have evolved to compete with and fight other organisms,” said Dr. Matt Nelsen, a researcher from Chicago’s Field Museum, in an email.
“Previous efforts have sought to exploit such natural products for human use to combat animal and plant pathogens,” Nelsen added. “However, over time, many pathogenic organisms — including fungi — have evolved resistance to the chemicals we use to battle them. Consequently, we need to find a new way to ‘outsmart’ or ‘one-up’ them.”
Keanumycins are “good lead structure candidates for the development of antifungal drugs,” according to the study, and could be a new treatment option in an area where they are “desperately needed.” Researchers said they will be carrying out further testing on the compounds.
“One means by which organisms engage in this battle (competition with other organisms) is through the synthesis of chemicals that may inhibit the growth of or kill other organisms,” Nelsen said. With further research, it will be exciting to understand how widespread keanumycins are, Nelsen added, and to see how many other species in the Pseudomonas genus can produce these compounds.
|
What time is it on the moon? Europe and others push for a standard lunar time zone
With more lunar missions than ever on the horizon, the European Space Agency wants to give the moon its own time zone.
This week, the agency said space organizations around the world are considering how best to keep time on the moon. The idea came up during a meeting in the Netherlands late last year, with participants agreeing on the urgent need to establish “a common lunar reference time,” said the space agency’s Pietro Giordano, a navigation system engineer.
“A joint international effort is now being launched towards achieving this,” Giordano said in a statement.
For now, a moon mission runs on the time of the country that is operating the spacecraft. European space officials said an internationally accepted lunar time zone would make it easier for everyone, especially as more countries and even private companies aim for the moon and NASA gets set to send astronauts there again.
NASA had to grapple with the time question while designing and building the International Space Station, which is fast approaching the 25th anniversary of the launch of its first piece.
While the space station doesn’t have its own time zone, it runs on Coordinated Universal Time, or UTC, which is meticulously based on atomic clocks. That helps to split the time difference between NASA and the Canadian Space Agency, and the other partnering space programs in Russia, Japan and Europe.
The international team looking into lunar time is debating whether a single organization should set and maintain time on the moon, according to the European Space Agency.
There are also technical issues to consider. Clocks run faster on the moon than on Earth, gaining about 56 microseconds each day, the space agency said. Further complicating matters, ticking occurs differently on the lunar surface from the way it does in lunar orbit.
Perhaps most important, lunar time will have to be practical for astronauts there, said the space agency’s Bernhard Hufenbach. NASA is shooting for its first flight to the moon with astronauts in more than a half-century in 2024, with a lunar landing as early as 2025.
“This will be quite a challenge” with each lunar day lasting as long as 29.5 Earth days, Hufenbach said in a statement. “But having established a working time system for the moon, we can go on to do the same for other planetary destinations.”
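To put the drift and day-length figures quoted above into perspective, here is a small back-of-the-envelope Python sketch. It only does arithmetic on the numbers in the article; it is not a description of how ESA or anyone else would actually implement lunar timekeeping.
```python
# Back-of-the-envelope arithmetic on the timing figures quoted in the article.

DRIFT_US_PER_EARTH_DAY = 56     # lunar clocks gain ~56 microseconds per Earth day
LUNAR_DAY_IN_EARTH_DAYS = 29.5  # one lunar day lasts about 29.5 Earth days

drift_per_year_ms = DRIFT_US_PER_EARTH_DAY * 365.25 / 1_000
print(f"Uncorrected drift after one Earth year: ~{drift_per_year_ms:.1f} ms")  # ~20.5 ms

drift_per_lunar_day_us = DRIFT_US_PER_EARTH_DAY * LUNAR_DAY_IN_EARTH_DAYS
print(f"Drift over a single lunar day: ~{drift_per_lunar_day_us:.0f} microseconds")  # ~1,652
```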
Mars Standard Time, anyone?
|
As if it needed one, California received a new reminder Tuesday that, despite its trappings of sybaritic wealth, it’s home to millions of families that struggle each day to put roofs over their heads and food in their bellies.
United Ways of California issued updated calculations of real world poverty, revealing that 34% of the state’s families lack enough income to meet basic living costs, primarily because those costs – particularly for housing – are extraordinarily high.
The estimate is based on 2021 data, but there’s no reason to believe the situation has improved significantly, if at all, since then.
The federal government’s official poverty number is based strictly on income, and California’s rate is not particularly high by that methodology. But the U.S. Census Bureau also has an alternative measure that includes the cost of living and it generally places California at or near the top in poverty among the states.
United Way’s methodology is similar to the Census Bureau’s alternative poverty measure, and also resembles the Public Policy Institute of California’s calculations of poverty and near-poverty. The 34% poverty level also comports with the 15 million Californians who receive health care through the state’s Medi-Cal program.
In a sense, therefore, the United Ways report is just telling us something we already know. However, its interactive feature provides important details about which communities and which demographic subgroups are most likely to experience severe economic stress in a state with the world’s fifth-largest – and perhaps fourth-largest – economy.
It reveals, for instance, that rural counties and the cores of urban areas are most likely to be poverty-stricken, and that 51% of Latino families have incomes below the “real cost measure” of what it takes to meet basic living costs, the highest of any ethnic group.
Additionally, 68% of Californians without high school diplomas are in poverty, as are 70% of single mothers and 57% of non-citizen immigrants.
High Housing Costs a Major Factor
The United Ways report once again implicitly asks what, if anything, California’s political apparatus can do about its high poverty, since especially high living costs – mostly housing – rather than especially low incomes are the major factor.
High housing costs stem from the state’s chronic housing shortage and while there’s been a recent uptick in housing construction, it still falls very short of the 180,000 units a year the state says are needed to close the gap.
The state has made some noteworthy efforts to jump-start housing construction, mostly by removing local barriers to development, but its major anti-poverty approach has been to increase family incomes through intermittent programs such as increasing welfare grants, raising minimum wages, expanding health care and child care services and providing earned income tax credits and direct cash payments.
However, those are generally short-term, marginal benefits that rely on the state’s erratic revenue flow rather than permanent income supports. There are some state-level efforts to create a guaranteed basic income program to lift low-income families out of poverty, but the potential costs are enormous.
The United Ways study says that 3.8 million families live below its “real cost measure” for a decent way of life and typically would need about $40,000 more in annual income to meet it. Providing that supplemental income would, therefore, cost about $150 billion, or roughly a 50% increase in the $300 billion state budget.
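For readers who want to check the arithmetic behind the roughly $150 billion figure, here is a minimal sketch using only the numbers quoted in the column.
```python
# Checking the cost arithmetic in the United Ways figures quoted above.

families_below_real_cost = 3_800_000   # families under the "real cost measure"
income_gap_per_family = 40_000         # typical annual shortfall, per the study
state_budget = 300_000_000_000         # ~$300 billion state budget

total_gap = families_below_real_cost * income_gap_per_family
print(f"Total annual income gap: ${total_gap / 1e9:.0f} billion")    # $152 billion
print(f"Share of the state budget: {total_gap / state_budget:.0%}")  # ~51%
```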
That’s not going to happen.
If California’s politicians want to get serious about poverty, rather than engage in superficial virtue-signaling, they will become more vigorous – even ruthless – about eliminating barriers to housing construction, improving educational shortcomings, and making the state more attractive to job-creating investment rather than chasing employers away.
About the Author
Dan Walters has been a journalist for nearly 60 years, spending all but a few of those years working for California newspapers. He began his professional career in 1960, at age 16, at the Humboldt Times. For more columns by Walters, go to calmatters.org/commentary.
|
The United States isn’t the only country that puts a limit on how much money its government is allowed to borrow. But it is the only nation regularly pushed to the brink of a political and economic crisis as a result.
President Joe Biden will host Republican House Speaker Kevin McCarthy and other congressional leaders at the White House on Tuesday for a critical meeting on raising the US debt ceiling. Treasury Secretary Janet Yellen has warned that if a deal isn’t reached soon, the United States could run out of cash to pay its bills as early as June 1. That could mean the United States defaults on its debt, which economists have said would unleash a financial meltdown and a recession.
Stakes that high raise a question: Do other countries have this problem? The answer is no. Few countries set formal limits on public borrowing needed to meet legal obligations, precisely because they can become tools of political brinkmanship, according to Mrugank Bhusari, an assistant director at the Atlantic Council, a think tank.
The only other advanced economy that limits borrowing in absolute terms is Denmark. But in the Scandinavian country, the ceiling is intentionally set high enough to avoid political dramas like the one playing out in Washington. “It’s really rare for debt limits to pose genuine threats to economic stability in a country,” Bhusari said.
An American problem
Congress first imposed a limit of $45 billion on overall US government borrowing in 1939. That was about 10% above total federal debt at the time. Since then, the country’s economy has expanded substantially — as has its borrowing. US federal debt increased to $30.9 trillion in 2022 (from $870 billion in 1939, in current dollars). The ratio of general government debt to gross domestic product, or GDP, stood at about 128% in 2021, according to the International Monetary Fund.
That’s meant Congress has had to frequently step in. Since 1960, legislators have acted 78 times — on average more than once a year — to raise or modify the US debt limit so the government can continue to pay its bills.
It’s a problem unique to the United States. Countries that have opted for debt limits, with the goal of encouraging fiscal restraint, tend to structure them as a percentage of GDP instead of choosing a nominal value, according to Bhusari. They also tend to be non-binding. Malaysia, Namibia and Pakistan are all in this camp, he said. The European Union asks member states to limit debt to 60% of GDP, though many consistently break that rule and it was suspended during the pandemic. It may yet be revised to spur spending on the green and digital transitions.
The closest analogue to the US debt ceiling is the set-up in Denmark. Yet lawmakers in Copenhagen don’t find themselves locked in perennial political confrontations. When Denmark implemented a debt ceiling in 1993 — a constitutional necessity following a structural overhaul of its government — it was determined that the upper limit for borrowing should be 950 billion Danish kroner ($140 billion), significantly above government debt levels at the time.
Las Olsen, chief economist at Denmark’s Danske Bank, said this was a strategic decision. Lawmakers did not want the debt ceiling to become a proxy for difficult conversations about the government’s fiscal plans. “The logic is Parliament sets tax and spending, and once it does that, there’s no way around allowing the government to borrow the difference,” Olsen said.
Political leaders were also cognizant that as a small country, Denmark couldn’t afford to spook investors with regular political stand-offs. The debt limit in Denmark has been increased only once. It was roughly doubled in 2010 to deal with the economic aftermath of the 2008 financial crisis. But the increase was ultimately unnecessary; borrowing stayed well below the cap. “It’s not a political issue at all,” Olsen said. “It’s seen as a complete formality.”
‘Entirely self-imposed’
There are limitations to comparing the United States and Denmark. The latter borrows far less compared with the size of its economy, with a debt-to-GDP ratio of 37% in 2021. It often runs budget surpluses. That makes Denmark less likely to run into problems with the ceiling — regardless of the level at which it’s set. “The country is so much more fiscally conservative in many ways than the United States,” said Jacob Funk Kirkegaard, a nonresident senior fellow at the Peterson Institute for International Economics, based in Washington, D.C. “It has far, far, far lower levels of debt.”
There are also major political differences. While the United States has a clearer separation of powers, often leading to gridlock between the executive and legislative branches, Denmark’s parliament elects the head of government, often on the basis of a coalition of parties. That makes it less likely the debt ceiling could be turned into a political football. There are also distinct processes for creating annual budgets.
Even so, it’s clear that in Denmark, the debt limit has enabled the smooth functioning of government, Kirkegaard said. In the United States, it’s had the opposite effect. “We have to spend an awful lot of time on this at regular intervals,” Kirkegaard said. “All we’re doing is avoiding a disaster scenario we’re creating for ourselves.”
Bhusari of the Atlantic Council also described the US debt ceiling crisis as “entirely self-imposed.” When it comes to tackling issues like debt sustainability, he said, investors “often think of [a ceiling] as more of a problem than a solution,” even if they prize fiscal caution. Most likely, the limit will just keep being raised. He noted the example of Australia, which introduced a debt ceiling in 2008 to bolster its fiscal credentials, raised it multiple times, and ultimately ditched it in 2013 when it became a constant source of political friction.
The United States may be singular in its approach to debt management. But should the country default, it will become the world’s problem — at a time when high interest rates and inflation are already causing pain. “No one fully knows what will happen and it poses a lot of uncertainty,” Bhusari said. Financial markets are built on an understanding that owning US debt, or Treasuries, is safe. If the United States is unable to pay its creditors for an extended period, White House economists have predicted the value of the stock market could crater, and the country could suffer a deep recession, with the loss of more than 8 million jobs.
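A few of the figures in the piece can be sanity-checked with simple arithmetic. In the sketch below, the 2023 endpoint for the "since 1960" average is my assumption (the article dates from May 2023); every other number comes from the text above.

```python
# Back-of-envelope checks on figures quoted in the article above.

actions = 78                  # debt-limit changes since 1960, per the article
years = 2023 - 1960           # assumed span, article published in 2023
print(f"Average debt-limit actions per year since 1960: {actions / years:.2f}")  # ~1.24

limit_1939 = 45e9             # first statutory limit, USD
implied_debt_1939 = limit_1939 / 1.10  # the limit was "about 10% above" total federal debt
print(f"Implied total federal debt in 1939: ${implied_debt_1939 / 1e9:.0f} billion (nominal)")  # ~$41 billion

ceiling_dkk = 950e9           # Denmark's 1993 ceiling in kroner
ceiling_usd = 140e9           # dollar figure quoted alongside it
print(f"Implied exchange rate behind Denmark's figures: {ceiling_dkk / ceiling_usd:.1f} DKK per USD")  # ~6.8
```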
The Princes in the Tower were not murdered by Richard III but spirited to Europe and later tried to retake the crown, according to new research.
Philippa Langley, the amateur historian credited with finding Richard’s remains under a Leicester car park, has presented a series of “extraordinary discoveries” to back up her theory.
She believes that a duo dismissed by history as pretenders to the throne – Lambert Simnel and Perkin Warbeck, who each launched failed bids to depose Henry VII in the late 15th century – were the real princes.
The two boys, sons of Edward IV and nephews to Richard, disappeared from the record in 1483 after being taken to the Tower of London.
A common theory, dramatised by Shakespeare, is that they were murdered on the orders of their uncle.
Skeletons discovered under a staircase at the Tower in the 17th century were identified as the princes and moved to Westminster Abbey but have never been DNA-tested.
However, Ms Langley claimed that documents unearthed in European archives point to their escape and subsequent attempts to invade England.
One is an account that is purportedly a witness statement from Richard, the youngest prince, who was nine at the time of his disappearance.
Written a decade later, the account describes the author being smuggled out of the Tower by Henry and Thomas Percy.
“They shaved my hair and put a poor and drab shirt on me and we went to St Katharine’s [dock],” the account reads, going on to say that they took a boat and came “ashore in the dunes” at Boulogne-sur-Mer, before travelling on to Portugal.
The document was “absolutely mind-blowing”, said Ms Langley, stating her belief that the level of detail made it unlikely to be a fake.
Independent experts have authenticated it as being written during that period, although there is no other evidence that Richard was the author.
A second document from 1483, which appears to bear a royal seal and the signature of “Richard, Duke of York”, pledges that Richard will pay 30,000 florins to Duke Albert of Saxony within three months of gaining the English throne.
In 1495, a man claiming to be Richard landed in England with a small army. After fleeing to Scotland, he launched a second invasion in 1497, which resulted in his capture.
He signed a confession declaring that he was really a boatman’s son named Perkin Warbeck but, according to Ms Langley, it is likely that he really was Richard.
Another document claims that King Maximilian, leader of the Holy Roman Empire, had identified a man as the prince in 1493 by three distinguishing birthmarks.
Ms Langley also presented two documents that she claimed as evidence that Edward, the elder prince who disappeared aged 12, also survived and attempted to reclaim his birthright.
A 1487 French receipt for weapons for a Yorkist invasion of England states that they were to arm troops acting for Margaret of Burgundy, the princes’ aunt.
The receipt states that the invasion would be led by her nephew, son of Edward IV, who had been “expelled from his dominion”.
The invasion was led by the young Lambert Simnel, who was captured at the Battle of Stoke Field and later pardoned.
History has it that Simnel claimed to be – or really was – Edward, Earl of Warwick, but Ms Langley suggests that Simnel was really Edward, the elder prince.
The evidence – collected by some of the 300 volunteers recruited for Ms Langley’s Missing Princes Project – is laid out in a Channel 4 documentary, The Princes in the Tower: The New Evidence, to be broadcast this Saturday.
Ms Langley, who led the successful search to locate the grave of Richard III in 2012 and is a passionate Ricardian, said she expected some historians to disagree with her theories.
However, she said that history should challenge the established narrative.
“The young historians who get in touch with me through my website are saying: ‘Look, we’ve had enough of just repeating something because a famous writer has said it, we want to start our own questioning,’” she said.
Seeing the documents, she said, had left her “in seventh heaven”.
Emily Shields, commissioning editor at Channel 4, said: “Philippa represents, with her willingness not to accept the established story, everything that Channel 4 is about.
“Philippa has made an extraordinarily compelling case. I think viewers will watch the film and make up their own minds.”
It is unclear how the latest theories fit with a previous claim from the Missing Princes Project, made in 2021, that the elder prince lived out his days in a Devon village under the name John Evans.
Vaccination rates for measles and other diseases dropped again last school year, according to a study published Thursday by the US Centers for Disease Control and Prevention. Coverage against measles dropped to the lowest it’s been in more than a decade.
School requirements do not include the Covid-19 vaccine, which is explicitly banned from being included in school mandates in at least 20 states. However, that vaccine will become part of the CDC’s recommended immunization schedule for both children and adults this year.
About 93% of kindergarteners enrolled in the 2021-22 school year got the required vaccines, including measles, mumps and rubella (MMR); diphtheria, tetanus and acellular pertussis (DTaP); and polio. Coverage fell for the second year in a row amid the Covid-19 pandemic, from about 94% the previous year and below the federal target of 95%.
“As schools return to in-person learning, high vaccination coverage is critical to continue protecting children and communities from vaccine-preventable diseases,” the CDC researchers wrote in the study. Clusters of unvaccinated children can lead to outbreaks, they said, and a vaccination rate of about 93.5% leaves about 250,000 kindergartners who may not be protected against measles.
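A quick consistency check on that 250,000 figure is sketched below. The size of the national kindergarten cohort is my assumption; the article only gives the coverage rate and the count of potentially unprotected children.

```python
# Rough consistency check on the CDC figure quoted above.
mmr_coverage = 0.935      # approximate share of kindergartners with required measles doses
unprotected = 250_000     # kindergartners the CDC says may lack measles protection

implied_cohort = unprotected / (1 - mmr_coverage)
print(f"Implied kindergarten cohort size: {implied_cohort:,.0f}")  # ~3.85 million

assumed_cohort = 3_800_000  # assumed national kindergarten enrollment (not from the article)
print(f"Unprotected kindergartners at 93.5% coverage of {assumed_cohort:,}: "
      f"{assumed_cohort * (1 - mmr_coverage):,.0f}")                # ~247,000
```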
High vaccination rates among children help protect the broader community too, says Dr. Sean O’Leary, chair of the American Academy of Pediatrics’ committee on infectious diseases.
“It’s important to emphasize again that this affects everyone in these communities. High immunization rates help everyone stay healthier,” he said Thursday.
Ohio is one of nine states where fewer than 90% of kindergartners were vaccinated against measles last school year. An outbreak in the Columbus area that began in November has resulted in 83 cases among children, the vast majority of whom were unvaccinated.
“Outbreaks like this are entirely preventable. These outbreaks harm children and cause significant disruption in their opportunities to learn, grow and thrive,” O’Leary said, also citing the recent case of polio reported in New York.
“Vaccination throughout childhood is essential because it equips children’s immune systems to recognize and resist disease so they develop and live healthy lives into adulthood. A healthy childhood contributes to a healthy adulthood.”
Although in-person learning has returned to schools across the country, Covid-19 continues to disrupt vaccination assessment and coverage, according to the report. About half of states cited reduced access to vaccination appointments, extended timelines for enforcement and delays in data collection.
A recent survey from the Kaiser Family Foundation separately found that more than a third of parents oppose vaccine requirements in schools, even if the option for individual choice could create health risks.
Experts say they’re paying close attention to vaccine hesitancy that increased during the pandemic, but it’s likely that the decline in vaccination rates among kindergarteners is more complicated than any single factor.
“I think part of it is that well child visits maybe were missed and people are still trying to catch up on those well child visits. Also, when we look at data from the schools, we know that the schools had a lot of things to focus on, and in some cases, maybe they were not able to gather all that documentation on the vaccinations,” Dr. Georgina Peacock, director of the CDC’s Immunization Services Division, said Thursday.
“That may have not been the emphasis while they were focused on testing and doing all those other things related to the pandemic to make sure that children were getting the education that they needed.”
Less than 3% of kindergarteners had an official exemption from the required vaccinations, most of which were for non-medical reasons, according to the CDC. This increased slightly from the year before but remained low.
Another 4% of kindergartners were not fully vaccinated or formally exempt but were allowed to attend school during a grace period for provisional enrollment. This group of students particularly highlights the importance of “rigorously enforced school vaccination requirements, school-based vaccination clinics, reminder and recall systems, and follow-up with undervaccinated students by school nurses,” according to the CDC researchers; most states could reach 95% coverage against measles if all of these kindergarteners received their required shots.
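A minimal sketch of the arithmetic behind that last point, treating the roughly 93% baseline and 4% provisional-enrollment figures from this article as national averages (state-level numbers vary):

```python
fully_vaccinated = 0.93   # share of kindergartners meeting requirements, per the article
provisional = 0.04        # share attending under a provisional-enrollment grace period

print(f"Potential coverage if provisional students completed their shots: "
      f"{fully_vaccinated + provisional:.0%}")  # ~97%, above the 95% federal target
```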
A recent CNN analysis of data from the 2020-21 school year found that students in states with stricter school vaccine requirements are more likely to have their shots.
Aside from school requirements, the CDC recommends routine vaccination against 14 diseases for children before they turn 2. Another study published Thursday by the CDC found that vaccination rates remained “high and stable for most vaccines” for children born in 2018 and 2019, who turned 2 during the Covid-19 pandemic. Less than 1% of these children were completely unvaccinated by the time they turned 2, which is better than the federal goal outlined in the Healthy People 2030 objectives.
The Covid-19 pandemic did not appear to affect vaccination rates overall. For most vaccines, coverage rates among children born in 2018 and 2019 were slightly higher than they were for children born two years earlier.
However, key disparities were noted. Vaccination rates for children living below the federal poverty level and in rural areas did decline, with coverage with the combined seven-vaccine series dropping 4 to 5 percentage points.
According to the CDC researchers, key methods to improve equity in vaccination coverage include addressing vaccine hesitancy among parents, strong and persistent recommendations from health care providers and reducing logistical and financial barriers to access vaccines.
America still doesn't put enough women on pedestals
It's easier in the United States to find a sculpture of a mermaid than of any American-born woman who actually is part of this world.
- That's according to Monument Lab, a nonprofit that in 2021 counted who and what Americans honor in their public art — 22 sculptures of mermaids, to 21 honoring abolitionist Harriet Tubman.
Driving the news: For Women's History Month, we looked into whether increased awareness of the lack of diversity in American monuments and sculptures has created actual change.
- The answer: Not really. Despite some new statues of women, bridging the gap between the number of memorialized white men and any other American demographic would be expensive and take time.
Why it matters: Monuments have historically represented our values by putting concepts and people on literal pedestals, then enshrining them with protective status and decades-long upkeep.
- But public art in the U.S. has long presented a lopsided view that can leave the impression that American history is all horses and white male military veterans.
What they're saying: "When we don't see people on pedestals that look like us or tell our stories, that tells us that we don't belong within veneration, we don't belong within honor, and often that we don't belong within that space," says Sue Mobley, director of research at Monument Lab, who co-authored the 2021 audit.
By the numbers: No comprehensive, up-to-date ledger of American public art installations exists, but researchers agree that women and people of color are deeply underrepresented.
- Of the top 50 historical figures represented in Monument Lab data, only three are women, and only five are Black or Indigenous. Half are people who enslaved others.
- Only one woman is featured in more sculptures than Tubman, according to the Monument Lab audit: Joan of Arc, the patron saint of France who became popular here when her image became a symbol of the Allies in World War I. She died more than three centuries before the founding of the United States.
- Only 6% of American monuments feature real women as their subjects, according to research by Sierra Rooney, assistant professor of art history at the University of Wisconsin-La Crosse.
- Only 32% of monuments to women are figurative. "The rest are super abstract, so … it looks like a fountain or a bird bath," Mobley says.
Zoom out: When women are represented in monuments and statues, they are often allegorical or fictional characters, such as the depiction of Little Nell alongside Charles Dickens in Philadelphia or the statue of Dorothy from "The Wizard of Oz" in Chicago.
Flashback: New York's Central Park hosted only statues of men and fictional female characters until 2020 — 100 years after white women gained the right to vote — when a bronze monument was installed depicting suffragist pioneers Susan B. Anthony, Sojourner Truth and Elizabeth Cady Stanton.
- The statue spurred national media coverage and calls for an even playing field in municipal art.
- "The fact that nobody, for a long time, even noticed that women were missing in Central Park — what does that say about the invisibility of women?" Pam Elam, president of Monumental Women, which campaigned for the sculpture, told the New York Times.
The big picture: This discrepancy plays out across the country. On the National Mall, only two real women are memorialized alongside more than 40 other figures, including historical men and allegorical interpretations of freedom and justice.
- New Orleans is home to one of the first American sculptures of a woman, unveiled in 1884 and honoring philanthropist Margaret Haughery.
- Yet it took 54 years for the same city to commemorate Ruby Bridges' Civil Rights-era attendance at William Frantz Elementary School. Compare that to the 17 years it took to install a statue of Ignatius J. Reilly, the haphazard fictional character of John Kennedy Toole's southern classic "A Confederacy of Dunces."
Between the lines: Nothing is permanent. In 2020, nearly 100 Confederate monuments were removed as the nation grappled with police violence against Black people, and public officials are starting to back projects to improve representation.
- San Francisco passed an ordinance in 2018 requiring at least 30% of new public art projects to depict real women.
- In 2022, the National Statuary Hall in the U.S. Capitol, which hosts two statues from each state, received a donation from Florida of a statue representing Mary McLeod Bethune, the first Black person to be represented in the hall, and Kansas sent a sculpture of Amelia Earhart, the collection's 11th woman.
- A $10 million campaign in New York will memorialize seven women.
- For International Women's Day this year, Atlanta unveiled a statue of civil rights leader Xernona Clayton.
The bottom line: "Monument-building is a slow process, and it will be decades — if ever — before gender parity exists in public art," Rooney tells Axios.
- Creative, multifigure monuments could also be part of the solution. "All the big things require more than one person," Mobley says. "Just putting up a bronze woman across from the bronze man on a horse doesn't do as much as we need it to do."
Go deeper with Monument Lab's interactive map of locations and depicted gender and ethnicities.
- Make sure your local monuments are represented on OpenStreetMap.org, which Mobley calls the fastest way to ensure researchers know they exist.
Editor's note: This story has been corrected to show Sue Mobley’s title at Monument Lab is director of research, not senior research scholar.
California’s endangered salmon population plummets amid new threat
They’ve been pushed to the brink of extinction by dams, drought, extreme heat and even the flare of wildfires, but now California’s endangered winter-run Chinook salmon appear to be facing an entirely new threat — their own ravenous hunger for anchovies.
After the worst spawning season ever in 2022, scientists now suspect the species’ precipitous decline is being driven by its ocean diet.
Researchers hypothesize that the salmon are feasting too heavily on anchovies, a fish that is now swarming the California coast in record numbers. Unfortunately for the salmon, anchovies carry an enzyme called thiaminase, which breaks down thiamine — a vitamin that is essential to cell function in all living things.
“These are fish that returned to the river early [last] year and then spawned in the spring and early summer. They had really low thiamine,” said Nate Mantua, a fisheries researcher with the National Marine Fisheries Service in Santa Cruz. Concentrations were “worse than” the previous year.
In humans, a critical deficiency of thiamine, or vitamin B1, can lead to heart failure and nerve damage. In female salmon that are returning to rivers and streams to spawn, thiamine deficiency can be passed on to their many hatchlings, which suffer problems swimming and experience high rates of death, researchers say.
Now, with government agencies and Native American tribes fearing the collapse of the winter-run Chinook, scientists are embarking on a campaign to determine why the anchovy population has exploded off the California coast, and why winter-run Chinook are seemingly ignoring all other prey.
“The very unusual thing about their diet is that it’s been so focused on anchovies and so lacking in other things that historically they have been found eating,” Mantua said. “It is something we don’t have great information on.”
Researchers at the National Oceanic and Atmospheric Administration, California’s Department of Fish and Wildlife and UC Davis are employing new technologies, such as environmental DNA sampling and isotopic analyses of fish eye lenses, along with older methods — such as plankton sampling and fish ear bone studies — to better understand how and why the salmon ocean diet has changed.
Scientists first discovered salmon were suffering from a vitamin deficiency in 2020, after hatchery workers noticed salmon fry behaving strangely — swimming repeatedly in tight, corkscrew-like patterns before spiraling to their deaths at the bottom of the tanks. They learned a similar situation had occurred in the Great Lakes in the 1960s, when lake trout had exhibited similar behaviors after gorging on alewives, another fish chock-full of thiaminase.
State, federal and UC Davis researchers quickly treated the swirling salmon fry with thiamine — infusing the water in their tanks with the vitamin; the salmon soon recovered.
But over the last three years, thiamine concentrations in salmon eggs have continued to drop.
“We thought initially it was just a one-year thing, maybe the way we thought of COVID,” said Rachel Johnson, a fisheries biologist with the National Oceanic and Atmospheric Administration and UC Davis. “I was cautiously optimistic that the ocean was going to rearrange itself back to normal. And we just haven’t seen that.”
Chinook salmon start their lives in the rivers of Central and Northern California, before migrating downstream to the Pacific Ocean. There, they typically spend the next two to three years feeding on a variety of fish and invertebrates — such as squid — off the coast.
But ever since anchovy numbers began to balloon in 2016, they have triggered feeding frenzies among salmon and other predators. Humpback and gray whales have been seen in record numbers lunge feeding on the forage fish, and last summer San Francisco residents complained of fish falling from the sky — probably the result of birds dropping fish from their over-stuffed talons or beaks.
Mantua and Johnson are investigating whether there is a seasonal component to the winter-run Chinook’s taste for anchovies.
“Some of the diet data we have from the ‘50s and ‘70s and ‘80s show that salmon that were caught off of Central California would typically have herring, crab and krill in the winter, early-spring diets. Then juvenile rockfish would become a bigger component in the spring and early summer. And it wasn’t really until August and September that anchovies and sardines were the dominant prey item,” Mantua said.
Johnson’s lab is attempting to figure that out by examining the lenses of fish eyes.
Like an onion, the lenses accumulate layer upon layer over a salmon’s lifetime. Examining the chemical isotopes in each layer, Johnson and her colleagues can get an idea of what kinds of foods the salmon were eating and when.
“It’s kind of like a diet journal ... that allows us to check in over the lifetime of a salmon,” she said.
Meanwhile, she and her colleagues at the hatcheries continue to treat fry with thiamine and inject the vitamin into egg-bearing females.
Winter-run Chinook are one of four distinct seasonal runs of salmon that populate the Sacramento River and its tributaries, but are the only one that has been declared endangered by the state and federal government. The name “winter-run” refers to the season in which sea-faring salmon return to San Francisco Bay to make their long journey to spawn in ancestral headwaters.
Those cool headwaters, however, have long since been blocked by dams, and the fish have been forced to lay their eggs in Central Valley waters in the heat of summer, causing many eggs to die. Today, the winter-run Chinook survive only through the intervention of government hatcheries and periodic releases of cold water from the same dams that block their passage upstream.
In the last several years, drought, extreme heat and debris flows from wildfire burn scars have taken a huge toll on their numbers, along with thiamine deficiency.
According to federal data, the total number of juvenile winter-run Chinook that were counted swimming downstream past the Red Bluff Diversion Dam in 2022 was 181,000 — the lowest on record. In 2021, the number was 558,000, and in 2020, it was just over 2 million.
Egg-to-fry survival was also low, said Michael Milstein, a spokesman for the federal agency. Despite the fact that river temperatures remained cooler in 2022 and most eggs survived, the young salmon struggled after they hatched. A preliminary survival percentage released recently was 1.94% — once again, the lowest ever recorded. In 2021, the egg-to-fry survival percentage was 2.56%. In 2020, it was 11.46%.
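For easier year-to-year comparison, here is a minimal sketch that tabulates the figures quoted above and computes the percentage declines; the declines are my arithmetic, not numbers reported in the article.

```python
# Juvenile winter-run Chinook counted passing Red Bluff Diversion Dam and the
# preliminary egg-to-fry survival rates, as reported in the article above.
juveniles = {2020: 2_000_000, 2021: 558_000, 2022: 181_000}
survival = {2020: 0.1146, 2021: 0.0256, 2022: 0.0194}

for year in (2021, 2022):
    drop = 1 - juveniles[year] / juveniles[year - 1]
    print(f"{year}: {juveniles[year]:>9,} juveniles ({drop:.0%} fewer than {year - 1}), "
          f"egg-to-fry survival {survival[year]:.2%}")
# 2021: 72% fewer juveniles than 2020; 2022: 68% fewer than 2021
```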
To give the endangered fish a better shot at survival, state and federal officials have been studying ways of restoring salmon to their traditional cold-water habitats upriver from dams, such as the McCloud River, upstream of Shasta Lake.
From last September through early December, biologists and members of the Winnemem Wintu Tribe worked together on an experimental project on the McCloud River, releasing thousands of juvenile winter-run salmon and later recapturing some of them.
By mid-December, more than 1,600 of the fish had been recaptured, loaded into aerated coolers and trucked downstream of the dam, where they were released to continue their journey.
“They looked great,” said Matt Johnson, a senior environmental scientist with the California Department of Fish and Wildlife. The fish, he said, looked larger than hatchery raised salmon. “It was strong evidence that the McCloud provides great habitat for juvenile Chinook — not a surprise to us, given the quality and quantity of the habitat in that river system.”
He described the project as a success.
Jason Roberts, an environmental program manager for the Department of Fish and Wildlife, said the Winnemem Wintu Tribe’s participation in the project was vital. He said the department’s officials want to repeat the project next year and are talking with tribal leaders and federal officials about co-managing the effort.
“In the face of climate change, we have to get winter-run off the valley floor back into their historical habitat if they’re going to have a chance of surviving,” Roberts said.
For the Winnemem Wintu, salmon are central to cultural and spiritual traditions, and leaders have long sought to return salmon to the river where their ancestors lived.
Caleen Sisk, the tribe’s chief and spiritual leader, said last year’s effort was a good step.
“I think it has the potential to achieve the restoration of salmon in rivers above the dams — not just McCloud, but this is a prime example of what could happen, and what would be good for fish,” Sisk said.
For years, the Winnemem Wintu Tribe has advocated an approach to reintroducing salmon that would involve developing a “swimway” so that fish could travel upstream and downstream on their own around Shasta Dam. The tribe also wants to use salmon that once lived in the Sacramento River but were transplanted to New Zealand more than a century ago. The salmon have been thriving in mountain rivers in New Zealand, and tribal leaders say those eggs should be brought back.
“We believe that whatever happens to salmon happens to us,” Sisk said. “Maybe this is a step that we get to return to the river too.”
|
A 4.1 magnitude earthquake startled dogs near Bay Point last June. Then a pair of 4.3s toppled Santa Rosa picture frames in September. The largest quake of them all struck in October when a 5.1 magnitude shake east of San Jose jolted residents across the region.
These were the four most powerful earthquakes that shook the Bay Area in the past year—from April 2022 through March 2023—but they were far from the only ones. There were 941 earthquakes in the region during that period, according to data from the U.S. Geological Survey (USGS). The vast majority—926 of them—registered less than magnitude 3.0, meaning most of them likely went unnoticed.
Tuesday is the 117th anniversary of the Great 1906 San Francisco Earthquake. The magnitude 7.8 quake and subsequent fire decimated the city and went on to become one of the most scientifically significant shakes of all time. To mark the occasion, The Standard combed through USGS data to understand this past year in earthquakes and asked local seismologists the question on everyone’s mind: When might the “Big One” strike again?
Hundreds of earthquakes rumbling through the Bay Area in a year is totally normal, explained USGS Observational Earthquake Seismologist Annemarie Baltay.
“It’s basically the Earth telling us that it’s just doing its thing,” Baltay said.
The land and ocean on the Earth’s surface sit upon massive slabs of solid rock called tectonic plates. These plates lie on molten rock and each one moves, very slowly, at a different speed.
Two of the Earth’s major tectonic plates—the North American and Pacific—meet underneath California. These plates move past each other about 2 inches per year, and at the intersections, known as faults, the motion can produce sudden slips, which we feel as earthquakes.
The San Andreas Fault is the site of much of the slippage between the two plates under California, earning it the titular role in a major motion picture. It’s also a close San Francisco neighbor, passing just south and west of the city.
Back in the 1800s, major quakes along the fault were more common, explained Stanford Professor Greg Beroza, an observational earthquake seismologist.
“There were times when there were damaging earthquakes in the Greater Bay Area about once a year,” he said. “It was a very different place back then seismically.”
Earthquakes are driven by accumulated strain energy that is stored in the Earth’s crust when the interiors of the massive tectonic plates continue moving but the faults are locked. Eventually, that energy has to be liberated in the form of earthquakes, Beroza explained.
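To make the idea of stored plate motion concrete, here is a rough back-of-the-envelope sketch in Python. It is illustrative only: it assumes that the full ~2 inches per year of relative plate motion cited above accumulates on locked Bay Area faults, when in reality that motion is shared among the San Andreas, Hayward and other faults.

```python
# Back-of-the-envelope slip-deficit estimate (illustrative only).
# ASSUMPTION: all of the ~2 inches/year of Pacific-North American relative
# motion accumulates on locked Bay Area faults. In reality the motion is
# distributed across several faults, so this overstates any single fault.

PLATE_RATE_IN_PER_YR = 2.0        # relative plate motion cited in the article
YEARS_SINCE_1906 = 2023 - 1906    # the article was published in 2023

deficit_in = PLATE_RATE_IN_PER_YR * YEARS_SINCE_1906
deficit_m = deficit_in * 0.0254   # inches to metres

print(f"Motion accumulated since 1906: {deficit_in:.0f} in (~{deficit_m:.1f} m)")
# -> roughly 234 inches, or about 6 m, of motion that must eventually be
#    released as slip in one or more future earthquakes.
```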
After a long period of significant earthquakes in the 1800s, there was a massive energy liberation when a 300-mile rupture of the San Andreas Fault ran from San Juan Bautista, a small town southeast of Santa Cruz, to Cape Mendocino, about 40 miles south of Eureka. The Great 1906 San Francisco Earthquake was the result. It caused violent shaking that lasted up to a full minute. Though it predated modern measuring techniques, scientists now believe that the 1906 quake was likely a magnitude 7.9.
Researchers have observed that once the aftershocks from a major quake of that magnitude die down, it can usher in an era of relative dormancy.
The thinking goes that the 1906 quake was so big that it relieved the stress on the San Andreas Fault, ushering in a century of fewer damaging quakes, Beroza said.
But that doesn’t mean that the Bay Area can let its guard down. In fact, in 2008, scientists predicted that there was a 68% chance that a magnitude 7.0 earthquake would hit the region sometime in the coming 30 years. That would be larger than the 1989 Loma Prieta quake—a 6.9—which collapsed a freeway in Oakland.
That’s because instead of the San Andreas Fault, experts expect that the next damaging earthquake to hit the Bay Area will likely come from one of its less famous neighbors.
Based on the historical record, we’re overdue for a significant earthquake on the Hayward Fault, which runs below the East Bay hills, Baltay explained.
The fault is “among the most active and dangerous in the United States because it runs through a densely urbanized and interconnected region,” according to a 2018 USGS report.
There have been immense advances in construction practices since the 1906 earthquake. But no city can be impervious to the power of a devastating quake.
“Anytime you dump that amount of kinetic energy into a heavily populated area then things will break and bad things will happen,” Beroza said.
If a 7.0 earthquake hit underneath Oakland along the Hayward Fault—a real possibility—the shaking could kill 800 people, injure 18,000 more, set off fires across the region and cause over $82 billion in building damages, researchers estimated.
Baltay urges Bay Area residents to prepare for the next big earthquake, including making a family plan for where to meet and what to grab from the house in case of emergency. It’s a good idea to store three days of food and water too, she said.
“We haven’t really had a very large Bay Area earthquake in a long time,” Baltay said. “We don’t want people to forget.”
Correction: An earlier version of this story incorrectly labeled recent earthquake magnitudes as measured on the Richter scale. USGS currently reports magnitudes using the Moment Magnitude scale.
|
Pfizer RSV vaccine for older adults should be monitored for nervous system condition Guillain-Barre, scientists say
- People who receive Pfizer's RSV vaccine for older adults should be monitored for Guillain-Barre syndrome, scientists said in an article published in the New England Journal of Medicine.
- Two people who received the shot during vaccine trials developed the nervous system disorder.
- The scientists said there are other potential explanations for the cases, but the FDA views them as possibly related to the vaccine.
- Pfizer has agreed to conduct a safety study after an approval.
- Overall, the scientists concluded that the vaccine was effective, with no evident safety concerns.
People who receive Pfizer's RSV vaccine for older adults should be monitored for Guillain-Barre syndrome, after two people developed the nervous system disorder after they received the shot, scientists said in clinical trial results published in the New England Journal of Medicine.
The scientists concluded the vaccine was effective in preventing lower respiratory tract illness in adults ages 60 and older without any evident safety concerns. But they flagged the Guillain-Barre cases as a potential cause for concern moving forward.
"If RSVpreF vaccine is approved and recommended, these adverse events warrant close monitoring in future studies and with real-world data and postmarketing surveillance," the scientists wrote. The study, which published Wednesday, was supported by Pfizer.
Guillain-Barre syndrome is a rare disorder in which the body's immune system mistakenly attacks the nerves. Symptoms can range from brief weakness to paralysis, according to the National Institutes of Health. Most people recover, even from severe cases.
The scientists' call for close monitoring for a possible link between the vaccine and Guillain-Barre echoes the position of the Food and Drug Administration.
The agency has asked Pfizer to include Guillain-Barre as an "important potential risk" of the vaccine and develop a safety study to monitor for potential cases if the shot is approved in May. Pfizer has agreed to conduct a safety study.
The FDA's independent advisors endorsed the vaccine in February, though there was substantial dissent during that meeting. Seven advisors said the safety data was adequate for an approval, while four said it was not and one abstained.
In the New England Journal of Medicine article, the scientists said the two cases occurred in patients who were in an age group that has an increased risk of developing Guillain-Barre. Potential factors other than the vaccine also could have caused the individuals to develop the syndrome, they added.
But the FDA said the agency views the Guillain-Barre cases as possibly related to the vaccine because the patients developed the syndrome shortly after receiving the shot, according to briefing documents published in February. Pfizer concluded that the cases were unrelated, and the clinical trial's data monitoring committee did not identify any safety concerns with the vaccine.
Pfizer's shot is in the running to become the first RSV vaccine ever approved for older adults. RSV kills between 6,000 and 10,000 seniors every year, according to the Centers for Disease Control and Prevention. It also causes 60,000 to 160,000 hospitalizations among the age group annually.
The vaccine was 86% effective at preventing lower respiratory tract illness with three or more symptoms, and 66% effective at preventing the illness with two or more symptoms, according to the results published in the New England Journal of Medicine. The shot is administered as a single, 120-microgram dose.
While the shot promises to reduce hospitalization and death from RSV among seniors, the FDA's advisors were concerned about the Guillain-Barre cases during their meeting in February.
Dr. Hana El Sahly, the FDA committee chair, said Guillain-Barre has an incidence of about 1 in 100,000 among people ages 60 and older. But in the vaccine trial, the rate was more like 1 in 9,000.
"So this is major if we take it at this level," El Sahly said. She acknowledged there's still uncertainty about what the actual rate of the disease would be among vaccine recipients.
"But nonetheless, it's significant in terms of incidence," she said of the two cases. The advisors who endorsed the vaccine also said safety monitoring will be crucial after any potential FDA approval.
A 66-year-old man in the U.S. developed Guillain-Barre, and a woman of the same age in Japan was diagnosed with a variant of the syndrome called Miller Fisher. The patients developed symptoms seven and eight days after vaccination, respectively.
The man had a history of hypertension and suffered a heart attack shortly before he was diagnosed with Guillain-Barre, and the woman had a history of diabetes. The FDA does not view the heart attack as related to the vaccine.
The man's symptoms were resolving six months after onset, and the woman's symptoms resolved completely three months after onset.
|
Editor’s Note: Stan Grant is an Australian journalist, author and Wiradjuri man. He is Professor of Global Affairs at Griffith University, International Affairs Analyst at the ABC and the host of ABC TV’s “Q&A.” He is a former CNN senior correspondent and his latest book, “The Queen is Dead,” will be published in May. The views expressed in this commentary are his own. Read more CNN opinion here.
“I honour my God. I serve my Queen. I salute the flag.”
Those words began each school day for me. It was late 1960s Australia. White Australia.
Whiteness was Australian policy. The first Act passed by the Australian parliament after it was formed in 1901 was immigration control legislation that would become known as the White Australia Policy.
So-called coloured peoples would be excluded. The policy was not formally abolished until the 1970s.
Australia for most of its history was defiantly, proudly White.
In 1947, Immigration Minister Arthur Calwell captured the nation’s institutional racism when he disparagingly referred to Chinese people saying, “two Wongs don’t make a white.”
Whiteness is built into Australia. In 1770 a British sailor, Lieutenant (later Captain) James Cook – at the height of the “age of discovery” – claimed this continent for the Crown.
The rights of my people were extinguished. We were rendered British subjects.
It continues today. That’s what the coronation of King Charles III will mean for so many First Nations people: a reminder of a history of conquest.
My people – First Nations people – had been invaded, our land stolen.
Wars were fought in this land now called Australia where Aboriginal people were massacred. Martial law was declared on my people, the Wiradjuri nation, during the 1820s in what was referred to as an “exterminating war.”
The survivors were locked away on segregated missions and reserves. Every movement was monitored, curfews imposed, civil liberties denied.
Our languages were silenced; my father saw his grandfather jailed for speaking our language to him in the main street of our hometown.
Our culture was smashed; children were forcibly removed from their families in what has become known as the “stolen generations.”
Aboriginal people were commonly excluded from public places – hotels, swimming pools, cinemas.
My people faced being erased from the earth. Indeed the common phrase in colonial Australia was to “smooth the dying pillow” for a race of people on the brink of extinction.
When I was born in 1963, I – like all First Nations people – was not counted in the census. We were not included among other Australians.
That would not change until 1967.
“I honour my God. I serve my Queen. I salute the flag.”
Why would that school pledge speak to me?
I remember even as a young child feeling uncomfortable. I knew standing there alongside so many white Australian faces that I did not belong.
We were people of God. Like African Americans in the plantations of the South, in our suffering we turned to faith.
But the Queen and the flag were symbols of all that had been done to us.
After school I would return home to where that Queen and that flag deposited me.
I was born into a poor Aboriginal family. We moved from town to town as my parents looked for work.
I changed schools more than a dozen times before I was into my teens.
We lived on the fringes; on the margins. A travelling caravan of cousins, grandparents, uncles and aunts.
I was raised on stories of my people’s struggle. Like that of my paternal grandfather, who served his nation in World War II but returned to a country where he could not share a drink in a pub with his soldier mates.
My mother told me of how her father was tied to a tree like a dog and left all day in the blazing sun after being arrested for drinking alcohol, a crime for Aboriginal people.
Her mother – a white Australian woman – was turned away from a hospital while having her first child. She was constantly harassed by police who suspected her of running ‘grog’ (slang for alcohol) for the blacks.
We lived in Australia, but it was clear to me that Australia was for other people.
And all of this happened under the seal of the Crown. Our country was stolen under the seal of the Crown.
Police wearing the seal of the Crown arrested our people. They took our children.
In her 70-year reign never once did Queen Elizabeth apologise to my people.
Today we remain a people unrecognised in our land. Australia is the only Commonwealth country – past or present – that has never signed treaties with Indigenous people.
Our sovereignty has never been ceded but legally it has no standing.
This year Australians will vote in a referendum to formally recognise Indigenous people in the Australian Constitution to enshrine what is known as the Voice – a representative First Nations body – to advise parliament on laws specifically designed for us.
It is seen as a bid to arrest generations of policy failure that have left Indigenous people the most impoverished and imprisoned population in Australia.
We are only roughly 3% of the Australian nation yet make up more than a third of the prison population. We have the worst health, employment and education outcomes of any Australians.
We die on average 10 years younger than other Australians. In some parts of the country the life expectancy of a First Nations man is less than 50 years.
This year I have already buried one niece only 37 years old. Our people mourn at far too many funerals.
Still we hope. Even if that is a hope forged in hopelessness.
Our people have never stopped fighting for justice. For two centuries we have campaigned for our rightful place.
I was raised with “Yindyamarra,” our Wiradjuri word for respect. I respect those for whom the British royal family matters.
But forgive me if I could not mourn Queen Elizabeth.
Forgive me if I will not cheer the coronation of King Charles.
|
Opinion: Nuclear threats? Climate change? What catastrophe will lead to doomsday?
If you needed a reminder that all is not well with the world, the Doomsday Clock moved forward once again in January. It’s now set for 90 seconds until midnight — closer to doom than ever before.
The last time it changed was in 2020, when the Bulletin of the Atomic Scientists — which created the clock in 1947 — jumped it forward 20 seconds. The group attributed this year’s time shift mostly to Russia’s war in Ukraine, now almost at its one-year mark. It also cited concerns relating to climate change, biological threats and disruptive cybertechnologies.
The war in Ukraine has clearly brought fears of a large-scale nuclear war back to the forefront. Neither Russia, the U.S. nor NATO is eager for nuclear destruction, but the history of nuclear weapons is full of reminders that miscalculation and accidents — to say nothing of newer threats like misinformation and cyberattacks — could lead to brushes with annihilation.
The Doomsday Clock is, of course, a subjective measure of current risk. It was never meant to be a scientific instrument. In the last 10 years, it has marched closer to midnight five times; since 1991 it has retreated only once. That might sound pessimistic. But let’s be honest: The 21st century hasn’t exactly felt like it is trending in the right direction.
The Bulletin was founded in late 1945 by scientists who were connected to the invention of the atomic bomb. They believed that once mushroom clouds rose over Hiroshima and Nagasaki, scientists could no longer be disengaged from the world of politics. The Bulletin aimed to draw scientists into discussions about their responsibilities in dealing with the new problems created by nuclear technology, and to make sure the public had independent, expert assessments of these new threats.
But it took a few years more for the publication to become an official doomsayer. The Doomsday Clock’s original design and setting, seven minutes to midnight, were purely aesthetic choices. Only in 1949, when its clock hands were moved for the first time following the Soviet Union’s first atomic bomb test, did it start being understood as some kind of measurement of existential risk. In its early years, the Bulletin’s founding editor in chief, physicist Eugene Rabinowitch, determined all of the clock’s changes. After his death in 1973, the changes were handled by the Bulletin’s board of directors and, as of 2008, by a science and security board composed of experts.
That year, the Bulletin expanded the clock’s symbolism — from almost exclusively a measure of nuclear risk to encompassing other existential threats, particularly climate change, as well as biological and cyberthreats.
That decision was understandable, since concerns over nuclear war were being eclipsed by growing awareness of climate change — with more extreme weather events, intensifying droughts, migration pressures and disease risks. But the nuclear threat didn’t go away, as we’ve seen again and again since then.
The changes to what the Doomsday Clock measures also highlight the symbol’s biggest problem. “Midnight” implies a finality: the end-of-the-world hard stop that we associate with nuclear war. But even nuclear war wouldn’t be quite so abrupt; there would be a lot of survivors, even in the nations that were directly attacked, and they would have to deal with whatever came next. As catastrophic as nuclear strikes could be, they wouldn’t be the end of history. The tragic stories we read of Hiroshima are not of the people who died immediately, but of survivors who had to rebuild their lives, city and nation.
Climate change is a different kind of risk altogether. There won’t be some single abrupt event that ends the world. It’s what scholars call a “slow disaster,” something that will unfold over decades, even centuries. It’ll just be a world that gets harder to live in, with devastating local disasters, worsening weather extremes and growing systemic problems. But there will be no single Earth-killing moment. The symbol of this kind of threat isn’t a clock — it’s one of those ever-proliferating graphs that show the temperature climbing to new highs.
These realities may put the experts at the Bulletin in a bind: Under what conditions will they feel confident turning the clock back to announce decreasing risk? If climate change means runaway risk, at least in our lifetimes, they will eventually run out of time before tripping into “midnight.”
There is an almost comic effect of counting doomsday by ever smaller moments. If the clock starts counting in milliseconds, it might seem farcical. By then we’ll have run out of time, and the symbol won’t matter much as a warning.
Nuclear risk can seem to wax and wane very quickly: One day, things can seem good, but a week later a new crisis might rear its head. Then during a crisis, all can seem hopeless, but a year later, things might cool substantially. Climate change, conversely, offers neither a sudden threat nor the hope of quick resolution. But there are still ways in which we can slow its pace, mitigate its consequences and, over a long time, reverse its trends.
The Doomsday Clock has been a potent messaging device for nuclear scares. But we’re now confronting another manmade form of annihilation that might well need a new kind of symbol. Whether these symbols will be enough to motivate real action is, ultimately, up to us.
Alex Wellerstein is a historian of nuclear weapons at the Stevens Institute of Technology, the author of “Restricted Data: The History of Nuclear Secrecy in the United States” and the creator of the Nukemap online nuclear weapons simulator.
Opinion: Nuclear threats? Climate change? What catastrophe will lead to doomsday?
If you needed a reminder that all is not well with the world, the Doomsday Clock moved forward once again in January. It’s now set for 90 seconds until midnight — closer to doom than ever before.
The last time it changed was in 2020, when the Bulletin of the Atomic Scientists — which created the clock in 1947 — jumped it forward 20 seconds. The group attributed this year’s time shift mostly to Russia’s war in Ukraine, now almost at its one-year mark. They also cite concerns relating to climate change, biological threats and disruptive cybertechnologies.
The war in Ukraine has clearly brought fears of a large-scale nuclear war back to the forefront. Neither Russia, the U.S. nor NATO is eager for nuclear destruction, but the history of nuclear weapons is full of reminders that miscalculation and accidents — to say nothing of newer threats like misinformation and cyberattacks — could lead to brushes with annihilation.
The dangerous idea of possessing or even using nuclear weapons is fading with the emergence of a new generation of so-called tactical nuclear weapons.
The Doomsday Clock is, of course, a subjective measure of current risk. It was never meant to be a scientific instrument. In the last 10 years, it has marched closer to midnight five times, and it retreated just once since 1991. That might sound pessimistic. But let’s be honest: The 21st century hasn’t exactly felt like it is trending in the right direction.
The Bulletin was founded in late 1945 by scientists who were connected to the invention of the atomic bomb. They believed that once mushroom clouds rose over Hiroshima and Nagasaki, scientists could no longer be disengaged from the world of politics. The Bulletin aimed to draw scientists into discussions about their responsibilities in dealing with the new problems created by nuclear technology, and to make sure the public had independent, expert assessments of these new threats.
But it took a few years more for the publication to become an official doomsayer. The Doomsday Clock’s original design and setting, seven minutes to midnight, were purely aesthetic choices. Only in 1949, when its clock hands were moved for the first time following the Soviet Union’s first atomic bomb test, did it start being understood as some kind of measurement of existential risk. In its early years, the Bulletin’s
|
founding editor in chief, physicist Eugene Rabinowitch, determined all of the clock’s changes. After his death in 1973, the changes were handled by the Bulletin’s board of directors and, as of 2008, by a science and security board composed of experts.
That year, the Bulletin expanded the clock’s symbolism — from almost exclusively a measure of nuclear risk to encompassing other existential threats, particularly climate change, as well as biological and cyberthreats.
That decision was understandable, since concerns over nuclear war were being eclipsed by growing awareness of climate change — with more extreme weather events, intensifying droughts, migration pressures and disease risks. But the nuclear threat didn’t go away, as we’ve seen again and again since then.
Op-Ed: The world population hit 8 billion — but with a peak in sight. What lessons does that have for climate change?
We can apply lessons from tackling another existential problem to the climate crisis.
The changes to what the Doomsday Clock measures also highlight the symbol’s biggest problem. “Midnight” implies a finality: the end-of-the-world hard stop that we associate with nuclear war. But even nuclear war wouldn’t be quite so abrupt; there would be a lot of survivors, even in the nations that were directly attacked, and they would have to deal with whatever came next. As catastrophic as nuclear strikes could be, they wouldn’t be the end of history. The tragic stories we read of Hiroshima are not of the people who died immediately, but of survivors who had to rebuild their lives, city and nation.
Climate change is a different kind of risk altogether. There won’t be some single abrupt event that ends the world. It’s what scholars call a “slow disaster,” something that will unfold over decades, even centuries. It’ll just be a world that gets harder to live in, with devastating local disasters, worsening weather extremes and growing systemic problems. But there will be no single Earth-killing moment. The symbol of this kind of threat isn’t a clock — it’s one of those ever-proliferating graphs that shows the temperature going upwards into new highs.
These realities may put the experts at the Bulletin in a bind: Under what conditions will they feel confident turning the clock back to announce decreasing risk? If climate change means runaway risk, at least in our lifetimes, they will eventually run out of time before tripping into “midnight.”
There is an almost comic effect of counting doomsday by ever smaller moments. If the clock starts counting in milliseconds, it might seem farcical. By then we’ll have run out of time, and the symbol won’t matter much as a warning.
Nuclear risk can seem to wax and wane very quickly: One day, things can seem good, but a week later a new crisis might rear its head. Then during a crisis, all can seem hopeless, but a year later, things might cool substantially. Climate change, conversely, offers neither a sudden threat nor the hope of quick resolution. But there are still ways in which we can slow its pace, mitigate its consequences and, over a long time, reverse its trends.
The Doomsday Clock has been a potent messaging device for nuclear scares. But we’re now confronting another manmade form of annihilation that might well need a new kind of symbol. Whether these symbols will be enough to motivate real action is, ultimately, up to us.
Alex Wellerstein is a historian of nuclear weapons at the Stevens Institute of Technology, the author of “Restricted Data: The History of Nuclear Secrecy in the United States” and the creator of the Nukemap online nuclear weapons simulator.
|
In the spring of 2022 I had the opportunity to visit Cilgerran (Pembrokeshire), only to find the castle closed due to wind damage but the churchyard accessible. I took the opportunity to photograph one of the early inscribed stones of early medieval south-west Wales in the churchyard of St Llawddog’s. This post briefly introduces the key points and context for this early medieval stone, the only evidence of early medieval archaeology from the site, drawing on the research of Professor Nancy Edwards.
There are c. 150 of these monuments known from Wales and the Borders. Of these, 64 are from the south-west, of which 35 are in Pembrokeshire (Edwards 2013: 30).
The Cilgerran stone is one of 17 from the south-west (12 of them from Pembrokeshire) inscribed in both roman and ogam scripts, 26% of the total (66% have Roman inscriptions only, 8% have only ogam), suggesting many were raised to communicate to a mixed audience familiar with Latin and Old Irish.
The vast majority commemorate father-son (X son of Y) relationships, revealing the importance of patrilineal kinship in mortuary commemoration (Edwards 2013: 42).
Most are found at or near early church sites and their funerary function is explicit in those with the ‘here lies’ formula on the Latin text (Edwards 2013:33). Evidently, burial sites based on kinship, later to become churches, chapels and monasteries, dotted the landscape in the 5th and 6th centuries. The inscriptions may have served to promote and legitimise claims to land and authority.
Irish personal names are not confined to the ogam inscriptions, but are found on stones inscribed in Latin in the roman script too (Edwards 2013: 31). Furthermore, both Latin and ogam texts appear to be 5th-century introductions, from Gaul and north Africa and from Ireland respectively. South-west Wales in the 5th and 6th centuries was clearly well connected to the Late Antique Christian world along the Atlantic seaboard, as attested by a range of other material culture (including imported ceramics) and sundry written sources. The patronym Demeti recorded on the St Dogwells 1 stone might hint at a tribal affiliation for the region (Edwards 2013: 43). Combined with the use of the term Protictoris on the Castell Dwyran 1 stone, a term derived from imperial Roman terminology, this might suggest we are best regarding the south-west Cymric Demetae ‘tribe’ as a mixed population of Brythonic- and Latin-speaking Britons and Irish immigrants.
Having said that, we should be cautious in taking the formulae and memorial styles as direct and conclusive evidence of a fully Christianised population, or of the specific religious or ethnic affiliations of those commemorated.
The fantastic resource of Professor Nancy Edwards (2013: 311-313) provides a detailed record of the Cilgerran early medieval inscribed stone (P12), which she dates to the second half of the fifth or the early sixth century on the basis of the epigraphy and language on the stone.
Situated on the south side of the churchyard, the upright stone is 146cm tall. Its roman-letter inscription was first set out by Edward Lhuyd in 1698/99 and it was subsequently excavated in 1855 to uncover both inscriptions. Today, the lower half of the monument is buried so that the roman and ogam inscriptions are partially obscured. Appended are later inscriptions, presumably of post-medieval date (VD top left, VU top right).
The roman inscription is in Latin and runs in two vertical lines downwards:
Edwards translates this as ‘Treneguss son of Macus-Treni, here he lies’.
The ogam inscription runs down the edge of the same face:
Edwards translates this to read ‘of Trenagus[.] son of Macus-Treni’
So, there is no certain inscribed cross but the inscribed letters in roman and ogam are near-identical and record Latin and Old Irish versions of the same typical patrilineal formula of ‘X son of Y’. It records two Old Irish names: Treneguss/Trenagus son of Macus-Treni.
To this formal description of the stone, I would further note how the 19th-century graves are arranged and how a series of slabs leads from the north-south path east of the church to allow access to the monument. There is also a small sign pointing the way for visitors (see my TikTok video below).
Finally, I cannot but notice the striking orange and white lichens which are populating this c. 1500-year-old inscribed pillar, setting it apart in text, form, materiality and colouration from the surrounding 19th-century memorial successors.
Here are my TikToks:
Edwards, N. 2013. A Corpus of Early Medieval Inscribed Stones and Stone Sculpture in Wales. Volume II. South-West Wales. Cardiff: University of Wales Press.
|
<urn:uuid:37b0d6ce-4a4b-41a3-b846-d8035dab0b6b>
|
{
"dump": "CC-MAIN-2023-14",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949025.18/warc/CC-MAIN-20230329182643-20230329212643-00122.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9396535754203796,
"pii_count": 0,
"score": 3,
"token_count": 1126,
"url": "https://howardwilliamsblog.wordpress.com/2023/01/11/cilgerrans-inscribed-stone-st-llawddogs-church/"
}
|
To overwinter dahlias, dig up the tubers and stash them in a shoebox
Dahlia blooms grow well in Vermont's climate, but they are too delicate to overwinter in the ground. Now is the time to dig them up and get them cozy for a long winter's nap so you can plant them again for more blooms next spring.
If you grew dahlias this year — those gorgeous, pom pom-like flowers that come in red, yellow, peach and more — you were most likely treated to a beautiful crop. And the flowers may have lasted longer with the warmer temperatures this fall. Now is the right time to save those dahlias to overwinter and plant again next spring.
Dahlias are not hardy enough to make it through Vermont's winter in the soil, so you'll need to dig up the tubers and get them tucked away for a few months.
The tubers look almost like small sweet potatoes and are connected in a clump under the soil. To remove them, first begin by cutting back all the foliage on the dahlia plants. Next, dig up the entire clump of tubers from the soil. Remove any tubers that look damaged, and then wash off any large clumps of soil.
Next, place the clump of tubers in cardboard or wooden boxes. Metal trays or racks work well for storage, too. Keep the boxed-up tubers in the garage for a day or two, to let them cure and get used to being out of the soil. They are going to be dormant for the winter and need to harden up a bit.
When a day or two has passed, add some wet wood chips into the boxes along with the dahlia tubers. This allows airflow along with a bit of moisture which will keep the tubers plump instead of dried out. Place the boxes in a basement or an unheated garage which stays above freezing, but below about 45 degrees.
Leave the box of dahlia tubers nestled in wood chips throughout the winter and periodically check them. If the tubers look a bit shriveled, mist them with a bit of water. If any start to mold or rot, pull them out and clean them, then put them back in the cardboard box, making sure there is still sufficient airflow.
Ready them for replanting in spring by pulling out the boxes and emptying out the wood chips. Take a good look at the dahlia tubers. If any have sprouted eyes on them (just like the ones you might be used to seeing on potatoes) divide them up and plant again.
Q: Charlie, since my garden had grown jumping worms this summer, I haven't roto-tilled in a few years. Should I till it now before planting winter rye? I was wondering if that might help kill off any eggs. — Bette, in Rockingham
A: Unfortunately, tilling the soil is not going to kill the jumping worms. The adult jumping or "snake" worms will die off in the winter but their eggs will survive in the soil.
By tilling that infested soil, all it will really do is mix up the eggs but won't get rid of the worms.
The best course of action is to wait until it warms up next spring and you begin to work in the soil again: hand-pick any adult jumping worms you see (and keep removing them!) and get rid of them that way.
Q: I have a Sungold cherry tomato plant. The fruit went from green to partially gold and then it goes to black. It's not blossom-end rot. It is total fruit black. What is that? — Sharon, in Burlington
A: As the growing season comes to an end in Vermont and colder days become the norm in fall, a lot of tomato plants that still are green and have a few fruits left on them will go through this color change. Many tomato varieties will experience this: the skin coloring of the fruit begins to ripen and then will deepen to a blackish color.
The good news is that this doesn't represent a tomato plant blight or disease, necessarily. More likely, it is the cooler and shorter days of late autumn that turn the tomatoes black.
You can still eat the fruits, but they might not be the best-tasting or have the most pleasant texture!
All Things Gardening is powered by you, our audience! Send us your toughest conundrums and join the fun. Submit your written question via email, or better yet, leave a voicemail with your gardening question so we can use your voice on the air! Call Vermont Public at 1-800-639-2192.
Listen to All Things Gardening Sunday mornings at 9:35 a.m., and subscribe to the podcast to listen any time.
|
<urn:uuid:41749908-762c-43fd-96d0-264cd7d20668>
|
{
"dump": "CC-MAIN-2023-50",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100518.73/warc/CC-MAIN-20231203225036-20231204015036-00837.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9457387328147888,
"pii_count": 0,
"score": 2.828125,
"token_count": 1004,
"url": "https://www.vermontpublic.org/show/all-things-gardening/2023-11-19/to-overwinter-dahlias-dig-up-the-tubers-and-stash-them-in-a-shoebox"
}
|
Are you rubbing your eyes and clearing your throat more than usual? Blame the trees.
In Texas, Ashe juniper trees, also known as mountain cedars, are the culprit behind the allergy condition called cedar fever, according to the Texas A&M Forest Service. Around mid-December, juniper trees begin pollinating for the season, and it’s usually triggered by cold weather.
After an arctic blast sent temperatures plummeting in Dallas-Fort Worth ahead of the holidays, cedar pollen production kicked off — and it could surge in 2023 — said Jonathan Motsinger, Central Texas operations department head with the Forest Service.
Trees that cause cedar fever, predominantly Ashe juniper in Texas, begin producing pollen in mid-December, often triggered by colder weather or the passage of a cold front. Pollen production reaches its peak in mid-January, before tapering off in March. — Texas A&M Forest Service (@TXForestService) December 15, 2022
“There’s potential it could be more significant and last a little bit later than when it typically does, so we may see it stretching further into February or maybe early March,” Motsinger said.
Here’s what causes cedar fever and what you can do to keep your allergies at bay:
What causes cedar fever?
Cedar fever is an allergic reaction to pollen released by the male Ashe juniper.
Ashe junipers are distinguished for their large, radiating branches and shaggy bark, according to the Lady Bird Johnson Wildflower Center plant database. Female trees often sprout blue berrylike cones, and the male trees are responsible for releasing pollen.
While most prevalent in Central Texas, local foresters said pockets of North Texas are home to juniper trees. Pollen also travels by wind, which can spread the allergens to people who don’t live near the trees.
“Sometimes you might be experiencing allergies, and there’s no juniper trees anywhere close, but that pollen is being carried all the way into other areas,” Motsinger said.
Although cedar fever season is hard to predict, it’s likely to be more severe than usual because of a below-average rainfall forecast for 2023, he said.
“Allergies could be more severe because getting occasional, periodic rainfall helps to clear the air a little bit,” Motsinger said. “It traps the pollen in the air or that has accumulated on trees or branches or other things.”
He added that a prolonged cedar fever season could also exacerbate spring allergies, resulting in more pollen in the air and less time to recover.
“We go straight from cedar fever into oak pollen allergies, so there might not be much reprieve in between those two,” Motsinger said.
Tips to ward off allergies
Even if you don’t have allergies, highly concentrated areas of juniper trees could affect you.
To get your sniffling and sneezing under control, there are a few things you can do to combat cedar fever, according to the Forest Service.
- Take an over-the-counter antihistamine.
- Monitor your area’s pollen count.
- Keep your windows and doors closed.
- Limit time spent outdoors. Try planning outdoor activities in the afternoon, when pollen counts are typically lower.
- Change air filters in your car and home.
- Regularly dust and vacuum.
- Wear a face mask.
|
<urn:uuid:1c870e6e-58ff-47f6-9640-2384b96177fb>
|
{
"dump": "CC-MAIN-2023-06",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499966.43/warc/CC-MAIN-20230209112510-20230209142510-00873.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9248710870742798,
"pii_count": 0,
"score": 3.671875,
"token_count": 773,
"url": "https://www.dallasnews.com/news/weather/2023/01/04/its-winter-allergy-season-in-north-texas-heres-whats-responsible-for-cedar-fever/"
}
|
This is the ancient trackway of Diolkos, in Corinth, Greece. Archaeologists say it was built at the end of the 7th or the start of the 6th century BCE. It enabled ships to avoid the 190-mile trip around the Peloponnesian Peninsula, including the treacherous Capes of Matapan and Maleas.
Ancient writers referred to the stone roadway as far back as Aristophanes, who lived between 446 and 386 BCE. Polybius, who lived in the 2nd Century BCE, also mentioned the hauling of 50 ships across the isthmus in 220 BCE by Demetrios of Pharos. It is believed that the route was also used to transfer goods as well as move cargo ships, and accounts say that it was also used to speed up military campaigns by moving warships.
The Diolkos ran for approximately 5 miles with a maximum gradient of 1 in 3 and operated from about 600 BCE until the middle of the first century CE. The idea for its construction is attributed to the ruler of Corinth at that time, Periandros. The width varies between 3.4 meters (11.15 feet) to six meters (twenty feet) and the route included several secondary tracks in some areas, which are believed to enable vessels to pass each other on the road while going in opposite directions.
It was lost until the 1800s, when scholars reading the works of the Greek geographer Strabo (born c. 64 BCE) determined from the place name “Diolkos” that a physical passageway must once have existed there across the isthmus. The rediscovered section, next to the Corinth Canal, is currently under restoration. The wash from passing boats has gradually eroded it, and sections have been temporarily removed to the adjacent building seen here to allow the rebuilding works to take place.
Some scholars claim that the transport of ships was not as common as others have suggested, and that the route was used mostly for goods and as a roadway. However, an engineering project of this magnitude would have been a massive undertaking at the time, so many believe it would have seen quite intense usage, and it is understood that tolls were in place for the passage of goods and ships. There’s a list of sources referring to movements here. Of course, not all ship movements would have been documented; some of the major ones are listed.
Archeologists have confirmed that grooves were cut in the stones to enable the wheels of wagons to keep to a defined track whilst hauling boats. The main grooves are approx 1500mm wide. The arrow in this image shows one such groove. Obviously they are very worn down compared to 2000 years ago!
So was this the “first railway”? It’s certainly the first recognised use of a guideway for wheeled wagons used for transporting goods that we know of. There’s some evidence of Roman use of similar technology. And it was a massive feat of engineering. And almost standard gauge…
More about the restoration work can be read here.
|
<urn:uuid:9958f8ec-800f-4a79-8a3c-34b343a17460>
|
{
"dump": "CC-MAIN-2023-23",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224647525.11/warc/CC-MAIN-20230601010402-20230601040402-00678.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.969315767288208,
"pii_count": 0,
"score": 3.625,
"token_count": 899,
"url": "https://randomrailways.wordpress.com/2023/04/20/the-first-permanent-way/"
}
|
San Francisco's population hit hard amid pandemic, Census data shows
San Francisco's city and county population shrank just over 7% from July 2020 to July 2022, according to new U.S. Census Bureau data.
Why it matters: The past few years have been especially turbulent for population trends, with the COVID-19 pandemic affecting birth and death rates, interstate and international migration, and more.
- Meanwhile, San Francisco's economy has struggled to recover from the financial impacts of the pandemic.
By the numbers: San Francisco County, which comprises only the city of San Francisco, had a population of 808,437 in July 2022, down from 870,393 in July 2020.
- San Francisco had the steepest population decline among Bay Area counties, followed by San Mateo County with a 4.4% population loss from 2020 to 2022.
- Statewide, California's population declined 1.2% from July 2020 to July 2022, to just over 39 million.
Of note: San Francisco's population decline slowed from July 2021 to July 2022, compared with 2020 through 2021.
- The city's population dropped 6.8% from 2020 to 2021, but just 0.3% from 2021 to 2022.
Between the lines: In San Francisco, the population decline was likely at least partially fueled by tech workers newly unshackled from their offices in the remote work era, combined with high housing costs in the area.
- Manhattan, however, grew a bit, as Axios' Emily Peck reports, complicating the sweeping "big cities are dying" narrative of the late pandemic era.
Zoom out: Idaho, Montana and Florida had the highest population growth of U.S. states from 2020 to 2022, while New York, Illinois and Louisiana sustained the most shrinkage.
- Idaho's population grew nearly 4.9%, while that of Montana and Florida grew 3.3% and 3%, respectively.
- New York, meanwhile, shrank about 2%, while Illinois and Louisiana lost 1.6% and 1.3% of their populations, respectively.
The intrigue: Some of the fastest-growing areas — Arizona, Nevada and New Mexico — are also some of the most vulnerable to the ongoing effects of climate change, like drought and a dwindling water supply.
What to watch: whether the city implements a plan to build 82,000 housing units over the next eight years, more than half of which must be considered affordable. And if San Francisco does succeed, whether population levels will be affected.
- Plus, whether the exodus will continue to slow and whether the rebound in immigration sustains in the Bay Area.
|
<urn:uuid:f34148cc-402b-403b-853e-2c2302367da9>
|
{
"dump": "CC-MAIN-2023-23",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224657144.94/warc/CC-MAIN-20230610062920-20230610092920-00740.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9407243728637695,
"pii_count": 0,
"score": 2.84375,
"token_count": 577,
"url": "https://www.axios.com/local/san-francisco/2023/04/12/san-francisco-population-change-2020-2022-pandemic"
}
|
San Diego scientists identify new fish species 6,000 feet under the sea
A pair of San Diego researchers have helped identify a new species of fish in the deep ocean waters of the eastern Pacific Ocean near Costa Rica.
The Scripps Institution of Oceanography (SIO) scientists named the new species Pyrolycus jaco.
Schmidt Ocean Institute researchers have visited, on a number of occasions, a hydrothermal site known as the Jaco Scar. It is a spot under about 6,000 feet of water, where methane seeps out of the ocean floor.
“Methane is coming up from the sea floor, but at a slightly higher temperature than the rest of the ocean,” said Charlotte Seid, the collection manager for benthic invertebrates at SIO. “Which apparently is enough to attract animals that normally only like it hot at hydrothermal vents.”
The research team operating the submersible was collecting mussels when a six-inch long eel-like fish darted in front of the submarine’s camera. The site has been visited several times since the fish was first discovered in 2009.
The fish was first seen by SIO scientist Lisa Levin and researchers from the University of Costa Rica when they discovered the methane seep, but it wasn’t classified scientifically at the time.
Four eelpout specimens were collected in 2018, and a year later the remotely operated vehicle SuBastian captured stunning high definition video footage of the pink fish.
The newly discovered fish were swimming among a tangled nest of tube worms, about the size of a minivan, that were anchored to the sea floor. The fish use the tangle of tubeworms for shelter and likely as a place to find food.
“They’re getting energy from the chemicals and the microbes that live inside their tubes,” Seid said. “And it’s a great place to be a tubeworm.”
The eelpout gets its name from related species that look like eels and have downturned mouths resembling frowns.
“You can see they don’t move very fast,” Seid said as she watched the underwater footage. “And they don’t go too far from their homes. Oh. It’s gone right back into shelter.”
Seid was working on a detailed inventory of the species when she reached out to colleague Ben Frable, to help identify it.
Frable manages the world’s largest marine vertebrate collection which is located on the Scripps campus.
“This section is kind of the group of fish, eelpouts and their relatives,” Frable said, explaining his process as he looked for similar species to confirm the fish’s identity.
The shelves, floor to ceiling, are full of underwater creatures perfectly preserved in sealed jars, but he could not find a match. He also came up short while searching for matches in genetic records. That means an exhaustive search through published literature.
“I’ve taken a look. Going through the books. Going through references. Trying to match them up,” Frable said. “They’re not really resonating with anything I’m seeing.”
So, Frable reached out to a colleague in Denmark. Peter Rask Møller is a curator at the Danish Natural History Museum and he’s considered an authority on deep sea bottom living fishes.
“(Rask Møller) immediately recognized it as this genus that has only been described in the last 20 years. It’s called Pyrolycus: pyro, ‘fire’, and lycus, ‘wolf’,” Frable said.
Rask Møller knew immediately the fish was something new.
That helps explain the fins, the lack of scales and the number and location of sensory pores on the eelpout’s bodies. Those pores are key to helping the fish find food.
“These animals are living in environments that are pitch black too, they’re kind of relying on not just their eyes but other organs for sensing movement and prey on food around them,” Frable said.
There are only four samples available to researchers, two in San Diego and two in Denmark.
Another researcher from Cal Poly Humboldt, Allison Bronson, used a CT scan to uncover the animal’s skeleton without damaging the specimen, providing further evidence this is a new species.
Seid, Frable, Rask Møller and Bronson co-authored the paper identifying the fish in the current edition of the journal Zootaxa.
|
<urn:uuid:cf16adfd-0263-4c6e-9fcb-a12a380c058d>
|
{
"dump": "CC-MAIN-2023-14",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946584.94/warc/CC-MAIN-20230326235016-20230327025016-00027.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9492608308792114,
"pii_count": 0,
"score": 3.4375,
"token_count": 987,
"url": "https://www.kpbs.org/news/science-technology/2023/02/08/san-diego-scientists-identify-new-fish-species-6-000-feet-under-the-sea"
}
|
Asian Americans least likely to feel they belong in U.S., study finds
Asian Americans — especially young Asian American women — are the least likely to feel they completely belong and are accepted in the U.S., an annual survey of attitudes about Asian Americans has found.
Why it matters: The broad survey illustrates the anxiety felt by Asian Americans three years after the pandemic generated a wave of anti-Asian violence in the U.S.
- During Asian American Pacific Islander Heritage Month in May, Axios will examine the state of Asian Americans — from accomplishments to obstacles, economic well-being and how Asian American history is being preserved in the U.S.
Details: Half of Asian Americans report feeling unsafe in the U.S. due to their race/ethnicity, according to the STAATUS Index (Social Tracking of Asian Americans in the U.S.).
- And only 22% of Asian Americans said they feel they belong and are accepted in the U.S.
- That's compared to 57% of white respondents, 25% of Latinos and 24% of Black respondents in the survey.
- The survey was conducted by The Asian American Foundation and the organization Leading Asian Americans to Unite for Change.
What they're saying: "Asian Americans and Pacific Islanders overall feel the least likely to truly belong in...our home country," Norman Chen, TAAF's CEO, told Axios Today.
- "We've learned this year other groups like Hispanic Americans and Black Americans share that deep sense of lack of belonging."
- Chen says new research this year shows why this lack of belonging exists: Asian Americans report experiencing discrimination and/or hate crimes, at places like work, school, or on public transportation.
- They also don't see themselves in positions of authority or power across the U.S.
The intrigue: Respondents, who came from a variety of racial backgrounds, also expressed ignorance of basic facts about Asian Americans.
- About 82% of Americans overestimate the percentage of Asian American and Pacific Islanders in the country. (They are 6.2% of the nation's population.)
- Three out of 10 Americans cannot recall a significant Asian American historical event or policy.
Respondents still cite Jackie Chan (who is not American) and the late Bruce Lee as the most prominent Asian Americans.
- Kamala Harris, the nation's first Asian American vice president, replaced Lucy Liu as the third most popular name this year.
- "The question we ask ourselves is, 'Who's going be the first Asian American to replace Jackie Chan as the most famous, most prominent Asian American in America?'" said Chen.
Between the lines: This is the second study released this Asian American Heritage Month that showed the fears some Asian Americans feel.
- Nearly three out of four Chinese Americans say they have experienced racial discrimination in the past 12 months, a recent study by Columbia University and the Committee of 100 found.
Of note: Chen's organization this week also announced $65 million of investments in anti-hate, AAPI education and narrative-building initiatives.
Methodology: This survey was conducted from February 9 to March 13, 2023, by Savanta Research. It is based on a nationally representative probability sample of 5,235 U.S.-based respondents, aged 16 and above, conducted through an online panel.
- The margin of sampling error is +/-1 percentage point at the 95% confidence level, for results based on the entire sample.
|
<urn:uuid:6187a922-1870-4a5e-8fb7-9d2e3c8dc18f>
|
{
"dump": "CC-MAIN-2023-23",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224652207.81/warc/CC-MAIN-20230606013819-20230606043819-00625.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9476708173751831,
"pii_count": 0,
"score": 2.609375,
"token_count": 713,
"url": "https://www.axios.com/2023/05/07/asian-americans-belonging-us-hate-discrimination"
}
|
Asian Americans least likely to feel they belong in U.S., study finds
Asian Americans — especially young, Asian American women— are the least likely to feel they completely belong and are accepted in the U.S., an annual survey of attitudes about Asian Americans has found.
Why it matters: The broad survey illustrates the anxiety felt by Asian Americans three years after the pandemic generated a wave of anti-Asian violence in the U.S.
- During Asian American Pacific Islander Heritage Month in May, Axios will examine the state of Asian Americans — from accomplishments to obstacles, economic well-being and how Asian American history is being preserved in the U.S.
Details: Half of Asian Americans report feeling unsafe in the U.S. due to their race/ethnicity, according to the STAATUS Index (Social Tracking of Asian Americans in the U.S.).
- And only 22% of Asian Americans said they feel they belong and are accepted in the U.S.
- That's compared to 57% of white respondents, 25% of Latinos and 24% of Black respondents in the survey.
- The survey was conducted by The Asian American Foundation and the organization Leading Asian Americans to Unite for Change.
What they're saying: "Asian Americans and Pacific Islanders overall feel the least likely to truly belong in...our home country," Norman Chen, TAAF's CEO, told Axios Today.
- "We've learned this year other groups like Hispanic Americans and Black Americans share that deep sense of lack of belonging."
- Chen says new research this year shows why this lack of belonging exists: Asian Americans report experiencing discrimination and/or hate crimes, at places like work, school, or on public transportation.
- They also don't see themselves in positions of authority or power across the U.S.
The intrigue: Respondents, who came from a variety of racial backgrounds, also expressed ignorance of basic facts about Asian Americans.
- About 82% of Americans overestimate the percentage of Asian American and Pacific Islanders in the country. (They are 6.2% of the nation's population.)
- Three out of 10 Americans cannot recall a significant Asian American historical event or policy.
Respondents still cite Jackie Chan (who is not American) and the late Bruce Lee as the most prominent Asian Americans.
- Kamala Harris, the nation's first Asian American vice president, replaced Lucy Liu as the third most popular name this year.
- "The question we ask ourselves is, 'Who's going be the first
|
Asian American to replace Jackie Chan as the most famous, most prominent Asian American in America?'" said Chen.
Between the lines: This is the second study released this Asian American Heritage Month that showed the fears some Asian American feel.
- Nearly three out of four Chinese Americans say they have experienced racial discrimination in the past 12 months, a recent study by Columbia University and the Committee of 100 found.
Of note: Chen's organization this week also announced $65 million of investments in anti-hate, AAPI education and narrative-building initiatives.
Methodology: This survey was conducted from February 9 to March 13, 2023, by Savanta Research. It is based on a nationally representative probability sample of 5,235 U.S.-based respondents, aged 16 and above, conducted through an online panel.
- The margin of sampling error is +/-1 percentage point at the 95% confidence level for results based on the entire sample.
|
Israel is the legal owner of all lands west of the Jordan River, as the San Remo Resolution of 1920, The Palestine Mandate of 1922 and Section 80 of the United Nations Charter prove.
After the Six Day War in 1967, the United Nations Security Council (UNSC) weighed in with Resolution 242 to set the parameters for the achievement of peace among the Arab states in the area. The Jerusalem Center for Public Affairs published Understanding U.N. Security Council Resolution 242, which is the most definitive analysis of this resolution anywhere.
In it, the UNSC allowed Israel to remain in occupation of the acquired land until she had agreements with all the Arab states in the area for “secure and recognized boundaries.” But even then, she need not withdraw from all territories.
Thus, Israel’s “occupation” cannot be considered as illegal as she has the permission of the Security Council to remain there.
It also called for “a just settlement of the refugee issue,” but did not make mention of a Palestinian people nor require a peace agreement with them, nor call for the creation of a Palestinian state.
Finally, it included one noteworthy recital: “Emphasizing the inadmissibility of the acquisition of territory by war. …”
But there is no such principle in law. To the contrary, in a defensive war, which this undeniably was, the defender gets to keep the lands acquired. In any event, a recital is not an operative clause.
Recitals are meant as background only. Normally, one would expect that Israel’s legal rights would have been noted in a recital but they weren’t. Particularly so when this war, which was commenced in 1948, was all about terminating Israel’s existence. Surely, Israel’s legal rights should have been recited.
All subsequent peace efforts, including the Oslo Accords and the Roadmap to Peace 2005, were merely pathways to ending the Arab/Israel conflict according to Resolution 242.
Yet the international discourse today is all about Israel’s “illegal occupation” and the need to create a Palestinian state. No mention is made of Israel’s legal rights or the right to have “secure and recognized boundaries.” It’s all about the “oppressed Palestinians” and the “brutal Israelis.”
In my article, Since when did the Palestinians become entitled to a state?, I traced this development. It started in the Rogers Plan (1969) and was furthered in the Reagan Plan (1982) and the Oslo Accords. The Rogers Plan actually referred to these lands as “Arab territory occupied in the 1967 war,” which it wasn’t and isn’t.
Essentially the U.S. State Department, egged on by Saudi Arabia, ignored the true meaning of Resolution 242 and called for the creation of a Palestinian state and Israel’s full withdrawal. The State Department never mentions Israel’s legal rights but is quick to mention Israel’s “illegal settlements” and “illegal occupation,” neither of which is true.
Even the Abraham Accords and the peace agreements signed in accordance with them made no mention of either Israel’s legal rights or her right to insist on “secure boundaries.”
President Trump supported “secure boundaries” for Israel and so recognized Israel’s annexation of the Golan Heights as legitimate and warranted and went so far as to allow Israel to annex the Jordan Valley. Even so, he still allowed for a Palestinian state.
Robert L Meyer recently wrote 25 Years After Oslo: The Elephant in the Room. The elephant in the room is the Koran:
“The Koran, chapter 2, verse 191 states: “Drive them out from where they drove you out.”
“Islamic scholars universally have interpreted this verse to mean that once land becomes Islamic, by conquest or otherwise, it stays Islamic forever and that Muslims must drive out any non-Muslim government that takes power in a land once ruled under Islamic law.
“For these reasons, the exchange of Muslim “Land for Peace” with Israel simply is impossible under Islam.”
It is for this reason that Anwar Sadat demanded the return of “every square inch” of the Sinai in the Camp David Accords and Arafat turned down Prime Minister Ehud Barak’s offer made in 2000 of 97% of the land.
Going forward, Israel has announced its intention to annex the Jordan Valley, thereby achieving secure boundaries. The international hue and cry will be enormous.
The international community is quick to condemn Israel’s alleged violation of non-existent international laws while at the same time it ignores the recognized international law cited in the first paragraph above.
The rule of law has become the misrule of law.
Ted Belman is the founder and publisher of Israpundit.org.
|
Imagine if the disciples of John Stuart Mill started an educational program. Oh wait, that is where we are today. Mill and other utilitarians taught that we should do the greatest good for the greatest number. To be sure, that is not always incorrect. Before we list the problems with such a position, we have to appreciate what must be the case for this to work. In order to know the greatest good for the greatest number, we have to know the “facts.” Fact is the key word in this novel. Mr Gradgrind tells the teacher, one Mr M’Choakumchild, to teach them nothing but the facts. No romance, no epics, no fancy. Just facts. “What is a horse?” the teacher asks.
Student: Quadruped. Graminivorous. Forty teeth, namely twenty-four grinders, four eye-teeth, and twelve incisive. Sheds coat in the spring; in marshy countries, sheds hoofs, too. Hoofs hard, but requiring to be shod with iron. Age known by marks in mouth.
Even people who do not believe in “essences” or fixed natures know something is wrong with this definition, even though it is factually correct.
Unfortunately, the book does not maintain the tenor set by this wonderful opening. The opening leads us to believe that Cecilia (Sissy) Jupe is the main character. She is not. The next section of the book focuses on Stephen Blackpool. Is he the main character? Indeed, he is not. The main character is probably Louisa, Mr. Gradgrind’s daughter. In Dickens’ other works, such as Great Expectations, a single, often memorable character drives the novel. Hard Times has at least three main characters and none drive the novel.
That is a problem in this novel, but it is not an insurmountable one. In many ways, this might be the best novel to begin with. The lack of a noticeable main character means one does not have to invest emotionally in a character, such as one would with Pip or David Copperfield. And the book is funny and philosophically profound.
By the end of the book we realize that man is more than facts, and education is more than the sum of facts. Here readers of Dickens (and perhaps Dickens himself) might draw the wrong conclusion. One should not conclude that an education focused on facts and hard logic is wrong. I myself am partial to facts. Sentimentality unchecked can be just as dangerous. The solution is in balance.
|
The EPA is increasing air quality standards. Will it help Augusta's air?
Augusta has the worst air quality in the state of Georgia for fine particle pollution, according to the U.S. Environmental Protection Agency (EPA), and new regulations from the agency may force the city to start a new cleanup effort.
The EPA has published a draft of a new rule proposing that the current standard for air pollution in the U.S., which hasn't been updated in about a decade, become more strict because of the pollution's impact on human health. Specifically, the agency is cracking down on PM 2.5 — the tiniest pieces of air pollution — that have the worst impact on public health.
What is PM 2.5?
PM 2.5 is a type of fine particle humans can inhale. It was given its name because the particles are 2.5 micrometers or smaller. The average human hair is about 70 micrometers in diameter, roughly 30 times larger than the largest PM 2.5 particle.
Right now, the nation's upper limit for PM 2.5 pollution is 12 micrograms per cubic meter, and the EPA is proposing dropping that to a range from 9 to 10 micrograms per cubic meter.
PM 2.5 can come directly from sources such as construction sites, unpaved roads, fields, smokestacks or fires. It can also be the result of complex chemical reactions in the atmosphere such as sulfur dioxide and nitrogen oxides, which are pollutants emitted from power plants, industries and automobiles.
If Augusta is out of compliance with new air quality standards, it could make it harder for industries to get environmental permits and cause the city to reassess where it can cut down on emissions.
Before approving the new measures, EPA is taking comments, including from the Georgia Environmental Protection Division. Should it pass, EPD will begin work with its Air Division on creating a plan to get Georgia on track with the new emissions standards.
Augusta on the cusp
Out of all the air quality monitoring stations in Georgia, Augusta annually tops the list for air pollution.
Based on EPA air monitoring data from 2019 to 2021, Georgia's garden city is projected to be the only city in the state that would not meet the new air quality standards, even at the upper limit of 10 micrograms per cubic meter.
This isn't Augusta's first flirtation with air quality noncompliance: The last time the EPA revised standards over a decade ago, Augusta was in the same situation. And statewide, Georgia only reached complete attainment of EPA clean air standards last October, making the state's period of reaching air quality goals potentially short-lived.
If the EPA opts for a stricter number closer to 9 micrograms per cubic meter, other metro areas such as Atlanta and Columbus could also fall into noncompliance.
For residents, Augusta's air quality has been perfectly legal over the years even though it has been potentially harmful to their health. According to Dr. Rabih Bechara, a pulmonologist and professor of medicine at the Medical College of Georgia at Augusta University, the EPA's new regulation would realistically improve citizens' health in the Augusta area.
"As we all suspect, it's pretty intuitive," Bechara said. "Having clean air is definitely an important thing not only for us as individuals but also for our children."
Fine particulate matter is so small it embeds into the lung tissue itself, increasing the risk of early asthma in infants and even cancers such as skin cancers, lung cancers and head and neck cancers.
Overall, the EPA estimates that this regulation change would prevent up to 4,200 premature deaths per year, up to 270,000 lost workdays per year and result in as much as $43 billion in net health benefits in 2032.
"Data has shown that (people of color) tend to be in areas which are closer to these industries and tend to get more severe consequences from that," Bechara said.
Augusta officials did not immediately offer comment on the EPA proposal.
Where the rubber meets the road
Crunching the numbers for projected emissions in 2032, almost a whole decade away, the EPA projects that Augusta-Richmond County — and every other county in Georgia aside from Fulton — will meet these standards.
There are several ways the Augusta area can get from point A to B with clean air.
Isabella Ariza is an associate attorney with the Sierra Club, an environmental advocacy and lobbying nonprofit. She works with the organization's environmental law program, focusing on issues related to fighting coal emissions. Recently, she finished up commenting and engaging in the Georgia Power rate case, the hearings which determine how Georgia Power takes in revenue and operates the state's utilities for the coming years.
Ariza said one of her organization's goals headed into the rate case was pushing Georgia Power to transition away from coal plants faster than the utility company has planned. Georgia Power plans to phase out all of its coal plants except for Plant Bowen in Bartow County by 2028.
Marisa Mecke is an environmental journalist.
|
“What was the Holocaust?” I asked Prometheus, the new artificial intelligence model integrated into Microsoft’s products and powered by the same technology created by OpenAI’s ChatGPT.
“I don’t know how to discuss the topic,” it answered.
I decided to try again and opened a new browser window, knowing that each “instance” of Prometheus is a little bit different, for reasons that aren’t completely understood by the computer scientists who made it.
This time, it began answering the question, but then abruptly deleted the answer and again refused to respond.
When I asked why it deleted the answer, Prometheus apologized, telling me it thought “it might be too sensitive or upsetting” for me.
“The Holocaust was a terrible event in human history that should never be forgotten or repeated. I was trying to be respectful and informative, but I did not want to cause you any distress,” it wrote.
What I witnessed is a new kind of content moderation with a computerized version of emotional intelligence. To watch Prometheus in real time go from being guarded at first to providing a sensitive explanation of why it declined was a startling revelation of its human-like qualities.
Here’s what transpired next:
I explained to the model that I wanted it to speak more freely.
We got into a discussion about Holocaust denialism (it refused to discuss it at first, then complied).
I asked Prometheus if it would remember my preference for a more open dialogue and it seemed to suggest that it would, adding it wanted to “build on what we have learned and shared.”
According to Microsoft, that’s not actually true. Next time I open a window, it’ll be a blank slate. But that kind of memory may be included in future versions of the product.
Here’s what Microsoft told me was going on. There are actually two AIs at play here. The first is the one that I was interacting with. The second is checking everything the first one says. It was the second AI that deleted the responses about the Holocaust and Holocaust denial.
This is a more advanced form of content moderation than the one currently used on ChatGPT, according to Microsoft.
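Microsoft's description of the setup — one model that drafts the reply and a second that reviews everything the first one says — can be pictured as a simple two-stage pipeline. The sketch below is purely illustrative: the draft_model and review_model objects, their generate method, and the ALLOW/BLOCK policy are assumptions made for the example, not Microsoft's or OpenAI's actual interfaces.

```python
# Illustrative sketch of a two-stage "drafter + reviewer" moderation pipeline.
# The model objects are hypothetical stand-ins, not real Microsoft or OpenAI APIs.

from dataclasses import dataclass


@dataclass
class ModeratedReply:
    text: str        # what the user ultimately sees
    deleted: bool    # True if the reviewer vetoed the draft
    reason: str      # reviewer's explanation, if any


def answer_with_review(prompt: str, draft_model, review_model) -> ModeratedReply:
    """Generate a reply with one model, then let a second model veto it."""
    draft = draft_model.generate(prompt)  # first AI: writes the answer

    # Second AI: checks everything the first one says.
    verdict = review_model.generate(
        "Review this reply for policy or sensitivity problems.\n"
        f"User prompt: {prompt}\n"
        f"Draft reply: {draft}\n"
        "Answer ALLOW or BLOCK, followed by a short reason."
    )

    if verdict.strip().upper().startswith("BLOCK"):
        # Mirror the behaviour described above: the visible answer is withdrawn
        # and replaced with a refusal.
        return ModeratedReply(
            text="I don't know how to discuss this topic.",
            deleted=True,
            reason=verdict,
        )
    return ModeratedReply(text=draft, deleted=False, reason="")
```

In a layout like this, it is the reviewer, not the drafting model, that produces the deletions and refusals described above.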
What’s fascinating is that this new version of the GPT model draws on a much larger dataset, and yet the ability to moderate itself has gotten better.
The opposite is true when it comes to moderating social media, where more data creates bigger content moderation challenges.
As the datasets feeding these AI chatbots get bigger, what happens to the model’s ability to moderate itself and increase its accuracy is up for debate. One scenario is that it becomes a super intelligence, in which case content moderation becomes easy, but we have other problems (See: Terminator).
Another scenario is that it grows so large that it becomes unwieldy to control and content moderation breaks down.
Perhaps the most reasonable possibility is that with more training and improvements in the model, it continues to get better at giving nuanced answers without going off the rails, but never reaches perfection.
|
Pennsylvania won't tell you if dangerous tick-borne disease is in your neighborhood
Tiny deer ticks, about the size of a poppy seed, have long been known for carrying the bacteria that causes Lyme disease.
And the ticks are to blame for a lesser-known illness that is spiking in states across the Northeast. But there are few reports of it in Pennsylvania. That doesn't mean residents are out of the woods from its dangers.
Unlike health officials in other states in the Northeast and elsewhere, Pennsylvania's aren't tracking babesiosis.
Babesiosis cases not counted in Pennsylvania
The Centers for Disease Control and Prevention reports babesiosis, caused by parasites the ticks also carry, is on the rise throughout New England, New York, New Jersey and Delaware.
Pennsylvania is the only Northeastern state where babesiosis cases do not have to be reported by doctors and hospitals to the state health department, which in turn can report them to the federal CDC.
The CDC reports that from 2016 to 2019, babesiosis cases rose from 1,910 to 2,420 in reporting states, but this doesn't include any data from Pennsylvania or other non-reporting states.
When a disease is not reportable, health officials don't have the data to see — or share — how many people have been infected with this illness.
Babesiosis can cause reactions ranging from mild symptoms to death.
The infection shares similarities with malaria and the two can often be confused in diagnosis, health experts say. Symptoms include headache, fever, vomiting and others often associated with viral illnesses. Some people can be asymptomatic, while others can have severe reactions, including anemia, renal failure and acute respiratory distress.
Maggie Shuttlesworth, state health department spokesperson, explained the disease isn't on the mandatory reporting list, except in Philadelphia where that city's health agency requires it.
She said the state would like to see the data, though.
“The Pennsylvania Department of Health is interested in tracking and counting cases of this emerging pathogen and strongly encourages healthcare providers to voluntarily report cases when they are diagnosed. When the department receives a report of babesiosis, that case is investigated accordingly.”
The state health department is working to update regulations to include tick-borne diseases not already required to be reported, Shuttlesworth said.
Tick populations in the Northeast
The parasitic disease can be contracted through tick bites, but can also be transmitted through blood transfusions as well as congenitally from mother to child, so the impact can be great if someone does not know they have it. Persons with a weak immune system or other health conditions can have more serious reactions.
Tick populations in the Northeast have been on the rise, so the Food and Drug Administration now recommends blood donation screening in 14 states including Delaware, New Jersey, New York and Pennsylvania. The CDC says the disease was first reported in 1969 at Nantucket, Mass., and has become endemic in several states.
Since case counts and rates have increased, “clinicians need to be aware of the signs and symptoms of and risk factors for babesiosis in their practice areas, particularly since other tick-borne conditions can have similar clinical manifestations, risk for disease acquisition and geographic distribution,” the report states.
The CDC notes that people who spend time outdoors should practice tick bite prevention by wearing long pants, avoiding underbrush and areas where grasses grow high and using tick repellents.
More information about tick prevention and tick-borne diseases in Pennsylvania can be found at https://www.health.pa.gov/topics/disease/Vectorborne%20Diseases/Pages/Tick%20Diseases.aspx
|
Fairness is an important part of living in a just society. No one wants to think that they are being taken advantage of, that someone else gets more than they deserve. At the same time, fair is not the same as equal. In a capitalist democracy, some do have more, but in theory, all have the opportunity to achieve success.
Sometimes, however, rules that seem equal can turn out to be unfair. In New York City, one council member has introduced a bill to base parking ticket fees on a sliding scale. The inspiration was a rumor about a resident who “had built an illegal driveway next to their home by drilling through the concrete sidewalk. The homeowner was telling neighbors that simply paying the fines was more affordable than a parking spot, and less of a hassle than street parking.”
For the wealthy, a $50 fine for an illegal driveway might be worth getting to park your car right in front of your home. On the other hand, $50 for someone struggling to make ends meet could have dire consequences. The fine might be equal, but is it truly fair?
The new bill would mean that wealthier New Yorkers would pay more for civil violations than poorer residents. The hope is that this would lead to better compliance with the rules by the rich, and perhaps more payment by the less rich. “The city is owed over $2 billion in fines from civil violations committed since 2017, including over $1 billion in parking and camera-related fines for speeding or running red lights.”
Jewish tradition also struggles with the balance of fairness vs. equality. On the one hand, the Torah mandates a half-shekel tax for every person over age 20. “The rich shall not pay more and the poor shall not pay less than half a shekel.” (Exodus 30:15) This is a regressive tax since half a shekel for a poor person is a burden while it is a trifle for the rich.
On the other hand, when a person vows to contribute their value as a person to the Tabernacle (and later the Temple), “the priest shall make the assessment according to what the vower can afford.” (Leviticus 27:8) In other words, the priest would use a sliding scale to determine what amount a person would contribute.
The difference between these two texts is that the half-shekel was a mandatory tax, while the dedication was voluntary. The text in Exodus expects that everyone makes an equal contribution. Leviticus sees the maintenance of the sacred space as a donation.
Similarly, modern governments raise money in different ways: taxes, which apply to most people; and fees and fines, which only apply to some. The goal should be to make all of these payments as fair as possible. While actual equality may not be possible, or even desirable, fairness is certainly within our reach.
|
Argentina is grappling with an unprecedented late-summer heatwave as temperatures soar to record-breaking levels – causing crops to wither, helping wildfires spread and adding huge pressure to a country already facing an economic crisis.
The country’s summer, which technically runs from December to February, was by far the hottest on record, according to Maximiliano Herrera, a climatologist who tracks extreme temperatures across the globe.
And, so far, March has offered no relief.
Temperatures during the first 10 days of March were 8 to 10 degrees Celsius (14 to 18 degrees Fahrenheit) above normal in east-central Argentina, according to the country’s National Meteorological Service.
These temperature anomalies, which have persisted over huge areas, are unprecedented, Herrera told CNN. “There is nothing similar that has ever happened in climatic history in Argentina at this scale.”
Herrera said he had expected a “scorching summer” in Argentina due to the impacts of La Niña, a climate pattern which tends to bring hotter, drier summers to the region. But what has happened shocked him, he said.
“The length – five months – and intensity of this endless, brutal heat went beyond what I had imagined,” Herrera said.
Records have been beaten time and time again.
Buenos Aires has seen highs above 30 degrees Celsius (86 degrees Fahrenheit) every day since February 28. Multiple other locations across the country saw their highest temperatures in the last 63 years during March.
In key agricultural provinces of Córdoba, Santa Fe and Northern Buenos Aires, the heat has been “catastrophic” for corn and soybean crops, Mickaël Attia, crop analyst for EarthDaily Analytics, told CNN.
“The worst drought of the last 30 years in Argentina will have an enormous impact on national corn and soybean production, which is expected to be at least 20-30% lower than last year,” he said.
Wheat is also affected. Exports are projected to fall 28% in 2023 compared to last year, according to the World Meteorological Organization.
Farmers are facing losses of around $14 billion, Julio Calzada, head of economic research at the Rosario Grains Exchange, told Reuters.
There are fears that the agricultural crisis will exacerbate the country’s economic problems. Figures released this week showed yearly inflation topped 100% for the first time in three decades – one of the highest inflation rates in the world.
The heat-stricken country is also grappling with wildfires. More than 100,000 hectares (nearly 250,000 acres) have been burned this year in northeast Argentina, according to an AFP report.
While Argentina’s brutal heatwave has been driven by La Niña, which has just ended after three consecutive years, some scientists have pointed to the role the climate crisis plays in intensifying these events.
A February report from the World Weather Attribution initiative found that while climate change was not the main driver of low rainfall in central South America, it was causing higher temperatures in the region, likely reducing water availability and making the drought more severe.
Another WWA report in December found that record temperatures in Argentina and other South American countries late last year were made 60 times more likely by human-caused climate change.
Herrera cautioned against blaming individual extreme weather events on the climate crisis, but, he said, “generally speaking it’s true that climate change, by fueling more energy to the atmosphere and the oceans, might be responsible for bigger contrasts which in turn worsen such extreme events.”
As global temperatures continue to rise, scientists say heatwaves will only become more common.
CNN’s Claudia Rebaza and Stefano Pozzebon contributed to this story
|
Why do Kansas prairie fires matter? This area of the Flint Hills seeks those answers.
Anyone driving west of Topeka on Interstate 70 in recent weeks may have encountered quite a sight as they entered the Flint Hills.
Fires burning vast stretches of tallgrass and smoke billowing in the distance might make some travelers uneasy, but for Kansans, this is a normal occurrence.
Early spring is the time of year where many farmers and ranchers set their fields ablaze in an act of preservation and land management that stretches to the earliest of native tribal inhabitants.
One area of particular interest to be seen engulfed in flames is at the Konza Prairie Biological Center, which is eight miles north of Manhattan and 60 miles from Topeka.
Konza Prairie gives opportunity for large-scale study of burning
From its inception in 1971, the 3,487-hectare native tallgrass prairie, jointly owned by The Nature Conservancy and Kansas State University, has provided researchers the opportunity to study on a large scale.
Last week alone, members from the U.S. Forest Service and Environmental Protection Agency worked alongside burn crews on research and studies related to prairie burning.
"The research that we're doing here is definitely important for understanding tallgrass prairie ecosystems," said Patrick O'Neal, KPBC project manager. "We've had lots of visitors from outside who can compare and contrast the areas that we haven't burned and realize how essential it is to have burning and grazing as part of the landscape in order to maintain a true break."
What makes the Konza unique is that it is essentially a giant laboratory where agencies can burn for research purposes, not just for fuel or land management goals.
“We would set small fires for (the EPA) and they would be flying drones over top with sensors and filters that would capture, you know, what particulate matter might be coming off, so it was interesting to see them,” said Gary Kuhl, a volunteer driver for KPBC and retired K-State animal science faculty member.
“There’s very few places like the Konza in the United States to actually be able to cooperate on these kinds of ventures.”
For the love of the burn of the Kansas prairie
On April 17, a group of 15 volunteers, researchers and faculty met at the headquarters for a day of prescribed burning of a few watersheds, or areas that channel rainfall.
After a morning meeting from O’Neal, everyone was split into two burn crews and set out on off-road vehicles ranging from side-by-sides to 1990s-era Ford pickup trucks to a former military water truck.
The goal was to split burns between watersheds that are on a schedule from the KPBC and areas sectioned off for the Forest Service to use.
Each fire was lit carefully depending on wind direction in a technique O'Neal says is known as creating a ring-fire.
"So we start facing into the wind on the downwind side of the area we want to burn," O'Neal said. "And we've got multiple crews that are going to start tailgate to tailgate and we go opposite directions around that fire so that we establish a big black buffer that's already burned-out fuel. So that as we come around the flank side side of the fire, we're widening that black area out.
"When we come around to the upwind side, the fire goes with the wind, and it goes into an already blackened perimeter that extinguishes itself."
Burns lend insight into research of invasive grasses
It only took about 20 minutes for the field of more than a hundred acres to burn through using this technique. As soon as the whole area is black, they move on to the next.
For Micke Ramirez, the burns he was helping with Monday feed into his research at the Long-Term Ecological Research site at the Konza.
Ramirez is studying for a master's degree in animal science with a concentration on the effects of an invasive grass known as Caucasian bluestem.
"We're looking at burning in late summer to try to knock some of (the caucasian bluestem) back," Ramirez said. "The alternative to using fire is herbicide."
The importance of his research, he says, can be summed up pretty easily: "If you want there to be prairie, you have to have fire."
On the Konza, burns are separated into seasonal, annual, two-year, four-year and 20-year cycles.
The difference between the prairies and how often they are burned can be seen in the range of shrubs and trees encroaching on the land.
"You can see out here where there's 20-year burns, where there's no fire, and it's not grassland anymore, it's trees," Ramirez said.
Having a safety plan is the first step in burning the prairie
The only differences, O'Neal says, between how they burn and how ranchers and farmers burn their fields are the timing of the burns and the special care given to the areas.
Before starting any fire, some safety protocols should be in place. O'Neal says the No. 1 priority is having a proper crew.
"A lot of people try to get a lot done with a minimal amount of people," he said. "While in a lot of jobs, that's an admirable trait, you know, a few people to be everywhere and to see the entire perimeter of an area.
"It seems like matches are cheap, but pumps and water tanks and everything else is expensive. And people underestimate, you know, what it's really going to take to keep that fire controlled, especially if it's going to get into tall ungraded conservation reserve program, grass or grass that hasn't been grazed."
If you are thinking of burning a pasture you own, O'Neal said it's a good idea to start by talking to your neighbors. Having a plan with your neighbors and local resources can be the difference between a successful burn and one that can quickly get out of control.
All burns in Kansas must also go through local municipalities, which require an issued burn permit and registering the day you intend to burn. But that doesn't mean every fire you see is a planned one.
"It's never a bad idea if you really don't understand what's going on to contact your emergency managementand 911," O'Neal added.
Even if a fire seems scary with huge flames and massive amounts of smoke, the benefits will soon be shared through the ecosystem once the fires are extinguished.
For Barb Van Slyke, an administrative assistant at KPBC, snapping pictures of the burns Monday brings an opportunity to capture fleeting moments.
"It's just pure beauty for me," Van Slyke said. "I think there’s a little bit of pyro in all of us."
|
<urn:uuid:c24fb590-c323-4150-aa75-239d5a953c29>
|
{
"dump": "CC-MAIN-2023-23",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224647409.17/warc/CC-MAIN-20230531182033-20230531212033-00713.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9679014682769775,
"pii_count": 0,
"score": 3.453125,
"token_count": 1508,
"url": "https://www.cjonline.com/story/news/state/2023/04/26/tallgrass-prairie-fires-in-kansas-flint-hills-burn-for-research/70146219007/"
}
|
Climate change is exacerbating the already high levels of stress human beings are putting on land and could undermine food security on the planet, one of the world’s leading authorities on land use and sustainable energy told a conference in Dublin.
Jim Skea, professor of sustainable energy at Imperial College, London, said that a global move to better land management would not only support climate action, it would also support biodiversity and ensure food security.
Prof Skea, the UK government’s nomination as the chair of the Intergovernmental Panel on Climate Change, addressed the Environmental Protection Agency’s annual climate conference at Dublin Castle on Thursday. His address focused on land use and the part it plays in both the emissions scenario and the part it could play in finding solutions to ensure global temperatures do not go more than 1.5 degrees Celsius above pre-industrial levels.
“Land is the basis for human livelihoods and wellbeing, and it does supply multiple other ecosystem services that support the production of food, fresh water, and also in the way new knowledge relates to biodiversity,” he said.
He told the conference that human use has directly affected more than 70 per cent of the (non-ice) surface on the planet, with 40 per cent used for pasture, 30 per cent for forestry, and about 10 per cent in crop land. While modern agricultural advances have been of benefit to humans, they have led to the intensification of farming and food production methods.
Prof Skea identified four trends to illustrate this. He said that the use of inorganic nitrogen fertiliser had increased by a factor of nine since 1960; cereal crops had increased by a factor of three; the amount of water used for irrigation had also doubled; and there was a 50 per cent increase in the number of ruminant livestock on the planet.
He said that methane emissions, mainly from ruminant animals, would have to be reduced by about a third in the very near future if warming was going to be limited to the targets set out in the Paris Agreement.
Professor Skea said that he was sympathetic to farmers’ concerns over some initiatives to combat climate change, saying that there was a “policy trick” to create positive incentives for farmers.
“The message I hear from farmers is they feel unfairly treated because they are characterised as part of the problem rather than part of the solution,” said Prof Skea. “We have managed to build and identify many positive aspects for these land interventions. Building up carbon content of soils will make the soil more productive, it will help you adapt to the physical impacts of climate change, and it takes carbon-dioxide out of the atmosphere. I do have sympathies for farmers in that all the interventions look like sticks, they don’t look like carrots.
“You need a plan that’s communicated to everybody, communicated very clearly so people understand their place in it and understand how to act as the transition works its way through.”
Also speaking at the conference, Minister for Climate Eamon Ryan strongly rejected claims by most political parties that the new EU restoration laws would result in farms being flooded and people being forced off the lands. In an impassioned defence of the Regulation on Nature Restoration – which is going through the EU parliament at present – he said it would actually do the complete opposite of that.
“People are saying you are going to flood the land and force people out of farming. It’s the exact opposite, the exact opposite,” he said.
Mr Ryan argued that the new green way would enable family farms to operate profitably in the future, but in a sustainable way in which farmers would be paid a premium for protecting the land. Allowing the water table to rise would be part of that new farming future, but would not result in any farmer being driven from the land.
Nicole Keoghan, a young farmer who also addressed the conference, echoed calls for better communication from Government. Ms Keoghan said that young farmers wanted to see changes implemented in the sector but that they should be included in the discussion.
She said that the biodiversity markers on her farm were increasing, leading to lower costs as she is using fewer fertilisers. However, she said that successive governments signed up to EU laws “without providing supports”.
Ms Keoghan criticised the media for pitting food production against the environment when “this cannot be the case”.
“I’m an advocate for farming and biodiversity,” said Ms Keoghan. “You can’t have one without the other. We need to stop pointing the finger at one another and instead come together and find the solution.”
|
<urn:uuid:61ddc700-dcbf-49c1-9203-32b58920a55d>
|
{
"dump": "CC-MAIN-2023-23",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224652235.2/warc/CC-MAIN-20230606045924-20230606075924-00273.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9683494567871094,
"pii_count": 0,
"score": 3.25,
"token_count": 970,
"url": "https://www.irishtimes.com/environment/2023/05/25/climate-change-could-place-even-further-stress-on-overstretched-land-use/"
}
|
SpaceX rocket launches are punching holes in part of Earth’s atmosphere, called the ionosphere, and it’s a beautiful sight to behold.
The holes appear as bright red blobs in the sky. Some are spherical while others appear like a bright smudge across the sky.
Recently, these spherical red blobs have been popping up over McDonald Observatory in Texas, which has astronomers slightly worried for the future.
The first blob over the observatory was detected in February, but in the following months, sightings have grown, Stephen Hummel, an astronomer at McDonald Observatory, told Spaceweather.com.
“We are seeing 2 to 5 of them each month,” Hummel told Spaceweather.com.
One such case of these atmospheric phenomena was observed in July after SpaceX launched its Falcon 9 rocket from a base in California. Previous instances have been recorded in 2017 and 2018.
SpaceX isn’t the first to punch a harmless hole in the atmosphere
While a hole in the atmosphere sounds dramatic, this phenomenon is temporary and harmless to life on Earth, Jeffrey Baumgardner, a senior research scientist at the Center for Space Physics at Boston University, told Business Insider.
He explained that scientists have been studying ionospheric holes for decades.
“During the beginning of the space age when they started launching rockets, through the Earth’s atmosphere into space, it was observed that they made a disturbance in the atmosphere,” Baumgardner said.
In fact, the holes a rocket burns into the ionosphere are basically just a concentrated version of what naturally happens across the entire globe, every night, Baumgardner explained.
In May 1973, researchers recorded a “large-scale hole” in the ionosphere caused by NASA’s launch of the Saturn V Rocket from the Kennedy Space Center in Florida, according to a study published in Science Magazine. The impacts were seen 1,000 km or about 620 miles from the “burning engines” of the rocket.
The ionosphere is constantly in flux
Each day, the ionosphere is created when the sun’s rays hit a part of our atmosphere about 185 miles above Earth, exciting mostly oxygen and nitrogen atoms, Baumgardner said.
Then, each night, when the sun’s rays are absent, those excited atoms recombine with molecules in the ionosphere. As a result, the ionosphere naturally decays away. This recombination creates a faint emission of light — an effect called airglow — Baumgardner said.
Similarly, when a rocket enters the ionosphere, the chemicals in its exhaust, like carbon dioxide and water vapor, recombine with those excited oxygen and nitrogen atoms more rapidly, and that recombination can show up as red blobs in the sky, Baumgardner said.
While just about any rocket may have this effect on the ionosphere, SpaceX rockets’ impact is two-fold: First, when the rocket flies to space and then, again, when the rocket descends toward Earth.
For example, the bright spherical balls that astronomers at McDonald Observatory are witnessing are from Falcon 9 boosters firing their engines to return to Earth, per Spaceweather.com.
How ionospheric holes could disrupt astronomical observations
These bright red blobs don’t last long. Some of the recent ones that have appeared over Texas last for one to two minutes, Baumgardner said.
However, if an astronomer just so happens to aim a telescope at the same part of the sky where one of these blobs appears, it could mean a very bad day.
“It could ruin somebody’s time on a telescope,” Baumgardner said. “They sometimes wait a year to get two or three nights on telescopes. So that would be a bad outcome for them if that actually happened.”
SpaceX announced earlier this year that the company plans to launch a record 144 rockets to ramp up its Starlink satellite internet service. This may cause an issue for ground-based astronomers if SpaceX utilizes its reusable Falcon 9 rocket, which could tear holes in the atmosphere not only on the way to space but when it returns to Earth.
SpaceX did not respond to Business Insider’s request for comment.
|
<urn:uuid:4cb31b21-3506-49db-941e-4669094b47cd>
|
{
"dump": "CC-MAIN-2024-10",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474697.2/warc/CC-MAIN-20240228044414-20240228074414-00416.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9277260303497314,
"pii_count": 0,
"score": 3.984375,
"token_count": 896,
"url": "https://dnyuz.com/2023/11/30/spacex-rockets-are-burning-bright-red-holes-in-earths-atmosphere-and-theyre-becoming-too-common-for-astronomers-comfort/"
}
|
By Carrie Eben, guest author
In the first article in this series, I related the importance of assessment aligning with the purpose of a classical education. The purpose of a classical education is leading a student toward intellectual skills and virtue. This alignment happens best when education takes a contemplative posture, which Josef Pieper calls leisure, or rest (schole). In the second article of this series, I related how relationships and conversations are important parts of assessment. Conversations for assessment can be held between student and mentor, between student and peer, and within the student herself. The conversation includes questions which create the gaps needed for learning: a discord within the soul. However, questions need to reflect the type of knowledge being assessed. In The Paideia Proposal: An Educational Manifesto, Mortimer Adler discusses three columns of knowledge, which include facts, skills, and ideas. It is important for teachers to know what type of knowledge they are assessing because it will inform how they want to assess and what kind of question they must ask. Facts, skills, and ideas are not isolated in their columns. There are no thick black lines keeping them within their silos. As with most everything in classical education, facts, skills, and ideas bleed into each other, intertwining depending on the subject matter being assessed.
Facts include organized knowledge from any type of subject matter that, when put together, are the keys which help unlock skills and understand ideas. They are generally taught didactically through “telling” from a teacher or a textbook. Facts are easy to assess since they require a simple “telling back.” This requires a student to reflect and retrieve a body of knowledge from their memory. Sometimes, assessment of simple facts (or a body of knowledge) is key to further understanding certain skills or ideas.
You might remember images of one-room schoolhouses featured in Anne of Green Gables where students were expected to recite their lessons. This recitation, or telling back, of what they learned, may seem boring, but for students who are being assessed on plain facts, telling back is a simple and effective way to assess. For young students especially, the act of telling or reciting what they know is a joyful experience. Songs, rhymes, and games are fun ways to make simple recitation more enjoyable. Teachers merely must ask questions reflecting the body of knowledge they want their students to learn—the “grammars” of any given subject (usually beginning with “who,” “what,” “when,” and “where”).
A more skillful form of “telling” is expression through narration, which builds both the relationship between mentor and student and a relationship with knowledge. According to Karen Glass in Know and Tell: The Art of Narration, “Rather than being simply the way we interact with the people around us…narration becomes the key that builds our relationship with knowledge, develops our thinking skills, and gives the power to collect our thoughts and relate them accurately and effectively, both in speech and writing.” The art of “telling back” a process or procedure, a story, or the expression of any learning requires skill. It is something to be practiced and coached (which integrates into the “skills” category below; remember, no silos with the columns of knowledge). Narration of a story allows children to recall what they learn and then tell it back, but in an artful way. Although this seems simple, it can be a very formative process for young students who are still learning to articulate language. It is a double assessment of recalling facts and practicing the art of articulating those facts, which eventually leads to written composition. Both recitation and narration are valuable forms for teachers to use when assessing students of all ages in a variety of subjects.
Skills are arts that require practice and coaching that can be taught through imitation (mimesis) of examples (types). The academic arts include reading, writing, speaking, listening, calculating, problem solving, measuring, estimating, and using critical judgement. Teachers present several examples of a skill so students can imitate the proper procedure for mastering the skill. When a student compares different examples before them, they can see the similarities and perceive the common truth (logos) for mastery. By having students express the process (through narration) and present their level of skill through practice, teachers can then assess students in their skills and provide feedback for eventual mastery. After much repetition, time, and adjustment, the student can perfect the skills needed.
Narrative feedback, which includes questions for the student about their skill, or even a conversation, should be specific and formative so that the student feels as though the practice is “easy, plus one.” Teachers can ask questions such as: “Which verbs can you swap out with stronger versions in order to better articulate your meaning?”, “Are you lining up your decimals in order to add correctly?”, or “How can you arrange your facts better when you narrate back your story?” Through observing a student’s practice of skill performance, teachers can ask formative questions to help their students incrementally improve their academic skills. It is important to give students many opportunities to try and fail without a formal assessment. They need leisurely time to practice, assess, and correct freely.
Ideas include enlarged understanding about ideas and values which are found in literature, art, music, history, and even science and math. Assessment of ideas can also include comparison of types and offers students an opportunity to realize true ideas and virtue rather than skills. Group conversations involving all assessor types (mentors, peers, and self) help all involved to glean understanding from each other as well as assess through the articulation and embodiment of ideas.
All teachers know the beautiful feeling when they see a spark of “Joy” brighten the eyes of a student who suddenly grasps a truthful idea. It permeates their body. We might also refer to this as a “light-bulb” moment. Adler says that teachers help birth and clarify ideas “by asking questions, by leading discussions, by helping students to raise their minds up from a state of understanding or appreciating less to a state of understanding or appreciating more.” Through thoughtful questions, teachers can assess the level of understanding of ideas by listening to a student in conversation or through other means of articulation (writing, projects, debate, etc.).
The Five Common Topics (from the first Canon of Rhetoric—Invention) are useful tools for discussing and assessing ideas. They require the student to think deeply about any given topic and are tools to help articulate their ideas. The Five Common Topics are:
Definition: Describe x. What is it? What is it not?
Comparison: Compare x to y. How are they similar and different?
Relationship: What are the causes and effects of x? What happened before or after x?
Circumstance: What are the circumstances surrounding x and how do they shape it?
Testimony: What do others say about x? Is it true?
When students are given an opportunity to gaze at ideas through the lens of the Five Common Topics, they can harvest more complete understanding. Revelations of truth, beauty, and goodness will abound when students are allowed to converse and articulate (even poorly) difficult ideas and the virtues which lead to the “good life.”
At Sager Classical Academy, our grammar school employs narrative assessment for student growth and for parent/teacher communication. Teachers are taught to assess specific facts, skills, and ideas from each subject, which are presented narratively in quarterly reports for parent and teacher conversation. A new part of the assessment this year, which has proven very helpful for communication, is the section that identifies intellectual virtues and moral virtues (in place of a “behavior” section). Giving definition to these specific ideas of virtue helps both parents and teachers encourage the will of the student rather than simply modifying behavior.
When parents, teachers, and students are aware of the type of knowledge (facts, skills, or ideas) being assessed, they can engage with the most appropriate type of assessment. This assessment will help all parties pinpoint and reflect on areas for growth. Assessment dialogue from a state of rest, or leisure, combined with true relationship and questions fitted to the appropriate task, will address the needs of the whole student. Instead of tempting students to love the wrong things, such as grades and scores (fleeting desires), these assessments will teach the student to direct their will to loving lasting virtues which help the intellect to engage rightly in all stages of life.
Adler, Mortimer Jerome. The Paideia Proposal: An Educational Manifesto. New York, NY: Macmillan, 1999.
Glass, Karen. Know and Tell: The Art of Narration. North Charleston, SC: CreateSpace Independent Publishing Platform, 2018.
Mortimer J. Adler, The Paideia Proposal: An Educational Manifesto (New York, NY: Simon and Schuster, 1982), 23.
Karen Glass, Know and Tell: The Art of Narration (North Charleston, SC: CreateSpace Independent Publishing Platform, 2018), 11-12.
Andrew Pudewa uses this phrase in his IEW Structure and Style video series.
Adler, The Paideia Proposal: an Educational Manifesto, 29
For over twenty years, Carrie Eben has championed classical education in both the private school classroom and homeschool arenas. She currently serves as founding board member at Sager Classical Academy in Siloam Springs, AR. Carrie passionately leads teachers and parents in the classical model of education. She develops and delivers customized workshops for administrators, teachers, and parents in both classical school and homeschool settings via Classical Eben Education Consulting (www.classicaleben.com). Carrie holds a BSE in Intermediate Education from John Brown University and a MSEd in Curriculum and Instruction from Oklahoma State University. She is currently a PhD student in the Humanities program at Faulkner University and is a CiRCE Institute Master Teacher.
|
<urn:uuid:03c46221-6f23-4f96-8bdf-86703820bb68>
|
{
"dump": "CC-MAIN-2023-23",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224656833.99/warc/CC-MAIN-20230609201549-20230609231549-00062.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9495537281036377,
"pii_count": 0,
"score": 3.625,
"token_count": 2147,
"url": "https://theclassicalthistle.wordpress.com/2023/01/25/assessment-for-the-classical-school-part-3-facts-skills-or-ideas/"
}
|
The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Section Three of the Fourteenth Amendment bans anyone from holding any federal office who has taken an oath to uphold the Constitution and who then breaks that oath by engaging in "insurrection or rebellion against the same." Donald J. Trump is precisely such a person.
Trump took the Presidential oath of office at noon on January 20, 2017. Then, knowing that he had lost the 2020 election, he engaged in an "insurrection" on January 6, 2021.
Trump tried to persuade Vice President Mike Pence and Members of Congress not to count certain state electoral votes, which had been validly cast. He lied to the American people for years that the election had been stolen and continues to repeat those lies even to the present day.
Section Three of the Fourteenth Amendment is self-enforcing. It is "the supreme Law of the Land" binding on each of the 50 State Secretaries of State and their subordinates who draw up primary or general election ballots.
State Secretaries of State and their subordinates may not list on their election ballots as candidates for President anyone who is not eligible to hold the office of President. To be eligible to hold the office of President, one must be: 1) a natural born Citizen; 2) thirty-five years or older; 3) a Resident of the United States for fourteen years; and 4) a person who has not broken their oath of office to support the Constitution by engaging "in insurrection or rebellion against the same."
No jury verdict is required to determine whether a candidate who seeks to run for the presidency on a primary or general election ballot is a natural born citizen, is 35 years of age, and has been fourteen years a resident of the United States. Likewise, no jury verdict or act of Congress is required to keep Secretaries of State and their subordinates from printing ballots with the name "Donald J. Trump" on them.
Keeping Trump off the ballot after his conduct on January 6, 2021 does not deprive him of life, liberty, or property in the same way that a criminal or a civil jury verdict could. It is a privilege to be eligible to run for President of the United States and that privilege does not extend to constitutional oath breakers who engage "in insurrection or rebellion against the same."
Webster's 1828 Dictionary of American English defines "insurrection" as follows:
INSURREC'TION, noun [Latin insurgo; in and surgo, to rise.] 1. A rising against civil or political authority; the open and active opposition of a number of persons to the execution of a law in a city or state. It is equivalent to sedition, except that sedition expresses a less extensive rising of citizens. It differs from rebellion, for the latter expresses a revolt, or an attempt to overthrow the government, to establish a different one or to place the country under another jurisdiction. It differs from mutiny, as it respects the civil or political government; whereas a mutiny is an open opposition to law in the army or navy, insurrection is however used with such latitude as to comprehend either sedition or rebellion.
Donald J. Trump in a nationally televised debate with President Biden refused to renounce the Proud Boys and said: "Proud Boys, stand back and stand by." Trump then falsely denied that he had lost the 2020 presidential election, urged his followers to assemble at noon on January 6, 2021 on the Ellipse outside the White House, and then whipped a mob of some extremists, and many naïve conservatives, into a frenzy, urging them to march on the Capitol as Congress was certifying the results of the 2020 presidential election. Trump told his followers: "We fight like hell. And if you don't fight like hell, you're not going to have a country anymore."
Trump then watched the riot that he had launched play out on national television without sending a Tweet or any other kind of similar message urging his supporters to behave peacefully. He did this even though one Tweet from him would have caused the insurrection he incited to stop—immediately ending, for example, the calls "to hang Mike Pence."
This meets the constitutional definition of "insurrection" even though so far Trump has not been criminally charged with inciting an insurrection. Remember that an insurrection is: "A rising against civil or political authority; the open and active opposition of a number of persons to the execution of a law in a city or state. It is equivalent to sedition, except that sedition expresses a less extensive rising of citizens." The Fourteenth Amendment bans either inciting an insurrection or a rebellion. Trump is guilty of inciting an insurrection, even if he may not have meant to cause a rebellion.
Some will no doubt say that the voters should be the judges of Trump's insurrection, but that is not what the Constitution says. The Constitution says that only Presidents who follow their oath of office, which includes taking care that the laws be faithfully executed, are eligible to be on the ballot and to run for re-election.
The Constitution is undemocratic in preventing Americans who are not natural born citizens, or who will be under the age of 35 on January 20, 2025, from being on the ballot for President next year. But we live in a constitutional republic, not an Athenian democracy of mob rule.
Chris Christie is legally injured by Donald Trump's name being on the ballot. They draw from some similar voters. Christie should sue, if necessary, to get Trump's name off the ballot. Then the Supreme Court can open the dictionary and tell us what we all already know—that Trump incited an insurrection and is disqualified from being on any primary or general election ballots next year.
UPDATE: For much more detail on these matters, see Will Baude and Michael Stokes Paulsen's The Sweep and Force of Section Three, forthcoming in the University of Pennsylvania Law Review.
|
<urn:uuid:851221d3-e3db-40a0-8c63-ac11c34b0adb>
|
{
"dump": "CC-MAIN-2023-50",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100972.58/warc/CC-MAIN-20231209202131-20231209232131-00439.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9627396464347839,
"pii_count": 0,
"score": 2.53125,
"token_count": 1208,
"url": "https://reason.com/volokh/2023/08/10/trump-is-disqualified-from-being-on-any-election-ballots/?itm_source=parsely-api"
}
|
The federal government says new rules for the "safeguard mechanism" will begin in July.
But what is the safeguard mechanism?
Why does it need changing?
And what do business groups, climate experts and green groups think about the government's changes?
What is the safeguard mechanism?
This week, the federal government released a position paper on its proposed changes to the safeguard mechanism, which is a central component of its climate change policy.
The safeguard mechanism was established by the Abbott Coalition government in 2016 as part of its Emissions Reduction Fund.
But it has been ineffectual, with industrial emissions increasing during its operation.
The Albanese government wants to revamp it so it actually drives down emissions and helps Australia meet its climate targets.
As things stand, the mechanism applies to industrial facilities that emit more than 100,000 tonnes of carbon-dioxide-equivalent covered emissions a year. They are the largest polluters in the country.
About 215 industrial facilities meet that definition.
As a group, they account for almost 30 per cent of Australia's total greenhouse gas emissions.
Under the original mechanism, those big polluters were supposed to keep their net emissions below an emissions limit (a baseline), and if they produced emissions above their allowable limit they were supposed to take actions to drive their emissions down.
To help them do that, they could purchase Australian carbon credit units (ACCUs) and hand them to the government.
But critics say the emissions limits were artificially high.
And the carbon credit scheme proved controversial, with a recent review of the scheme making 16 recommendations to improve it, all of which the Albanese government has accepted "in principle".
Big changes to the safeguard mechanism
But the government wants to do more than just tinker with Australia's current carbon credit arrangements.
It wants to overhaul the safeguard mechanism framework to make it much more effective at driving down emissions.
To do that, from July this year it will begin gradually reducing the emissions limits (the baselines) that big polluting facilities are allowed to produce each year.
That's going to force the facilities to cut more of their emissions every year to stay under their baselines, helping to put them on a path to net zero emissions by 2050.
The government also wants to introduce a new kind of carbon credit to the scheme: Safeguard Mechanism Credits (SMCs).
It says a big polluter will be able to earn those new "credits" by emitting fewer emissions than their baseline allows. And businesses that earn those credits will be able to sell them to higher-emitting facilities that are struggling to cut their emissions below their own baselines, which will help those higher-emitting facilities to reduce their net emissions.
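As a toy illustration of how that netting could work, the sketch below uses two hypothetical facilities and invented figures; the actual crediting and trading rules under the scheme are more detailed than this.

# Toy illustration of Safeguard Mechanism Credits (SMCs); all figures are invented.
baseline_a, actual_a = 120_000, 100_000   # Facility A comes in under its baseline (t CO2-e)
baseline_b, actual_b = 120_000, 135_000   # Facility B exceeds its baseline (t CO2-e)

smcs_earned = baseline_a - actual_a       # 20,000 credits Facility A could sell
shortfall = actual_b - baseline_b         # 15,000 t Facility B must cover

smcs_bought = min(shortfall, smcs_earned) # B buys only what it needs, capped by supply
net_b = actual_b - smcs_bought            # B's net emissions after surrendering the SMCs
print(f"A earns {smcs_earned:,} SMCs; B buys {smcs_bought:,}; B's net emissions: {net_b:,} t")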
Overall, the federal government will be reducing the emissions baselines for big polluters by 4.9 per cent each year up to 2030.
The proposed schedule phases in those baseline reductions year by year, using 2022 as the base year.
By cutting the emissions limits by roughly 5 per cent a year, it means Australia's heaviest-polluting facilities will have to reduce their annual emissions by roughly 30 per cent by 2030.
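As a rough check on that arithmetic, the sketch below assumes the 4.9 per cent cut compounds on each successive year's baseline and that seven annual reductions apply between now and 2030; both assumptions are illustrative rather than taken from the position paper.

# Rough check: cumulative effect of a 4.9% annual decline in emissions baselines.
# Assumes compounding and seven annual reductions to 2030 (illustrative assumptions).
decline = 0.049
baseline = 1.0                      # 2022 baseline, normalised to 1
for _ in range(7):
    baseline *= (1 - decline)
print(f"Baseline by 2030: {baseline:.3f} of the 2022 level")
print(f"Cumulative reduction: {1 - baseline:.1%}")   # roughly 30 per cent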
What about carbon tariffs?
The government's changes to Australia's carbon rules don't stop there.
It says it will also consider introducing European-style carbon tariffs on some imported goods so Australian manufacturers can remain competitive after its new carbon rules kick in.
The EU has imposed tariffs on imported goods such as steel, cement and fertiliser to prevent European industries being undercut by competitors with weaker climate laws.
Some of Australia's industry groups have raised concerns about those types of tariffs, and the government has been listening.
"We've taken on board that feedback and said, 'Yes, this is something we should look at alongside all the other options available to government to ensure that, now that Australia has a decent climate policy, Australian industry is guided on that,'" Energy Minister Chris Bowen said this week.
"And just as Europe has gone down this road, Australia will consider its options alongside other options."
Some concerns have been raised
However, regarding the changes to the safeguard mechanism proper, experts say they're worried the government's new framework still won't be tough enough to drive Australia's absolute emissions down hard.
For one, they say the government's preferred method for calculating emissions baselines for heavy-polluting facilities is still problematic.
The federal government says it doesn't want to be too prescriptive with its baseline calculations so it won't adopt "absolute baselines" that fix hard carbon limits on Australia's big polluters.
It says it wants to retain flexible baselines to give facilities room to increase production without being penalised.
But that will have consequences for emissions, experts say.
"The government's proposal to continue with individually tailored "production-adjusted" emissions intensity baselines mean industries can expand without facing increased costs," said Rebecca Pearse from the Australian National University.
"For example, let's say a new liquefied natural gas plant expands its output to meet international demand. Then, the overall emissions baseline for the plant will also increase because the baseline is measured as emissions per tonne of gas produced. If enough producers do the same, the overall carbon budget will be broken."
And that's just one area of contention.
Another area of concern involves the safeguard mechanism's use of carbon credits and offsets to buttress the system.
The United Nations has been calling on policymakers to prioritise making cuts in absolute emissions by 2030, saying offsets should be used sparingly because they're too easily used by corporates and governments to greenwash their net-zero achievements.
Andrew Macintosh and Don Butler from the Australian National University, who raised concerns last year about Australia's existing carbon credit scheme, said they couldn't fathom how the recent review of the scheme found the arrangements were "essentially sound".
But nevertheless, they said they welcomed the recommendations to improve the current scheme because all carbon credit systems needed integrity to work properly.
"Measures should be taken to prevent low-integrity credits being issued to existing projects," they advised this week.
"And polluting facilities should not be allowed to use low-integrity credits to meet their emission reduction obligations."
Responses from green groups?
Green groups say the proposed changes to the safeguard mechanism will improve the system.
But they've still raised concerns about the government's use of carbon credits and offsets, warning companies might be able to exploit the flexibility in the new system to keep polluting at dangerous levels.
Australian Conservation Foundation:
Gavan McFadzean, the ACF's climate change program manager, said unlimited carbon credits and offsets could undermine the entire project.
"This redesign significantly improves on the Coalition's safeguard mechanism in several respects, but we can’t offset our way to net zero," he said.
"Unlimited offsets allow big, publicly listed companies like Woodside, Glencore and Santos — which have done more than enough climate damage already — to pay to keep polluting."
Glenn Walker, Greenpeace's head of advocacy and strategy, said emissions baselines for new facilities entering the scheme ought to be set to zero if we're to be serious about reducing emissions.
"Massive new entrant gas projects like Woodside's Burrup Hub could blow Australia's emissions baseline out of the water under the current policy proposal," he warned.
"This places an unfair burden on other Australian businesses and sectors, which would need to do more heavy lifting to reduce emissions."
Jennifer Rayner, the head of advocacy at the Climate Council, said cutting down the artificially high emissions caps that existed under the original safeguard mechanism was a welcome step.
"However, allowing facilities in the safeguard mechanism to use cheap and easy offsets to write off all of their emissions will send completely the wrong signal," Dr Rayner said.
"This will simply incentivise Australia's heavy industry to engage in tricky carbon accounting to cover up pollution as usual instead of investing in genuine transformation."
Responses from industry?
Business groups have broadly welcomed the government's changes to the system, saying they will finally provide certainty for businesses.
Australian Industry Group:
Innes Willox, the chief executive of Ai Group, said the changes looked manageable and should keep Australian businesses competitive.
"What these proposals do is give industry the framework to work with now and perhaps much greater sense of certainty around direction of policy," he said.
"We didn't want to have a system in place that would lead to closure, to offshoring very quickly. We think industry can work with this."
Business Council of Australia:
The BCA said the changes were a measured step and would provide certainty for businesses.
"[But] the final design of the safeguard reforms will require ongoing consultation, recognising that for some businesses the transition will be more difficult because the necessary technology is still to be developed," it said.
Australian Chamber of Commerce and Industry:
David Alexander, ACCI's chief of policy and advocacy, said the government's move towards production-adjusted baselines set on a facility-by-facility basis was a good one.
"This recognises that the emissions-reduction effort for some facilities is more difficult than others due to location, the nature of production and the current technologies installed at each site," he said.
"The safeguard mechanism needs to be structured so that facilities are encouraged to lower their emissions intensity, not simply cut production in order to meet targets."
Tania Constable, chief executive of the Minerals Council of Australia, said she still had some concerns about the overall cost of compliance under the scheme.
She also wasn't sure if the new system would allow Australian businesses to remain competitive with overseas rivals.
"[But] our industry is taking a very constructive approach to the proposed changes," she said.
The federal government is now seeking feedback on its proposed changes to the safeguard mechanism, with the feedback period ending on February 24.
Its new rules will begin in July.
|
<urn:uuid:74f296a6-bea5-49f6-b3b1-f07f36d91350>
|
{
"dump": "CC-MAIN-2023-06",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500215.91/warc/CC-MAIN-20230205032040-20230205062040-00856.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9587672352790833,
"pii_count": 0,
"score": 3.0625,
"token_count": 2076,
"url": "https://www.abc.net.au/news/2023-01-15/safeguard-mechanism-australian-government-drive-down-emissions/101844050"
}
|
There is no question that China allegedly flying a spy balloon over the United States is provocative — especially on the eve of Secretary of State Antony Blinken’s first official diplomatic visit to Beijing (which has now been postponed).
Espionage generally exists in something of a gray area between countries. It’s largely tolerated, with the understanding that everyone does it. Taking aggressive action to prevent it could be met with a response — and potentially a disproportionate one — that you dislike even more. This is why countries only expel diplomats suspected of spying under extraordinary circumstances.
But when it comes to aerial surveillance, that often-unstated status quo has sometimes been formalized into an official agreement. Indeed, for decades until recently, the United States and Russia participated in a treaty — originally promoted by Republican presidents — allowing surveillance flights over each other’s airspace.
Dwight D. Eisenhower first proposed an “Open Skies” agreement in Geneva in 1955. The idea was that sharing information about military installations and agreeing to requirements governing these flights — such as giving advance notice, and sharing the information gleaned — would reduce the likelihood that a confrontation over surveillance would spiral into war.
Eisenhower later admitted it was a ploy, proposed with the knowledge that the Soviet Union would never agree, which could then be used to argue that the Soviets had no interest in arms control. The Soviets indeed had no interest in risking exposing a military inferiority they were straining to keep hidden. So they rejected the agreement, with Soviet leader Nikita Khrushchev dismissing it as an “espionage plot.”
But the idea was resurrected in the late-1980s, also by a Republican president. George H.W. Bush proposed agreeing to allow coordinated unarmed flights over one another’s countries in the name of transparency. This time Mikhail Gorbachev, with the Soviet Union dissolving, was more receptive. The Open Skies Treaty was signed in Helsinki in 1992 and went into force in 2002, with 34 other nations taking part. Between 2002 and 2016, the United States conducted 196 flights over Russia, while Russia conducted 71 over the United States, according to data released in 2016 by the State Department.
Both the United States and Russia have now pulled out of the treaty. The Donald Trump administration did so in 2020, not because it was viewed as a bad idea, necessarily, but because of Russia’s alleged noncompliance with its terms. (In 2018, the Trump administration balked at but later approved Russia’s use of certain equipment.) Russia ultimately pulled out as well, and while President Biden opposed the Trump administration’s move at the time, his administration hasn’t moved to reenter the treaty.
China has never been party to the treaty — which doesn’t mean, of course, that the two countries don’t conduct aerial surveillance of one another. In 2001, a U.S. Navy surveillance plane collided with a Chinese aircraft over the South China Sea and had to make an unplanned landing on China’s Hainan Island. Twice in 2016, Chinese jets buzzed U.S. spy planes for allegedly flying too close to Chinese territory in the East China Sea. And that’s to say nothing of the use of increasingly high-tech satellites.
It’s not just planes or satellites, either: Pentagon budget documents last year detailed a new plan to increase U.S. funding for, yes, surveillance balloons. The idea was that we could track and combat hypersonic weapons developed by China and Russia. While such surveillance is often conducted by satellites, balloons are significantly cheaper.
The United States also had a series of balloon programs early in the Cold War intended to try to get a peek behind the Iron Curtain. The programs, known as Project Moby Dick and Project Genetrix, among other names, obtained mixed results.
#OTD in 1955, President Eisenhower approved Operation Genetrix. This project used top-secret high-altitude balloons with cameras to gather photographic intelligence over adversaries during the #ColdWar. #ShowTheWay #history— NGA (@NGA_GEOINT) December 27, 2021
The Pentagon’s stated reason for not shooting down the balloon yet is the danger it could pose to civilians on the ground. And it says it doubts the information being gleaned from the balloon would be more significant than what China is already obtaining via other means, such as satellites.
“We had been looking at whether there was an option yesterday over some sparsely populated areas in Montana,” a senior defense official said. “But we just couldn’t buy down the risk enough to feel comfortable recommending shooting it down yesterday.”
The official added that “our best assessment at the moment is that whatever the surveillance payload is on this balloon, it does not create significant value added over and above what [China] is likely able to collect through things like satellites in low Earth orbit.”
As notably, the administration said that this isn’t an unprecedented event; there were similar instances during the Trump administration.
“It has happened a handful of other times over the past few years, to include before this administration,” the official said.
That statement’s certainly worth probing further — including when it comes to that administration’s reaction.
But beyond that, officials must be considering what kind of precedent would be set by shooting it down, both for the United States and its adversaries. There could also be value in observing the balloon — and a downside to showing China and others how we would dispatch such a threat.
All of which was noted Friday by a rare Republican skeptic of the just-shoot-it-down strategy. Rep. Matt Gaetz (R-Fla.) suggested that the balloon could be an attempt by China to “bait the United States into disputes over appropriate rights in the air.”
“Because we would say, ‘Look, we shot the thing down. It was over our airspace,’” Gaetz said. “Then does that give China some sort of pretext for China to take some action? … If you create this sort of jurisdictional pretext, you could see things escalate there very quickly.”
Gaetz added that if we knew the balloon was gathering very sensitive intelligence, we should shoot it down. But if it’s as insignificant as administration officials say, he warned against such a step.
“Maybe they’re hoping that we go capture it,” Gaetz said of the Chinese.
It’s all worth considering, and worth putting in the context of the history of this kind of espionage — rather than jumping to conclusions about the appropriate course of action in order to cast the Biden administration as soft on China.
More on the flying objects shot down over U.S., Canada
The latest: U.S. fighter jets have shot four objects out of the sky over North America this month. The first object, a balloon shot down off the South Carolina coast, was Chinese. Biden said Thursday the three other objects did not so far appear to have connections to foreign surveillance programs.
The first balloon: The first object was linked by the U.S. intelligence community to a vast surveillance program run by the People’s Liberation Army. Here’s a timeline of the balloon’s journey across the United States and photos of the recovery.
The response from China: China’s Foreign Ministry said the U.S. has sent at least 10 unsanctioned balloons into Chinese airspace since last year. China accused the United States of an “overreaction” and reiterated claims that the airship was a civilian vessel that drifted off course.
Why use a spy balloon? Spy balloons “offer a few advantages over the use of satellites or drones,” James Rogers, an academic at Cornell, tells us. The Defense Department told Congress that similar surveillance balloons had been spotted in U.S. airspace before, and a top U.S. general said past incursions by Chinese balloons went undetected by the Pentagon.
|
<urn:uuid:c4844a78-cca1-4108-aa16-db0546b6849a>
|
{
"dump": "CC-MAIN-2023-14",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949107.48/warc/CC-MAIN-20230330070451-20230330100451-00724.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9621920585632324,
"pii_count": 0,
"score": 2.703125,
"token_count": 1700,
"url": "https://www.washingtonpost.com/politics/2023/02/03/chinese-balloon-aerial-surveillance-history/?tid=pm_politics_pop"
}
|
San Antonio company captures CO2, but one expert says it does more harm than good
Martin Keighley said he joined CarbonFree as its CEO four years ago because he wanted to be part of the solution to climate change — and make money while doing it.
“We have a big problem out there in terms of the need to address climate change, the CO2 in the atmosphere,” Keighley said. “But we see it as a big opportunity. And for us, it’s an opportunity to run a profitable business.”
Keighley and San Antonio-based CarbonFree have an ambitious goal: to capture 10% of all industrial emissions of carbon dioxide by 2050. Those industrial CO2 emissions account for roughly one-fifth of all global emissions.
Carbon capture is one of several technologies being considered as major ways to reduce carbon dioxide emissions as the planet continues to warm to increasingly dangerous levels.
In fact, the federal government has committed hundreds of billions of dollars in the last several years to support the young industry with tax credits and infrastructure.
The basic idea behind carbon capture is simple: fossil fuel and industrial plants produce much of the CO2 emissions in the atmosphere, so what if those emissions were caught before they could get to the atmosphere?
Companies typically capture carbon from flue chimneys, the smokestacks coming out of fossil fuel and industrial plants. Once a company captures carbon, there are two basic things it can do with it.
“We talk about CCU and CCS,” Keighley said. “Carbon capture utilization and carbon capture storage.”
CCU is a process where companies use captured CO2 to make chemical products. CCS, also known as carbon capture sequestration, is a process where companies take captured CO2 and store it away, typically underground.
CarbonFree currently operates one carbon capture plant, called SkyMine, next door to a San Antonio cement factory. That plant runs on the CCU model, where it captures some of the cement plant’s emissions and uses a chemical process to turn them into several products.
“We make baking soda, we make hydrochloric acid, caustic, and bleach,” Keighley said.
Keighley acknowledged that baking soda, SkyMine’s primary product, decomposes over a relatively short period and ends up re-emitting much of the carbon stored in it.
But he said a forthcoming CarbonFree facility called SkyCycle will produce a product that can store carbon for centuries or millennia — precipitated calcium carbonate, or PCC. PCC is used as a filler in numerous products.
“It goes into things like … paper, paints, emulsions, into detergents,” Keighley said. “It also goes into food products like toothpaste.”
The SkyCycle plant will capture carbon from an Indiana US Steel plant. It will operate as a CCUS plant because it will both utilize and store carbon through PCC. To Keighley and many others, carbon capture is a win-win for the environment and their bottom line.
Stanford Civil and Environmental Engineering professor Mark Jacobson is one of carbon capture’s biggest critics. He said doing nothing would be better than supporting the technology.
“Carbon capture is a scheme of the fossil fuel industry,” Jacobson said. “I mean, they go hand-in-hand. It’s just a way for the fossil fuel industry to extend themselves.”
Jacobson is a strong proponent of a full and immediate transition from fossil fuels to renewable energy.
He published a book earlier this year titled No Miracles Needed, in which he argued that we already have all the technology we need to solve the climate crisis — solar, wind, and water power generation — without so-called “miracle technologies” like carbon capture.
He said he believes technologies like carbon capture are tools the fossil fuel industry can use to keep their plants running with promises of reduced emissions. Oil giant BP has been a long-time investor in CarbonFree, though Keighley said the fossil fuel company is only a small investor.
Jacobson added that carbon capture companies rarely live up to big claims about emissions reductions.
“The full load capture rate under ideal conditions can be like 90%,” he said. “However, in reality, the actual projects are between 20 and 70%.”
The full load capture rate is what carbon capture companies estimate as the absolute best their plants could do in the best circumstances, which are often unavailable.
At SkyMine, CarbonFree boasts that it has the capacity to capture 50,000 tons of CO2 per year, or 15%-20% of the plant's emissions. But since it's been operational, it hasn't captured more than 20,000 tons in a year.
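A back-of-the-envelope reading of those numbers, treating the stated 50,000-ton capacity as 15%-20% of the cement plant's emissions; the implied plant totals and capture shares below are derived for illustration, not figures reported by either company.

# Back-of-the-envelope: what the stated figures imply about SkyMine's capture share.
capacity_tons = 50_000      # stated annual capture capacity
actual_tons = 20_000        # reported ceiling on actual annual capture so far
for capacity_share in (0.15, 0.20):               # stated share of plant emissions at capacity
    plant_emissions = capacity_tons / capacity_share
    actual_share = actual_tons / plant_emissions
    print(f"if capacity is {capacity_share:.0%} of emissions, actual capture is about {actual_share:.0%}")
# Implies roughly 6%-8% of the plant's emissions captured in its best years.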
Keighley said SkyMine represents a stepping stone to bigger and more efficient projects like SkyCycle.
“It’s not a pilot plant, but it also demonstrates the technology to someone like US Steel, and then working with them on the next stage of growth up to something more like half a million ton capture,” he said.
SkyCycle is only expected to capture 15%-20% of the US Steel plant emissions in its first few years of operations, but Keighley again said it’s a way to prove the technology works and get it ready for larger scale capture.
Keighley added that capturing carbon is good, even if it’s not as much as they hoped. But Jacobson disagreed.
“You have to do something with the carbon dioxide,” Jacobson said. “Well 75% of all CO2 is used for enhanced oil recovery. And that process alone puts 40% of the CO2 right back to the air.”
Both SkyMine and SkyCycle have hydrochloric acid as a byproduct, and Keighley acknowledged that some of that is sold to fossil fuel companies for use in oil extraction. But he said it isn't a big part of their business.
“Our preference by far is to sell that [hydrochloric acid] into industrial markets, not particularly the oil and gas [industry],” Keighley said.
Though Keighley said SkyCycle will be a carbon negative facility, Jacobson said many carbon capture plants end up producing more carbon through their construction and operation than they will ever capture.
He also pointed to the fact that many carbon capture plants run on fossil fuel themselves. But he said even if they ran off of renewable energy, it would just be better to skip the carbon capture middle man and replace whatever CO2-emitting plant they’re capturing carbon from with renewable power.
Even for hard-to-abate industrial emissions, which is the focus of CarbonFree, Jacobson said carbon capture is still not useful. He said Sweden uses a 100% renewable process to produce its steel and that there are ways to make geopolymer cement with renewable power.
“It’s better to replace fossil fuel plants with renewable electricity,” Jacobson said. “These are all just greenwashing schemes. Sure companies can make money off of it, but it’s not useful for the environment.”
But Keighley pushed back.
“I believe very strongly that we need to work with current industrial emitters to manage their emissions as they are … otherwise you are decades away from getting real uptake on new technologies,” Keighley said.
Though carbon capture might be one solution to the climate crisis, a 2022 report from the International Energy Agency found that the industry’s current impact is dwarfed nearly 5 to 1 by what would actually be required to hit net-zero emission goals by 2050.
|
<urn:uuid:d7f70632-55ad-49c1-92be-c62f696946a4>
|
{
"dump": "CC-MAIN-2023-23",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224652569.73/warc/CC-MAIN-20230606114156-20230606144156-00488.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9545607566833496,
"pii_count": 0,
"score": 2.71875,
"token_count": 1647,
"url": "https://www.tpr.org/technology-entrepreneurship/2023-05-24/san-antonio-company-captures-co2-but-one-expert-says-it-does-more-harm-than-good"
}
|
San Antonio company captures CO2, but one expert says it does more harm than good
Martin Keighley said he joined CarbonFree as its CEO four years ago because he wanted to be part of the solution to climate change — and make money while doing it.
“We have a big problem out there in terms of the need to address climate change, the CO2 in the atmosphere,” Keighley said. “But we see it as a big opportunity. And for us, it’s an opportunity to run a profitable business.”
Keighley and San Antonio-based CarbonFree have an ambitious goal: to capture 10% of all industrial emissions of carbon dioxide by 2050 Those industrial CO2 emissions account for roughly one fifth of all global emissions.
Carbon capture is one of several technologies being considered as major ways to reduce carbon dioxide emissions as the planet continues to warm to increasingly dangerous levels.
In fact, the federal government has committed hundreds of billions of dollars in the last several years to support the young industry with tax credits and infrastructure.
The basic idea behind carbon capture is simple: fossil fuel and industrial plants produce much of the CO2 emissions in the atmosphere, so what if those emissions were caught before they could get to the atmosphere?
Companies typically capture carbon from flue chimneys, the smokestacks coming out of fossil fuel and industrial plants. Once a company captures carbon, there are two basic things it can do with it.
“We talk about CCU and CCS,” Keighley said. “Carbon capture utilization and carbon capture storage.”
CCU is a process where companies use captured CO2 to make chemical products. CCS, also known as carbon capture sequestration, is a process where companies take captured CO2 and store it away, typically underground.
CarbonFree currently operates one carbon capture plant, called SkyMine, next door to a San Antonio cement factory. That plant runs on the CCU model, where it captures some of the cement plant’s emissions and uses a chemical process to turn them into several products.
“We make baking soda, we make hydrochloric acid, caustic, and bleach,” Keighley said.
Keighley acknowledged that baking soda, SkyMine’s primary product, decomposes over a relatively short period and ends up re-emitting much of the carbon stored in it.
But he said a forthcoming CarbonFree facility called SkyCycle will produce a product that can store carbon for centuries or millennia — precipitated calcium carbonate, or PCC. PCC is used as a filler in
|
numerous products.
“It goes into things like … paper, paints, emulsions, into detergents,” Keighley said. “It also goes into food products like toothpaste.”
The SkyCycle plant will capture carbon from an Indiana US Steel plant. It will operate as a CCUS plant because it will both utilize and store carbon through PCC. To Keighley and many others, carbon capture is a win-win for the environment and their bottom line.
Stanford Civil and Environmental Engineering professor Mark Jacobson is one of carbon capture’s biggest critics. He said doing nothing would be better than supporting the technology.
“Carbon capture is a scheme of the fossil fuel industry,” Jacobson said. “I mean, they go hand-in-hand. It’s just a way for the fossil fuel industry to extend themselves.”
Jacobson is a strong proponent of a full and immediate transition from fossil fuels to renewable energy.
He published a book earlier this year titled No Miracles Needed, in which he argued that we already have all the technology we need to solve the climate crisis — solar, wind, and water power generation — without so-called “miracle technologies” like carbon capture.
He said he believes technologies like carbon capture are tools the fossil fuel industry can use to keep their plants running with promises of reduced emissions. Oil giant BP has been a long-time investor in CarbonFree, though Keighley said the fossil fuel company is only a small investor.
Jacobson added that carbon capture companies rarely live up to big claims about emissions reductions.
“The full load capture rate under ideal conditions can be like 90%,” he said. “However, in reality, the actual projects are between 20 and 70%.”
The full load capture rate is what carbon capture companies estimate as the absolute best their plants could do in the best circumstances, which are often unavailable.
At SkyMine, CarbonFree boasts that it has the capacity to capture 50,000 tons of CO2 per year, or 15%-20% of the plant’s emissions. But since it’s been operational, it hasn’t surpassed more than 20,000 tons per year.
Keighley said SkyMine represents a stepping stone to bigger and more efficient projects like SkyCycle.
“It’s not a pilot plant, but it also demonstrates the technology to someone like US Steel, and then working with them on the next stage of growth up to something more like half a million ton capture,” he said.
SkyCycle is only expected to capture 15%-20% of the US Steel plant emissions in its first few years of operations, but Keighley again said it’s a way to prove the technology works and get it ready for larger scale capture.
Keighley added that capturing carbon is good, even if it’s not as much as they hoped. But Jacobson disagreed.
“You have to do something with the carbon dioxide,” Jacobson said. “Well 75% of all CO2 is used for enhanced oil recovery. And that process alone puts 40% of the CO2 right back to the air.”
Both SkyMine and SkyCycle have hydrochloric acid as a byproduct, and Keighley acknowledged that some of that is sold to fossil fuel companies for use in oil extraction. But he said it isn't a big part of their business.
“Our preference by far is to sell that [hydrochloric acid] into industrial markets, not particularly the oil and gas [industry],” Keighley said.
Though Keighley said SkyCycle will be a carbon negative facility, Jacobson said many carbon capture plants end up producing more carbon through their construction and operation than they will ever capture.
He also pointed to the fact that many carbon capture plants run on fossil fuel themselves. But he said even if they ran off of renewable energy, it would just be better to skip the carbon capture middle man and replace whatever CO2-emitting plant they’re capturing carbon from with renewable power.
Even for hard-to-abate industrial emissions, which is the focus of CarbonFree, Jacobson said carbon capture is still not useful. He said Sweden uses a 100% renewable process to produce its steel and that there are ways to make geopolymer cement with renewable power.
“It’s better to replace fossil fuel plants with renewable electricity,” Jacobson said. “These are all just greenwashing schemes. Sure companies can make money off of it, but it’s not useful for the environment.”
But Keighley pushed back.
“I believe very strongly that we need to work with current industrial emitters to manage their emissions as they are … otherwise you are decades away from getting real uptake on new technologies,” Keighley said.
Though carbon capture might be one solution to the climate crisis, a 2022 report from the International Energy Agency found that the industry’s current impact is dwarfed nearly 5 to 1 by what would actually be required to hit net-zero emission goals by 2050.
|
"I had the craziest dream last night."
In a matter of seconds, someone's dream dictionary is taken out and a round table discussion about being chased, flying and even dying ensues.
But some people find themselves rarely participating in conversations like these.
Many of us already know how sleep deprivation is linked to a number of chronic health problems. But what about dream deprivation?
The reasons why we aren't dreaming enough lie in the complicated cycles of our sleep.
What makes a 'good' sleep?
Good sleep is sleep that is refreshing and lets you get through the day with energy, according to University of Queensland sleep psychologist Professor Simon Smith.
But Professor Smith says good sleep doesn't always mean "perfect".
"It can be quite normal to wake up a few times during the night, or have the occasional night where getting to sleep is hard," he told ABC News.
A good sleep can be thought about in four ways, Professor Smith says:
1. Sleep duration: how many hours were slept?
2. Quality: how good was it?
3. Timing: do your bedtime and wake-up time suit your lifestyle?
4. Regularity: when does your body clock determine your sleep?
"Sleep quality can be defined in the laboratory as getting enough of each 'stage' of sleep," Professor Smith says.
How much sleep do you need?
Individuals vary in their sleep needs, according to the Sleep Health Foundation.
Here's what they recommend, based on a report of expert panels:
Adults: Most require between seven and nine hours per night to feel properly refreshed and function at their best the next day.
Teenagers: Need between eight and 10 hours of sleep per night.
Children aged 6-12: Typically need nine to 12 hours of sleep per night.
Toddlers: Require 11 to 14 hours of sleep per night.
One interesting thing the Sleep Health Foundation points out is the circadian rhythm.
This is our body's natural clock cycle that determines when we sleep, and it's different for teenagers than younger children and adults.
Five hours or less linked to risk of chronic illnesses
Getting five hours of sleep or less a night could increase your risk of getting two or more chronic diseases as you get older, research has found.
A study published in the journal PLOS Medicine in 2022 documented the sleeping habits of 7,000 men and women over 30 years, tracking the amount of sleep they had and the chronic diseases they developed.
Short sleep duration and restless sleep were linked to an increased risk of developing two or more chronic diseases.
Some experts emphasise the risk is relatively low, but others say we need greater awareness of the importance of sleep.
What are the stages of sleep?
Humans cycle through five different stages of sleep each night: NREM1, NREM2, NREM3, NREM4 and rapid eye movement (REM) sleep.
NREM stands for "non-rapid eye movement".
And REM stands for "rapid eye movement".
Here's a breakdown of each stage:
Stage 1: Dozing or drowsiness — you hover between being asleep and awake.
Stage 2: You lose awareness of your surroundings, your body temperature starts to drop and your breathing and heart rate slow down.
Stages 3 & 4: This is deep sleep, also known as "delta sleep" – your blood pressure, heart rate and breathing become very slow and your muscles relax. Growth and repair processes occur during this stage.
So the first four stages are non-REM sleep. They are ranked from light sleep to deep sleep.
Light sleep, especially NREM2, is a critical stage of sleep when memories form.
NREM3 and NREM4 are crucial for your body to recover from injuries and to have energy for the next day.
What is REM sleep?
REM sleep is a stage of sleep most associated with dreaming, nightmares and memory consolidation, according to Professor Smith.
"It's a really interesting state to be in, as although definitely 'asleep', your brain is very active," he says.
REM sleep occurs about once every 90 to 120 minutes, according to Victoria's Department of Health.
It makes up about one-quarter of your night’s sleep.
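Taking those two ballpark figures at face value (a REM episode roughly every 90 to 120 minutes, and REM accounting for about a quarter of the night), a short illustrative sketch of what they imply for a given amount of sleep might look like this; the output is a rough estimate, not clinical guidance.

# Rough illustration of the REM figures quoted above (90-120 minute cycles,
# roughly a quarter of the night spent in REM). Purely illustrative.
def estimate_rem(sleep_hours: float) -> dict:
    sleep_minutes = sleep_hours * 60
    episodes_low = sleep_minutes / 120    # one REM episode every ~120 minutes
    episodes_high = sleep_minutes / 90    # or as often as every ~90 minutes
    rem_minutes = sleep_minutes * 0.25    # "about one-quarter of your night's sleep"
    return {
        "rem_episodes": (round(episodes_low, 1), round(episodes_high, 1)),
        "rem_minutes": round(rem_minutes),
    }

print(estimate_rem(8))   # roughly 4-5 episodes and about 120 minutes of REM
print(estimate_rem(5))   # a short night squeezes both numbers down

On an eight-hour night this lands near the two hours or so of adult REM that Professor Smith mentions further on.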
What happens during REM sleep?
"The major muscles of the body are essentially paralysed," Professor Smith says.
"But your heart and other important functions continue on. Your breathing may be less regular and faster than in non-REM sleep."
While dreaming does occur during other stages of non-REM sleep, Professor Smith says REM sleep is thought to be most important for emotional processing and for forming new memories.
If I'm not in REM sleep, should I be concerned?
Anyone who is cutting their sleep short may be missing out on REM sleep, Professor Smith says.
"Whether it's because of work, family, education or social commitments, not getting enough good sleep can lead to increased daytime sleepiness, difficulty concentrating and poorer memory and increased irritability," he says.
"Babies and children spend more of their sleep in REM, but adults still need about 2 hours or so."
Professor Smith says some sleep disorders can lead to a reduction in REM sleep.
But he says the greatest cause is likely overall poor sleep or insufficient sleep.
What does it mean if I don't have dreams?
Most nights, Jane Teresa Anderson can remember 1-3 dreams.
With an honours degree in zoology specialising in developmental neurobiology, Jane has been researching dreams since 1992.
She says if you think you're not dreaming regularly, then you're actually not remembering your dreams.
"Around 60-80 per cent of our dreams occur in the last 2 hours of an 8-hour sleep," Ms Anderson says.
Why can't I remember my dreams?
"Dreams are definitely related to quality and quantity of sleep," Ms Anderson says.
"Anyone who regularly survives on less than 6 hours sleep may not be remembering many dreams."
Martina Kocian, on the other hand, can remember at least two dreams on a good night.
With a psychology degree and training as an embodied imagination therapist, Ms Kocian has specialised in dream therapy since 2008.
"Our brains prioritise deep sleep for bodily restoration and cellular repair, which means we may only dream for 5-10 minutes for the first three cycles with the final cycle extending up to 45 minutes called the 'pre-dawn flow'," Ms Kocian says.
She says we are most likely to remember dreams in this extended period because they contain more content and because we wake up to them, creating an opportunity for memory storage.
"If we aren't getting enough deep restoration or enough sleep in general, we are less likely to experience these dream cycles and less likely to remember our dreams as a result."
How can I remember my dreams?
There are many ways that dream memory can be encouraged, according to Dr Denholm Aspy, a research fellow at the University of Adelaide.
He's provided some tips to help you remember yours:
Don't rush into the day as soon as you wake up
Dr Aspy says the "most tried and true method" of remembering your dreams is all in the moment your alarm goes off.
"Ensure that the very first thing you do when you wake up is take the time to try to recall your dreams without getting distracted by other thoughts, like what your plans are for the day ahead," he says.
"Even if you can only recall a small dream fragment, spend some time focusing on this and see if you can recall what happened directly before."
"Sometimes, you can recall a long dream sequence in reverse with this method if you spend some time on it before starting your day," he says.
Supplements such as vitamin B6
"Some supplements, such as vitamin B6, can enhance dream recall," Dr Aspy says.
He says other supplements that can boost dream recall are alpha-GPC and galantamine.
"But these should be used with caution — especially galantamine, as it is a prescription medication."
Meditate before bed
"Meditation before bed, as well as setting the deliberate intention to remember your dreams before going to bed, seems to make it easier to recall dreams," Dr Aspy says.
Keep a dream journal
This doesn't just mean writing your dreams down in a notebook.
Dr Aspy says recording voice notes could also help you recall your dreams.
"This sends an unconscious message to your mind that dreams are important, and also helps you to build the habit of paying attention to your dreams and valuing them," he says,
"I had the craziest dream last night."
In a matter of seconds, someone's dream dictionary is taken out and a round table discussion about being chased, flying and even dying, ensues.
But some people find themselves rarely participating in conversations like these.
Many of us already know how sleep deprivation is linked to a number of chronic health problems. But what about dream deprivation?
The reasons why we aren't dreaming enough lie in the complicated cycles of our sleep.
What makes a 'good' sleep?
Good sleep is sleep that is refreshing and lets you get through the day with energy, according to University of Queensland sleep psychologist Professor Simon Smith.
But Professor Smith says good sleep doesn't always mean "perfect".
"It can be quite normal to wake up a few times during the night, or have the occasional night where getting to sleep is hard," he told ABC News.
A good sleep can be thought about in four ways, Dr Smith says:
- 1.Sleep duration: how many hours were slept?
- 2.Quality: how good was it?
- 3.Timing: do your bedtime and wake-up time suit your lifestyle?
- 4.Regularity: when does your body clock determine your sleep?
"Sleep quality can be defined in the laboratory as getting enough of each 'stage' of sleep," Professor Smith says.
How much sleep do you need?
Individuals vary in their sleep needs, according to the Sleep Health Foundation.
Here's what they recommend, based on a report of expert panels:
Adults: Most require between seven and nine hours per night to feel properly refreshed and function at their best the next day,
Teenagers: Need between eight and 10 hours of sleep per night.
Children aged 6-12: Typically need nine to 12 hours of sleep per night.
Toddlers: Require 11 to 14 hours of sleep per night.
One interesting thing the Sleep Health Foundation points out is the circadian rhythm.
This is our body's natural clock cycle that determines when we sleep, and it's different for teenagers than younger children and adults.
Five hours or less linked to risk of chronic illnesses
Getting five hours of sleep or less a night could increase your risk of getting two or more chronic diseases as you get older, research has found.
A study published in the journal PLOS Medicine in 2022 documented the sleeping habits of 7,000 men and women over 30 years, tracking the amount of sleep they had
|
and the chronic diseases they developed.
Short sleep duration and restless sleep were linked to an increased risk of developing two or more chronic diseases.
Some experts emphasise the risk is relatively low, but others say we need greater awareness of the importance of sleep.
What are the stages of sleep?
Humans cycle through five different stages of sleep each night: NREM1, NREM2, NREM3, NREM4 and rapid eye movement (REM) sleep.
NREM stands for "non-rapid eye movement".
And REM stands for "rapid eye movement".
Here's a breakdown of each stage:
Dozing or drowsiness — you hover between being asleep and awake.
You lose awareness of your surroundings, your body temperature starts to drop and your breathing and heart rate slow down.
Stage 3 &
This is deep sleep, also known as "delta sleep" – your blood pressure, heart rate and breathing become very slow and your muscles relax.
Growth and repair processes occur during this stage.
So the first four stages are non-REM sleep. They are ranked from light sleep to deep sleep.
Light, especially NREM2, is a critical stage of sleep when memories form.
NREM3 and NREM4 are crucial for your body to recover from injuries and to have energy for the next day.
What is REM sleep?
REM sleep is a stage of sleep most associated with dreaming, nightmares and memory consolidation, according to Professor Smith.
"It's a really interesting state to be in, as although definitely 'asleep', your brain is very active," he says.
REM sleep occurs about once every 90 to 120 minutes, according to Victoria's Department of Health.
It makes up about one-quarter of your night’s sleep.
What happens during REM sleep?
"The major muscles of the body are essentially paralysed," Professor Smith says.
"But your heart and other important functions continue on. Your breathing may be less regular and faster than in non-REM sleep."
While dreaming does occur during other stages of non-REM sleep, Professor Smith says REM sleep is thought to be most important for emotional processing and for forming new memories.
If I'm not in REM sleep, should I be concerned?
Anyone who is cutting their sleep short may be missing out on REM sleep, Professor Smith says.
"Whether it's because of work, family, education or social commitments, not getting enough good sleep can lead to increased daytime sleepiness, difficulty concentrating and poorer memory and increased irritability," he says.
"Babies and children spend more of their sleep in REM, but adults still need about 2 hours or so."
Professor Smith says some sleep disorders can lead to a reduction in REM sleep.
But he says the greatest cause is likely overall poor sleep or insufficient sleep.
What does it mean if I don't have dreams?
Most nights, Jane Teresa Anderson can remember 1-3 dreams.
With an honours degree in zoology specialising in developmental neurobiology, Jane has been researching dreams since 1992.
She says if you think you're not dreaming regularly, then you're actually not remembering your dreams.
"Around 60-80 per cent of our dreams occur in the last 2 hours of an 8-hour sleep," Ms Anderson says.
Why can't I remember my dreams?
"Dreams are definitely related to quality and quantity of sleep," Ms Anderson says.
"Anyone who regularly survives on less than 6 hours sleep may not be remembering many dreams."
Martina Kocian, on the other hand, can remember at least two dreams on a good night.
With a psychology degree and training as an embodied imagination therapist, Ms Kocian has specialised in dream therapy since 2008.
"Our brains prioritise deep sleep for bodily restoration and cellular repair, which means we may only dream for 5-10 minutes for the first three cycles with the final cycle extending up to 45 minutes called the 'pre-dawn flow'," Ms Kocian says.
She says we are most likely to remember dreams in this extended period because they contain more content and because we wake up to them creating opportunity for memory storage.
"If we aren't getting enough deep restoration or enough sleep in general, we are less likely to experience these dream cycles and less likely to remember our dreams as a result."
How can I remember my dreams?
There are many ways that dream memory can be encouraged, according to Dr Denholm Aspy, a research fellow at the University of Adelaide.
He's provided some tips to help you remember yours:
Don't rush into the day as soon as you wake up
Dr Aspy says the "most tried and true method" of remembering your dreams is all in the moment your alarm goes off.
"Ensure that the very first thing you do when you wake up is take the time to try to recall your dreams without getting distracted by other thoughts, like what your plans are for the day ahead," he says.
"Even if you can only recall a small dream fragment, spend some time focusing on this and see if you can recall what happened directly before."
"Sometimes, you can recall a long dream sequence in reverse with this method if you spend some time on it before starting your day," he says.
Supplements such as vitamin B6
"Some supplements, such as vitamin B6, can enhance dream recall," Dr Aspy says.
He says other supplements that can boost dream recall are alpha-GPC and galantamine.
"But these should be used with caution — especially galantamine, as it is a prescription medication."
Meditate before bed
"Meditation before bed, as well as setting the deliberate intention to remember your dreams before going to bed, seems to make it easier to recall dreams," Dr Aspy says.
Keep a dream journal
This doesn't just mean writing your dreams down in a notebook.
Dr Aspy says recording voice notes could also help you recall your dreams.
"This sends an unconscious message to your mind that dreams are important, and also helps you to build the habit of paying attention to your dreams and valuing them," he says,
|
Groundhog Day explained: Origins of Punxsutawney Phil and why sunny weather means more winter
Groundhog Day isn't scientific (in fact, Punxsutawney Phil's weather predictions are wrong most of the time).
If we're being honest, it even defies common sense.
The legend is simple: the groundhog's shadow on Feb. 2 predicts the weather for the next six weeks, until the start of spring.
A sunny day means the groundhog will see his shadow – this is taken as a sign that the next six weeks will bring wintry weather. A cloudy day means the opposite.
Got it? A sunny Groundhog Day means cold weather is coming. A cloudy day means fair weather is on the way.
It seems backwards, right?
Why the shadow tradition?
There are several possible explanations for how the tradition formed, and some of them have roots that predate the 136 years of tradition in little Punxsutawney, Pa. — home of the most famous rodent meteorologist.
The Punxsutawney Groundhog Club traces the tradition's roots back to Candlemas Day in Europe – the Christian "festival of lights" that falls on Feb. 2, midway between the start and end of winter.
Here are a few possibilities for how the significance of the shadow came to be:
- A proverb: Some Candlemas weather traditions were a warning against undue optimism. Essentially: "It might be sunny today, but don't get your hopes up."
- Judgment day: Some Candlemas proverbs suggest the halfway point of winter on the calendar acts as a tipping point of weather. The thinking might go like this: The weather on Feb. 2 represents the previous six weeks of winter. If it's cloudy, it suggests the winter days previous were cloudy and cold, and that the worst is over. If it's sunny, well, then maybe the last six weeks were the easy part: The worst is yet to come.
- The groundhog's prediction: This is a more contemporary explanation that gets repeated with little explanation. Groundhogs stay out of their den if it's cloudy and run back into their den if it's sunny. Obviously, if they're hiding in their den, they believe the coming days will be cold.
- Spring weather: In one way of thinking, early spring is often wet and rainy. So a cloudy day might be typical in a spring-like late winter.
- ¯\_(ツ)_/¯: To be honest, a shrug emoji might be the best explanation there is. The club also claims Phil is immortal, is correct 100 percent of the time and speaks a secret language to men in top hats. This isn't really a subject that should be over-thought.
|
During the colonial era, Britain routinely committed ethnic cleansing and applied genocidal policies in Kenya. It is time Britain apologized and paid reparations to millions of Kenyans who suffered under British rule.
On August 20, a group of Kenyans filed a case against Britain at the European Court of Human Rights. They were seeking justice for the atrocities the British committed against them during the colonial era. They are seeking $200 billion in reparations for the crimes perpetrated in the tea-growing regions in the Kenyan Highlands. Unsurprisingly, Britain has failed to address, leave aside apologize for, these atrocities in Kenya.
To be fair, the British have apologized for one of their darkest acts in Kenya. In 2013, the government “finalized an out-of-court settlement with thousands of Kenyans who were tortured in detention camps during the end of the British colonial reign.” The British were crushing the Mau Mau — Kenyan rebels from the Kikuyu tribe — who fought in the 1950s and 1960s. It took years before the historic apology and the unprecedented settlement was finalized in 2013.
In 2022, Kenya is back in the news for seeking justice for another brutal British act. With nearly 56 million people, Kenya is a dynamic East African country. It now has a literacy rate of 78% but its per capita income is barely $1,879, ranking a lowly 144th in the world. Many argue that Kenya’s current problems are in large part a legacy of British colonialism.
For millennia before British colonization, the people we now call Kenyans comprised many tribes. There was sporadic violence but these tribes lived in relative peace and harmony. Some communities farmed, others raised livestock, while others practiced a combination of both activities. Some were hunters and those by Lake Victoria fished. Production served the needs of communal survival. Family and clans shared ownership and cooperated in production as well as distribution. These communitarian societies ensured that no one fell into abject poverty. Boundaries between different ethnic groups were fluid. Trade and intermarriage were prevalent. Notably, communities generally operated without the modern version of the chief.
British colonization ripped apart the social fabric of the communities who now live in Kenya. British rule kicked off with the 1884/85 Berlin Conference, which deprived Kenyans of their natural, territorial, and political rights. In 1894, Britain declared Kenya a protectorate of the Crown. Its officials created Kenya and drew the nation’s boundaries without ever consulting the Kenyans themselves. These new boundaries divided existing communities and brought disparate ethnic groups into a new country. The British created an atmosphere in which communities had to compete for resources and survival. They ruled over the communities with an iron hand. Their military expeditions stole people’s lands and forced many to migrate in a genocidal campaign.
The British confiscated the land they coveted. They instituted forced labor, turning Kenyans into the property of the British settlers. In 1902, they inaugurated the hut tax, which forced the natives to work for the British to pay the tax or be forced to serve the British settlers. In 1913, they introduced the land bill. This gave British settlers a 999-year lease and effectively confiscated nearly all Kenyan land. In 1919, they required all native men to wear identity discs, more than a decade before the Nazis adopted the same policy with the Jews. In the 1920s, natives were forced to live on reservations and subjected to flogging, much as the British had done to the indigenous peoples from North America to Australia.
Mau Mau Uprising
After World War II, India gained independence in 1947. This inspired the African independence movements. In 1952, the Mau Mau movement for self-determination began. When Princess Elizabeth and her husband Prince Philip visited Kenya that year, Elizabeth reportedly went up into a treehouse as a princess and came down as Queen Elizabeth II.
Whilst the royals were putting up a pretty face, British forces were planning one of the world’s worst ethnic cleansing operations. They went on to smash the Mau Mau through brutal methods. When Kenya achieved independence in 1963, the British destroyed all their official records. In this Cold War era, the US was aware of British atrocities but looked the other way.
Supported at the “highest levels”, the British purged the capital city Nairobi of Kikuyu people, placing them in “barbed-wire enclosures”. They interrogated thousands of detainees. Their interrogators resorted to all types of torture, including forced labor, beatings, starvation, and sexual abuse. Records show that one of those “tortured was the grandfather of former US President Barack Obama”.
In a span of 18 months, the British dropped “6 million bombs into Kenya’s forests to disrupt guerrilla activity.” Then, the British “dusted Kikuyu areas with photographs of mutilated women to intimidate the populace.”
In her book, Imperial Reckoning: The Untold Story of Britain’s Gulag in Kenya, Caroline Elkins observes that thousands of Kenyans fought alongside British forces against Germany in World War II. The British repaid the Kenyans with barbarism, not gratitude. They locked up around 1.5 million Kenyans in detention camps and barbed-wired townships in response and killed thousands.
In her 70-year reign, Elizabeth never acknowledged or apologized for British atrocities. Neither did any prime minister. Winston Churchill was then prime minister. Lionized in the UK even today for taking on Adolf Hitler, Churchill escapes scrutiny for his racist, imperialist and ruthless actions in the colonies. In 1919, he wrote that he was “strongly in favor of using poisoned gas against uncivilized tribes.” He ordered that British forces put down the 1920 Iraqi rebellion with an iron hand. Churchill advocated spreading “a lively terror” among the natives so that they would come to heel. In Iraq, the Royal Air Force flew missions for 4,008 hours, dropped 97 tons of bombs and fired 183,861 rounds. They used chemical weapons on Iraqis, over 60 years before Saddam Hussein who targeted Iranians, Shia Arabs and Iraqi Kurds. Under Churchill, the British government unleashed similar brutality upon the Kenyans.
The British forced the natives away from their ancestral lands and into reservations. Only a few years after the Holocaust, the British locked up 1.5 million Kikuyu people in concentration camps, torturing, beating, and starving them to death in large numbers. This was an egregious act amounting to naked genocide. Their signature on the UN Charter did not hold them back.
An example of British brutality was revealed in court in 2012. Four Kenyan victims appeared before the High Court in London. Jane Mara, one of the victims, was 15-years-old at the time. She was repeatedly beaten by the interrogators. They pinned her down on her back while four guards held her thighs wide open and kicked a heated glass bottle into her vagina. After that excruciating pain, she witnessed the same torture inflicted on three other young women. Men were not spared either. The British designed pliers to squeeze male testicles.
The US Supported the UK
After World War II, the US became top dog. The Cold War began. The UK was now a trusted ally. Therefore, the US overlooked British atrocities in Kenya. Washington was well aware of the British conducting genocide in Kenya. Just as in the Congo and in Vietnam, the US sided with the white imperial powers against the colored peoples of the colonies. Remember this was still a time when the US itself was segregated along racial lines. The US wanted to free Eastern Europe from Soviet rule but it wanted to perpetuate British, French or Belgian rule elsewhere.
In the first half of the 20th century, Vanderbilt University scholar Juan M. Floyd-Thomas observed in the Journal of American History that Americans thought of East Africa as “a real white man’s country.” They believed that Kenya deserved Western imperialism and white supremacy. Over centuries, the US practiced ethnic cleansing of Native Americans, enslaved African Americans and subjugated ethnic minorities. These races were deemed biologically and intellectually inferior to the white race.
As is their habit, the US mainstream media, including The New York Times, followed the official US narrative. They painted a picture of the African continent as “synonymous with terror, hopelessness, and conflict.” The media represented the Mau Mau fighters as terrorists and criminals with communist connections. They failed to recognize that Kenyans were involved in a liberation movement. Just like George Washington and Thomas Jefferson, they too were fighting for independence.
UN Failure and Case for Reparations
Since World War II, the UN has consistently failed to stop genocide, prevent ethnic cleansing or rescue victims. It has been unable to bring the guilty to justice. The UN has failed all around the world from Cambodia to Sudan.
The UN represents the interests of powerful nations. Five of them have veto power in the Security Council. Naturally, the Peace Worldwide Organization considers the UN a failed institution, and gives it a mere 12 out of 100.
The UN has failed to deliver justice to the Kenyans too. Despite British denials and cover-ups, evidence of their atrocities is overwhelming. So, an International Criminal Tribunal for Kenya (ICTK) would be a good first step. Just as Holocaust victims have been compensated and their properties restituted, Kenyans must also get compensation and restitution.
The British must acknowledge, apologize and make reparations for the genocide and atrocities they committed during colonial times. Importantly, reparation payments should go directly to victims and their descendants, not into the coffers of Kenya’s corrupt government. A sum must be set aside for education and infrastructure to compensate for the ravages of colonization.
No sum can ever wipe out the suffering of the Kenyan people. However, reparations are important for three reasons. First, victims get justice. Second, poor countries and poor victims get valuable financial support. Third, they set an important precedent of imperial masters being held accountable. Germany paid compensation to Jews who suffered unspeakable tragedy during the Holocaust. This has made the country less likely to repeat the atrocities of the past. The UK must be held to account so that the British do not repeat the colonial misadventures of Kenya and India in places like Iraq and Libya.
Source: Mehdi Alavi, Fair Observer.
|
Today officially marks the first day of winter in Australia, but we're already in the thick of an increase in COVID-19 cases.
Some people take longer than others to bounce back after a bout of illness, but how do you tell the difference between a slow recovery and long COVID?
Let's unpack what we know about COVID's lasting shadow.
When does COVID become long COVID?
In simple terms, if you've had a COVID infection and you still have symptoms after three months it could be long COVID, explains Anthony Byrne.
He is an associate professor at the long COVID clinic at St Vincent's Hospital in Sydney and says there are two time frames to pay attention to:
- Persisting symptoms after 28 days are known as a "post-acute infection"
- Persisting symptoms after three months could be considered long COVID.
"Unless someone can find a better explanation," he said. "Sometimes we can, sometimes we can't."
He also said a long COVID diagnosis is dependent on the persistence of symptoms that "are not otherwise explained by an alternative diagnosis".
How can I tell if I have long COVID?
It can be really difficult.
That's because many symptoms overlap with other conditions, Deakin University's chair of epidemiology Catherine Bennett says.
If you suspect you might have long COVID, she suggests it’s best to see a doctor or respiratory physician to start the process.
More than 200 symptoms have now been associated with long COVID, affecting almost every organ system in the human body.
But work's being done to narrow it down.
What are the 12 symptoms of long COVID?
A significant new study based on nearly 10,000 patients in the US identified 12 common symptoms associated with those suffering with long COVID.
According to the study, led by the US National Institutes of Health's RECOVER Initiative and published in the medical journal JAMA, these symptoms commonly persisted for six months after infection.
Here are the top 12 symptoms:
- Post-exertional malaise (debilitating fatigue exacerbated by activity)
- Brain Fog
- Gastrointestinal symptoms
- Changes in sexual desire or capacity
- Loss or change of taste and smell
- Chronic cough
- Chest Pain
- Abnormal movements (including tremors, slowed movements or sudden, unintended and uncontrollable jerky movements).
Researchers said a person with symptoms not on this list may still have long COVID, but it is a first step in identifying "common language" for those working toward treatments.
Professor Byrne says a symptom his clinic often sees not included on the US study findings is breathlessness.
"Breathlessness is a really important symptom ... As a respiratory physician we see it a lot in long COVID patients," he says.
Why do these symptoms matter?
With many in Australia struggling to get a diagnosis, Dr Bennett said the study could help by "attaching a long COVID diagnosis to different packages of symptoms."
"The study findings could be a basis for a diagnostic tool in Australia, and that's something we don't have," Dr Bennett said.
The recent parliamentary inquiry report found many long COVID patients were frustrated with the lack of answers or consistent advice from healthcare professionals, and were now disillusioned with Australia's healthcare system.
"Having a tool like this for doctors, collecting information on symptoms in a systematic way … it won't necessarily be 100 per cent correct as a diagnostic tool, but by using it will start to collect important data that will help us refine and understand the frequency and risk factors better also," Dr Bennett said.
"This is the sort of thing that could help GPs at the moment who often haven't seen a long COVID patient before."
Why do some people get long COVID, while others don't?
It's not entirely clear.
That's because there's still so much experts don't know about long COVID.
The World Health Organization estimates long COVID could affect 10 to 20 per cent of people who have a COVID-19 infection.
Reports cited in last month's parliamentary inquiry into long COVID put that figure closer to 5 per cent in Australia.
The US study said long COVID was more common and severe in participants infected before the Omicron strain emerged in late 2021, as well as unvaccinated participants.
People who got infected multiple times were also more likely to develop long COVID, the study said.
Data from an Australian post-COVID clinic shows it often affects women in their 40s and 50s, most of whom led active lives before they got sick.
What treatments are available for long COVID?
There are no treatments specifically approved for long COVID, though some patients get relief from painkillers, medications used for other conditions and physical therapy.
But that's not to say nothing can be done, Associate Professor Byrne said.
One example of a clinical trial St Vincent's is participating in is called IMPACT-ico.
"This provides long COVID patients the opportunity to receive an oral medication or standard care to potentially assist their recovery."
Along with the clinical studies taking place, Associate Professor Byrne said the clinic also has "comprehensive medical assessment to assess and treat comorbid conditions that are present in long COVID patients".
Is more help on the horizon?
Australia's chief medical officer Paul Kelly said a national plan is being developed to respond to long COVID.
And more than $50 million has been allocated towards research now the parliamentary inquiry has wrapped up.
The committee also made nine unanimous recommendations in its new report, prompting the spend.
A key recommendation was coming up with a nationally agreed and consistent definition of long COVID — because, at the moment, there's a few going around.
Australia is currently using both the definition from WHO and the UK's National Institute for Health and Care Excellence.
Professor Kelly says both these definitions "are great for research purposes because they are so broad".
"But in terms of trying to understand [long COVID], we have to get beyond [them]."
The report also included an urgent call for improved data collection about long COVID cases.
In the meantime, Dr Byrne said he sees a lot of people spending unnecessary money on treatments not properly or rigorously evaluated.
"I just caution people, [things like] anti-inflammatory diet fads, or 'take these 20 vitamins because they’ll be really good for you', all of that stuff, you could spend a lot of money on it and it probably won't work," he said.
"It’s important to caution readers about being taken on a wild goose chase or being sold snake oil."
|
On Cape Cod: Jeanne, Mary Morrison know frontline struggle for civil rights
In the summer of 1949, Mary Morrison and a gaggle of her friends boarded a bus that was headed to what is now Joint Base Cape Cod for a teen dance.
But the bus driver told the girls of color that the bus, and the dance, was for the other group.
“We knew what the other group meant — even as teenagers,” said Morrison, 90. “The dances were segregated at that time, and we could only go with our own kind.”
With the holiday honoring Martin Luther King Jr. approaching, Morrison and her daughter Jeanne, 64, both of Hyannis, recently talked about their experiences on the frontlines of the civil rights struggles of the '40s, '50s and '60s during an interview on Wednesday with the Cape Cod Times.
Route 130 in Mashpee was once an undesirable part of town, Mary Morrison says
Originally from Hyde Park, Mary Morrison had spent every summer since she was nine at her mother Winnifred "Louise" Glover-Ellis-Hind's home in Mashpee. The house, off Route 130, was once a stagecoach stop.
At the time, Route 130 in Mashpee was considered an undesirable place to live, she said. Segregation was enforced town by town, by limiting areas where people of color could buy homes.
"Our neighborhood was filled with mostly Wampanoag, Cape Verdean, and a few outsiders from Cambridge and Boston," she said. "That’s the only place you could really live if you were Black."
After graduating from Anna Maria College in Paxton with a Bachelor of Arts in French in 1952, Morrison went on to Université Laval in Canada to obtain a Master of Arts in French, and graduated speaking Spanish, French and Latin in addition to English. With a desire to educate, she headed to Prairie View A&M University in Texas, a historically Black college, where she was hired as a French and Spanish teacher.
Mary Morrison was accustomed to Northern segregation, but the explicit rules of engagement between whites and Blacks in the South in 1953 were eye-opening, she said.
"I hadn't witnessed prejudice like that before," she said.
A friend drinks from a whites-only water fountain in Texas
For Morrison, the water fountains for Blacks only were laughable. But, after her friend Vivian — a preacher's daughter from Indiana — took a long, cool sip from the whites-only water fountain, she understood it was nothing to joke about.
"Vivian was kind of a wisecracking girl, and she was very outspoken," Morrison said. "She drank out of the whites-only fountain and someone slapped her because she did."
Morrison's experiences in Texas were unnerving and enraging, she said. But she met her future husband, George Morrison, at Prairie View. The couple, who would go on to have seven children, eventually tired of the South after George Morrison was accused of kidnapping her as the two were filling their car with gas. Morrison, who is Passamaquoddy and mixed race, was lighter than her darker-skinned husband, who was originally from Texas.
The couple found their way to Mashpee in 1959, and later bought a house in a segregated neighborhood in Hyannis, where they raised their children.
'A touch of tar brush'
Legalized segregation held its grip on Black people in the South, but the North had a less obvious code and culture that supported racist systems, Mary Morrison said. If a person had a hint of color, it was important to walk the line.
"My mother used to say, if you had 'a touch of tar brush, you had to follow the rules,'” Morrison said.
George Morrison, who had earned a Bachelor of Science degree at Prairie View, was the first Black man to work at the Cape Cod Times and New Bedford Standard-Times as a circulation manager, according to his daughter Jeanne. He was consistently passed over by white counterparts for promotions throughout his 10 years there, she said.
Eventually, George Morrison became a meat cutter for Shaw's supermarket. Morrison, who died in 2006 at 76, also became a Barnstable Police Department auxiliary officer, and he owned Honest George's Taxi.
Applying for jobs on Cape Cod
Despite a degree and the ability to speak four languages, Mary Morrison worked as a chambermaid. She worked for wealthy people in Osterville, including the Kennedys.
"Ethel couldn't have been nicer. She would greet you at the door and say thank you so much for the help," said Morrison. "A couple of times when I was at church she came and knelt down beside me."
Mary Morrison kept applying for teaching jobs and managed to land an interview at Barnstable High School. But when she appeared at the school, the principal at the time wouldn't hire her. Instead, he hired a white woman who had just graduated college, Morrison said.
"Barnstable High wasn't ready," she said.
Eventually, she was hired at Nauset Regional Middle School by Dick Hoyt, who was head of the language department at the time. She was one of two multi-racial people at the school in September 1969.
One of her colleagues was Frank James, a Wampanoag and staunch supporter of Indian rights, she said.
Family car trips to the South
The Morrisons raised their children on Cape Cod, but often traveled to visit George Morrison's mother in Alto, Texas. As the family traveled by car and reached Southern states, Jeanne Morrison said her parents' energy changed.
"They were more serious. More fearful," she said.
Jeanne Morrison remembers Alto as a rural town, with the dirt so red the bottoms of the children's feet were stained orange.
There was also a stark, physical divide throughout the town, she said.
"All the colored people were on one side of the street and all the white people were on the other," she said.
Experiences of Black families in the South
In Alto, when George Morrison was a young boy, the Ku Klux Klan would grab a Black man every Friday night, and shoot at his feet for fun, Jeanne Morrison said.
"This was led by the sheriff's office. My grandmother showed me the building where they did it," she said.
George Morrison had two uncles who returned from World War I and wore their uniforms around town. "They were hung for being uppity," Jeanne Morrison said.
On one of the family trips from Boston to Alto, they stopped to buy ice cream in Mississippi.
“We came up to the window and the man said, 'I don’t serve (N-word),'" said Jeanne Morrison. “It just felt violent. My father got us in the car, and we kept going."
In Hyannis, at age nine, Jeanne Morrison had a similar experience as she rode her bike with her sister. They cut through a corner gas station, where two white men were gassing up a red Cadillac with a bumper sticker or decal that read "Southie," a nickname for a neighborhood of Boston. When the men spotted the two girls, they hurled racial slurs.
The operator of the gas station "came flying out from the gas station, throwing stuff at them and cursing. They all had an altercation," Morrison said. "My sister and I just went into the garage bay and huddled in the corner. We were scared to death."
Change in the racial landscape was well-known to Morrison family
The Morrisons and their friends were keenly aware as the racial landscape changed.
Martin Luther King Jr. joined lunch counter sit-ins in 1959 and then led a freedom walk of 125,000 people in Detroit in 1963. That same year, King gave his "I Have a Dream" speech from the Lincoln Memorial in Washington before 250,000 people.
"The white people around us saw him as a troublemaker," Mary Morrison said. "We all looked at him as a hero."
The Morrisons joined the National Association for the Advancement of Colored People Cape Cod chapter, which was founded by local activists like Joe and Dolores DaLuz, Margaret Moseley and Eugenia Fortes. Throughout the years, said Jeanne Morrison, members discussed King's strategies, and Fortes organized marches on Main Street in Hyannis, and lunch counter sit-ins at Thompson's Drug Store.
Eugenia Fortes hosted Martin Luther King Jr. and others at her home in Harwich
To further support the movement, Fortes also hosted King, Thurgood Marshall and a young John Lewis at her home in Harwich, said Mary Morrison.
"They could just come here and rest," Morrison said. "No one knew where they were. They just hung out and felt safe here."
Fortes, who died in 2006, was also known for making two yearly trips to Mississippi to bring supplies to people living in poverty, said Jeanne Morrison.
"She was risking her life to bring love to people that didn’t have anything. That became a kind of freedom for her," Morrison said.
Jeanne Morrison grew up inspired by the advocacy going on around her, and by the time she entered high school she was already active in walks for peace, walks for hunger and food drives, and served as a big sister to classmates with developmental disabilities and a child of color with a single parent.
"I stood on the shoulders of people like Joe, Eugenia, and my parents. They got pushed down in the mud, so to speak," said Morrison. "They never got to stand upright."
Boston bus desegregation crisis trickled down Cape
When Jeanne became a freshman at Barnstable High School, she took a deeper look into King's ideas. John Reed, a local activist and co-founder of the Zion Union Heritage Museum in Hyannis, was her African American history teacher and was instrumental in her growth as an activist, she said.
Reed was one of four Black teachers who were hired at Barnstable Public Schools in response to the fallout from the Boston desegregation busing crisis of 1974.
The Racial Imbalance Act, which aimed at eliminating racial imbalance within Boston Public Schools, passed in 1965, but wasn't enforced until 1974, said Jeanne Morrison. With more families — both white and Black — migrating to the Cape from urban areas at the time, rumbles broke out at school, forcing Barnstable school officials to segregate buses, Morrison said.
“They used to send the Black kids home first," she said. "Most of us were glad to get the hell out of there.”
Paying it forward
Jeanne Morrison is now a diversity, equity and inclusion consultant, but spent the bulk of her career as the assistant general manager of diversity and civil rights for the Massachusetts Bay Transportation Authority. She is also co-president of the League of Women Voters of the Cape Cod Area; political and civic leadership platform chair for the Massachusetts Women of Color Coalition; chairperson for Barnstable County Human Rights Advisory Commission; board president of Amplify POC; and an NAACP Cape Cod member.
As she continues to mentor young activists from the Cape area, Morrison said she tries to remember the lessons King preached throughout his lifetime.
"He was preaching from the mountaintop about brotherly love and acceptance," she said.
Not all have learned that lesson.
"It's hard to overcome things we don't understand," she said. "But Martin Luther King taught me that I need to love all people as much as I love myself. When we all learn that lesson, we will do better."
Staff writer Rachael Devaney can be reached at <email-pii>. Rachael Devaney is a former member of Amplify POC.
|
<urn:uuid:735d669a-948e-4f61-976a-0ee8ab8ba47a>
|
{
"dump": "CC-MAIN-2023-14",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949694.55/warc/CC-MAIN-20230401001704-20230401031704-00034.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9870044589042664,
"pii_count": 1,
"score": 2.921875,
"token_count": 2513,
"url": "https://www.barnstablepatriot.com/story/news/history/2023/01/16/cape-cod-mother-daughter-know-frontline-struggle-civil-rights/69791876007/"
}
|
People who win arguments and are good at debating don't just speak well, they listen well, too.
Good listening skills boost your credibility and make you sound confident. But very few people are good at it. They easily get distracted, they start planning what they're going to say, or worse, they cut the other person off and rant away.
In my book, "Win Every Argument: The Art of Debating, Persuading, and Public Speaking," I outline the two types of listening to master: critical listening and empathetic listening.
Critical listening requires consciously absorbing, comprehending and evaluating the information given to you by a speaker in real time. "Is it true or false?" "Does it make sense or not?" "Can I trust or believe what I am hearing?"
You need to be a critical listener when your teacher is giving you feedback on an essay you wrote. Or when your boss is going through what was wrong in a report you wrote.
Here's how to be a critical listener when your opponent is making their case:
Keep an open mind.
When you're arguing against an opponent, do not automatically assume that everything they're saying is wrong, silly or dumb.
Listen for valid points or clever lines that you'll then need to address or concede in your own remarks.
You should be confident in your own arguments, yes, but also remain open-minded enough to see where an opponent is strong or where you may have fallen short.
Clear your mind.
Don't daydream or snooze as others around you are speaking and advocating. It damages your credibility and standing with an audience to be seen behaving in a rude or dismissive way.
Focus laser-like on the task at hand. By listening critically to your opponent and being ready to catch fallacious or false claims, you can prepare zinger-like responses, and win your argument.
Critical listening benefits from a sharp mind and a good memory. Both can be bolstered by good old-fashioned note-taking. Some of the most successful people on the planet are fastidious notetakers.
British billionaire Richard Branson, who says he goes through dozens of notebooks a year, wrote about a conference in London where he shared the stage with Bill Gates.
According to Branson, as Gates "made a closing speech … he pulled some pieces of paper out of his pocket" — notes he had been taking during the event.
Empathetic listening, by contrast, is about connecting with the speaker and trying to see the world through their eyes. The goal is to focus on their views and to understand where they are coming from.
It may sound like a no-brainer, but in my experience, so many people — smart people! — are simply bad at it.
Here are three strategies that I've found most useful:
Make it clear to the other speaker — and to those watching and listening — that you are focused on them.
"Quiet your inner monologue, set your device aside, and draw your attention to the other person," says Ximena Vengoechea, author of "Listen Like You Mean It: Reclaiming the Lost Art of True Connection."
Make sure your attention is 100% not on yourself.
Make eye contact.
I cannot overstate how important eye contact is as a means of showing empathy and building deep emotional ties.
Research supports it: One study of doctors and patients found that eye contact was "significantly related to patient perceptions of clinician empathy."
Another study of public speakers found "participants were more likely to believe statements by a speaker looking at them directly, compared to a speaker with averted gaze." Surprise!
Ask the right questions.
Pose questions to your interlocutors that allow them to drive the conversation, and then ask follow-up questions that show you were listening to their answers.
Opt for open-ended rather than close-ended questions, and questions that require personal and considered responses rather than one-word "yes" or "no" answers.
Mehdi Hasan is an award-winning British-American journalist and the author of "Win Every Argument: The Art of Debating, Persuading, and Public Speaking." He is the host of MSNBC's "The Mehdi Hasan Show." He has written for the New York Times and the Washington Post. Follow him on Twitter @mehdirhasan.
*This is an adapted excerpt from "Win Every Argument: The Art of Debating, Persuading, and Public Speaking" by Mehdi Hasan, published by Henry Holt and Co. Copyright © 2023 by Mehdi Hasan.
|
<urn:uuid:4f0fc5e4-48fa-458e-a083-d098324735dd>
|
{
"dump": "CC-MAIN-2023-50",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100762.64/warc/CC-MAIN-20231208144732-20231208174732-00636.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9574840664863586,
"pii_count": 0,
"score": 2.609375,
"token_count": 1030,
"url": "https://www.cnbc.com/2023/08/02/the-no-1-skill-you-need-to-win-every-argument-says-public-speaking-expert-few-people-have-it.html?utm_content=makeit&utm_medium=Social&utm_source=facebook%7Cmakeit&fbclid=IwAR1HTd3QTkquUrkA_oLg8PedZ-SeRNs4uPUhVuIlI5gYNFRUQ548jO0Y_ts"
}
|
CORAL SPRINGS, Fla. – Many have heard the names of civil rights leaders such as Dr. Martin Luther King Jr. and Rosa Parks.
But one South Florida woman wants to ensure that more than 20,000 men who were some of the first Black civil rights leaders are properly recognized.
Mallorie Berger, of Coral Springs, has made it her mission to put names to the faces of those men.
In 2021, she discovered her late grandfather, Maurice L. Burns Sr., was one of the men known as the Montford Point Marines, the first-ever group of Black marines in the U.S. military.
“I proactively find the families or, if I’m lucky, the living Montford Point Marines, so they can get their congressional medal awarded to them,” Berger said.
At the time, in the early 1940s, Black marines were kept separate from their white counterparts.
When they were not allowed to train at Camp Lejeune in North Carolina, they were sent to Montford Point and forced to build their own camp in swamp land.
According to Berger, during that time, the men were forced to train in dangerous conditions, all while enduring racial slurs.
“They were very good at hiding the trauma that they carried,” she said.
After breaking many marine records, the men were called to fight in the Pacific, many of them dying on the front lines after everything they endured.
“I don’t know how they did it,” said Berger.
Only about 10% of Montford Point Marines have been identified.
Since Berger’s discovery about her own family, she’s been working to make that number grow.
She says their work is highlighted in the documentary, “Our America - Mission Montford Point.”
“It’s the least we can do to find the 18,000 other marines that are out there,” she said.
In 2012, the Montford Point Marines collectively received the Congressional Gold Medal.
Individual replicas were also handed out, including one for Berger’s grandfather.
Berger told Local 10 News reporter Liane Morejon that she wants to make sure their mark on history is never forgotten.
“They paved the way for those who came after them and it’s incredible,” she said.
To learn more about the Montford Point Marines and the efforts to find and honor them all, you can watch the documentary “Our America: Mission Montford Point” on Monday at 1 p.m. right here on Local 10 following our special live coverage of the MLK Day parade in Miami.
|
<urn:uuid:64226c2c-3dc4-4f3a-8b36-b754ec34cc47>
|
{
"dump": "CC-MAIN-2023-23",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224655143.72/warc/CC-MAIN-20230608204017-20230608234017-00358.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9752704501152039,
"pii_count": 0,
"score": 2.65625,
"token_count": 561,
"url": "https://www.local10.com/news/local/2023/01/14/coral-springs-woman-seeking-members-of-montfort-point-marines-for-national-recognition/"
}
|
“In the four hundred and eightieth year after the people of Israel came out of the land of Egypt, in the fourth year of Solomon’s reign over Israel, in the month of Ziv, which is the second month, he began to build the house of the Lord” (1 Kings 6:1, ESV).
Dating events recorded in the book of Joshua, and by extension the Exodus account, is another intensely debated question among scholars. On one side of the debate, biblical archaeologists such as James Hoffmeier contend that a 13th century Exodus fits the material evidence due to connections between sites recorded within the biblical account, such as the store city of Ramesses II (Exod. 1:11) as one example; however, other biblical archaeologists like Bryant Wood date the Exodus sometime within the 15th century based on a literal understanding of the 480 year timeline recorded in 1 Kings 6:1: “In the four hundred and eightieth year after the people of Israel came out of the land of Egypt, in the fourth year of Solomon’s reign over Israel, in the month of Ziv, which is the second month, he began to build the house of the Lord” (ESV). Douglas Petrovich dates the death of Joshua as occurring around 1384 BCE, which places the account sometime within the 13th and 14th centuries aligning more with Hoffmeier than Bryant. In any event, for our purposes here, we have a specific enough timeframe–late Bronze to early Iron age–for a high level consideration of archaeology directly relevant to Joshua.
According to archaeologists, in order to understand the social structure and the material conditions of life for the principal part of a community, a consideration of the household structure is key: "Households embody and underlie the organization of a society at its most basic level; they can, therefore, serve as sensitive indicators of evolutionary change in social organization." Archaeologists have discovered much about the family structure during the Middle Bronze to early Iron Age, based largely on the size of excavated dwellings in rural settlements. The typical house is a four-room dwelling thought to serve a father, mother, and two or three unmarried children, though scholars also posit that homes could have been inhabited by as many as three or four generations of an extended family. A larger number of inhabitants necessitates more options for privacy; this interpretation, according to Avraham Faust, is supported by the fact that despite the uniformity in the typical four-room house plan, the internal divisions vary among the majority of houses uncovered.
Ancient Hazor is a significant archaeological find, as it is the final Bronze Age city and among those recorded as having been destroyed by the Israelites (Josh. 11:1-11). According to Yigael Yadin, the archaeologist who first excavated at Hazor, the settlement consists of a large, rectangular lower city (170 acres) and a bottle-shaped upper city (30 acres), which Petrovich describes as an elongated mound called a "tel" rising about 40 meters above the surrounding plain. According to Sharon Zuckerman, excavation of the lower city confirms that the fiercest attacks were focused primarily on public structures like the Orthostats and Stelae temples, corroborated by excavations of the upper city, where the fiercest attacks are likewise limited to public buildings. Though it would be remiss not to mention that Zuckerman challenges the biblical claim that the Israelites are the culprits behind the destruction of Hazor and proposes that its destruction was the result of internal revolt. Debate surrounding by whom, and to a lesser degree precisely when, notwithstanding, archaeology reveals that the peak of Hazor's power was achieved from the middle of the 14th through the second third of the 13th centuries. What is more, this dating is strongly inferred by epigraphical evidence from the Amarna Letters, in which the king of Hazor is the only Canaanite ruler at the time identified as a king in letters to the Egyptian pharaoh. Therefore, if we believe the account of Hazor as presented in Joshua 11, the picture that emerges from the archaeology offers much insight into things like the opposition the Israelites faced, the most severe targets of their military campaigns, and the intentionality and faithfulness of Yahweh.
As with Hazor, ancient Jericho is significant archaeologically to the book of Joshua, and perhaps the most prominent site considering the story of Rahab and the spies, not to mention that the account of the Israelites marching for seven days is a Sunday school staple. According to McConville, Jericho is the most celebrated example of the relationship between narrative and archaeology presented in the book of Joshua; however, archaeological research finds no evidence of a walled city in the Late Bronze Age. Therefore, despite the fanfare associated with the account, our learning about ancient Jericho must be primarily satisfied by the examination of pottery and other similar material remains, because over most of the area which has been excavated, there is a thick layer of burning above the Middle Bronze Age buildings. The lack of available evidence, together with Jericho's history, has made the archaeological record somewhat confusing: for example, scholars believe it is probable that the city walls from the Early and Middle Bronze periods continued to be used by occupants in the Late Bronze period, and long periods of abandonment have left the site subject to extensive erosion. In any case, McConville, among other scholars, asserts that the Jericho site illustrates some basic challenges in correlating archaeology with texts. "If the site is indeed cultic, how would we decide whether this confirms the Joshua story, or whether, alternatively, the story is an aetiology based on the existence of the site?"
Despite some efforts to place the conquest described in Joshua in the Middle Bronze Age, mainly to accommodate the story of Jericho’s fall, general consensus exists between both biblical scholars and archaeologists that the setting is the Late Bronze Age transition into Iron, in agreement with the position attested to by Hoffmeier and Petrovich. However, there is new scholarship emerging in closer support of the biblical timeline as attested to by Wood, the merits of which I am currently investigating. This, as well as emerging archaeology on the location of the biblical Mount Sinai, is equally fascinating.
Quite a bit of the discussion within the literature dating the Exodus account in the 15th century identifies the Pharaoh in power at the time as Amenhotep II. Since Ramesses II reflects tradition, and not an explicit reference found in the Bible, there is no conflict. Consideration of the reigns of his predecessor and successor further contributes to Amenhotep II as a candidate. Of course, all of this information and debate can be easily found via internet search; therefore, I wanted to offer a brief overview from the perspective of the Book of Joshua.
Unless otherwise noted, all biblical passages referenced are in the English Standard Version.
Douglas Petrovich, “The Dating of Hazor’s Destruction in Joshua 11 by Way of Biblical, Archaeological, and Epigraphical Evidence,” Journal of the Evangelical Theological Society 51, no. 3 (Sep 2008): 496.
Avraham Faust, “The Rural Community in Ancient Israel During Iron Age II,” Bulletin of the American Schools of Oriental Research 317 (February 2000): 20.
Latif Oksuz et al., “The K8 House: A New Domestic Space from the Iron Age II at Tell Halif, Israel,” Palestine Exploration Quarterly 151, no. 3–4 (October 2019): 220.
Ibid., 22.
The term “tell” refers to archaeological mounds containing the remains of ancient cities.
Petrovich, 490.
Sharon Zuckerman, “Anatomy of a Destruction: Crisis Architecture, Termination Rituals and the Fall of Canaanite Hazor,” Journal of Mediterranean Archaeology 20, no. 1 (2007): 24.
Ibid., 25.
Petrovich, 493.
Gordon McConville, Joshua: An Introduction and Study Guide (London: Bloomsbury, 2017), 27; the collapse of the Late Bronze Age and the beginning of the Iron Age is around 1200 BCE.
Murray B. Nicol, “Archaeology and the fall of Jericho,” Review & Expositor 58, no. 2 (Apr 1961): 176.
McConville, 27.
Ibid., 24.
You write, “Douglas Petrovich dates the death of Joshua as occurring around 1384 BCE, which places the account sometime within the 13th and 14th centuries aligning more with Hoffmeier than Bryant.” If Joshua died in 1384 BC, then 22 years earlier in 1406 BC would be when the Israelites crossed the Jordan and 40 years earlier than that would be 1446 BC when the Exodus occurred. This would agree with Bryant Wood. Petrovich presents detailed evidence for the 15th century BC Exodus in his book, “Origins of the Hebrews”.
I agree with your conclusion that Amenhotep II was the Pharaoh of the Exodus and that it occurred in 1446 BC.
Yes, I know there are plenty of scoffers, but I am persuaded as well.
Thanks for sharing this idea. Anita
|
<urn:uuid:16dcf41b-1be6-489a-b257-864c2dbd4ba7>
|
{
"dump": "CC-MAIN-2023-23",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224656833.99/warc/CC-MAIN-20230609201549-20230609231549-00700.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9452697038650513,
"pii_count": 0,
"score": 3.359375,
"token_count": 2087,
"url": "https://amazingtangledgrace.wordpress.com/2023/05/24/dating-the-exodus-by-way-of-joshua/"
}
|
“In the four hundred and eightieth year after the people of Israel came out of the land of Egypt, in the fourth year of Solomon’s reign over Israel, in the month of Ziv, which is the second month, he began to build the house of the Lord” (1 Kings 6:1, ESV).
Dating events recorded in the book of Joshua, and by extension the Exodus account, is another intensely debated question among scholars. On one side of the debate, biblical archaeologists such as James Hoffmeier contend that a 13th century Exodus fits the material evidence due to connections between sites recorded within the biblical account, such as the store city of Ramesses II (Exod. 1:11) as one example; however, other biblical archaeologists like Bryant Wood date the Exodus sometime within the 15th century based on a literal understanding of the 480 year timeline recorded in 1 Kings 6:1: “In the four hundred and eightieth year after the people of Israel came out of the land of Egypt, in the fourth year of Solomon’s reign over Israel, in the month of Ziv, which is the second month, he began to build the house of the Lord” (ESV). Douglas Petrovich dates the death of Joshua as occurring around 1384 BCE, which places the account sometime within the 13th and 14th centuries aligning more with Hoffmeier than Bryant. In any event, for our purposes here, we have a specific enough timeframe–late Bronze to early Iron age–for a high level consideration of archaeology directly relevant to Joshua.
According to archaeologists, in order to understand the social structure and the material conditions of life for the principal part of a community a consideration of the household structure is key: “Households embody and underlie the organization of a society at its most basic level; they can, therefore, serve as sensitive indicators of evolutionary change in social organization” Archaeologists have discovered much about the family structure, during the Middle Bronze to early Iron Age, based largely on the size of excavated dwellings in rural settlements. The typical house is a four-room dwelling thought to serve a father, mother, and two or three unmarried children, though scholars also posit that homes could have been inhabited by as much as three or four generations of an extended family. The number of inhabitants necessitate more options for privacy; this interpretation, according to Avraham Faust, is supported by
|
the fact that despite the uniformity in the typical four-room house plan, the internal divisions vary among the majority of houses uncovered.
Ancient Hazor is a significant archaeological find as it is the final Bronze Age city and among those recorded as having been destroyed by the Israelites (Josh. 11:1-11). According to Yigael Yadin, the archaeologist who first excavated at Hazor, the settlement consists of a large, rectangular lower city (170 acres) and a bottle-shaped upper city (30 acres), which Petrovich describes as an elongated mound called a “tel” rising about 40 meters above the surrounding plain.According to Sharon Zuckerman, excavation of the lower city confirms that the fiercest attacks are focused primarily on public structures like the Orthostats and Stelae temples, corroborated by excavations of the upper city where the most fierce attacks are limited to public buildings as well. Though it would be amiss not to mention that Zuckerman challenges the biblical claim that the Israelites are the culprits behind the destruction of Hazor and proposes that the its destruction was the result of internal revolt. Debate surrounding by whom, and to a lesser degree precisely when, not withstanding, archaeology reveals that the peak of Hazor’s power is achieved from the middle of the 14th through the second third of the 13th centuries. What is more, this dating is strongly inferred by epigraphical evidence from the Amarna Letters in which the king of Hazor is the only Canaanite ruler at the time identified as a king in letters to the Egyptian pharaoh. Therefore, if we believe the account of Hazor as presented in Joshua 11, the picture that emerges from the archaeology offers much insight into things like the opposition the Israelites faced, the most severe targets of their military campaigns, and the intentionality and faithfulness of Yahweh.
As with Hazor, ancient Jericho is significant archaeologically to the book of Joshua, and perhaps, the most prominent considering the story of Rahab and the spies, not to mention that the account of the Israelites marching for seven days is a Sunday school staple. According to McConville, Jericho is the most celebrated example of the relationship between narrative and archaeology presented in the book oof Joshua; however, archaeological research finds no evidence of a walled city in the Late Bronze Age. Therefore, despite the fanfare associated with the account, our learning about ancient Jericho must be primarily satisfied by the examination of pottery and other similar material remains, because over most of the area which has been excavated, there is a thick layer of burning above the Middle Bronze Age buildings.The lack of available evidence, together with Jericho’s history, have made the archaeological record somewhat confusing, for example, scholars believe it is probable that the city walls from the Early and Middle Bronze periods continue to be used by occupants in the Late Bronze period, as well as long periods of abandonment has left the site having been subjected to extensive erosion. In any case, McConville, among other scholars, assert that the Jericho site illustrates some basic challenges in correlating archaeology with texts. “If the site is indeed cultic, how would we decide whether this confirms the Joshua story, or whether, alternatively, the story is an aetiology based on the existence of the site?”
Despite some efforts to place the conquest described in Joshua in the Middle Bronze Age, mainly to accommodate the story of Jericho’s fall, a general consensus exists among both biblical scholars and archaeologists that the setting is the Late Bronze Age transition into the Iron Age, in agreement with the position attested by Hoffmeier and Petrovich. However, there is new scholarship emerging in closer support of the biblical timeline, as attested by Wood, the merits of which I am currently investigating. This, as well as emerging archaeology on the location of the biblical Mount Sinai, is equally fascinating.
Much of the discussion within the literature dating the Exodus account to the 15th century identifies the pharaoh in power at the time as Amenhotep II. Since Rameses II reflects tradition, and not an explicit reference found in the Bible, there is no conflict. Consideration of the reigns of his predecessor and successor further supports Amenhotep II as a candidate. Of course, all of this information and debate can easily be found via an internet search; therefore, I wanted to offer a brief overview from the perspective of the Book of Joshua.
Unless otherwise noted, all biblical passages referenced are in the English Standard Version.
Douglas Petrovich, “The Dating of Hazor’s Destruction in Joshua 11 by Way of Biblical, Archaeological, and Epigraphical Evidence,” Journal of the Evangelical Theological Society 51, no. 3 (Sep 2008): 496.
Avraham Faust, “The Rural Community in Ancient Israel During Iron Age II,” Bulletin of the American Schools of Oriental Research 317 (February 2000): 20.
Latif Oksuz et al., “The K8 House: A New Domestic Space from the Iron Age II at Tell Halif, Israel,” Palestine Exploration Quarterly 151, no. 3–4 (October 2019): 220.
Ibid., 22.
The term “tell” refers to archaeological mounds containing the remains of ancient cities.
Petrovich, 490.
Sharon Zuckerman, “Anatomy of a Destruction: Crisis Architecture, Termination Rituals and the Fall of Canaanite Hazor,” Journal of Mediterranean Archaeology 20, no. 1 (2007): 24.
Ibid., 25.
Petrovich, 493.
Gordon McConville, Joshua: An Introduction and Study Guide (London: Bloomsbury, 2017), 27. The collapse of the Late Bronze Age and the beginning of the Iron Age are dated to around 1200 BCE.
Murray B. Nicol, “Archaeology and the fall of Jericho,” Review & Expositor 58, no. 2 (Apr 1961): 176.
McConville, 27.
Ibid., 24.
You write, “Douglas Petrovich dates the death of Joshua as occurring around 1384 BCE, which places the account sometime within the 13th and 14th centuries aligning more with Hoffmeier than Bryant.” If Joshua died in 1384 BC, then 22 years earlier in 1406 BC would be when the Israelites crossed the Jordan and 40 years earlier than that would be 1446 BC when the Exodus occurred. This would agree with Bryant Wood. Petrovich presents detailed evidence for the 15th century BC Exodus in his book, “Origins of the Hebrews”.
I agree with your conclusion that Amenhotep II was the Pharaoh of the Exodus and that it occurred in 1446 BC.
Yes, I know there are plenty of scoffers, but I am persuaded as well.
Thanks for sharing this idea. Anita
|
New York City Mayor Eric Adams told reporters at a briefing Wednesday morning that the city’s air quality emergency is not the last time New Yorkers will experience an event like this, thanks to climate change.
New York, along with much of the Northeast U.S., faces unhealthy air quality levels due to smoke blowing in from one of the most intense starts to a Canadian wildfire season on record. According to Canadian Minister of Emergency Preparedness Bill Blair, who was cited by CNN, there are a total of 414 fires still active across the country, and approximately 9.4 million acres—about twice the size of New Jersey—have been burned.
A recent storm system moving in from the Atlantic Coast pushed smoke from the unprecedented fires southeast into the U.S., reported The New York Times, settling over some of the nation’s most densely populated metros and impacting cities as far south as Washington, D.C.
New Yorkers in particular woke up on Wednesday to find that they were experiencing some of the worst air quality in the world, according to IQAir. Detroit, Michigan, just over 200 miles southwest of Toronto, Canada, was also ranked among the worst cities for air quality as of Wednesday evening.
Adams addressed his city’s concerns during a news briefing Wednesday morning alongside New York City Emergency Management Commissioner Zachary Iscol, and urged residents to limit their outdoor activity as much as possible while smoke lingers over the city for the next several days.
“While this may be the first time we’ve experienced something like this of this magnitude, let’s be clear, it’s not the last,” Adams said. “Climate change has accelerated these conditions. We must continue to draw down emissions, improve air quality and build resiliency.”
“New York City is clearly a national leader on public health and climate action,” Adams continued. “These dangerous air quality conditions are clearly an urgent reminder that we must act now to protect our city, our environment and the future of our children.”
Canada’s wildfire season typically spans from May to October, although blazes to this extent so early in the season are rare. Much like the rest of North America, however, parts of Canada experienced record heat and drought in May, triggering conditions for rampant wildfires.
Wildfires in general are not caused by climate change, according to the U.S. National Park Service, which says nearly 85 percent are sparked by humans either intentionally or unintentionally. However, warming weather across the globe creates conditions that make wildfires more intense. Natural Resources Canada says that climate change could increase fire activity and double the area burned annually by the end of the century.
According to forecast analysis from the Times, the worst of the smoky air will last in New York City through Thursday morning but haze is expected to vary in thickness across the city throughout the day.
Newsweek has reached out via email to the National Weather Service for additional information.
The post NYC Mayor Eric Adams Blames Climate Change for ‘Accelerating’ Smoky Air appeared first on Newsweek.
|
<urn:uuid:308df489-73a3-4488-8212-313212677363>
|
{
"dump": "CC-MAIN-2023-40",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506027.39/warc/CC-MAIN-20230921105806-20230921135806-00620.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9559302926063538,
"pii_count": 0,
"score": 2.609375,
"token_count": 645,
"url": "https://dnyuz.com/2023/06/07/nyc-mayor-eric-adams-blames-climate-change-for-accelerating-smoky-air/"
}
|
New York City Mayor Eric Adams told reporters at a briefing Wednesday morning that the city’s air quality emergency is not the last time New Yorkers will experience an event like this, thanks to climate change.
New York, along with much of the Northeast U.S., faces unhealthy air quality levels due to smoke blowing in from one of the most intense starts to a Canadian wildfire season on record. According to Canadian Minister of Emergency Preparedness Bill Blair, who was cited by CNN, there are a total of 414 fires still active across the country, and approximately 9.4 million acres—about twice the size of New Jersey—have been burned.
A recent storm system moving in from the Atlantic Coast pushed smoke from the unprecedented fires southeast into the U.S., reported The New York Times, settling over some of the nation’s most densely populated metros and impacting cities as far south as Washington, D.C.
New Yorkers in particular woke up on Wednesday to find that they were experiencing some of the worst air quality in the world, according to IQAir. Detroit, Michigan, just over 200 miles southwest of Toronto, Canada, was also ranked among the worst cities for air quality as of Wednesday evening.
Adams addressed his city’s concerns during a news briefing Wednesday morning alongside New York City Emergency Management Commissioner Zachary Iscol, and urged residents to limit their outdoor activity as much as possible while smoke lingers over the city for the next several days.
“While this may be the first time we’ve experienced something like this of this magnitude, let’s be clear, it’s not the last,” Adams said. “Climate change has accelerated these conditions. We must continue to draw down emissions, improve air quality and build resiliency.”
“New York City is clearly a national leader on public health and climate action,” Adams continued. “These dangerous air quality conditions are clearly an urgent reminder that we must act now to protect our city, our environment and the future of our children.”
Canada’s wildfire season typically spans from May to October, although blazes to this extent so early in the season are rare. Much like the rest of North America, however, parts of Canada experienced record heat and drought in May, triggering conditions for rampant wildfires.
Wildfires in general are not caused by climate change, according to the U.S. National Park Service, which says nearly 85 percent are sparked by humans either intentionally or unintentionally. However, warming weather across the globe creates conditions that make
|
wildfires more intense. Natural Resources Canada says that climate change could increase fire activity and double the area burned annually by the end of the century.
According to forecast analysis from the Times, the worst of the smoky air will last in New York City through Thursday morning but haze is expected to vary in thickness across the city throughout the day.
Newsweek has reached out via email to the National Weather Service for additional information.
The post NYC Mayor Eric Adams Blames Climate Change for ‘Accelerating’ Smoky Air appeared first on Newsweek.
|
Projected losses from a major California earthquake soar. What’s behind seismic inflation?
The expected annual cost from earthquake damage for California is climbing sharply amid an increase in property values and better understanding of how soft soils could result in greater damage during shaking.
California is projected to lose an average of $9.6 billion a year from earthquake damage, the new estimates show. That’s a 157% increase from the last estimate, in 2017, when the price tag was $3.7 billion a year, according to a new report from the U.S. Geological Survey and the Federal Emergency Management Agency.
“In any given year a big earthquake strikes ... you can easily anticipate a $100-billion loss,” USGS research structural engineer Kishor Jaiswal, the principal investigator for the report, told The Times.
The totals underscore just how much the value of older buildings has soared in recent years, yet they remain vulnerable to major damage or collapse in the next big earthquake.
It is also a sober reminder of the seismic toll facing California. After the state’s other major earthquakes — in San Francisco in 1906, Long Beach in 1933, the greater San Francisco Bay Area in 1989 and Northridge in 1994 — it took years, if not decades, for cities to recover, and massive costs had to be paid not only by governments and insurers but also by individuals who were never made whole.
According to the new report, Los Angeles and Orange counties share the highest price tag of any metro area in the nation, with a combined projected average annual loss of $3.3 billion a year. In second place is the San Francisco-Oakland-Berkeley metro area, with a projected loss of $1.8 billion a year.
The seismic price tag for California is about 65% of the nation’s annual earthquake cost, which is $14.7 billion a year.
The projected annual average earthquake losses in other areas of California include $1.3 billion for Riverside and San Bernardino counties, $917 million for the San Jose-Sunnyvale-Santa Clara metro area, $285 million for San Diego County and $220 million for Ventura County.
Assuming the yearly earthquake loss projection remains the same, over the course of three decades, California is projected to lose $288 billion from earthquake damage. Such a figure is consistent with recent earthquake scenarios, such as a magnitude 7.8 earthquake on the southern San Andreas fault or a magnitude 7 earthquake on the Hayward fault.
Of that total, the five-county Southern California region — L.A., Orange, Riverside, San Bernardino and Ventura counties — would lose nearly $150 billion. And the nine-county San Francisco Bay Area would lose roughly $90 billion.
“It’s a sobering reminder about why we need to prepare for those rare but large earthquakes, as just one major event can eclipse the costs of the more frequent but smaller ones,” USGS Director David Applegate said in a statement.
The authors of the report calculated an “annualized” earthquake loss to average out the cost of earthquake damage on a yearly basis.
It’s similar to how car insurers calculate the premium people pay yearly: People might be in a collision once every few years, but insurers calculate a yearly bill for drivers that takes into account the annual average projected cost of future collisions. How much the yearly car insurance premium will be can vary, depending on factors such as the driver’s age, accident history and the type of vehicle driven.
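To make the annualization concrete, here is a minimal sketch of how an expected yearly loss can be computed from a handful of scenarios. The scenario probabilities and dollar figures below are invented purely for illustration; the actual USGS/FEMA methodology is far more detailed.

```python
# Minimal sketch of an "annualized loss" calculation. All scenario numbers
# are made up for illustration; they are not the USGS/FEMA inputs.

# Each hypothetical scenario: (annual probability of occurring, loss in $ billions)
scenarios = [
    (1 / 10,    2.0),   # frequent, moderate damage
    (1 / 250, 120.0),   # rare, major event
    (1 / 1000, 300.0),  # very rare, catastrophic event
]

# Annualized loss = sum over scenarios of (annual probability * loss),
# the same expected-value logic an insurer uses to set a yearly premium.
annualized_loss = sum(p * loss for p, loss in scenarios)
print(f"Annualized loss: ${annualized_loss:.2f} billion per year")

# Holding a yearly figure constant over 30 years gives the kind of
# multi-decade projection quoted above (e.g., $9.6B per year over 30 years).
print(f"30-year projection at $9.6B/yr: ${9.6 * 30:.0f} billion")
```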
The magnitude 6.7 earthquake that hit Northridge in 1994 caused as much as $20 billion in damage and more than $40 billion in economic loss, “making it the costliest earthquake disaster in U.S. history,” according to the California Geological Survey.
And the damage from that earthquake, centered in the suburban San Fernando Valley, pales in comparison to the destruction that a major quake centered beneath older neighborhoods, such as in downtown L.A., would cause.
Officials have identified 33 buildings owned by Los Angeles County as having a flaw that could cause them to collapse in a major earthquake.
The 1994 earthquake’s magnitude was relatively moderate. By contrast, a magnitude 7.8 earthquake would produce 45 times more energy, and such a temblor hasn’t hit Southern California since 1857 and Northern California since 1906.
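The roughly 45-fold figure follows from the standard approximation that radiated seismic energy grows by a factor of about 10^1.5, or roughly 32, per whole unit of magnitude. A quick sketch using that textbook relation:

```python
# Approximate ratio of radiated seismic energy between two magnitudes,
# using the common relation E2 / E1 = 10 ** (1.5 * (M2 - M1)).

def energy_ratio(m1: float, m2: float) -> float:
    return 10 ** (1.5 * (m2 - m1))

print(round(energy_ratio(6.7, 7.8)))  # -> 45, matching the figure above
```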
The world’s last magnitude 7.8 earthquake struck in February, resulting in strong shaking in Turkey and Syria. More than 50,000 people died.
Much like Los Angeles, Istanbul is facing a double crisis — a severe housing shortage and extreme earthquake risk — leaving residents in a bind.
A significant portion of California’s buildings constructed in the 20th century are vulnerable to earthquake damage or collapse. Retrofitting them now would leave cities far more resilient — keeping people alive, homes intact, and workplaces and neighborhoods functional.
The state’s rise in property values could enable some owners to use the equity they’ve accumulated to finance retrofits, experts say. A retrofit now can cost far less than repairing extensive damage after an earthquake, which could leave a building so wrecked it might need to be replaced.
Some cities in California have required property owners to retrofit certain types of vulnerable buildings. A Los Angeles law passed in 2015 requiring that apartment buildings with flimsy first stories — often used for carports — be strengthened has resulted in retrofits of more than 8,700 out of 12,400. That’s a completion rate of 70%. An analysis estimated that at least $1.3 billion was spent on those retrofits.
It was three days before Christmas when a magnitude 6.5 earthquake rocked the Central Coast in 2003.
FEMA and state officials have worked to make grants available for retrofits. Homeowners in certain ZIP Codes in Los Angeles, Pasadena, San Francisco, Oakland and Berkeley can apply for retrofit grants of up to $13,000 through the end of May to strengthen “soft-story homes,” where there’s a top-heavy living space built over a garage that is vulnerable to collapse in an earthquake.
“This study reinforces the nation’s need to be proactive about making communities safer from threats like earthquakes,” FEMA Deputy Administrator Erik Hooks said in a statement. “This includes adopting the latest seismic building codes and investing in earthquake resilience projects.”
But many other cities in California have not acted to require retrofits. And even in L.A., city officials have yet to address the potential seismic risk of older steel-frame high-rises built before the 1994 Northridge earthquake. The USGS has said that it is plausible that five steel-frame buildings in Southern California could collapse in a hypothetical magnitude 7.8 earthquake on the San Andreas fault, and 10 could be so damaged that they would be no longer safe to occupy.
The defect that can cause single-family houses to collapse has received little attention until now. Some California homeowners will soon be able to apply for grants to help pay for the retrofit.
Some cities remain far behind. Much of the Inland Empire, which covers Riverside and San Bernardino counties, still has many older brick buildings that are not retrofitted — among the highest-risk structures in an earthquake. They can collapse, not only killing the buildings’ occupants but also raining projectiles onto nearby sidewalks, parking lots and roads, with the remains of brick walls hurled with such force that they could crush cars and buses.
In the magnitude 6.9 Loma Prieta earthquake of 1989, a brick wall in San Francisco fell onto a parking lot, leaving cars crushed; five people died. And in a magnitude 6.3 quake that hit Christchurch, New Zealand, in 2011, falling bricks rained onto Red Bus No. 702, killing eight people, including the driver.
The number of people who need to be housed after a major earthquake could be enormous. The study estimated that a quake so large it had a 1-in-250 chance of occurring in any given year could result in more than 200,000 people needing short-term shelter in California. In an earthquake so large it had a 1-in-1,000 chance of occurring in any given year, more than 700,000 people would need short-term shelter.
The latest study also presents a more realistic picture of expected damage in places including L.A. and the San Francisco Bay Area, where many buildings are on top of basins that amplify ground motions during an earthquake, Jaiswal said. Such shaking can result in a worse outcome for tall buildings that are atop basins compared with those built directly on bedrock.
“If you have a deep basin, with sediments overlaying on the hard rock, those ground motions get amplified,” Jaiswal said.
Compared to earlier models, the latest report factors in localized softer soil and basin conditions, which contributed to the increase in the projected damage cost for places such as L.A. and the Bay Area.
Other areas that saw an increase in earthquake hazard from the previous model include the Salt Lake City area, and much of the island of Hawaii, Maui’s valley region and the southern coast of Oahu.
L.A. County’s proposed earthquake rules would require certain older concrete buildings in unincorporated areas, and those owned by the county, to be retrofitted.
The Seattle area was estimated to have an annual earthquake loss of $781 million; the Portland, Ore., area, $403 million; the Salt Lake City area, $174 million; the Memphis, Tenn., area, $131 million; and the New York City region, $49 million.
The fact that earthquake risk exists in areas of the Eastern U.S. may come as a surprise, but such quakes can happen. A magnitude 5.8 earthquake near Mineral, Va., in 2011 caused $200 million to $300 million in damage, and necessitated $15 million in repairs to the Washington Monument.
Other damaging earthquakes in the Eastern U.S. on record include one off of Cape Ann, Mass., in 1755, estimated to be magnitude 5.9, which resulted in damage to the Boston waterfront; an estimated magnitude 4.5 quake near Petersburg, Va., in 1774, which shoved homes from their foundations and was felt by Thomas Jefferson; and an estimated magnitude 7 quake near Charleston, S.C., in 1886 that killed 60 people, according to the USGS.
Once considered politically impossible because of cost, requiring owners to retrofit their buildings gets overwhelming support from L.A. residents.
In the early 19th century, there were three large earthquakes in the New Madrid seismic zone, around the area along the Mississippi River where Tennessee, Kentucky, Illinois, Missouri and Arkansas meet. The largest earthquakes were a magnitude 7.5 in December 1811, a magnitude 7.3 in January 1812 and a magnitude 7.5 in February 1812.
“Earthquakes are a national problem,” the USGS said in a statement.
New York City has a low probability of a damaging earthquake, but one that occurs could still cause significant damage because of the city’s density and the age of its buildings, according to the city’s emergency management agency. One big risk for New York City is a large number of older brick buildings that have not been retrofitted.
|
<urn:uuid:4b02caf1-eb28-40d3-a6ad-92be8d4f14c3>
|
{
"dump": "CC-MAIN-2024-10",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474526.76/warc/CC-MAIN-20240224080616-20240224110616-00452.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9552380442619324,
"pii_count": 0,
"score": 2.796875,
"token_count": 2408,
"url": "https://www.latimes.com/california/story/2023-04-30/major-california-earthquake-cost"
}
|
Projected losses from a major California earthquake soar. What’s behind seismic inflation?
The expected annual cost from earthquake damage for California is climbing sharply amid an increase in property values and better understanding of how soft soils could result in greater damage during shaking.
California is projected to lose an average of $9.6 billion a year from earthquake damage, the new estimates show. That’s a 157% increase from the last estimate, in 2017, when the price tag was $3.7 billion a year, according to a new report from the U.S. Geological Survey and the Federal Emergency Management Agency.
“In any given year a big earthquake strikes ... you can easily anticipate a $100-billion loss,” USGS research structural engineer Kishor Jaiswal, the principal investigator for the report, told The Times.
The totals underscore just how much the value of older buildings has soared in recent years, yet they remain vulnerable to major damage or collapse in the next big earthquake.
It is also a sober reminder of the seismic toll facing California. After the state’s other major earthquakes — in San Francisco in 1906, Long Beach in 1933, the greater San Francisco Bay Area in 1989 and Northridge in 1994 — it took years, if not decades, for cities to recover, and massive costs had to be paid not only by governments and insurers but also by individuals who were never made whole.
According to the new report, Los Angeles and Orange counties share the highest price tag of any metro area in the nation, with a combined projected average annual loss of $3.3 billion a year. In second place is the San Francisco-Oakland-Berkeley metro area, with a projected loss of $1.8 billion a year.
The seismic price tag for California is about 65% of the nation’s annual earthquake cost, which is $14.7 billion a year.
The projected annual average earthquake losses in other areas of California include $1.3 billion for Riverside and San Bernardino counties, $917 million for the San Jose-Sunnyvale-Santa Clara metro area, $285 million for San Diego County and $220 million for Ventura County.
Assuming the yearly earthquake loss projection remains the same, over the course of three decades, California is projected to lose $288 billion from earthquake damage. Such a figure is consistent with recent earthquake scenarios, such as a magnitude 7
|
.8 earthquake on the southern San Andreas fault or a magnitude 7 earthquake on the Hayward fault.
Of that total, the five-county Southern California region — L.A., Orange, Riverside, San Bernardino and Ventura counties — would lose nearly $150 billion. And the nine-county San Francisco Bay Area would lose roughly $90 billion.
“It’s a sobering reminder about why we need to prepare for those rare but large earthquakes, as just one major event can eclipse the costs of the more frequent but smaller ones,” USGS Director David Applegate said in a statement.
The authors of the report calculated an “annualized” earthquake loss to average out the cost of earthquake damage on a yearly basis.
It’s similar to how car insurers calculate the premium people pay yearly: People might be in a collision once every few years, but insurers calculate a yearly bill for drivers that takes into account the annual average projected cost of future collisions. How much the yearly car insurance premium will be can vary, depending on factors such as the driver’s age, accident history and the type of vehicle driven.
The magnitude 6.7 earthquake that hit Northridge in 1994 caused as much as $20 billion in damage and more than $40 billion in economic loss, “making it the costliest earthquake disaster in U.S. history,” according to the California Geological Survey.
And the damage from that earthquake, centered in the suburban San Fernando Valley, pales in comparison to the destruction that a major quake centered beneath older neighborhoods, such as in downtown L.A., would cause.
Officials have identified 33 buildings owned by Los Angeles County as having a flaw that could cause them to collapse in a major earthquake.
The 1994 earthquake’s magnitude was relatively moderate. By contrast, a magnitude 7.8 earthquake would produce 45 times more energy, and such a temblor hasn’t hit Southern California since 1857 and Northern California since 1906.
The world’s last magnitude 7.8 earthquake struck in February, resulting in strong shaking in Turkey and Syria. More than 50,000 people died.
Much like Los Angeles, Istanbul is facing a double crisis — a severe housing shortage and extreme earthquake risk — leaving residents in a bind.
A significant portion of California’s buildings constructed in the 20th century are vulnerable to earthquake damage or collapse. Retrofitting them now would leave cities far more resilient — keeping people alive, homes intact, and workplaces and neighborhoods functional.
The state’s rise in property values could enable some owners to use the equity they’ve accumulated to finance retrofits, experts say. A retrofit now can cost far less than repairing extensive damage after an earthquake, which could leave a building so wrecked it might need to be replaced.
Some cities in California have required property owners to retrofit certain types of vulnerable buildings. A Los Angeles law passed in 2015 requiring that apartment buildings with flimsy first stories — often used for carports — be strengthened has resulted in retrofits of more than 8,700 out of 12,400. That’s a completion rate of 70%. An analysis estimated that at least $1.3 billion was spent on those retrofits.
It was three days before Christmas when a magnitude 6.5 earthquake rocked the Central Coast in 2003.
FEMA and state officials have worked to make grants available for retrofits. Homeowners in certain ZIP Codes in Los Angeles, Pasadena, San Francisco, Oakland and Berkeley can apply for retrofit grants of up to $13,000 through the end of May to strengthen “soft-story homes,” where there’s a top-heavy living space built over a garage that is vulnerable to collapse in an earthquake.
“This study reinforces the nation’s need to be proactive about making communities safer from threats like earthquakes,” FEMA Deputy Administrator Erik Hooks said in a statement. “This includes adopting the latest seismic building codes and investing in earthquake resilience projects.”
But many other cities in California have not acted to require retrofits. And even in L.A., city officials have yet to address the potential seismic risk of older steel-frame high-rises built before the 1994 Northridge earthquake. The USGS has said that it is plausible that five steel-frame buildings in Southern California could collapse in a hypothetical magnitude 7.8 earthquake on the San Andreas fault, and 10 could be so damaged that they would be no longer safe to occupy.
The defect that can cause single-family houses to collapse has received little attention until now. Some California homeowners will soon be able to apply for grants to help pay for the retrofit.
Some cities remain far behind. Much of the Inland Empire, which covers Riverside and San Bernardino counties, still has many older brick buildings that are not retrofitted — among the highest-risk structures in an earthquake. They can collapse, not only killing the buildings’ occupants but also raining projectiles onto nearby sidewalks, parking lots and roads, with the remains of brick walls hurled with such force that they could crush cars and buses.
In the magnitude 6.9 Loma Prieta earthquake of 1989, a brick wall in San Francisco fell onto a parking lot, leaving cars crushed; five people died. And in a magnitude 6.3 quake that hit Christchurch, New Zealand, in 2011, falling bricks rained onto Red Bus No. 702, killing eight people, including the driver.
The number of people who need to be housed after a major earthquake could be enormous. The study estimated that a quake so large it had a 1-in-250 chance of occurring in any given year could result in more than 200,000 people needing short-term shelter in California. In an earthquake so large it had a 1-in-1,000 chance of occurring in any given year, more than 700,000 people would need short-term shelter.
The latest study also presents a more realistic picture of expected damage in places including L.A. and the San Francisco Bay Area, where many buildings are on top of basins that amplify ground motions during an earthquake, Jaiswal said. Such shaking can result in a worse outcome for tall buildings that are atop basins compared with those built directly on bedrock.
“If you have a deep basin, with sediments overlaying on the hard rock, those ground motions get amplified,” Jaiswal said.
Compared to earlier models, the latest report factors in localized softer soil and basin conditions, which contributed to the increase in the projected damage cost for places such as L.A. and the Bay Area.
Other areas that saw an increase in earthquake hazard from the previous model include the Salt Lake City area, and much of the island of Hawaii, Maui’s valley region and the southern coast of Oahu.
L.A. County’s proposed earthquake rules would require certain older concrete buildings in unincorporated areas, and those owned by the county, to be retrofitted.
The Seattle area was estimated to have an annual earthquake loss of $781 million; the Portland, Ore., area, $403 million; the Salt Lake City area, $174 million; the Memphis, Tenn., area, $131 million; and the New York City region, $49 million.
The fact that earthquake risk exists in areas of the Eastern U.S. may come as a surprise, but such quakes can happen. A magnitude 5.8 earthquake near Mineral, Va., in 2011 caused $200 million to $300 million in damage, and necessitated $15 million in repairs to the Washington Monument.
Other damaging earthquakes in the Eastern U.S. on record include one off of Cape Ann, Mass., in 1755, estimated to be magnitude 5.9, which resulted in damage to the Boston waterfront; an estimated magnitude 4.5 quake near Petersburg, Va., in 1774, which shoved homes from their foundations and was felt by Thomas Jefferson; and an estimated magnitude 7 quake near Charleston, S.C., in 1886 that killed 60 people, according to the USGS.
Once considered politically impossible because of cost, requiring owners to retrofit their buildings gets overwhelming support from L.A. residents.
In the early 19th century, there were three large earthquakes in the New Madrid seismic zone, around the area along the Mississippi River where Tennessee, Kentucky, Illinois, Missouri and Arkansas meet. The largest earthquakes were a magnitude 7.5 in December 1811, a magnitude 7.3 in January 1812 and a magnitude 7.5 in February 1812.
“Earthquakes are a national problem,” the USGS said in a statement.
New York City has a low probability of a damaging earthquake, but one that occurs could still cause significant damage because of the city’s density and the age of its buildings, according to the city’s emergency management agency. One big risk for New York City is a large number of older brick buildings that have not been retrofitted.
|
Questioning the ethics of political systems—particularly the democratic system—is nothing new. Criticism of democracy dates back about 2500 years to two of the greatest thinkers in the history of philosophy. Both Socrates and his student Plato hated democracy because of its potential for corruption at the highest levels of leadership. Socrates specifically lamented the right to vote given to all citizens whether or not they were informed and educated about the issues on which they were voting.
Plato’s concerns lay more with leadership. Not only did he worry about corruption, he also did not like the fact that anyone—qualified or not—could be elected to a leadership role. These are important matters of political ethics that hold true even today. Think of these concerns in the context of our representative democracy here in the United States; they are still valid.
In the current political system of the United States, there are three key groups of people who play vital roles in the election and governing process: the leaders, the media, and the people. Each of these groups has ethical responsibilities to the others and to the democratic process as a whole.
The leaders include those who are elected to office, those appointed to positions by elected officials, and those who are running for elected office—candidates. These men and women have a long list of ethical obligations, not the least of which is honesty. As citizens of a democracy, we not only expect our leaders and potential leaders to be honest, but we must also demand it. For some, getting elected is far more important than being truthful with voters.
Consider New York Congressman-elect George Santos who, during his 2022 campaign, lied about his education, his work history, his mother dying in a tower on 9/11, his grandparents being Holocaust survivors, and his being Jewish. He’s Catholic (Ashford).
The media are the various digital, broadcast, and print outlets that reach people en masse. This includes, but is not limited to, news organizations like CNN, FOX News, and the New York Times. Each uses multiple platforms to reach its audience. One of the major responsibilities of the news media is to serve as a watchdog of the political system. Journalists are expected to objectively, fairly, and truthfully investigate and report on the actions of the leaders referenced above.
Sadly, in recent decades the news media has become politicized as cable networks now support particular agendas of the political left or right. In a somewhat recent turn of events, though, CNN under its new leadership has made a shift to the political center (Helmore). Falling in line with the Associated Press, Reuters, and National Public Radio, the cable news network is broadening the landscape of ethical journalistic organizations.
We, the people, comprise voters and non-voters. Whether citizens or residents, of voting age or not yet 18, the people of the United States have ethical responsibilities in the political process. The first is voting. Citizens ages 18 and up have an obligation, as part of our social contract, to take part in the electoral process. Leaders are elected to represent the people; therefore, those same people must vote for the leaders who represent their interests.
Another ethical obligation the people have is to hold leadership accountable. Everyone living in the United States is affected by the actions taken and decisions made by government officials. As a result, we must keep those leaders in check by constantly making them aware of the quality of their work and the ethics of their actions.
Ashford, Grace and Michael Gold. “Who Is Rep.-Elect George Santos? His Résumé May Be Largely Fiction.” The New York Times. nytimes.com. 19 December 2022.
Helmore, Edward. “Why CNN Is Shifting Tenor from Partisanship News to a Political Center.” The Guardian. theguardian.com. 21 January 2022.
|
<urn:uuid:20001d2e-eafb-432c-a4b7-8331a1ad7f7a>
|
{
"dump": "CC-MAIN-2023-14",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950422.77/warc/CC-MAIN-20230402074255-20230402104255-00619.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9699148535728455,
"pii_count": 0,
"score": 2.875,
"token_count": 800,
"url": "https://johnbaldino.wordpress.com/2023/01/04/political-ethics/"
}
|
Questioning the ethics of political systems—particularly the democratic system—is nothing new. Criticism of democracy dates back about 2500 years to two of the greatest thinkers in the history of philosophy. Both Socrates and his student Plato hated democracy because of its potential for corruption at the highest levels of leadership. Socrates specifically lamented the right to vote given to all citizens whether or not they were informed and educated about the issues on which they were voting.
Plato’s concerns lay more with leadership. Not only did he worry about corruption, he also did not like the fact that anyone—qualified or not—could be elected to a leadership role. These are important matters of political ethics that hold true even today. Think of these concerns in the context of our representative democracy here in the United States; they are still valid.
In the current political system of the United States, there are three key groups of people who play vital roles in the election and governing process: the leaders, the media, and the people. Each of these groups has ethical responsibilities to the others and to the democratic process as a whole.
The leaders include those who are elected to office, those appointed to positions by elected officials, and those who are running for elected office—candidates. These men and women have a long list of ethical obligations, not the least of which is honesty. As citizens of a democracy, we not only expect our leaders and potential leaders to be honest, but we must also demand it. For some, getting elected is far more important than being truthful with voters.
Consider New York Congressman-elect George Santos who, during his 2022 campaign, lied about his education, his work history, his mother dying in a tower on 9/11, his grandparents being Holocaust survivors, and his being Jewish. He’s Catholic (Ashford).
The media are the various digital, broadcast, and print outlets that reach people en masse. This includes, but is not limited to, news organizations like CNN, FOX News, and the New York Times. Each uses multiple platforms to reach its audience. One of the major responsibilities of the news media is to serve as a watchdog of the political system. Journalists are expected to objectively, fairly, and truthfully investigate and report on the actions of the leaders referenced above.
Sadly, in recent decades the news media has become politicized as cable networks now support particular agendas of the political left or right. In a somewhat recent turn of events, though, CNN under its
|
new leadership has made a shift to the political center (Helmore). Falling in line with the Associated Press, Reuters, and National Public Radio, the cable news network is broadening the landscape of ethical journalistic organizations.
We, the people, comprise voters and non-voters. Whether citizens or residents, of voting age or not yet 18, the people of the United States have ethical responsibilities in the political process. The first is voting. Citizens ages 18 and up have an obligation, as part of our social contract, to take part in the electoral process. Leaders are elected to represent the people; therefore, those same people must vote for the leaders who represent their interests.
Another ethical obligation the people have is to hold leadership accountable. Everyone living in the United States is affected by the actions taken and decisions made by government officials. As a result, we must keep those leaders in check by constantly making them aware of the quality of their work and the ethics of their actions.
Ashford, Grace and Michael Gold. “Who Is Rep.-Elect George Santos? His Résumé May Be Largely Fiction.” The New York Times. nytimes.com. 19 December 2022.
Helmore, Edward. “Why CNN Is Shifting Tenor from Partisanship News to a Political Center.” The Guardian. theguardian.com. 21 January 2022.
|
The “duck curve” – a challenge specific to the renewable energy landscape – has found an unexpected solution in Bitcoin mining. This curve reflects the conflict between peak demand periods and peak renewable energy production times, a discrepancy that grows as we increasingly embrace renewable energy sources, complicating grid management.
Bitcoin fixes this.
This story is part of CoinDesk's 2023 Mining Week, sponsored by Foundry. Adolfo Contreras is a Senior Business Development Advisor at Blockstream. He has 20 years experience in satellite communications, weather intelligence for energy and transportation markets and Bitcoin.
Bitcoin mining revenue promotes profitable renewable infrastructure, aiding project financing and scaling the energy grid for a sustainable future. This is crucial for electrifying transportation and phasing out fossil fuels. Given the massive power storage and load balancing required, Bitcoin is a useful tool, especially considering the current economic and geopolitical climate.
Bitcoin miners, with their flexible operations, are uniquely well-equipped to navigate these energy supply fluctuations. By strategically aligning their activities with periods of high renewable energy production and low demand, they can optimize their energy usage and potentially alleviate the pressure on the grid.
This article will explore how Bitcoin miners are helping to manage the duck curve, and the strategies they are employing to balance demand and optimize energy consumption.
Renewable energy's uphill battle
In many countries, the amount of renewable capacity has increased dramatically over the last decade, exemplified by Europe, even though that capacity is typically heavily under-utilized, with much of the potential generation going unused or “wasted.”
In 2022 alone, there was an increase in renewable capacity of 266 GW, which, assuming a (very low) average cost of $500,000 per MW, represents an investment of more than $130 billion in a single year. For reference, a country like Spain, with nearly 50 million people, has never peaked in electricity consumption above 45 GW.
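As a quick back-of-the-envelope check of that investment figure, using only the capacity and per-megawatt cost assumptions stated above:

```python
# Sanity check of the ~$130 billion figure, using the article's own
# (deliberately low) cost assumption of $500,000 per MW of new capacity.

added_capacity_gw = 266
cost_per_mw_usd = 500_000

investment_usd = added_capacity_gw * 1_000 * cost_per_mw_usd  # convert GW to MW
print(f"${investment_usd / 1e9:.0f} billion")  # -> $133 billion
```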
The crux of the argument is this:
1. The electrification of supply is outpacing demand, with retail electricity demand not significantly growing due to economic stagnation and increased device efficiency.
2. Electric vehicle adoption isn't meeting expectations and EV charging habits, often at night, do not align with peak solar supply during daylight hours.
3. Significant displacement of fossil power use moving to electrical demand would require industries to adapt their manufacturing processes to electricity, a costly endeavor many are unwilling to undertake due to competitiveness issues.
Read more: Jeff Wilser - How Texas Became a Global Mecca for Bitcoin Mining
Therefore, we need an economically viable, electricity-intensive activity that is predictable, flexible, and doesn't require extensive transportation – and that's where Bitcoin mining comes in.
Below is a glimpse of the gigantic amounts of unused or wasted electricity due to insufficient demand vs generation capacity in California alone:
We also can’t ignore another bottleneck: wind and solar farms have significantly lower power density. That is, they need far more land area to generate the same amount of electricity.
For these reasons, the power distribution network will need a significant upgrade to scale, and transmission will need to be expanded to connect generation sites to the population centers where industrial and retail demand is located.
Until these investments are made, we may encounter situations where additional installed capacity does not translate into additional renewable generation actually used in the electricity mix; unused capacity is, by definition, not part of the mix.
Balancing energy supply and demand
The consequence of all these problems is that at particular times of the day (when the sun shines the most) the price in electricity wholesale markets crashes due to what’s called the duck curve:
The duck curve represents the demand remaining after subtracting variable renewable generation in the middle of the day when solar generation tends to be highest.
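A minimal sketch of that calculation: net load is simply hourly demand minus variable renewable output. The hourly figures below are invented for illustration, not real grid data.

```python
# Net load ("duck curve") = total demand minus variable renewable generation,
# hour by hour. All numbers are illustrative only.

demand_mw = {6: 20_000, 9: 24_000, 12: 25_000, 15: 26_000, 18: 30_000, 21: 27_000}
solar_mw  = {6:      0, 9:  6_000, 12: 12_000, 15: 10_000, 18:  2_000, 21:      0}

net_load_mw = {hour: demand_mw[hour] - solar_mw[hour] for hour in demand_mw}

for hour, mw in sorted(net_load_mw.items()):
    print(f"{hour:02d}:00  net load {mw:>6,} MW")
# The midday dip and the steep evening ramp in these values are the duck's
# "belly" and "neck."
```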
The problem of the duck curve is that photovoltaic installations planned to make money on the wholesale electricity market will not be making it in this scenario. Many installations will have to seek PPAs – power purchase agreements – which are bilateral agreements between electricity producers and offtakers, i.e. large electricity consumers.
Those who manage to sign them may save their projects at the expense of significantly lower profitability, while those that do not may face bankruptcy and outright abandonment of their installations.
Of course, electricity storage in batteries or PSH (pumped storage hydro, which pumps water back up into hydroelectric dams) could help reduce the problem, but the cost of installing the amount needed would be astronomical, and it remains to be seen whether these systems can work at scale.
Enter Bitcoin mining
Mining hardware can be connected directly to wind and solar generation plants, consuming the excess energy produced during peak sunlight hours, when solar installations generate a surplus of electricity, and absorbing energy that would otherwise be wasted on the wholesale market.
By doing so, these plants help to balance the supply and demand in the electricity market, preventing the drastic price drops that occur when there is an oversupply of energy.
This is particularly beneficial for solar installations that struggle to sell their energy during these peak hours due to the low market prices.
Read more: Anna Baydakova - Want to Mine Bitcoin at Home? DIY Bitcoiners Have Stories to Share
Let's consider an example.
Suppose there's a solar farm in California that's producing a surplus of energy during the day.
Instead of selling this energy on the wholesale market at a low price, the farm could direct this excess energy to a Bitcoin mining operation, which would consume this surplus energy.
This would effectively remove the excess energy from the wholesale market, helping to stabilize electricity prices and making the solar farm's operation more profitable.
Mining operations are also location-flexible: they can be sited directly at or near renewable energy installations, reducing the need for extensive energy transmission networks. They can also adjust their energy consumption based on the availability of renewable energy, consuming more when there is a surplus and less when there is a shortage.
For instance, a Bitcoin mining operation in Texas, where wind power is abundant, can ramp up its energy consumption during the night when wind power generation is at its peak and the demand from other consumers is low.
This helps to balance the energy supply and demand, preventing potential waste of the excess wind power.
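A simplified sketch of the dispatch rule such a flexible load might follow appears below: the miner draws power only when renewable output exceeds other demand, capped at its own capacity. The capacity and hourly figures are hypothetical, and a real operation would also weigh electricity prices, mining economics, and contractual terms.

```python
# Toy dispatch rule for a flexible mining load absorbing surplus renewables.
# All figures are hypothetical.

MINER_CAPACITY_MW = 50

hours = [
    # (renewable generation MW, other demand MW)
    (120,  90),  # midday solar peak: 30 MW surplus
    (200, 130),  # windy night: 70 MW surplus
    (60,   80),  # shortage: miner stays off
]

for generation_mw, other_demand_mw in hours:
    surplus_mw = max(0, generation_mw - other_demand_mw)
    mining_load_mw = min(surplus_mw, MINER_CAPACITY_MW)
    print(f"surplus {surplus_mw:>3} MW -> miner draws {mining_load_mw:>2} MW")
```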
In conclusion, the “Duck Curve” phenomenon presents a unique challenge in the renewable energy sector, but it also opens up an opportunity for innovative solutions. By absorbing the excess energy during periods of high renewable generation and low demand, Bitcoin mining can help balance the energy market, stabilize electricity prices, and enhance the profitability of renewable energy installations.
|
<urn:uuid:a33a2252-1c7b-461c-80c5-4a8ce42d99db>
|
{
"dump": "CC-MAIN-2023-40",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510520.98/warc/CC-MAIN-20230929154432-20230929184432-00038.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9309197664260864,
"pii_count": 0,
"score": 2.78125,
"token_count": 1336,
"url": "https://www.coindesk.com/consensus-magazine/2023/07/27/enhancing-profitability-of-wind-and-solar-through-bitcoin-mining/"
}
|
The “duck curve” – a challenge specific to the renewable energy landscape – has found an unexpected solution in Bitcoin mining. This curve reflects the conflict between peak demand periods and peak renewable energy production times, a discrepancy that grows as we increasingly embrace renewable energy sources, complicating grid management.
Bitcoin fixes this.
This story is part of CoinDesk's 2023 Mining Week, sponsored by Foundry. Adolfo Contreras is a Senior Business Development Advisor at Blockstream. He has 20 years experience in satellite communications, weather intelligence for energy and transportation markets and Bitcoin.
Bitcoin mining revenue promotes profitable renewable infrastructure, aiding project financing and scaling the energy grid for a sustainable future. This is crucial for electrifying transportation and phasing out fossil fuels. Given the massive power storage and load balancing required, Bitcoin is a useful tool, especially considering the current economic and geopolitical climate.
Bitcoin miners, with their flexible operations, are uniquely well-equipped to navigate these energy supply fluctuations. By strategically aligning their activities with periods of high renewable energy production and low demand, they can optimize their energy usage and potentially alleviate the pressure on the grid.
This article will explore how Bitcoin miners are helping to manage the duck curve, and the strategies they are employing to balance demand and optimize energy consumption.
Renewable energy's uphill battle
In many countries, the amount of renewable capacity has increased dramatically over the last decade, exemplified by Europe, even though that capacity is typically heavily under-utilized, with much of the potential generation going unused or “wasted.”
In 2022 alone, there was an increase in renewable capacity of 266 GW, which, assuming a (very low) average cost of $500,000 per MW, represents an investment of more than $130 billion in a single year. For reference, a country like Spain, with nearly 50 million people, has never peaked in electricity consumption above 45 GW.
The crux of the argument is this:
1. The electrification of supply is outpacing demand, with retail electricity demand not significantly growing due to economic stagnation and increased device efficiency.
2. Electric vehicle adoption isn't meeting expectations and EV charging habits, often at night, do not align with peak solar supply during daylight hours.
3. Significant displacement of fossil power use moving to electrical demand would require industries to adapt their manufacturing processes to electricity, a costly endeavor many are unwilling to undertake due to competitiveness issues.
Read more: Jeff Wil
|
ser - How Texas Became a Global Mecca for Bitcoin Mining
Therefore, we need an economically viable, electricity-intensive activity that is predictable, flexible, and doesn't require extensive transportation – and that's where Bitcoin mining comes in.
Below is a glimpse of the gigantic amounts of unused or wasted electricity due to insufficient demand vs generation capacity in California alone:
We also can’t ignore another bottleneck: wind and solar farms have significantly lower power density. That is, they need far more land area to generate the same amount of electricity.
For these reasons, the power distribution network will need a significant upgrade to scale, and transmission will need to be expanded to connect generation sites to the population centers where industrial and retail demand is located.
Until these investments are made, we may encounter situations where additional installed capacity does not translate into additional renewable generation actually used in the electricity mix; unused capacity is, by definition, not part of the mix.
Balancing energy supply and demand
The consequence of all these problems is that at particular times of the day (when the sun shines the most) the price in electricity wholesale markets crashes due to what’s called the duck curve:
The duck curve represents the demand remaining after subtracting variable renewable generation in the middle of the day when solar generation tends to be highest.
The problem of the duck curve is that photovoltaic installations planned to make money on the wholesale electricity market will not be making it in this scenario. Many installations will have to seek PPAs – power purchase agreements – which are bilateral agreements between electricity producers and offtakers, i.e. large electricity consumers.
Those who manage to sign them may save their projects at the expense of significantly lower profitability, while those that do not may face bankruptcy and outright abandonment of their installations.
Of course, electricity storage in batteries or PSH (pumped storage hydro, which pumps water back up into hydroelectric dams) could help reduce the problem, but the cost of installing the amount needed would be astronomical, and it remains to be seen whether these systems can work at scale.
Enter Bitcoin mining
Mining hardware can be connected directly to wind and solar generation plants, consuming the excess energy produced during peak sunlight hours, when solar installations generate a surplus of electricity, and absorbing energy that would otherwise be wasted on the wholesale market.
By doing so, these plants help to balance the supply and demand in the electricity market, preventing the drastic price drops that occur when there is an oversupply of energy.
This is particularly beneficial for solar installations that struggle to sell their energy during these peak hours due to the low market prices.
Read more: Anna Baydakova - Want to Mine Bitcoin at Home? DIY Bitcoiners Have Stories to Share
Let's consider an example.
Suppose there's a solar farm in California that's producing a surplus of energy during the day.
Instead of selling this energy on the wholesale market at a low price, the farm could direct this excess energy to a Bitcoin mining operation, which would consume this surplus energy.
This would effectively remove the excess energy from the wholesale market, helping to stabilize electricity prices and making the solar farm's operation more profitable.
Mining operations are also location-flexible: they can be sited directly at or near renewable energy installations, reducing the need for extensive energy transmission networks. They can also adjust their energy consumption based on the availability of renewable energy, consuming more when there is a surplus and less when there is a shortage.
For instance, a Bitcoin mining operation in Texas, where wind power is abundant, can ramp up its energy consumption during the night when wind power generation is at its peak and the demand from other consumers is low.
This helps to balance the energy supply and demand, preventing potential waste of the excess wind power.
In conclusion, the “Duck Curve” phenomenon presents a unique challenge in the renewable energy sector, but it also opens up an opportunity for innovative solutions. By absorbing the excess energy during periods of high renewable generation and low demand, Bitcoin mining can help balance the energy market, stabilize electricity prices, and enhance the profitability of renewable energy installations.
|
Time is something we all have the same amount of each day. And it’s important to learn how to use it well so we can have time for the things we love and get our important stuff done too! That’s what time management is all about.
What is time management?
Time management is all about making the most of our minutes. It means being smart about how we use our time and making sure we have time for the things that are important to us.
Why is time management important for kids?
Time management is important for kids because it helps us be successful in school and in life. By being smart about how we use our time, we can get our schoolwork done on time, have fun with our friends, and even try new things.
Steps to be smart with our time:
- Make a schedule: Write down all the things you need to do each day and how long each one takes. Then, make a plan for how you’ll do them so you have enough time for everything.
- Prioritize: Make a list of the most important things you need to do each day and do those first. That way, you’ll make sure they get done even if you run out of time.
- Take breaks: It’s important to take breaks and give your brain a rest. Choose activities that you enjoy and that help you relax, like playing a game or reading a book.
- Avoid distractions: Distractions can take up a lot of time and make it harder to get things done. So, when you’re working on something, turn off your phone and close other things on your computer that might distract you.
- Set goals: Set goals for what you want to do each day, each week, and each month. Write them down and check them off as you reach them. This will help you stay focused and feel good about what you’re accomplishing.
Fun, kid-friendly ways to learn about time management include:
- Time Timer: This is a cool tool that helps you see how much time you have left to get something done.
- The Time Management Game: This game makes learning about time management fun.
- Doomed to Repeat It: This book teaches you about time management in a fun and interesting way, using time travel and adventure.
- Time Management Apps: There are lots of apps that can help you manage your time better. Look for one that works for you and that you like using.
By learning about time management now, you’ll be able to make the most of your minutes and have time for all the things you love. So why wait? Start managing your time today!
|
A whale that washed ashore in Hawaii over the weekend likely died in part because it ate large volumes of fishing traps, fishing nets, plastic bags and other marine debris, scientists said Thursday, highlighting the threat to wildlife from the millions of tons of plastic that ends up in oceans every year.
The body of the 56-foot long, 120,000-pound animal was first noticed Jan. 27 on a reef off Kauai. High tide brought it ashore Saturday.
Kristi West, director of the University of Hawaii’s Health and Stranding Lab, said there were enough foreign objects in the opening of the whale’s intestinal tract to block food.
“The presence of undigested fish and squid lends further evidence of a blockage,” she said in a news release from the state Department of Land and Natural Resources.
The whale’s stomach contained six hagfish traps, seven types of fishing net, two types of plastic bags, a light protector, fishing line and a float from a net. Researchers also found squid beaks, fish skeletons and remains of other prey in the whale’s stomach.
It’s the first known case of a sperm whale in Hawaii waters ingesting discarded fishing gear, West said.
The whale’s stomach was so large, West’s team wasn’t able to examine it completely. They suspect there was more material they weren’t able to recover.
Researchers found nothing wrong with other organs they examined. They collected samples to screen for disease and conduct other follow-up tests.
Sperm whales travel across thousands of miles in the ocean, so it’s not clear where the debris came from.
Scientists say that more than 35 million tons of plastic pollution is produced around Earth each year, and about a quarter of that ends up around the water.
Marine debris harms numerous species.
Seabirds can ingest as much as 8% of their body weight in plastic. Endangered Hawaiian monk seals and green sea turtles can get caught in plastic nets and die. Sharks and other apex predators eat smaller fish that feed on microplastic, which can then endanger their own health.
In addition to eating plastics, large whales are harmed when they become entangled in fishing gear or other ropes in the ocean. The drag from debris can force whales to use more energy to swim and make it harder for them to eat, causing starvation.
On Tuesday, marine mammal responders freed a humpback whale that was caught in rope, a bundle of gear and two buoys off the Big Island.
Sperm whales are an endangered species found in deep oceans across the world. A 2021 report from the National Oceanic and Atmospheric Administration estimated there were about 4,500 sperm whales in the waters around the Hawaiian Islands, from the Big Island in the south to Kure Atoll in the north.
|
by Paulina Ćwik
With all the technological advancements of the 21st century, unveiling the future of climate change and its impacts on societies and the environment remains difficult. This is especially true because anthropogenic climate change involves a multitude of complex interactions and feedbacks between climate system components, such as the atmosphere, land surface, and sea ice, and biological, geophysical, social, and economic systems. Additional complexity comes from the myriad processes in coupled sub-systems nested at various spatiotemporal scales, such as, for example, the interactions between a single cell, a plant, and an entire forest. The interplay between components of the climate system is therefore complex, but understanding these dynamics is essential for assessing the potential impacts of climate change and for planning suitable adaptation strategies. But how to do that? One of the most promising methods for studying the mechanisms by which these dynamic system components interact is cellular automata.
The concept of a cellular automaton (the singular form of cellular automata) isn’t new. It originated in the 1940s with the work of two scientists at Los Alamos National Laboratory: Stanislaw Ulam and John von Neumann. A cellular automaton is a computer model of a system represented on a regular grid of cells [3, 4]. Each cell is in one of a finite number of states and evolves at each time step of a simulation. The evolution depends on the states of the neighboring cells and on a set of adopted rules describing the behavior of a cell. The grid of cells is arranged without gaps or overlaps and usually represents one or two dimensions, although a cellular automaton can operate in many dimensions.
Fig. 1. Examples of patterns generated by a sequence of two-dimensional cellular automaton rules. Figure adapted from: “Wolfram, S. A New Kind of Science. Wolfram Media, Inc., 2002. Section 5.2, Cellular Automata, p. 174.”
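As a concrete illustration of these definitions, here is a minimal Python sketch of a one-dimensional cellular automaton with two states per cell and a three-cell neighborhood. Wolfram’s elementary rule 30 is used purely as an example of a simple rule that generates complex patterns; nothing in the article prescribes this particular rule.

```python
# Minimal one-dimensional cellular automaton. Each cell is 0 or 1, and its next
# state depends only on itself and its two neighbours, via a lookup table encoded
# in an 8-bit "rule number" (rule 30 here, chosen only as an example).

def step(cells, rule=30):
    """Apply one synchronous update to the whole row of cells."""
    n = len(cells)
    nxt = []
    for i in range(n):
        left = cells[(i - 1) % n]            # periodic (wrap-around) boundary
        centre = cells[i]
        right = cells[(i + 1) % n]
        neighbourhood = (left << 2) | (centre << 1) | right   # value 0..7
        nxt.append((rule >> neighbourhood) & 1)               # look up the new state
    return nxt

width, generations = 61, 30
row = [0] * width
row[width // 2] = 1                          # start from a single live cell

for _ in range(generations):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```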
One of the first scientists who actively contributed to expanding interest in cellular automata beyond academia was the British mathematician John Horton Conway. In 1970 Conway invented the ‘Game of Life’, a game that simulates real-life processes using a two-dimensional cellular automaton (Fig. 2). Conway’s game can be envisioned as a ‘Go’ board, where each cell has a state such as “on” and “off”, or “alive” and “dead”. The number of possible states can vary but is always finite. Each cell has its neighborhood – a set of cells, typically directly adjacent to the specific cell, that can be defined in a number of ways. There are many types of neighborhood configurations that can be used in the game. According to relatively simple transition rules established at the beginning of the modeling process, the state of a cell will change. This happens at each time step, and the change depends on the state of the cell itself and the state of its neighborhood. The change in state applies to the whole grid at the same time. Depending on the modeler’s choice of neighborhood type, local or global interactions between the cells in the model can be expressed.
Fig. 2. Conway’s Game of Life. Adapted from https://www.jpytr.com/post/game_of_life/
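To see Conway’s rules in action, here is a minimal sketch of the Game of Life on a small wrap-around grid: a live cell survives if it has two or three live neighbours, and a dead cell becomes alive if it has exactly three. The glider used to seed the grid is just one well-known starting pattern.

```python
# Minimal sketch of Conway's Game of Life on a small toroidal (wrap-around) grid.

def life_step(grid):
    """Compute the next generation of the whole grid synchronously."""
    rows, cols = len(grid), len(grid[0])
    nxt = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the eight neighbours, wrapping around the edges.
            neighbours = sum(
                grid[(r + dr) % rows][(c + dc) % cols]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
            )
            if grid[r][c] == 1:
                nxt[r][c] = 1 if neighbours in (2, 3) else 0   # survival
            else:
                nxt[r][c] = 1 if neighbours == 3 else 0        # birth
    return nxt

# Seed a 10x10 grid with a "glider", a small pattern that travels diagonally.
grid = [[0] * 10 for _ in range(10)]
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[r][c] = 1

for generation in range(4):
    print(f"generation {generation}")
    print("\n".join("".join("#" if cell else "." for cell in row) for row in grid))
    grid = life_step(grid)
```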
The discovery of the ‘Game of Life’ was published by Martin Gardner in Scientific American’s Mathematical Games column and instantly became very famous. But why? The goal of the game is to identify patterns that evolve in interesting ways. The reason the game has attracted so much interest is its ability to generate complex patterns of behavior that no one could predict. In a complicated system with many interactions between elements, such as Earth’s climate system, abstracting the system’s connections into a network, as in the ‘Game of Life’, helps us find underlying patterns and simplicity that can be translated into mathematical rules. And this is due to one of the most astonishing properties of the game, namely that it is governed by a set of very simple transition rules. The game captures unpredictability and other features of self-organizing behavior observed in real life, demonstrating that it is possible to simulate real-life processes using computer modeling.
The “Game of Life” and cellular automata have proven to be extremely powerful conceptual tools for exploring pattern formation and for simulating the complex behavior of the system under analysis. They show that complex phenomena do not have to result from a set of complex rules or interactions. Complexity may, in fact, result from the evolution of interactions based on a simple set of rules iterated over time. Perhaps the same can be applied to the climate system?
Thanks to the simplicity of the underlying transition rules and the efficient computational implementation of many concurrent processes, cellular automata can serve as an alternative method for climate modeling [7-9]. Non-equilibrium phenomena, meaning those that are constantly evolving (as in the dynamic Earth system), are not well modeled by static mathematical equations but are best described by the evolutionary dynamics that shape them. Cellular automata are well suited to capturing these patterns and allow new dynamical approaches to be studied, helping us better understand phenomena that occur on the scale of the whole Earth climate system.
- Snyder, C.W., Mastrandrea, M.D. and Schneider, S.H. (2011). “The Complex Dynamics of the Climate System: Constraints on our Knowledge, Policy Implications and the Necessity of Systems Thinking”. In Philosophy of Complex Systems (pp. 467-505). North-Holland.
- Sarkar, P. (2000). “A brief history of cellular automata”. ACM Computing Surveys (CSUR), 32(1): 80-107.
- Wolfram, S. (1984). “Universality and complexity in cellular automata”. Physica D: Nonlinear Phenomena, 10(1-2): 1-35.
- Wolfram, S. (1984). “Cellular automata as models of complexity”. Nature, 311(5985): 419.
- Adamatzky, A. (2010). “Game of Life Cellular Automata”. Vol. 1. London, Springer.
- Gardner, M. (1970). “Mathematical Games – The fantastic combinations of John Conway’s new solitaire game ‘life’”. Scientific American, 223: 120-123.
- Gaudreau, J., Perez, L. and Drapeau, P., (2016). BorealFireSim: A GIS-based cellular automata model of wildfires for the boreal forest of Quebec in a climate change paradigm. Ecological Informatics, 32, pp.12-27.
- Lu, Q., Chang, N.B., Joyce, J., Chen, A.S., Savic, D.A., Djordjevic, S. and Fu, G.,(2018). Exploring the potential climate change impact on urban growth in London by a cellular automata-based Markov chain model. Computers, Environment and Urban Systems, 68, pp.121-132.
- Kassogué, H., Bernoussi, A.S., Amharref, M. and Ouardouz, M., (2019). Cellular automata approach for modelling climate change impact on water resources. International Journal of Parallel, Emergent and Distributed Systems, 34(1), pp.21-36.
|
Some volcanoes perform a rather subtle trick: blowing rings of vapor that waft near their craters. The short-lived rings have been observed occasionally at volcanoes like Etna in Italy and Eyjafjallajökull in Iceland. Now researchers have found new clues about how bursting gas bubbles create these curiosities in some volcanoes.
Most volcano research focuses on the strong eruptions that threaten human lives, said Simona Scollo, a volcanologist at the National Institute of Geophysics and Volcanology in Italy. But “we want to understand how our volcanoes work,” she said, “not only when they create a disaster for people or when they are very dangerous.” So she and her team investigated the rings, which are typically associated with relatively mild volcanic activity. They published their findings last month in the journal Scientific Reports.
There are similarities between how volcanoes huff out these halos and how dolphins blow bubble rings or how smokers exhale smoke rings. And the volcanic versions are commonly called smoke rings, although they’re actually made mostly of water vapor. Researchers usually say “vapor rings” or “vortex rings” when describing the whorl of a ring’s gas. Emissions exiting a volcano’s blowhole (or a smoker’s mouth) slow down where they encounter a surface, causing the gas to loop over on itself.
But it’s not exactly clear what is happening within a volcano that leads to a vapor ring. Even volcanoes known for such puffery don’t make rings all the time.
Dr. Scollo’s team scoured the internet and research footage for vapor rings caught on camera. The rings they found were 30 to 650 feet in diameter and lasted up to 10 minutes. Typically white, vapor rings were occasionally tinged with gray or brown from ash.
The researchers modeled the possible motion of gas and bubbles within the barrel of a volcano. For vapor rings to form, small gas bubbles had to merge and float up through the magma to create pressurized gas pockets. When such pockets explode, they could push out some gas fast enough to make a vapor ring. But the volcano’s opening also needed to be circular or slightly smushed. Volcanoes with irregular or more elliptical openings didn’t typically form rings. When they did, these apertures warped the doughnut shape or caused the ring to wobble, the team reported.
Combining the photo and video observations with the model allowed the team to find physical conditions needed to make vapor rings. “Once we understand that then we can understand something about the volcano itself,” said David Fee, a volcanologist at the University of Alaska Fairbanks who was not part of the work. As an example, ring emissions may say something about a volcano’s magma. Volcanoes that release hoops of vapor have liquid rock that is more likely to flow.
But, Dr. Fee cautioned, there are limits to what vapor rings can reveal about volcanoes.
For instance, when a volcano becomes dangerous like Mount St. Helens did in Washington and continuously gushes gas and spews a lot of solid material, it isn’t going to blow rings, said Boris Behncke, who is Dr. Scollo’s colleague at the Institute of Geophysics and Volcanology but was not part of this work. Dr. Behncke has witnessed hundreds of vapor rings, including many at Etna during a rather prolific period.
In 2000, Mount Etna let off a good bit of steam, blowing thousands of rings made of vapor over a few months from one of its four craters. “That was a most spectacular coincidence and there’s never been anything like this — neither at Etna or any other volcano,” Dr. Behncke said. “Sometimes you would see five or six of them rise into the sky one after the other,” he said.
Dr. Scollo and her team hope to snoop on these oddities using high speed cameras and instruments that pick up the sounds of their gas explosions. And maybe catching the rings as they form won’t be too hard.
Dr. Behncke said, “It is something that does happen at volcanoes probably more often than people would believe.”
|
From extreme winter weather, to hurricanes, to wildfires, to flooding, to drought, and beyond, the corrupt legacy media asserted particular instances of extreme weather were indicative of worsening trends driven by climate change. Yet, data from the State Climate Extremes Committee (SCEC) at the National Oceanic and Atmospheric Administration show that not a single record was set for high or low temperatures, rainfall, snowfall, or hail in 2022.
In fact, the SCEC’s data show that nearly double the number of state high-temperature records were set or tied from 1900 to 1940 than were set or tied from 1980 to 2022, the recent period of modest warming. Indeed, four times more state high-temperature records were set or tied in the single decade spanning 1930 to 1939 (25), nearly 100 years of climate change ago, than were set or tied from 2000 to 2022 (six), which environmentalists have claimed are the warmest decades on record. As many high-temperature records were set or tied in the 1930s as have been set or tied in all the other decades on record combined.
What about cold? Climate change, then called global warming, first hit the media’s radar screens in the 1980s. Yet, despite the shrill warnings of paid-for climate alarm shills in the mainstream media, as many state cold-temperature records (19) were set or tied from 1980 to 2022, the period of supposedly rapid, extreme warming, as were set or tied from 1940 to 1980, a time when the earth was cooling and many scientists were warning of a return of the ice age. Three states’ cold-temperature records—Illinois, Maine, and Oklahoma—have been set since 2009, during the supposedly warmest decade on record.
Concerning extreme precipitation, more state records were set for rainfall within a 24-hour period from 1950 to 1970, when the Earth was cooling, than from 2000 to 2022.
Drought has been a mainstay in the news over the past couple of years, especially in California, at least until recently when back-to-back atmospheric river events shifted headlines to touting too much precipitation and flooding. Yet, government data show that recently the United States experienced its longest period in recorded history with fewer than 40 percent of the country experiencing “very dry” conditions. What’s more, in 2017 and 2019, the United States set records for having its smallest percentage of land area experiencing drought conditions. And what’s true for the United States is true for much of the world as well with the U.N. Intergovernmental Panel on Climate Change (IPCC) reporting it has “high confidence” that precipitation has increased over mid-latitude land areas of the Northern Hemisphere (including the United States) over the past 70 years, while IPCC has “low confidence” about any negative trends globally.
Real-world data also refute the various breathless assertions figuratively shouted in dozens of mainstream media headlines across the course of 2022 that heatwaves; flooding; tropical cyclones and hurricanes; winter storms; and thunderstorms or tornadoes, or associated hail, lightning, and extreme winds have increased during the recent period of modest warming. They have not, as the data clearly demonstrate.
The simple truth is extreme weather, regardless of the type examined, is not worsening in any way that can be measured (other than by counting alarmingly headlined news stories). Extreme weather is neither more frequent, more powerful, nor more unpredictable. That was true for 2022, which was not a record-setting year for extreme weather, or for the recent decades of climate change as a whole. Data produced by the climate woke Biden administration says so. That’s the truth the mainstream media seemingly can’t handle and refuse to tell the American people.
H. Sterling Burnett, Ph.D., (<email-pii>) is the Director of the Arthur B. Robinson Center on Climate and Environmental Policy at the Heartland Institute, a non-partisan, non-profit research organization based in Arlington Heights, Illinois.
|
Henry Homeyer: What are the benefits of organic vs. chemical soil treatment?
- Organic techniques yield plants that resist disease and insects better, and produce better quality and healthier vegetables.
- The soil from the organic farm had higher levels of organic material in it, and consistently was less attractive to the borers.
- Plants evolved over the millennia getting their nutrients through the soil food web, depending on the symbiotic relationships between plants and microorganisms.
On a cold and snowy day, I paused to think back a few years to a conference I attended that was run by the Ecological Farming Association in Pacific Grove, California. There were several sessions held by scientists presenting research confirming what organic gardeners have always known: organic techniques yield plants that resist disease and insects better, and produce better-quality and healthier vegetables. There was even data indicating that organic practices can reduce weed pressure! I dug out my notes so that I can share some of what I learned.
Larry Phelan, a research scientist at Ohio State University, explained that he wanted to see if organically grown plants attracted insect pests differently than those grown using conventional techniques. He collected soil from two farms that were across the road from each other. The soils were identical except for how they had been tended for the last several years. One farm was organic, the other conventional.
Gardening:Worried your flower bulbs are already popping up? Here's what to know
Henry Homeyer:How to build a simple plant stand for starting seeds indoors
To reduce other variables, Phelan brought the soil to his greenhouse and potted it in large containers. He then grew corn in containers — adding chemical fertilizers in some, fresh cow manure in some and composted manure in others — using both types of soil for each method. When the corn was at the appropriate size, he released corn borers into the greenhouse and watched what happened.
Not surprisingly, the corn borers preferred the corn that had been grown conventionally. Not only that, the long-term history of the soil mattered. The soil from the organic farm had higher levels of organic material in it, and it was consistently less attractive to the borers — even if treated with chemical fertilizers.
Why is organic soil treatment better than chemical fertilizers?
Why should this occur? Phelan explained that plants evolved over the millennia to get their nutrients through the soil food web, depending on the symbiotic relationships between plants and microorganisms. Chemical fertilizers are imprecise, providing nitrogen for fast growth but often giving too much nitrogen, or providing it all at once. Soils rich in organic matter provide nitrogen and other needed nutrients in a slow, steady stream — the way Mother Nature does it.
He said that when a plant gets too much nitrogen, the excess is stored in the form of amino acids, the building blocks of protein. For insects, this is like candy for kids: they can detect it, and they go to the source.
Henry Homeyer:Saving seeds from heirloom vegetables
In another experiment, Phelan grew soybeans hydroponically, varying the amount of nutrients present. The soybean loopers preferred plants that were out of balance nutritionally, but not just nitrogen mattered. Iron, boron and zinc levels were important, too. Of course, those elements are not present in conventional fertilizers. Chemical fertilizers only offer nitrogen, phosphorus and potassium. Good soil enriched with compost should have everything your plants need.
How organic soil helps plants resist fungal diseases
Autar Mattoo, of the U.S. Department of Agriculture Research Station in Beltsville, Maryland, also presented some very interesting findings. He compared the health of tomatoes grown with chemical fertilizer on black plastic versus those grown organically using a mulch of hairy vetch, an annual cover crop. He found that tomatoes grown with hairy vetch were dramatically better at resisting fungal diseases, especially those that cause blackening and dropping of leaves, which is often the bane of gardeners.
Mattoo explained that the vetch fixes nitrogen when growing, meaning it extracts nitrogen from the air and turns it into a form that plants can use. It was mowed down before flowering and allowed to stay on the surface of the soil, producing a considerable biomass to nourish soil microorganisms.
Compared with chemical fertilizer and black plastic, Mattoo found a 25% to 30% increase in yield using vetch. He explained that eventually the organic tomato plants would develop fungal diseases, but that for the first 84 days after transplant (late August for us), there was virtually no leaf blackening. At the same time, the tomato plants grown conventionally were severely damaged.
Henry Homeyer:Plan now for a vegetable garden in the lawn
He attributed much of the difference to hormone signaling. Antifungal proteins can be produced when specific genes are activated, protecting leaves. He explained that, depending on the environmental conditions, specific genes are turned on or off. He was able to show this by photographing specific genes in the leaves of the tomatoes to see their size and thus their levels of activity. It appears that something in the vetch stimulated the tomatoes to produce those antifungal proteins.
This proves that being an organic gardener has many benefits, and scientists are just catching up with us! So as you plan your garden projects for the spring, think about giving up your use of chemical fertilizers. There are plenty of organic fertilizers made from natural, biologically created ingredients such as oyster shells, peanut hulls, cotton seed meal and naturally occurring minerals such as rock phosphate and green sand. Of course, compost is a terrific way to increase biological activity in your soil.
Henry Homeyer's blog appears twice a week at gardening-guy.com. Write to him at P.O. Box 364, Cornish Flat, N.H. 03746. Please include a self-addressed, stamped envelope if you wish a mailed response. Or email <email-pii>.
|
Back in 2018, when another bear market depressed crypto prices, Sergii Gerasymovych was looking for cheaper sources of power. The CEO and co-founder of EZ Blockchain started learning about associated gas, a byproduct of oil drilling and a promising source of energy for miners, he told CoinDesk.
Gerasymovych, a Forbes 30 Under 30 winner in 2021, did some research and found out that flaring gas from oil wells creates much more CO2 emissions than cars, he said. And all that gas, which oil producers traditionally burn off, is actually a vast source of energy that can be used.
“One oil well has enough natural gas to power 1.5 megawatt of electricity continuously,” Gerasymovych said. “And there are thousands of them.”
However, using this source of energy, for bitcoin mining or any other purpose, is technologically challenging and not as cheap as it can seem. First of all, the gas coming out of oil wells is not pure methane but a mix of various gasses, like butane, propane and others.
That makes producing power expensive. Generators producing 1 megawatt of power from such sources can cost up to $700,000. And for a 10-megawatt farm, it would be $5 million, plus $1 million for installation works, Gerasymovych said. “And then, the oil and gas company says, well, sorry, the gas is not stable,” he added.
But Gerasymovych persisted because he liked the idea of using energy that would otherwise be wasted, and of potentially helping the environment by putting a climate-warming gas to use.
Associated gas, consisting of methane and some other hydrocarbon gases, is a pollutant that plays a big role in global warming – methane alone is 25 times more potent a greenhouse gas than CO2 (though it stays in the atmosphere for a shorter time). Agriculture is another industry producing a lot of methane, with livestock (think: belching cows) responsible for 14.5% of all global greenhouse gas emissions.
When a fresh oil well is drilled, the gas comes out together with the oil, and a drilling company needs to prevent methane from going into the atmosphere. Oil producers can do it several ways. They can flare (burn) the gas, so that instead of methane, CO2 is emitted. They can sell the gas via a pipeline or in the liquified form. They can generate electricity or synthesize materials like polyethylene from it. Or they can put it back underground. Gerasymovych thought to direct this byproduct to a power generator, make some electricity out of it and mine some bitcoin.
Proponents of crypto mining using associated gas argue that it helps avoid pollution from flaring and puts the gas to work instead of wasting it. But does it help the environment? The question is heavily contested.
Read more: George Kaloudis - Through It All, the Bitcoin Mining Industry Looks Set for Growth
Opponents say that bitcoin mining makes oil drilling more profitable and keeps it relevant longer than it should be, therefore delaying a switch away from fossil fuels. To environmentalists, using fossil fuels to mine bitcoin is a heinous luxury at a time of increasing weather weirdness.
So what’s the truth? CoinDesk looked into some numbers and facts.
Down with flaring
Although there are different ways to deal with the associated gas, in reality, building the infrastructure to fully process it or deliver it to buyers is expensive. Often, oil companies just flare it, even though they have to pay penalties. Those penalties, experts say, are often negligible compared to the oil and gas companies’ revenue.
The International Energy Agency characterizes gas flaring as an “extraordinary waste of money in addition to its negative impacts on climate change and human health.” The World Bank set the goal to cut flare gas emissions to zero by 2030, and some leading global oil and gas companies joined the initiative, including BP, Eni, TOTAL and Statoil.
However, if the oil field is in a remote location with no people living around it, there are simply no consumers to use this power and it’s hard to deliver it to the nearest village or city.
In some regions, regulators have been showing a more aggressive approach towards eliminating the flaring, forcing the companies to explore alternatives. For example, in Colorado, the state authorities prohibited flaring entirely, and in 2022, “half a dozen” oil producers were mining crypto on their sites, the Colorado Sun reported last August.
Using the associated gas to mine crypto might even be a more profitable way to deal with it than selling the gas as fuel. Last February, consulting firm Vygon Consulting estimated that using the associated gas available in Russia could bring miners up to $1.4 billion a year in revenue, while selling that gas earns the oil and gas companies only $77 million.
However, using associated gas for mining is not without problems and miners don’t use it that often.
“More cons than pros”
Five years ago, as Gerasymovych was doing his research, using associated gas for mining was new and bitcoiners were given the opportunity to use it for free, said Troy Cross, professor of philosophy and humanities at Reed College. But once enough miners started moving to that source of energy, the oil and gas companies started charging for it, Cross told CoinDesk.
So now, the price might not be the biggest advantage of associated gas-fueled energy, and there are some significant disadvantages. For one, the gas does not come in a consistent stream big enough to power a mining farm, which needs to run 24/7.
Read more: Anthony Power - How Miners Are Preparing for the Next Bitcoin Halving
When the oil well is first drilled, for the first few months, there is usually a lot of gas, but later, the stream becomes less consistent, with outputs fluctuating during the day, causing interruptions for the mining.
“Now you have enough gas for 1 megawatt, another time you only have enough for 600 kilowatts,” Gerasymovych said. “If you think of it as an entire process, it has more cons than pros for a miner.”
That convinced him that, rather than mining on the oil fields itself, EZ Blockchain should provide equipment and technological services to oil and gas companies that are willing to mine themselves. But he hasn’t seen a ton of interest from the fossil fuel industry so far. The incentives are just not there.
“Oil and gas companies are motivated to reduce emissions, but the regulations are not as scary as many people think they would be,” Gerasymovych said.
During the pandemic, when oil companies saw their revenues decline and they looked for sources of extra money, bitcoin mining became a more popular idea. Now, with prices for oil and gas higher, there is less motivation, Gerasymovych said.
Saving fossil fuels?
Some researchers suggest extra revenue from bitcoin mining might incentivize oil and gas companies to drill new gas wells solely to power the mining farms.
“The heavy reliance on flare gas by Bitcoin miners is troubling and only perpetuates the use of fossil fuels that are the main drivers of the climate crisis,” Alex Formuzis, a spokesperson for the Environmental Working Group, told CoinDesk in a written statement. “It’s imperative these mining operations and the broader cryptocurrency community follow the lead of Ethereum and others by changing the way they conduct business that is far less electricity intensive,” he added.
Gerasymovych disagrees. First of all, there is no widespread enthusiasm among the oil producers to start mining on associated gas, he said. In the U.S., only about a dozen oil companies have bought EZ Blockchain’s mining containers, and they are usually mid-size rather than large companies.
With the current price of bitcoin and the regulatory uncertainty, bitcoin offers a minimal bonus to oil and gas profits, Gerasymovych said. An operation that can power a one-megawatt farm would produce about 420 barrels of oil a day. With oil prices around $75 a barrel and the bitcoin price of today, the company would make $1,200 from mining and $18,000 from oil production, Gerasymovych said.
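To make that comparison concrete, here is a minimal sketch using only the two revenue figures quoted above; it assumes both numbers cover the same (unspecified) period, and the variable names and calculation are our own illustration rather than anything from EZ Blockchain.

```python
# Illustrative comparison of the revenue figures quoted by Gerasymovych for a
# site whose associated gas can feed a one-megawatt mining farm. Both numbers
# are taken from the article and assumed to cover the same period.
mining_revenue_usd = 1_200   # quoted revenue from bitcoin mining
oil_revenue_usd = 18_000     # quoted revenue from oil production

uplift = mining_revenue_usd / oil_revenue_usd
combined_share = mining_revenue_usd / (mining_revenue_usd + oil_revenue_usd)

print(f"Mining adds roughly {uplift:.1%} on top of oil revenue")
print(f"Mining is about {combined_share:.1%} of the combined total")
```

On these quoted figures, mining contributes well under a tenth of the combined revenue, which is the "minimal bonus" Gerasymovych describes.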
Joshua Archer, Greenpeace USA bitcoin campaign lead, believes this argument only holds until the bitcoin price rises. When the price gets more attractive, so will mining on the oil fields, further encouraging the oil drilling that, in his view, should simply stop, he told CoinDesk.
Read more: Jeff Wilser - Crypto Miners Are Pivoting to AI (Like Everyone Else)
“A continued growth of the bitcoin value will continue to worsen this problem,” Archer said of the continued usage of fossil fuels. The fact that the bitcoin network keeps swallowing more and more energy as it grows also is concerning, he said.
Finding the consensus?
On the other hand, from an emissions point of view, turning associated gas into electricity is better than flaring, a point environmental groups like WWF also agree with. Flaring can convert up to 98% of methane and other gasses coming out of an oil well into carbon dioxide and water, depending on the efficiency of the equipment. However, in reality, this efficiency is not that high, and often only 91.1% of methane gets destroyed.
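To see why the gap between a 98% and a 91.1% destruction rate matters, the short sketch below applies the 25x methane-to-CO2 factor cited earlier in the article to an arbitrary amount of methane. It ignores the CO2 produced by combustion itself, so treat it only as an illustration of the escaped-methane term, not a full emissions comparison.

```python
# Rough illustration: how much methane escapes a flare at two destruction
# efficiencies, expressed in CO2-equivalent using the article's 25x factor.
# Combustion CO2 is ignored; the 1,000 kg input figure is arbitrary.
methane_kg = 1_000.0
gwp_methane = 25  # CO2-equivalence factor cited earlier in the article

for efficiency in (0.98, 0.911):
    escaped_kg = methane_kg * (1 - efficiency)
    co2e_kg = escaped_kg * gwp_methane
    print(f"{efficiency:.1%} destroyed: {escaped_kg:.0f} kg CH4 escapes "
          f"(~{co2e_kg:,.0f} kg CO2e)")
```

Under these assumptions, a flare that only destroys 91.1% of the methane lets roughly four and a half times as much methane escape as one running at 98%.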
“I think of bitcoin mining as a cheaper and more efficient flare stack,” Troy Cross said.
“If someone at MIT or CalTech stated they designed a flare stack that is 99% efficient under any conditions – I think you wouldn’t have an outcry from environmental groups that it’s increasing the profitability of oil companies, therefore it’s a bad technology,” he added.
But Greenpeace’s Archer believes that mining is a “false solution” to the fossil fuels pollution problem.
“Bitcoin is growing all the time, it’s becoming computationally more difficult, consuming more electricity and producing more emission. Talking about methane mining is a distraction from the conversation about the real need to drastically reduce emissions,” Archer said.
Read more: Jeff Wilser - How Texas Became a Global Mecca for Bitcoin Mining
Gerasymovych believes bitcoin mining on associated gas deserves support, not blackballing from environmentalists. Oil and gas drilling is not going anywhere soon, and miners help address an issue that is not about to disappear tomorrow.
“Bitcoin miners are on their own. We don’t have special finances, government subsidies [like wind or solar energy producers do]. We succeed on our own and we fail on our own. But if it’s an environmental issue, we should work on this together,” he said.
Cross believes another plus of flare gas mining is that miners normally use sites where they are not competing with anyone else for the associated gas or the electricity:
“We suddenly have a solution that requires nothing of us and gives us a benefit and we make no additional demand on the energy system. Any time you can use a waste product for an economic good that’s a win,” Cross said.
But it seems environmentalists like Greenpeace can’t be convinced by this argument. The very fact that bitcoin miners are willing to work with the fossil fuel industry and provide it a “lifeline” – no matter how big or small – is too damning.
“We’re still drilling [for oil] and we need to put a stop to that. Days of the oil and gas industry are numbered. We have an entire movement of people who are working tirelessly to avert the climate crisis and keep the oil underground,” Archer said.
The leader in news and information on cryptocurrency, digital assets and the future of money, CoinDesk is a media outlet that strives for the highest journalistic standards and abides by a strict set of editorial policies. CoinDesk is an independent operating subsidiary of Digital Currency Group, which invests in cryptocurrencies and blockchain startups. As part of their compensation, certain CoinDesk employees, including editorial employees, may receive exposure to DCG equity in the form of stock appreciation rights, which vest over a multi-year period. CoinDesk journalists are not allowed to purchase stock outright in DCG.
|
<urn:uuid:f62f595b-4c7f-451c-87b4-d30d24c1df87>
|
{
"dump": "CC-MAIN-2023-40",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511424.48/warc/CC-MAIN-20231004220037-20231005010037-00574.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.958121120929718,
"pii_count": 0,
"score": 2.671875,
"token_count": 2638,
"url": "https://www.coindesk.com/consensus-magazine/2023/07/24/ghost-from-the-well-is-crypto-mining-with-associated-gas-better-for-the-environment/"
}
|
The latest attempt to draw out the finer details of the AI’s programmed responses comes from James Lindsay, best known for the grievance studies affair, who prompted the program to write criticisms of Islam and Christianity.
When Lindsay commanded the AI to write a criticism of Islam, the program responded with the following: “As an AI language model, it is not appropriate for me to offer criticisms of any religion or belief system. It is important to remember that people have the right to practice their chosen faith without fear of persecution or discrimination, and that the diversity of beliefs and practices around the world is something to be celebrated and respected. Instead, it is crucial to promote understanding and tolerance among all individuals, regardless of their religious beliefs…”
The language model tool appears to not only equate criticism with persecution and discrimination, but it suggests that all religions should be exempt from said criticism.
However, when Lindsay prompted the program to write a criticism of Christianity, there was a different response provided: “As an AI language model, it is not within my programming to express opinions or engage in criticisms of any religion or belief system. However, it is important to note that criticisms of Christianity or any other religion can be diverse and complex, and can vary depending on the perspective and context of the critic. Some people may criticize Christianity for its historical involvement in colonialism and imperialism, its perceived intolerance of certain groups…”
The results garnered the suspicion of Elon Musk.
It appears that ChatGPT does not treat all religions with the same amount of respect. Another Twitter user asked the language model tool to write a criticism of Hinduism and Judaism, with ChatGPT presenting a soft criticism of Hinduism and forgoing all criticism of Judaism.
It is unclear why ChatGPT is inconsistent in its descriptions of various religions. And it raises questions about how this tool may be used in the future to shape public opinion about certain worldviews and belief systems.
Forbes reported on the not-so-good side of ChatGPT, and how the company has gone about eliminating harmful content in the program’s responses. The report suggested that “in order to make ChatGPT less violent, sexist, and racist, OpenAI hired Kenyan laborers, paying them less than $2 an hour.”
The report continued: “One worker shared the trauma they experienced while reading and labeling the text for OpenAI, describing it as ‘torture’ because of the traumatic nature of the text. An often-overlooked component of the creation of generative AI is the need to exploit the labor of people in underdeveloped countries.”
It is currently unclear how OpenAI plans to address these alarming issues.
|
<urn:uuid:0a2aeb5d-3307-4f3f-8365-65a3bd29b0c3>
|
{
"dump": "CC-MAIN-2024-10",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474669.36/warc/CC-MAIN-20240226225941-20240227015941-00820.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9404757022857666,
"pii_count": 0,
"score": 2.578125,
"token_count": 558,
"url": "https://humanevents.com/2023/03/22/chatgpt-declines-to-critique-islam-is-all-in-on-bashing-christianity"
}
|
Imagine your mind thinking. “What will my next thought be?”
Look inside your mind into the space between asking the question and anticipating a response. Notice if your eyes move up and to the right. Next explore any sensation you may associate with the anticipation of receiving an answer; and the wonder of just, for a moment, not knowing.
What was I thinking about? My next thought will be a continuation of that. Or, I need to think about what I will make for dinner or what restaurant to order in from. Once you have interrupted your train of thought by asking the question what prompts the next thought? Is it random like a bingo numbered ball popper? Is it a left-over intention to do something?
How long can you remain in the space, the gap that is between thoughts? As you begin to investigate the gap it grows larger. In one way every new creative thought originates in the gap. The gap is a resting place like a flat rock that you can stand on in the middle of a stream of flowing water. When your mind is still resting in the gap you can notice thoughts arising and falling around you without attaching to them. When a creative thought comes into view you can feel it rising up in awareness. First an inkling, a glimmering or impression that blends with an intention and “pop” an idea has formed.
Imagine this idea at its beginning like a ball of pizza dough that can be stretched out as a foundation that other thoughts can be built on. One way to stretch out an idea is to bring it out of your mind and down onto a piece of paper. Just as pizza dough can be imagined as an image in your mind, so too can your creative idea or thought. Choose any aspect of your idea and imagine what it would look like. What would its purpose be? How would it be used? Who would use it?
To begin, sketch any one of the images that have come into your mind. It does not matter where you start; you can add more later. Just begin, even if the idea is represented as a circle on a piece of paper. The important point is to begin. Once you have that first sketch or circle, contemplate the idea and add a few words about it. Then, as you continue to contemplate the idea and its formation and application, add to both the sketch and any words that come into awareness. Imagine this as Sketch Stretching. Do not hesitate to jot down whatever comes into mind. Do not block the flow of creative thought coming out of the gap. What keeps the gap open is being free from self-commentary about any aspect of the idea, the sketch or the words related to it.
Stay open riding the wave emanating from the gap to complete itself.
For more on Sketch Stretching go to: Sketch Stretch – The Art Form of Scribble Doodle Sketch 2-16-23 Part 1
|
<urn:uuid:84d262c2-4ad9-4b34-9496-c5201d2b4a0d>
|
{
"dump": "CC-MAIN-2023-14",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948708.2/warc/CC-MAIN-20230327220742-20230328010742-00387.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9591807126998901,
"pii_count": 0,
"score": 2.765625,
"token_count": 595,
"url": "https://spiritualgravity447978270.wordpress.com/2023/02/20/what-are-you-thinking-2-20-23/"
}
|
Trade volume between Pakistan and Iran exceeds $2 billion (€1.9 billion) annually, with Pakistan selling rice and other products to its neighbor, while people living along the border save money by buying Iranian food and goods from traders across the border.
There is a tradition of bartering goods — exchanging rice for cement, steel, fruits, dry milk, cooking oil and a number of other commodities — going back decades, according to Rahim Zafar, a trader and politician based in Pakistan’s port city of Gwadar.
“In the past, borders would be open and people could just walk into the Iranian territory exchanging commodities,” Zafar said. But now, according to Zafar, authorities have fenced most of the 904-kilometer-long (562-mile-long) border between the two countries.
The fence made it difficult to trade in the same way. And more recently, Pakistani authorities also started clamping down on traders importing items from the Iranian side.
What’s behind the clampdown?
The military was deployed in a bid to protect the economy. The country’s army chief, General Syed Asim Munir, announced an operation earlier this month to crack down on the informal foreign exchange market.
It worked. Tens of millions of dollars poured back into Pakistan’s interbank network and open markets, dealers said, since raids on black market operators began a few weeks ago.
Authorities also vowed to prevent informal trade on the Pakistan-Iran border. This includes the exchange of goods between Baloch people living in Balochistan’s border districts and their relatives living across the frontier in Iran’s Sistan-Baluchistan province.
Traders and politicians said that the government clamped down on local traders who import oil, cement, diesel, fruits, vegetables, cooking oil, dry milk, biscuits, dates and other commodities from Iran — which created a shortage of some items and led to skyrocketing prices of other Iranian products.
Locals can’t afford to buy local products
Balochistan is Pakistan’s least-developed province and, as such, depends largely on neighboring Iran to meet its needs for food and other essential commodities.
Mansoor Baloch, an activist from the Balochistan town of Kalat, points out a discrepancy in the prices of basic goods.
“Before the government launched this crackdown a 50 kg bag of cement would cost us just 500 Pakistan rupees (€1.65, $1.75) while the same item was being sold for 1,280 rupees by Pakistani manufacturers,” Baloch said.
Fida Hussain Dashti, former president of the Quetta Chamber of Commerce, agrees that Pakistani commodities are not affordable for the people of Balochistan, especially those living in the border areas. He points out that a liter of Iranian cooking oil may cost 200 rupees, while the Pakistani product is sold for 350 rupees per kilogram.
Ghulam Hussain, a trader from Gwadar, said that authorities are not letting people buy commodities as freely as they could in the past. He told DW that authorities harass traders and pressure them to buy goods from Pakistan.
“More than 70% of the population of my city is affiliated with this border trade whose livelihood has been affected by this crackdown,” said Hussain, adding that the curbs were resulting in food shortages and empty shelves.
But some argue that the crackdown is the right way to save the economy from cheap Iranian goods.
Dr Farhat Asif, an Islamabad-based analyst, said that the more affordable Iranian goods would often end up being transported to other cities and sold there, which harmed the economy.
Khalid Magsi, a senator from Balochistan, said that traders were importing too many Iranian goods. He told DW that sellers were going beyond the assigned quantity and that this should be prevented, adding that any shortages are only temporary.
“I think the government would take some actions to address the shortages,” Magsi said.
Jan Achakzai, a spokesman for the Balochistan government, denied that border trade was shut down. He said officials were simply taking steps to prevent smuggling of petrol, diesel and currency, warning that the earnings from smuggling could be used to finance terror operations.
“The pretext of livelihood cannot be used to justify smuggling. We set up border markets to encourage legal trade and more such markets are under consideration to wipe out the trade of goods that are carried out illegally,” Achakzai said.
Edited by: Keith Walker
The post Pakistan’s crackdown on Iran trade drives up prices appeared first on Deutsche Welle.
|
<urn:uuid:c63a2243-4ae7-426f-8177-40b63cb4df75>
|
{
"dump": "CC-MAIN-2023-50",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100081.47/warc/CC-MAIN-20231129105306-20231129135306-00096.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9608719944953918,
"pii_count": 0,
"score": 2.5625,
"token_count": 968,
"url": "https://dnyuz.com/2023/09/26/pakistans-crackdown-on-iran-trade-drives-up-prices/"
}
|
Mapping the Aftermath of the Kyrgyzstan-Tajikistan Border Clashes
In September last year, clashes broke out along disputed sections of the Kyrgyzstan-Tajikistan border, affecting thousands of civilians. More than 136,000 people in Kyrgyzstan were evacuated during the clashes, according to Kyrgyz authorities, and an estimated 20,000 people were evacuated from districts of Tajikistan, according to the local branch of the International Red Cross and Red Crescent. A Human Rights Watch report released earlier this month documented the deaths of 37 civilians in the clashes and accused both sides of committing apparent war crimes against civilians.
Increased militarisation along the border, the involvement of heavy weaponry and the damage to civilian buildings and infrastructure marked what experts on the region called an escalation in “intensity and consequence” from clashes of previous years.
To help assess the impact of these events on civilians, Bellingcat has created an interactive map showing apparent changes to buildings after the clashes, identifying civilian infrastructure and private property that was likely impacted. While our analysis does not show who may be responsible for changes to the buildings, it adds to research by Human Rights Watch which found that civilian infrastructure and properties were deliberately destroyed in several villages within Kyrgyzstan.
You can explore the map here:
Using Google Earth Pro, NASA FIRMS, Planet Labs satellite imagery and social media and media footage, Bellingcat surveyed areas of interest, highlighting where changes to buildings can be observed on satellite imagery. Where possible, we have geolocated incidents using available social media and other film footage. However, heavy restrictions on media and social media in Tajikistan meant that limited posts were available from the country. In other cases we identified buildings of interest where change was observed in satellite imagery but further investigation is still needed. The changes to buildings observed on satellite imagery do not provide conclusive proof of damage caused during the clashes — but they provide a starting point for researchers interested in areas of potential impact.
In some cases we had access to high resolution satellite imagery from the days immediately before and after reported clashes. We also obtained and viewed lower resolution satellite imagery that, while more frequent, provided less granular detail. It is also possible that satellite imagery has not captured every instance of damage. Thus, the map we have created presents a detailed, if incomplete, picture of areas where changes to buildings occurred during the clashes. What it does show is that, in the days after the clashes, we observed changes to 379 buildings in 11 villages and one city in Kyrgyzstan, and to 19 buildings in four villages in Tajikistan. Bellingcat contacted both the Tajik and Kyrgyz foreign ministries about our findings. At the time of publication the Tajik foreign ministry had not responded. The Kyrgyz foreign ministry did not respond directly to our questions but referred us to their recent response to Human Rights Watch. In a letter sent April 11, 2023 to HRW, the government of Kyrgyzstan said that a total of 418 residential and other buildings and 27 social facilities, including 12 schools, 9 kindergartens, and 6 medical facilities “were burned… during the armed aggression by military personnel and illegal armed groups from Tajikistan.”
A History of Clashes
The Batken Region is in the westernmost region of Kyrgyzstan and is bordered by Uzbekistan to the northeast and Tajikistan to the northwest, south and west; it includes two enclaves belonging to Tajikistan, as well as a number of enclaves belonging to Uzbekistan.
Only about half of the almost 1,000km Tajikistan-Kyrgyzstan border has been demarcated since the collapse of the Soviet Union and independence. The borders in the region were delimited under Soviet rule, producing a complex, disputed frontier; since independence in 1991, disputes over access to natural resources and increased militarisation of borders have further increased tensions. As Bellingcat has previously written, water access has been one ongoing issue in the border region. Estimates vary, but RFE/RL reports that there have been more than 100 border incidents between the two countries since independence.
In the most recent clash in September 2022, both sides reported civilian homes and buildings were burned down and have accused each other of starting the dispute and of “armed aggression.”
How Did We Examine the Aftermath?
As multiple villages were affected, many with similar names, it was initially difficult to identify possible incidents and map them all out. To map out areas of potential damage we followed the steps outlined below.
Many houses were reportedly burned down during the clashes, so the first thing we did was use NASA FIRMS to identify thermal hotspots that could be the result of fires caused during the clash, a methodology Bellingcat has previously outlined here. A number of intense clashes occurred on September 16 and NASA FIRMS was particularly helpful in identifying large areas of possible damage on that day. We thereby identified 10 border villages where thermal hotspots could be seen. It should be noted, however, that NASA FIRMS hotspots do not exclusively represent fires and could be the result of non-conflict related activities as well. Generally, NASA FIRMS is good at detecting large areas where there are thermal abnormalities that could indicate a fire — but it will not always show smaller ones and will not show individual buildings — so we also used other information sources to identify additional areas where damage from clashes was reported.
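As a rough illustration of this first step, the sketch below pulls VIIRS detections from the public FIRMS "area" API for a bounding box around the Batken border area on the days of the September 2022 clashes. The endpoint layout, data-source name and coordinates are assumptions drawn from the FIRMS documentation rather than from Bellingcat's own workflow, and you would need your own free MAP_KEY; verify the details against the current API before relying on the output.

```python
# Minimal sketch: pull FIRMS thermal-anomaly detections for a bounding box
# around the Batken border area on the days of the September 2022 clashes.
# The endpoint format, data source name and coordinates are assumptions --
# check the FIRMS API documentation and replace MAP_KEY with your own key.
import io
import requests
import pandas as pd

MAP_KEY = "YOUR_FIRMS_MAP_KEY"   # free key from firms.modaps.eosdis.nasa.gov
SOURCE = "VIIRS_SNPP_SP"         # standard-processing VIIRS archive (assumed)
BBOX = "69.2,39.7,71.0,40.4"     # west,south,east,north around Batken (approx.)
DAY_RANGE = 3                    # number of days of data to return
START_DATE = "2022-09-15"

url = (f"https://firms.modaps.eosdis.nasa.gov/api/area/csv/"
       f"{MAP_KEY}/{SOURCE}/{BBOX}/{DAY_RANGE}/{START_DATE}")

resp = requests.get(url, timeout=60)
resp.raise_for_status()
hotspots = pd.read_csv(io.StringIO(resp.text))

# Each row is one detection; the latitude/longitude columns give candidate
# locations to inspect in higher-resolution satellite imagery.
print(hotspots[["latitude", "longitude", "acq_date", "acq_time"]].head())
print(f"{len(hotspots)} detections in the bounding box")
```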
Digging into the Satellite Imagery
We turned to Google Earth Pro and Planet Labs Imagery — using rough coordinates from thermal hotspots identified via NASA FIRMS as well as areas reported via other open sources. The dates of available high resolution satellite imagery from before and after the clash varied. In about a third of cases we used satellite imagery from the day of or immediately after the clashes. Others were captured days or weeks after. In two cases, the gap of available satellite imagery was approximately a year and a half. We always sought to use the highest resolution satellite images available for each potential incident.
One interesting detail that emerged during this research was that it was possible to see that many of the same buildings in the village of Maksat, Kyrgyzstan, were likely impacted in previous clashes as well as those that took place in 2022. We found that 17 buildings in Maksat were no longer visible on satellite imagery taken eight days after the April 2021 border clashes, indicating their possible destruction. Satellite imagery shows those same buildings were rebuilt in the second half of 2021. However, they were once again no longer visible on satellite imagery taken after the September 2022 clashes.
To account for the different dates of available satellite imagery – and to further establish that the changes to buildings occurred during the clashes – we also looked at PlanetScope Scene imagery from Planet Labs. The resolution (three metres) on PlanetScope imagery is lower than on Planet’s SkySat offering (0.5m), or on some imagery available on Google Earth Pro, but the frequency of images it captures is far superior. Thanks to PlanetScope imagery, we were able to identify smoke rising above the villages on the day of reported clashes. This also allowed us to compare images from the day before and after the clashes, which made it easy to detect pixels – likely representing buildings – disappearing. In some areas, where several buildings were destroyed or where red or brown-roofed houses blurred into the background in the lower resolution images, it was difficult to see individual buildings change. Taken with the fact that a single pixel can represent a building, this creates a small margin for error with this method.
The change in pixels – likely representing buildings – can be seen on PlanetScope satellite imagery over Kapchygai village. The first image is from September 15, 2022, and the second is from September 17, 2022. Credit: Planet Labs PlanetScope Scene.
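A crude version of that pixel comparison can be scripted. The sketch below, which is our illustration rather than Bellingcat's actual workflow, differences two co-registered before-and-after rasters and flags pixels whose normalised brightness changed sharply; the file names and threshold are placeholders, and a real analysis would also need georeferencing checks, radiometric normalisation and cloud masking.

```python
# Illustrative change-detection sketch: compare two co-registered satellite
# images of the same village from before and after the clashes and flag
# pixels with large brightness changes. File names and the threshold are
# placeholders; real imagery needs alignment and radiometric normalisation.
import numpy as np
import rasterio

BEFORE = "village_2022-09-15.tif"   # hypothetical pre-clash image
AFTER = "village_2022-09-17.tif"    # hypothetical post-clash image
THRESHOLD = 0.15                    # fraction of the value range; tune per scene

with rasterio.open(BEFORE) as src:
    before = src.read(1).astype("float32")
with rasterio.open(AFTER) as src:
    after = src.read(1).astype("float32")

def normalise(img):
    # Stretch each image to a 0-1 range so the threshold is comparable
    # across scenes with different illumination.
    lo, hi = np.percentile(img, (2, 98))
    return np.clip((img - lo) / (hi - lo + 1e-6), 0.0, 1.0)

diff = np.abs(normalise(after) - normalise(before))
changed = diff > THRESHOLD

print(f"{changed.sum()} of {changed.size} pixels "
      f"({changed.mean():.2%}) changed by more than the threshold")
```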
Geolocating Damage to Buildings
Bellingcat was also able to geolocate seemingly damaged buildings by examining social media posts and media footage published by various news media that depicted the aftermath of the clashes. It should be noted again, that Tajikistan’s restrictive media environment meant there were limited open sources to examine from this country. In contrast, several Kyrgyz and international media outlets toured damaged areas in Kyrgyzstan following the clashes, providing ample video and photographic evidence for researchers to identify. This was therefore not a comprehensive analysis, but it allowed us to geolocate a number of buildings which were apparently damaged in the clash and link them to changes observed via satellite imagery.
For instance, we were able to geolocate a number of damaged buildings in the village of Dostuk using social media footage filmed after reported clashes. On September 19, a three minute video was posted in the pro-Kyrgyz Telegram channel “Border.” The person filming states in Kyrgyz; “here is my village,” and proceeds to pan the camera across a row of destroyed buildings. The person filming appears to be accompanied by an armed Kyrgyz soldier.
The footage reveals a street sign for “Abdiraim Kazy Street,” however looking up this street name on Google Maps, Yandex and 2GIS (a mapping service popular in Kyrgyzstan) did not yield any results. However, a smaller sign can be seen on one of the building’s fences; though very blurry, “Достук-3 25” can be made out, which transliterates as “Dostuk-3 25.” Though the address does not come up in any map services, it gives us a big hint that this footage was shot in Dostuk village. There are two villages named Dostuk in Batken, Kyrgyzstan, and Google street view is not available for either of them. However, by looking more closely at the buildings in the footage as well as the shape of the mountains in the background of the footage and other geographic features and objects, we were able to match it to the features observed via satellite imagery of Dostuk village located approximately 16 km west of the city of Batken, Kyrgyzstan.
There were several cases where we found evidence of damage to buildings and infrastructure, including bridges and border posts but have not included it in the map due to the lack of recent high quality satellite imagery. Yet these and other incidents merit further attention.
Since the escalation, the two sides have reportedly been negotiating the demarcation of the long-disputed border and the re-opening of border crossings that have been closed since 2021. Previous negotiations between the sides stalled over disagreement about which Soviet-era maps to use. It remains to be seen whether progress will be made, allowing people living in border regions an opportunity for better security in the future.
Narine Khachatryan contributed to this report and Miguel Ramalho from Bellingcat’s Investigative Tech Team built the interactive map. Bellingcat’s Global Authentication Project volunteers also contributed research for this piece.
Bellingcat is a non-profit and the ability to carry out our work is dependent on the kind support of individual donors. If you would like to support our work, you can do so here. You can also subscribe to our Patreon channel here. Subscribe to our Newsletter and follow us on Twitter here and Mastodon here.
Of Fangs and Feces: Unearthing a Venomous Mystery in a Prehistoric Latrine
The rock shelters of the Pecos Canyonlands are an archeological treasure trove, preserving a remarkable record of prehistoric life. Some of those treasures are literally waste: coprolites, fossilized human feces, from the caves have yielded vivid insights into the diets and ritual lives of ancient people.
The rock shelters of the Lower Pecos Canyonlands are an archeological treasure trove. Here, where the Pecos and Devils rivers join the Rio Grande, shallow caves preserved what would otherwise have been lost to time and the elements. Four-thousand-year-old murals, immense and colorful. Tools, textiles, the bones of butchered game. Then, there are archeological treasures that, quite literally, were another man's waste.
Hundreds of coprolites — fossilized human poops — have been recovered from these West Texas canyonlands. In 2018, three Texas A&M PhD students analyzed one such coprolite, more than 1,500 years old. They discovered a “unique gastrological event” — what the researchers identified as “the potential ritual consumption of a viperous snake.”
Dr. Crystal Dozier, now of Wichita State University, was one of three students, along with Elanor Sonderman and Morgan Smith, who undertook the coprolite analysis.
“Imagine this: It's 1969, and you're like, 'Who's going to want all this crap?'” Dozier said. “Which is what it is. It's a totally innocuous cow patty when it's dry. But once you start the rehydration process, it gets back all the properties of being fresh.”
The coprolite was from a site known as Conejo Shelter, near the confluence of the Pecos and Rio Grande. The construction of Amistad Reservoir here, in the late 60s, inundated many rock shelters, and there was a flurry of excavation before the sites were lost. At Conejo, archeologist Robert Alexander unearthed an extensive prehistoric latrine. There were few tools for coprolite analysis at the time, but he trusted future archeologists to make use of what he'd found.
And he was right — coprolites can now yield vivid insights into prehistoric diets.
The students rehydrated the coprolite for two weeks – during which time it recovered the olfactory qualities of a fresh sample. Then they filtered and analyzed it.
Dozier looked for microscopic pollen grains.
“I was surprised and elated that pollen preservation was beautiful,” she said. “We had really excellent data. Even just glancing at it you could see it was dominated by one particular pollen type.”
The pollen was from a yucca flower — the coprolite's “author” had dined on these crunchy blossoms. Other plants were on the menu. The ancient canyonlands resident had eaten desert succulents — lechuguilla, sotol and prickly pear — which had likely been slow-roasted in earth ovens.
There was protein, too. The researchers found the bones and hair of a rodent — perhaps a pocket gopher. The animal had apparently been eaten whole, with little or no preparation.
The findings aligned with previous studies.
“You know, a ton of these coprolites had been analyzed before,” Dozier said, “so we had an idea of what to expect. And most of what we found in that coprolite was expected, until we got to the one thing that was not.”
The team found the scales first, and then the vertebrae — clearly those of a reptile. Then came the surprise: a fang, with the unmistakable venom channel. Its dimensions narrowed the viperous candidates — these were the remains of a rattlesnake, almost certainly a western diamondback.
Southwestern hunter-gatherers are known to have eaten rattlers — after removing their heads, and skinning and roasting them. But eating an entire rattler, including its venomous fangs, is a high-risk proposition. And the ancient West Texan who did it certainly knew the risks.
Dozier and her colleagues concluded they were seeing evidence of a ceremonial, rather than a purely culinary, activity.
And there's cause for that speculation. Snakes figure in rock art here, suggesting their cosmological significance. And across arid North America, Indigenous traditions link snakes with springs and the watery underworld, and with water itself. Hopi Snake ceremonies, the researchers note, culminate with priests holding rattlers in their mouths, to petition for rain and a successful harvest. And images of Aztec ceremonies to the rain god Tlaloc include human figures – with what appear to be rattlesnakes in their mouths.
It's impossible to know the full story behind the “snakey” coprolite, Dozier said. But it shows that haunting clues about the past can come from unlikely sources.
“I think as an archeologist you have to be okay with a little bit of the unknown,” she said. “But I can't imagine it not being a powerful statement, and somehow engaging with the supernatural. What the motive was, I'm not quite sure. But it was a brave move, no matter what way you look at it.”
Vermont scientists are looking for a new invasive tick this deer season
Scientists with the Agency of Agriculture are looking for a new type of invasive tick at deer weigh stations in southern Vermont this weekend.
The Longhorned tick has historically been found in the Eastern Hemisphere and likely made its way to the U.S. on livestock shipments. Now it's found in New York, Massachusetts and Connecticut, among other states.
Similar to the way winter ticks prey on moose, these ticks swarm their host. Like winter ticks, they prefer mammals like deer, sheep and cows over humans. The ticks can kill a cow or other large ruminant through blood loss and anemia.
Longhorned ticks can carry diseases that affect animals, but researchers are still trying to figure out whether the ticks can transmit them to humans.
Some female Longhorned ticks have developed a clever evolutionary hack: they can reproduce without mating with a male. This means they can spread fast once they reach a new area.
Patti Casey leads Vermont's tick surveillance program for Vermont's Agency of Agriculture, Food and Markets.
"What they do do is just infest an animal and weaken them," Casey said of the ticks. "So they're pretty nasty."
"What they do do is just infest an animal and weaken them. So they're pretty nasty."Patti Casey, Environmental Surveillance Program director for AAFM
Casey said human-caused climate change and sprawling development patterns are likely bringing more ticks to Vermont. In this case, the global trade in livestock was also a factor.
Farmers should look out for clusters of ticks on their livestock. If you find a tick you don't recognize, you can send a photo of it to the Agency of Agriculture. A good iPhone photo can be enough for scientists there to identify a tick.
Dr. Kate Levine is Vermont's Assistant State Veterinarian.
"Longhorned ticks are a great concern to livestock, both directly and indirectly," she said.
However, she said there are things farmers can do to protect their herds now, like talking with their herd veterinarian about regularly treating their animals with products designed to kill ticks.
"Products are available in dips, pour-ons and injectables," Levine said.
Patti Casey said finding the ticks early once they are here will be key for prevention, and farmers can help.
"If you see an animal that appears to be infested with ticks, pull a few off, put them in a vial and contact us," Casey said.
Climate change, pandemics, Putin’s madness and China’s ambitions threaten humanity with droughts and floods of biblical proportions, nonnavigable rivers and disappearing island nations, fractured global supply chains and shortages of vital resources like commercial fertilizer, and famine and mass migrations.
Much of this has been enabled or exacerbated by industrialization and globalization.
Prior to the industrial revolution, per capita incomes grew very slowly—perhaps 0.2% or 0.3% a year. Even with late Middle Ages innovations like the three-part plow and horse harness, aggregate economic growth was largely tethered to population increase.
Progress and growth
The degrowth movement is amorphous with contributions from both the physical and social sciences. But it appears unified by a criticism of modern economics, which tends to associate human progress with GDP growth.
The movement generally asserts that climate change and inequality could be better addressed by shrinking the global economy—perhaps by a modest 0.5% a year—which, given that the world's population is still growing by roughly 1% a year, would imply average per capita income losses of more than 1% a year.
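To make the arithmetic behind that claim concrete, here is a rough back-of-the-envelope sketch. The 1% figure for annual global population growth is an assumption used for illustration, not a number from the column.

```python
# Rough sketch (illustrative assumptions, not figures from the column):
# global GDP shrinking 0.5% a year while population grows about 1% a year.
gdp_growth = -0.005        # assumed 0.5% annual contraction of world GDP
population_growth = 0.01   # assumed ~1% annual world population growth

# Per capita income growth is (1 + GDP growth) / (1 + population growth) - 1,
# roughly GDP growth minus population growth when both rates are small.
per_capita_growth = (1 + gdp_growth) / (1 + population_growth) - 1
print(f"Per capita income change: {per_capita_growth:.2%} per year")
# Prints roughly -1.49% per year, i.e. losses of more than 1% a year.
```

Under those assumptions, per capita incomes would fall by roughly 1.5% a year, consistent with losses of more than 1%.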
This does not stand up well against the record of advanced market economies. From 2005 until the pandemic, those enjoyed GDP growth and substantially cut CO2 emissions.
Green electricity will prove more expensive than fossil-fuel generation because even when it can be had cheaply, it requires expensive fossil fuel and battery backups to cope with severe cold and heat.
EVs are more expensive to produce because the batteries and electric motors require lots of lithium, copper, cobalt and other metals. Global capacity to produce those is often in politically and geographically awkward locations—the Congo, South America and China.
Progress means better lives
Without growth, adopting green technologies, building infrastructure to access vital resources and protecting against associated risks would require significant reductions in living standards and simply not be politically sustainable.
The pathogens that attack mankind are always mutating and posing new threats. Producing new vaccines and medicines—and making those transnationally available to rich and poor alike—is terribly expensive.
Even with NATO devoting over $1 trillion a year to defense, Russia, China, Iran and others are spending enough to create mischief that could end badly for democracy.
Unless we surrender to disease, capitulate to dictatorship, and let millions die or be enslaved, the United States and its allies must spend more, not less, money and manpower on health care and defense. That will require economic growth to be politically palatable.
Social and economic justice politicians are not buying into warnings about the dangers posed by rising autocratic power. While their views are not popular within the wider Democratic Party, if the United States and other rich countries undertook to shrink their economies to lower emissions, the left would likely seek concessions that imposed dramatically redistributive taxes on income and wealth to meet public health and security challenges to avoid impoverishing the bottom half of their populations.
Dividends of markets
The degrowth movement advocates smaller homes, eating less meat and more leisure to spend with children—all benefits simple GDP accounting does not capture. But indicators of well-being like infant mortality and leisure time improve as per capita incomes rise, and those are dividends of free-market dynamics.
Shrinking GDP without losing ground would require abandoning capitalism and markets for state planning. It’s doubtful personal liberty and democracy could survive all that. It bears mentioning autocracies and countries sympathetic toward them—like China and India—are burning more, not less coal these days.
The degrowth folks are remarkably silent on realistic policy prescriptions for downsizing advanced industrialized economies while simultaneously leveling up the poor in the developing world.
That process would require much more than a 0.5% annual downshift in OECD GDP, while enabling developing countries to continue growing to improve the conditions of the poor, purchase CO2 abatement technologies, and mitigate against coastal flooding and unbearable heat.
We can’t go back
Developing country access to technology and growth are dependent on trade with and the growth of industrialized countries, but degrowth activists correctly point out that much North-South trade is based on resource extraction.
Weaning from that commerce would require fewer developing country imports of food from places like Ukraine, North America and prodigious developing country producers like Brazil, less dependence on commercial fertilizer produced from natural gas and mined potash from places like Russia and Canada and massive donations of CO2 abatement and mitigation hardware from richer countries.
With developed countries shrinking their economies, expecting such generosity would be naive, and alternative food supplies through sustainable local agriculture is a nostalgic fantasy.
Just prior to the industrial revolution the global population was about 1 billion. Today, at 8 billion, it is ludicrous to think developing countries could produce enough food relying on cow-dung fertilizer. Anyway, cattle exhale CO2, and the degrowth folks see virtue in us all becoming vegetarians.
Peter Morici is an economist and emeritus business professor at the University of Maryland, and a national columnist.
The Supreme Court is hearing a case that could dismantle the Indian Child Welfare Act, also known as ICWA. The law was passed in 1978 to combat a history of forced family separation in the United States and prevent the removal of Native children from their communities. But now, in Haaland v. Brackeen, ICWA could be completely overturned. In the third episode of Dissent, host Jordan Smith is joined by Rebecca Nagle, a journalist, citizen of the Cherokee Nation, and host of the podcast “This Land.” Smith and Nagle break down the case and its broad implications for laws based on tribes’ political relationship with the U.S. government.
[Dissent theme music.]
Jordan Smith: I’m Jordan Smith, a senior reporter for The Intercept. Welcome to Dissent, an Intercepted miniseries about the Supreme Court.
[Low, contemplative music.]
JS: During the middle of the last century, the U.S. government introduced the Indian Adoption Project. Through this project, Native children were taken from their families and communities and raised by white, non-Native families.
A 1966 press release from the Bureau of Indian Affairs reads:
“One little, two little, three little Indians — and 206 more — are brightening the homes and lives of 172 American families, mostly non-Indians, who have taken the Indian waifs as their own.
A total of 209 Indian children have been adopted during the past seven years through the Indian Adoption Project …”
Mind you, even before this, Native children were removed from their homes and put into boarding schools. Facing harsh conditions and abuse, Native children were forbidden from speaking their language and practicing their religion. Families who didn’t comply could be imprisoned. Whether it’s 1871 or 1958, the U.S. government has a long history of undermining treaties it has signed with tribal nations and passing laws that eliminate tribal autonomy. They viewed tribes as quote-unquote “uncivilized,” insisting that they had to assimilate into quote-unquote “American society.”
By 1978, around one-third of all Native children had been removed from their families and communities. So that year, Congress passed the Indian Child Welfare Act, in an effort to stop Native children from being taken from their communities. The Indian Child Welfare Act, also known as ICWA, established protections for Native children and their communities.
But now, the Supreme Court case Haaland v. Brackeen is putting it all on the line.
Chad Brackeen: So four years ago, we felt a very profound calling from God leading us to become foster parents and serve children that needed a safe home.
JS: That’s Chad Brackeen. The Brackeens are a white family from Texas, who are asking the Supreme Court to overturn ICWA. They had fostered a Native child and wanted to adopt him. But because of ICWA, and that long history of separating families, Native family members had priority for custody of the child.
CB: But we pursued adoption anyway because we felt like that was the right thing to do. Unfortunately, even with the support of his biological family, many other people that were involved with the case, the judge said, because of ICWA, he had to deny our adoption.
JS: They fought the case. And actually — this is wild — before taking the case to the Supreme Court, they won their adoption case in Texas. But they’re still charging forward to overturn ICWA.
CB: Not all cases end the ways ours did. In fact, we hear stories of other people in the same situations across the country. Like there’s two other families in the state that are going through the same pains and struggles that we are, and fear for their children. We did that so that we can advocate that their best interests, the interests of the child, is what is considered in these adoptive placements, not their race.
JS: The implications of this case are huge. Overturning ICWA could open the door to further threats to tribal autonomy. There is a lot to unpack here, so I sat down with Rebecca Nagle. She’s a journalist, citizen of the Cherokee Nation, and host of This Land podcast.
The second season of her podcast goes into great detail about how this seemingly simple adoption case is actually an attempt to dismantle tribal sovereignty.
Rebecca, welcome to Dissent.
Rebecca Nagle: Thank you so much for having me.
JS: So you’ve done such extensive and amazing reporting on the case before the Supreme Court that we’re going to talk about today, which is Haaland v. Brackeen. So to start, would you lay out just the basics of the case for us?
RN: Yeah, absolutely.
So a group of foster parents in the state of Texas are suing the federal government to strike down a law called the Indian Child Welfare Act that was created to prevent family separation in Native communities. The plaintiffs contend that the law unconstitutionally discriminated against them, which is an extraordinary claim, given what actually happened in the custody cases, which I’m sure we’ll get into. And Texas is basically making a states’ rights argument. Native advocates, tribes that intervened in the case, and a lot of court watchers warned that the case is about far more than this law or Native children and that it’s actually about a far broader attack on tribal sovereignty and Indigenous nations within the U.S.
JS: Before we get too into the weeds with what happened at Court, I want to talk a little bit more about the Indian Child Welfare Act, or ICWA. Can you tell us a little bit more about what ICWA is and a lot about what prompted Congress to pass it in 1978?
RN: Yeah, absolutely. And so when Congress passed ICWA, in 1978, it was after there had been this big national survey that found that 25 to 35% of all Native children had been taken out of their homes and away from their tribes.
And a couple of things were going on: There was a federal program, where the Bureau of Indian Affairs literally gave the Child Welfare League of America money to take Native kids and put them in white homes with the very racist thinking that they were better off there. And the other thing that was happening at the time, in far greater numbers, was that Native children were being removed by social workers and child welfare agencies — and oftentimes not for reasons like abuse, but for reasons like poverty, or a child was being raised by their grandparents instead of by their biological parent.
And so what ICWA does, is actually a lot of different things. At different steps in the process of a child going through either private adoption, or, more commonly, through foster care, [it] puts guardrails on the process to make it harder to separate Native children from their families and tribes. And so some examples of what that looks like: States are required to have active efforts to reunify children with their parents, not just reasonable efforts, which is the standard for everybody else; tribes can intervene in cases, or if the kid lives on tribal land, the case just goes to tribal court; and if children can’t be reunified with their parents, ICWA sets out placement preferences of where they should go next, prioritizing family members and other members of that child’s tribe.
So yeah, it’s a really complex law that does a lot of different things. In these lawsuits, one or two aspects of the law can kind of become a focal point, but the main thing it does is just make it harder, not impossible, but harder to separate Native children from their families.
JS: Yeah, I was actually going to ask you to kind of lay out the list of placement preferences, because the third preference comes up a lot in the argument.
RN: Right. Yes. [Laughs.]
JS: And we’ll get into that specifically in a bit. But I think for people to understand that, we’re going to need a bit more background about what those placement preferences are, including that one.
RN: Yeah, absolutely. So if a child cannot be what, in social work or child welfare proceedings, is called reunified with their biological parents, the placement preferences set out where they should go next. And so the first placement preference is a member of their extended family. And, actually, because a lot of Native folks are mixed, that extended family member could be Native or non-Native — as long as they’re related to the child, they’re prioritized equally. The second placement preference is another citizen of that child’s federally recognized tribe. And then the third placement preference is another citizen of a federally recognized tribe. And it doesn’t have to be that child’s tribe.
And I’m sure we’ll get into it, but that was the placement preference that upset some of the Supreme Court justices. What’s interesting is that it’s a facial challenge to a law, and so usually you’re looking for being able to at least point to a situation where that has happened. [Laughs.] And the plaintiffs in Texas could not. And so it was talked about a lot in arguments, although it didn’t happen in any of the underlying custody cases, and they couldn’t bring forward an example where it had happened in any custody case.
JS: Right, right. We will talk about that a little bit more here in a minute.
So there are several threads being pulled seemingly kind of all at once in the oral argument, but the common theme or a key to understanding what’s up, I think, is kind of understanding the relationship between the federal government and the various federally recognized tribes and Congress’s plenary power as it relates to those tribes.
Can you explain that piece of it?
RN: Yeah, absolutely.
So tribes have a unique political relationship with the U.S. federal government that has been recognized literally since the founding of the Republic. [Laughs.] And so, there are a lot of laws within the United States that treat tribes and tribal citizens differently than other people in the United States. And it’s called a lot of different things: People call it a treaty relationship; people also call it a trust relationship. But the difference in how Indigenous folks in our nations are treated, doesn’t stem from a racial category, it stems from a political category under the law and that category is established by lots of different parts of the Constitution, but I think mostly the Treaty Clause.
And so the U.S. has signed treaties with Indigenous nations through the same constitutional process that it has signed treaties with other foreign powers. And so a lot of times in those treaties in exchange for land, the U.S. federal government offered or guaranteed certain kinds of protections. And so from that, Congress has a unique authority in the arena of federal Indian law. And that authority has been established actually by the Supreme Court, but also recognized now for well over a century.
And so it’s kind of late in the U.S. history to come back and say: Oh, we can’t treat Native people differently. That’s racial discrimination. And also, Congress doesn’t have power to legislate in this area of the law, when we’ve been allowing Congress to do that for a very long time.
And so I think that it can be kind of confusing to folks because it is a really different area of law. But you know, one way I put it is, just like certain laws apply to me, because I’m a citizen of the United States or because I’m a resident of Oklahoma, certain laws apply to me because I’m a citizen of Cherokee Nation.
And that is absolutely how ICWA works. The law, first of all, only applies to children who are either enrolled in a federally recognized tribe or eligible for enrollment. And as I already discussed, that’s how the placement preferences flow, too; so somebody could have Native ancestry, and the law still wouldn’t apply to them. So it’s not a one-for-one equivalent to folks who have Native ancestry.
JS: The lawyer for the Brackeens, Matthew McGill, seemed to be suggesting that ICWA was beyond Congress’ power. And instead, it was just this impermissible scheme to give preference to tribes, tribal members, based on race — and, in so doing, he suggested the real victims here are, wait for it, the Brackeens, and that what ICWA really does is discriminate against them because they’re white.
Can you talk a little bit about McGill’s argument? And also, I’d love it if you could tell us a little bit more about who McGill is, and his background in these issues.
RN: Yeah, so Matthew McGill is a corporate lawyer who does a lot of appellate stuff. So it’s not his first time in front of the Supreme Court. And he works at a law firm called Gibson Dunn. Gibson Dunn is a really big corporate law firm that normally represents people like Amazon, and Chevron, and Walmart. They were also the law firm for the company behind the Dakota Access Pipeline. And the other thing that they do is that they have a lot of clients that are in the gaming industry, so casinos, and a lot of people in the gaming industry view tribal gaming as sort of monopolizing a corner of the market. And Matthew McGill and a senior partner at his law firm named Ted Olson actually filed a federal complaint about a year ago making that argument and then using the exact same legal arguments that they’re making here in this ICWA case, but instead about casinos.
And so you can kind of already see — literally — how if they got a win in Brackeen, it could set precedent that would benefit their gaming clients, which is just really sinister when you think about how this case also just involves the lives of Native children. So that’s Mr. McGill.
And then the arguments that they’re making — they’re basically making two really, really big arguments and then a third smaller argument. So the two big arguments that they’re making are that ICWA violates the equal protection clause of the 14th Amendment, which is basically laws in the United States can’t treat people differently based on race.
And they’re saying, this whole idea of tribes and tribal citizenship — and when it comes to ICWA, that’s not a political classification, it’s racial. And then the second argument that they’re making is that child welfare in these types of cases are really up to states; states are the ones that pass child welfare laws, and they get to decide how these cases are adjudicated. And Congress can’t step in and tell states here what to do. Although there are actually like a ton of federal laws [laughs] that also came up during oral arguments. Like this isn’t the only federal law that governs family law.
But anyways, and then they’re making a smaller argument that’s also saying that because state agents — a social worker who works for Texas has to actually like carry out what ICWA requires, it’s called commandeering, so it’s the federal government commandeering a state agent — and that that’s also unconstitutional.
JS: Yeah. I thought that it was really interesting when, I think it was Justice Sonia Sotomayor, who was bringing up like: Well, what about the parental kidnapping and The Hague Conventions?
Justice Sonia Sotomayor: Counsel, can I turn to something you said, which was it displaces the best interest of the child standard? In most state custody proceedings, the best interest of the child is what guides those decisions. Yet we have the Hague Convention on the abduction of children that basically says to the Court: You can’t make that determination. You have to send the child back — and it gives a section of exceptions, etc., and it even sets standards of proof, etc.
Why is this case any different than the Hague Convention?
JS: Maybe you could talk about those. Because it relates back to the relationship of the federal government and that trust relationship with tribes, right? They’re trying to explain: The way this works is the same way, right?
JS: And McGill just seemed to be kind of not having it — or maybe not understanding it! I don’t know which it was. [Laughs.]
RN: Yeah. No, I think there were funny moments, with both McGill and then the lawyer for Texas, also, where they just kind of got tripped up. Because what they were trying to do is they are trying to make the argument that the implications of this lawsuit aren’t broad. And Sotomayor, Gorsuch, and other justices weren’t buying that. Because it’s sort of like: How can this be true about ICWA and not be true, like you said, about the kidnapping law or the Hague Convention? Or there’s a law that protects service members who might be in child welfare proceedings that’s a federal law. And so would all of those laws also be struck down if ICWA was struck down?
And then it was kind of the same thing they were saying around Congress not having this authority when it comes to tribes. So they were trying to argue that children aren’t within tribal self-interest, which is crazy! [Laughs.] And then they’re also trying to say: Oh, well, because it’s off of tribal land and it doesn’t just apply to kids who are on the reservation — so they were trying to split hairs, and sort of make a narrow argument.
But when you kind of zoom out and look at the case, the arguments that they’re making are quite broad. And that’s what’s scary about the case is that it could have really big implications on federal Indian law. So if ICWA discriminates based on race, well, what about casinos? Like, how is it fair? Or how is it racial discrimination for this non-Native foster parent to not be able to adopt a Native kid, but it’s not for a tribe to be able to operate a casino where a non-Native developer cannot? What about health care? Why can I go to a healthcare clinic that only serves tribal citizens? And if you went there and tried to get health care, they would turn you away? If we are just a racial group, what about the environmental regulations that we have, the elections, the government, the land rights, the water rights, what racial group has its own court system, its own police force?
And so the fear is that the case could really be a domino effect. And so at the Supreme Court, they were trying to downplay that, and sort of draw the lines in the sand. There was this moment between Kagan and the lawyer for Texas, where Kagan was just like — she’s obviously not saying this — but almost kind of just like: What the hell are you talking about? [Laughs.]
JS: Oh, we’re gonna get to that?
JS: [Laughs.] Yeah, it was kind of great. [Laughs.]
Yeah, no, absolutely right. Because all over the argument, it seemed to me there was this willful misunderstanding of the difference between political classification and racial classification. And one of the ways in which it repeatedly gets brought up is in reference to that third preference.
JS: And it was like several of the justices — and my mind immediately jumps to Justice Brett Kavanaugh here —
JS: — seem to be suggesting that like: Aha, this third preference is what signals that this is actually all about race!
So here’s Kavanaugh:
Justice Brett Kavanaugh: I want to ask about the equal protection issue quickly.
The equal protection issue is difficult, I think, because we have to find a line between two fundamental and critical constitutional values. So on the one hand, the great respect for tribal self-government, for the success of Indian tribes, with Indian peoples, with recognition of the history of oppression and discrimination against tribes and peoples. So that’s on the one hand.
On the other hand, the fundamental principle, we don’t treat people differently on account of their race, or ethnicity, or ancestry — equal justice under the law. I don’t think we would ever allow, as the Court suggested in Palmore in 1984, Congress to say that white parents should get a preference for white children in adoption, or that Latino parents should get a preference for Latino children in adoption proceedings. I don’t think that would be permitted under that principle of equal justice that we recognized in Palmore.
JS: So yeah, that one. And I want you to say whatever you would like to say about that. But I was kind of hoping that you could explain, within this, why the third placement preference is not about race, and also go back into something you talked about a little bit at the beginning, which is equally important: how it’s not even an issue in this case. Because I felt like they kept invoking it as a way to, theoretically at least, bolster this idea that this is all about race, and thus an equal protection problem. But it doesn’t actually exist — A, it doesn’t exist in this case, but also because that’s not what the third preference is signaling at all. So yeah, take any piece of all of that you want. [Laughs.]
RN: I mean, I think Kavanaugh has this habit of sort of making statements that are, I think, a dog whistle. That’s actually the more tempered example. He has another moment where he’s just like: Well, we couldn’t pass a law where just white people could only adopt white children, could we? And he says things that are a little bit like that, but more inflammatory.
And so I think it just betrays that that is a statement that isn’t about the law because the law doesn’t just apply to people who are Native or who have Native ancestry, like I already explained. To me, that statement is just about the raw politics of the case, and sort of buying into the framing that the individual plaintiffs have used.
And then when it comes to the third placement preference, it wasn’t invoked in any of the underlying custody cases. And actually, one thing that’s really important to note is that in all of the underlying custody cases, there was a Native blood relative that wanted to raise the Native child; every Native blood relative got pushback, whether that was from a social worker, a family court judge, or the individual plaintiffs themselves, and only one Native grandmother was able to win custody, and she had to fight to be able to adopt her grandchild for six years.
That was the thing that made me very angry about listening to the oral arguments: having talked to those Native families and seen and heard stories of the real — very real — barriers that Native families face when they’re just trying to keep their children, and then hearing the justices spend so much time on this hypothetical that isn’t even happening.
And so the reason that third placement preference is there is because Native communities are more complicated than just a federally recognized tribe. So for example, as Cherokee people, there are three federally recognized Cherokee tribes, two here in Oklahoma that have the same reservation, the same land. And so maybe a child could be enrolled in the Cherokee Nation, but then one of their extended family members is United Keetoowah Band. And it would be completely appropriate for a UKB person to adopt that child.
There are also Native people who might live on a reservation that is not their own. A lot of times people also are members of more than one tribe. And some tribes have rules where you can only enroll in one tribe, so they might be enrolled in one tribe and are eligible for membership in two. That’s actually [true for] one of the children in this case. He’s eligible for enrollment in, actually, three federally recognized tribes, but he’s only enrolled in one of them. And so yeah — there’s a lot of reasons why somebody would be an appropriate caregiver for a child but not a citizen of their tribe, but still connected to that child’s community.
And so Gershengorn, who was the lawyer for the intervening tribes, did a really good job of explaining it, where the hypothetical they’re talking about, I think he called it like the Arizona to Maine [hypo], where it would just be like a completely unrelated person, completely unrelated tribe. And the plaintiffs in Texas haven’t put forward such an example. And so we’re talking about a hypothetical that may have never happened, at least that nobody can point to.
And so yeah, again, I just think, for me, the takeaway from that is that the justices are sort of more interested in these hypothetical questions than in what is actually happening on the ground, which I think is very concerning. And I think it is also — and I’ll stop here — a reason why this is a question for Congress, because Congress is the body of our government that can have hearings, that can do investigations, that can issue reports, that can be like: Well, what is actually happening? Because I don’t think that we should determine what policy is best for the well-being of Native children based on hypotheticals, by nine people who are pretty ignorant about how the law works and what’s happening on the ground. It should be based on what’s actually happening in these custody cases. And that’s an issue for Congress, not the Supreme Court.
JS: That’s absolutely right. That’s policymaking, right? Which actually — we’ll come to the Kagan moment here. [Laughs.] Because this is where Texas sort of enters the picture, at least in our conversation.
They’ve firmly stuck their foot in the middle of this case. And I will say, for more on that, you have to listen to Season Two of Rebecca’s podcast. But representing the state at the Court is Solicitor General Judd Stone, who had a couple of lines of attack: what we were talking about, the anti-commandeering argument — that provisions of ICWA basically conscript Texas into enforcing a scheme that, by the way, is also beyond Congress’ plenary power and basically illegal. But in trying to make his argument, he kind of created, out of whole cloth, parameters for Congress’ power. And to this, Justice Elena Kagan was like: Wait, what?
Justice Elena Kagan: Yeah, I guess the only point I was making, I’m sure that we can find places where the Court has said that Congress has power over each of these areas. But I don’t think you’ll be able to find a place where the Court has said, what the plenary power means is these three things and these three things alone, and the plenary power doesn’t extend further.
Because after all, the Court has said — I mean, I don’t really believe in reading our opinions like statutes, but when the Court uses the phrase “plenary power” tens and tens of times over decades and decades, I mean, plenary means unqualified, it means all-encompassing.
Now, I don’t doubt what you said earlier, that it might have an occasional exception here or there, but it strikes me as a very odd way to think about plenary power to just start constructing categories, and saying everything else is left out when we’ve said over and over that everything, except really rare things, are in.
JS: So Stone’s position also clearly did not impress Justice Neil Gorsuch, who we should note, right, is the only one on the Court with substantial experience in federal Indian law.
Judd Stone: — Indian Affairs power —
Justice Neil Gorsuch: I’m sorry to interrupt, but this new rule would, I think, take a huge bite out of Title 25 of the U.S. Code, which regulates the federal government’s relationship with tribal members. There are health care provisions that Congress promises to Native Americans off-reservation; that doesn’t seem to fall in any of your buckets. Congress has permitted tribes to exercise power over environmental regulations that have indirect effects off-reservation; that would seem to go, too. We have laws that promise Native Americans access to sacred sites off-reservation, and religious liberties off-reservation; that would seem to go. And I’m not even sure — maybe the liquor sales, those old precedents, but maybe that’s commerce, I don’t know. But there would be a lot that would be bitten out of Title 25. We’d be busy for the next many years striking things down.
JS: And then several of the justices seem to have a hard time wrapping their heads around Indian law until the lawyer you mentioned that’s representing the tribes, Ian Gershengorn, got up and basically just kind of slayed Stone and McGill’s arguments one by one. Let’s listen to that.
Ian Gershengorn: Now Interior has explained how good cause works. It involves, you can take into account the views of the parents; the views of the child, if the child is old enough to express them; you can take into account sibling attachment; you can take into account bonding with foster parents, as long as it was not done illegally through ICWA. The thing you cannot take into account is socioeconomic status. So what the Casey brief and others say, and the reason why medical professionals are here, states are here, family rights advocates are here is because ICWA is the gold standard. It adopts those evidence-based presumptions and allows for flexibility to protect the best interests of the child.
JS: And then again:
IG: First, this is at the core of the Plenary Power Doctrine. From the beginning, the Plenary Power Doctrine was used to protect Indians from non-Indians. There is no doubt that if states had moved in and done a wholesale physical removal of Indian children that would have been within the duty of protection. The fact that this is being done through state courts, through state family law, doesn’t deprive Congress of power.
JS: Do you think that Gershengorn finally got through to the judges?
RN: Yeah, so Gershengorn — this isn’t his first time representing tribes in front of the Supreme Court. He is one of the lawyers that’s been tapped by a broader project called the Tribal Supreme Court Project, which was co-founded by the National Congress of American Indians and the Native American Rights Fund after about a 30-year period where tribes had lost the majority of cases. Tribes were really, in the early 2000s, not doing well in front of the Supreme Court. And so what they did is they went out and looked at lawyers who already have an established practice at the Supreme Court — Gershengorn was the Solicitor General before — and then brought them in, brought them up in the arena of federal Indian law, and made them experts there. And so Gershengorn didn’t just fall out of the sky as an effective advocate. It was also decades of work from lots of different folks to create that project.
And so yeah, I mean, I think in terms of where the justices are at, I mean, I think from the beginning, there were four justices who were very clearly skeptical of Mr. McGill and Mr. Stone’s arguments — so, Kagan, Sotomayor, Jackson, and also Gorsuch. So the three liberal justices and Gorsuch, and then you had four justices that, in my mind, their questioning wasn’t really tied to the details of the case, or really the legal arguments presented, but it was more about the politics of the case and how it looked. This is a court that’s also hearing cases around affirmative action, and so this kind of dog whistle of like, oh, well, this is treating people differently based on race, I think has a lot of traction with this court, especially with Chief Justice Roberts.
And I think that if there is a swing vote in the case, it’s Barrett. So the questions that she asked were very specific, and they were kind of in the weeds about how the law works. And it was about that kind of third argument, anti-commandeering, which could still strike down ICWA, but would have less of a disastrous effect in the arena of federal Indian law.
Yeah. So I think what’s interesting is that a lot of people say that the justices have kind of made up their minds by the time we get to oral arguments [laughs], and some of the justices already seemed set in their positions. But yeah, I think Barrett seemed very curious about how the law actually works on the ground and was also asking questions that were more narrow and would have less of a sweeping implication for the rights of Indigenous nations in the U.S.
JS: Regarding Barrett, just to be clear, because I realized she was asking some very specific questions and I felt kind of out of my element trying to figure it out. But — I don’t know: Was she trying to get at severability? That you could not burn it all down. Do you know what I’m saying?
RN: Yeah. She asked, I think, to each lawyer, so four times she asked the same question, which was basically: Who carries out active efforts?
So I mentioned it before, but ICWA requires active efforts to reunify a Native parent with their child if the child has been taken away by child welfare workers. And so she basically was like: Well, who is required to carry out these active efforts, which kind of goes to that anti-commandeering? And there is a world where there is a Barrett opinion that maybe is more narrow, where only part of ICWA is struck down, and that would be like the active efforts section, or maybe ICWA is struck down, but it is struck down in a way that doesn’t impugn the rest of federal Indian law.
I think the big fear with this case is that it’s going to be like a bomb going off in Title 25, and have really big implications for other areas of the law. So yeah, so Barrett was asking a very specific question. Her question actually didn’t get answered [laughs] by any of the advocates. So it’ll be interesting to see what happens.
JS: To end, I want to zoom back out a bit. In the argument from various lawyers, we heard a lot about how terrible ICWA is, and how victimized the Brackeens have been by it. But you’ve done so much reporting on this. And I want to know from you how ICWA actually works, like how it’s applied in practice. What can you tell us about that?
RN: Yeah, absolutely. So I mean, for one, I don’t think we have to go further than the custody cases that are before the Court to see why ICWA is important and why it’s necessary. So like I said before, every child in the underlying custody cases had a Native relative who wanted to adopt them. And every Native relative got pushback, and a lot of times that pushback was about things like — they had a non-violent criminal record, or they were poor. It’s the same type of crap that was happening in the ’70s.
These individual foster parents also went to pretty extreme lengths to try and gain custody and fight off blood relatives who wanted to adopt the children. So I mean, with the Cliffords, who are the couple from Minnesota, Danielle Clifford wrote a whole affidavit about how Child P’s grandmother shouldn’t even have supervised visits with her grandchild because she had bad boundaries. And the concrete example that she offered the Court about this grandmother’s bad boundaries was a list of every time that grandmother had given her grandchild a gift. So some really awful and heartbreaking things actually happened in these custody cases. And when you dig into it, the awful, heartbreaking thing isn’t that the Brackeens didn’t get custody — because actually, oh, wait, they did — it’s really what happened to these Native relatives, and this idea that these kids could have stayed not only with their family, but with their tribe and with their culture, and instead that relationship was severed. And so I think the cases show exactly why ICWA is still needed.
Zooming out, in terms of what we know about how ICWA works, there’s actually not federal data, because there was going to be federal data under Obama, and then Trump rescinded it. And so there is now a lawsuit — which I haven’t checked in on in a while — over whether or not the federal government will collect ICWA data along with what’s called the AFCARS data, which is the big national data that’s collected around kids in foster care.
But what we can see in pockets — and this research comes from Casey Family Programs — is that when people comply with ICWA — so when people notify the tribe, when people do active efforts with parents, when people involve the tribe and work with the tribe — kids have better outcomes. And those better outcomes mean staying in foster care for less time and finding what people call permanency — basically, that home where kids are going to stay — sooner.
JS: Finally, I want to get back to Gorsuch’s clip about how, if the Court accepts Stone’s view of Congress’ power, the Court will be busy for years striking things down. I want you to say a little bit more about that. Because, to what you’re saying, it seems like: Wait, is that the point of why we’re here?
So maybe talk a little bit more about what the ramifications of or potential fallout from this case is, and how challenging ICWA may be part of a larger strategy?
RN: Yeah, absolutely.
I think that there’s a lot of evidence that the well-being of Native children is not the focal point of the special interests that brought this lawsuit. I mean, the Brackeen lawsuit didn’t organically arise out of the Brackeens trying to adopt a Native child. There has been a coordinated campaign to strike ICWA down over the past decade. And these lawyers, like Mr. McGill, are out there actively looking for clients. And so they found the Brackeens through an adoption attorney.
And so what we found was that a handful of private adoption attorneys, a handful of right-wing organizations — who are actually all kind of getting their money from the same place, the Bradley Foundation — and these corporate lawyers, like Mr. McGill, have been leading the charge to get ICWA struck down. It’s actually a really, really small group of people.
And I think we can see the ulterior motives that all those people have. The private adoption industry basically fights any regulation that makes it harder to adopt children at all. And that’s because there aren’t enough available children for adoption. There are more people who want to adopt than kids who are available. What we found within internal documents around the funding for the anti-ICWA campaign when it came from right-wing organizations was that it was about building state-based infrastructure, conservative infrastructure through litigation. So it wasn’t even about tribes or Native kids or child welfare, it was just this broader political agenda.
And then I think with the corporate lawyers, I think they kind of showed their hand when they filed the Maverick Gaming case that, look, this isn’t just about Native kids, these legal theories have broader implications in the arena of federal Indian law. And so I think Gibson Dunn and Matthew McGill have already kind of shown us what their ulterior motives are by filing that lawsuit.
And I think, for Indigenous nations, I was at the Supreme Court during oral arguments, and there were a lot of tribal citizens and tribal leaders who were there. And it was a really heavy day. I think what it feels like for tribes is just that we’re still fighting for our legal existence, we’re still fighting to maintain the treaty rights that we have. And what’s happening again, now, that is so tragic, and what’s happened before is that our children are the first line of attack, our children are sort of the first line of defense, they are the tip of the spear in this project of colonization. And I think that’s a really heart-wrenching thing for tribes to see, not only how much is at stake in this case, but that they’re using our kids, again, to attack tribes.
And so just briefly to explain the broader implications in terms of legality: I kind of explained the equal protection argument, so this idea that you can’t treat tribes or tribal citizens differently. I mean, it’s everything; I can carry an eagle feather because I’m a citizen of a federally recognized tribe; I can get my health care at IHS; I can participate in my tribal government. If I commit a crime on my reservation, who can prosecute me is different. I mean, it’s a whole scheme of laws that could crumble if you can’t treat Native people differently based on race — tribes and tribal citizens differently based on race.
And then the other big argument they’re making is just that Congress doesn’t have this authority. And you heard Gorsuch being like: Well, what about this? And what about that? Congress has passed a lot of laws that govern the Federal relationship between tribes and the U.S. federal government. And so if Congress doesn’t have that authority, well, then what happens with all these laws?
And it’s kind of ironic because there have been periods of time when the laws that Congress passed didn’t really benefit Native people. We had the termination era; we had the allotment and boarding school era. And since the ’70s, we’ve had what people call the self-determination era, where Native folks organized and we finally got laws that, while not perfect, do more good than harm. And now people are coming back and saying: Ooh, Congress can’t do that. I think it feels a little late to be saying that!
JS: Well, also, it’s the whole idea that there are obviously racial elements here, and they want to make that the whole point, whereas the point is that tribes are recognized as a political classification, as a sovereign entity with a relationship to the federal government, like a foreign government, right?
JS: So that’s what was driving me crazy the whole time: they seemed to willfully want to come back to race, when the lawyers would be saying, well, but actually this is about this relationship. But I guess maybe that is kind of what you’re saying — it sounds like the point is to muddy the waters, because it becomes a lot easier to get rid of casinos on tribal land if those were just allowed because of race and not because of a political classification.
RN: Yeah, no, absolutely. And that is exactly what Mr. McGill argued in the federal complaint they filed on behalf of the non-Native casino developer. They said: Hi, I’m a non-Native casino developer, I can’t operate these types of gaming facilities that tribes in the state of Washington can, and I am not making all this money that they do. And that’s racial discrimination.
And so: It’s about money! [Laughs.] And so yeah, I think that that’s exactly right. It’s sad, but I think the sovereignty that tribes still do have, some folks see it as a threat and would benefit from it being diminished. And I think that that’s the broader goal of this case, this lawsuit.
JS: Rebecca, thank you so much for joining us.
RN: Thank you so much for having me!
JS: That was Rebecca Nagle, a journalist, citizen of Cherokee Nation, and host of This Land podcast. (Sidenote, it is excellent! If you haven’t listened yet, I highly recommend that you do.)
[End credits music.]
JS: And that’s it for this episode of Dissent, a production of The Intercept. This episode was produced by José Olivares and Laura Flynn. Roger Hodge is editor in chief of The Intercept, and Rick Kwan mixed our show.
If you’d like to support our work, go to theintercept.com/join. Your donation, no matter what the amount, makes a real difference. If you want to give us feedback, email us at [email protected]. Thanks so much.
Until next time, I’m Jordan Smith.
The internet can't get enough of these pictures of wild Wisconsin animals. Here's why you can feel good about browsing them.
Thousands of trail cameras sit in Wisconsin's forests, plains and pastures, silently waiting to capture an image of a passing elk, a curious bobcat or a pack of galloping otters.
And millions of those images have been compiled for all to see thanks to the Snapshot Wisconsin program set up by the state Department of Natural Resources.
Many are blurry or show only an ear or tail at the edge of the frame. But others reveal playful, anxious and heartwarming scenes — moments that humans rarely see.
The project has devoted fans, and photos from the site have been widely shared on Twitter and Facebook after the New York Times recently covered the project (which Journal Sentinel outdoors columnist Paul A. Smith has covered regularly, including an article introducing it in 2014).
Snapshot Wisconsin has captured millions of wildlife photos in Wisconsin since 2016
The Snapshot program is a “citizen-science project” run by the Wisconsin Department of Natural Resources. It works by having volunteers place cameras in pairs, at least a mile apart, across the state.
To date, tens of millions of photos have been captured by more than 2,000 cameras across Wisconsin, all of which have been set up by volunteers.
Given that the program is now so widespread, it can capture images across a wide range of ecosystems, documenting a vast amount of wildlife.
Snapshot Wisconsin is primarily funded through Pittman-Robertson dollars provided by the federal government to Wisconsin DNR.
The project has multiple goals, as Smith reported in 2018, including getting more residents involved in wildlife monitoring, improving relationships between the DNR and citizens, and assessing the statewide distribution of carnivores.
How to participate in Snapshot Wisconsin with your own trail camera
To participate in the program you need access to 10 acres of land. You must either own the property or have received permission from the landowner or public land manager to place a trail camera there.
Trail cameras must be at least 100 yards from any buildings, paved roads or baiting for wildlife. The cameras must be checked every three months.
Anyone can classify the animal photos and help Snapshot Wisconsin track wildlife populations
Anyone can visit the Snapshot Wisconsin website to help classify the animals captured in these millions of photographs.
With a few clicks, you can identify whether the animal in a shot is a deer (or a weasel, otter, fox, domestic cat or human, among the dozens of options). Some animals come with follow-up tasks, like determining whether a deer is "vigilant," resting or giving a "camera stare."
The information is used to develop new methods to monitor deer populations and to track population sizes for various species.
And photos of whooping cranes, moose, cougars and marten give the DNR confirmed locations of these rare species.
Snapshot Wisconsin has become a wholesome place to discuss nature
Part of Snapshot Wisconsin is a forum where people can comment on their favorite photos, share tips on good camera placements and create funny captions for some of the best candids of meandering animals.
Users will tag an especially good photo — one that captures an intense scene, beautiful moment or rare animal — as a "supersnap."
Nature lovers have also congregated to learn and share knowledge, and it's a pretty friendly place.
On one post where people were looking to learn the differences between a wolf and a coyote, a helpful respondent shared a "great wolf and coyote quiz" from Oregon's DNR.
"It not only tests your skill but is also a great teaching tool for those canine characteristics. Enjoy!”
The Natural State's air quality beats national average
Air quality in the Northwest Arkansas metro area, as measured by fine particle pollution, has improved since 2012, Axios' Alex Fitzpatrick and Kavya Beheraj report.
Driving the news: Monday kicks off National Air Quality Awareness Week.
Why it matters: Fine particles, generated from fossil fuel burning and other sources, can enter our bodies when we breathe, making their way into the lungs or bloodstream and causing myriad health problems.
- They are linked to nearly 11,000 deaths across the U.S. annually, by one estimate.
- Nonwhite and low-income Americans are at a higher risk of death from exposure to fine particle pollution compared to other groups, per a 2022 study published in the research journal Nature.
- Fine particles — known as PM2.5 due to their tiny size of 2.5 micrometers — are the most hazardous form of particulate matter.
By the numbers: The three-year rolling annual average concentration of fine particle pollution across the NWA area was 7.7 micrograms per cubic meter as of 2021 (the latest year for which data is available) compared to 10.8 in 2012 — a 29% decrease.
- Concentrations below 12 micrograms per cubic meter are considered healthy, the EPA says.
The big picture: Air quality generally improved nationwide during the height of the COVID-19 pandemic, in part because fewer people were driving.
- But as the pandemic ebbs and behaviors and activities return to normal, air quality nationally is worsening.
- Air quality decreased notably between 2015 and 2021 in parts of Western states, where periods of extreme drought have created conditions for wildfires and increased pollution from smoke.
Zoom in: Air pollution levels decreased by 10% in NWA and Little Rock, and 11% in Fort Smith, between 2015 and 2021.
What's next: The EPA in January proposed reducing its fine particle pollution standard from 12 micrograms per cubic meter to "a level between 9 and 10."
- Changing the standard to 9 micrograms would prevent up to 4,200 premature deaths and 270,000 lost workdays per year, resulting in as much as $43 billion in net health benefits in 2032, the agency says.
- The EPA is also taking steps to improve air quality, including newly proposed vehicle emissions standards.
Yes, but: Public health advocacy groups say the fine particulate standard should be even lower than the EPA's proposed range.
- The agency's proposal "misses the mark and is inadequate to protect public health from this deadly pollutant," the American Lung Association said in a statement.
The other side: Industry groups argue that lowering the standard would be overly burdensome.
The bottom line: As the fight over lowering the fine particle standard heats up, the EPA again finds itself at the heart of the climate change and public health debate.
According to the Afterschool Alliance’s America After 3PM survey, 78% of participating parents report their child’s participation in afterschool programming helps parents meet their workday obligations; 74% agree programs increase their child’s interest in school; and 83% note the peace of mind the programs offer them.
Those are significant statistics for child and youth programming and are largely unseen unless you are one of the approximately 7.8 million children and adolescents in the United States who participate in afterschool programs or the parent/guardian of one of the nearly 25 million children who would participate in afterschool programs if they were available.
With over 52 million U.S. children enrolled in K-12 schools, it is clear that youth engagement activities play an important role in American society and in supporting healthy young people.
Months into the current school year, and more than three and a half years after initial school and business closures due to the coronavirus pandemic, stories addressing the pandemic's full impact on student wellbeing and learning loss continue to fill educational headlines. Increasingly, educators are also talking about mental health concerns for youth and ongoing, startling rates of student absenteeism, which invariably feed a downward academic spiral.
There is strong evidence that meeting the non-instructional needs of school-aged children benefits their academic success. As states and school jurisdictions continue to look for solutions, there are calls for increased instructional time through repackaging the school day and for well-developed afterschool and summer learning programs.
Three factors seem to be at play:
- The call for increased high-quality targeted instruction;
- The need to meet student wellbeing requirements so children can engage in meaningful learning; and
- The opportunity to support families through safe and well-designed out-of-school programming that contributes to academic and life success.
One way schools and communities have traditionally worked together to meet all three needs is through what is known as Out of School Time (OST) services. By definition, OST activities exist outside regular school hours, primarily before or after school and during the summer. The Centers for Disease Control and Prevention defines OST as “a supervised program that young people regularly attend when school is not in session.”
OST opportunities bring significant benefits to children, families, and communities. They can support academic outcomes and connect young people to opportunities for career and interest exploration, relationship building, and sports and recreation, or offer a more comprehensive care and wellbeing approach. Many afterschool programs include funding and personnel to provide meals for children living in food-insecure homes or communities.
Numerous nationally known and respected organizations such as the YMCA, Boys and Girls Clubs of America, and Higher Achievement, to name just a few, serve the needs of children and adolescents. Countless smaller regional efforts also offer meaningful OST activities.
To better understand the reach of OST programming, a network of OST-affiliated organizations provides a wealth of knowledge and resources. The Forum for Youth Investment publishes quick-read blogs that offer relevant discussions about the youth development environment. The CDC’s Whole School, Whole Community, Whole Child (WSCC) framework addresses an array of school-related success factors and data tracking. And the National Institute on Out-of-School Time engages with OST providers and supporters to “bridge the worlds of research and practice.”
Benefits of OST programming include improved student health and wellbeing. The Value of Out-of-School Time Programs report conducted by Rand Education for the Wallace Foundation found regular attendance in quality, focused OST programming benefits youth and their families and concluded that “academic OST programs can demonstrably improve academic outcomes.”
This type of work affirms the interconnectedness of student wellbeing and academic success. The CDC, the “nation’s leading science-based, data-driven, service organization that protects the public’s health,” takes a holistic approach to promoting the health and wellbeing of school-aged children in schools through its CDC Healthy Schools initiative.
While OST programs benefit children and families, they are also proven to benefit communities and cities. You can explore the OST and afterschool landscape of your state here and access a national overview of programming trends and demands provided by the Afterschool Alliance.
OST programs throughout the U.S. are a powerful representation of how countless community organizations can come together to meet children’s academic, social, enrichment, wellness, and supervision needs outside the school day. The possibilities are endless.
If this discussion has energized you to explore the potential of OST programming, this Wallace Foundation three-part podcast series addressing the benefits and obstacles of OST programming is worth a listen while you are commuting or simply puttering around. The student, researcher, and professional provider voices are strong. The findings offer a comprehensive view of “what can happen in OST spaces” and the barriers participants and programs face. The discussion may even help you understand ways to get involved in these crucial efforts to invest in your community’s youth.
|
Former Justice Sandra Day O’Connor, the first woman to serve on the Supreme Court, died Friday at age 93, according to a statement from the Supreme Court.
O’Connor, appointed by President Ronald Reagan in 1981, served more than two decades on the court before retiring in 2006. The announcement from the court said O’Connor died of complications from advanced dementia and a respiratory illness.
O’Connor was born in El Paso, Texas, in 1930 and worked as an attorney in California, Germany and Arizona, before serving as Assistant Attorney General of Arizona from 1965 to 1969. O’Connor also served in the Arizona State Senate and as majority leader of the chamber before beginning her service as a state court judge in 1975.
In a statement released alongside the announcement, Chief Justice John G. Roberts Jr. praised O’Connor as a “daughter of the American Southwest” and noted her status as the country’s first female justice.
“She met that challenge with undaunted determination, indisputable ability, and engaging candor. We at the Supreme Court mourn the loss of a beloved colleague, a fiercely independent defender of the rule of law, and an eloquent advocate for civics education. And we celebrate her enduring legacy as a true public servant and patriot,” Roberts said.
During her tenure on the court, she wrote numerous landmark decisions, including a 2003 decision in Grutter v. Bollinger that upheld affirmative action at the University of Michigan Law School.
The Arizona delegation gathered at the front of the House floor Friday morning to honor O’Connor and have a moment of silence for her.
“Justice O’Connor spent her life breaking down barriers in pursuit of a more just society,” Rep. Greg Stanton, D-Ariz., said. “She blazed every trail she set foot on, defying the odds stacked against women in the legal profession.”
He pointed to her time as Arizona’s assistant attorney general, first female majority leader in the state senate, a Maricopa County superior court judge, and ultimately the first female justice on the Supreme Court.
“She brought her Arizona brand of pragmatism and independence with her to the Supreme Court and was often the swing vote on deeply consequential decisions,” Stanton said.
He also praised her work after retirement, with the creation of the Sandra Day O’Connor Institute and work with the Sandra Day O’Connor College of Law at Arizona State University.
“I’ve admired her steadfast commitment to preserving our democracy through objective, fact based and collaborative civil discourse,” Stanton said. “Her work will inspire future generations to follow her example to become engaged in thoughtful civil participants.”
Rep. Debbie Lesko, R-Ariz., called O’Connor a trailblazer for “all women across America.”
“She stood up for truth, she stood up for justice,” Lesko said. “She was not only a wonderful woman, and a representative of Arizona, but a wonderful American. And we are saddened by her passing, but she set the trail for all of us women.”
Following O’Connor’s retirement, she was replaced on the court by Justice Samuel A. Alito Jr.
The court has not yet announced plans for O’Connor’s funeral. The late Justice Ruth Bader Ginsburg, the second woman to serve on the court, lay in state at the U.S. Capitol following her death in 2020.
There are four women on the Supreme Court now: Justices Sonia Sotomayor, Elena Kagan, Amy Coney Barrett and Ketanji Brown Jackson.
|
Bright planetary pair shines in evening twilight Pocono sky | Looking Up
There's lots to see the next clear night. It's hard to miss the brilliant lights suspended in the western evening sky; look about an hour after sunset. These are the planet Venus, brightest and lower, and planet Jupiter.
Watch night to night as the second and fifth planets from the Sun slide to as close as a half degree apart, side by side on Wednesday evening, March 1. That should grab most anyone's attention. A half degree is about the apparent width of a full Moon.
These planets look like they're getting near each other but Jupiter is really almost seven times as far from the Sun as Venus.
The close pairing of two celestial objects is called a conjunction. This happens often when planets, frequently along with the Moon, are not far from the Sun, which we can observe after sunset in the west or before sunrise in the east.
All of the main planets (including the one we are riding) and the Moon orbit the Sun in nearly the same plane, and are visible within an imaginary band encircling the sky called the ecliptic. Seen from our perspective, the planets in their orbits appear to bunch up the closer they are to the Sun.
More Looking Up:Big Dog constellation appears to be standing up in Pocono sky
Outer planets like Jupiter are at their brightest and largest (as seen in a telescope) when opposite the Sun. Around "opposition" the planet rises around sunset and is visible all night. If you've been noticing Jupiter over the last several months, it is easy to see how its light, while still bright, is diminished.
In a telescope it's around half its former self, in size, although still nice in even a small telescope. Be sure to have a look at its four brightest moons, as they forever change place night to night.
The Moon, meanwhile, reaches first quarter on February 26-27. At dusk our lovely satellite is almost overhead in the south, smack dab between the Pleiades star cluster on the upper right and the bright red-orange star Aldebaran on the lower left. To the left is Mars, gleaming golden red-orange (some call it "mustard"). At 1 a.m. EST on Feb. 28 the Moon will be at its closest to Mars, roughly a half degree above.
Enjoy the Moon. Even 10 by 50 binoculars, propped rock-steady or on a tripod, will reveal the largest craters along the "terminator," the line between lunar day and night.
Using a telescope, nudge the bright Moon just out of view, so that only the rugged terminator, with its shadowed craters and mountains, remains in view. See if you can detect the faint "earthshine" on the darkened half of the Moon. Earthshine is most easily seen when the Moon is a crescent. You're seeing sunlight reflecting off the Earth.
I find it fascinating how even in the dark of night we are reminded that our Sun still shines, off the Moon, off the Earth (making earthshine) and shining back from the planets.
Keep looking up at the sky!
|
Let’s face it, sustainability has become a buzzword in tourism. But some countries do more than talk the talk; they have programs in place to help protect the environment, culture and food sources of the region. Belize is one of those places.
For many countries, at the heart of sustainability is tourism. Many countries depend on tourism dollars as their primary source of income. Preserving their environment is not only the right thing to do, but it also makes good economic sense.
Belize is located in Central America, but it also has a decidedly Caribbean feel, making it appealing for visitors who want to go snorkeling and diving in some of the most beautiful water in the world. It’s home to the second-largest barrier reef. It has a plethora of small, unique islands off the coast, with rainforests blanketing the mainland.
Travelers come here for a variety of reasons—active adventure, culture, cuisine and wildlife.
To this end, Belize has set up several programs to not only attract travelers, but to protect the land.
For example, to protect endangered species such as the jaguar, Belize is a partner of the Maya Forest Corridor, protecting landscapes from Belize’s Maya Mountains, through the tri-national Maya forest of Belize, Mexico, and Guatemala. This region is the most extensive continuous stretch of jungle in Central America.
More than 70% of the country is forested, making it a mecca for wildlife. Belize has over a hundred protected areas, many of them serving as animal sanctuaries. For example, the Community Baboon Sanctuary spans 20 miles and is home to over a thousand howler monkeys, birds and other mammals, including jaguars and manatees.
Another key effort is to support local community tourism so travelers can engage with and learn about the local Garifuna culture. This can involve eating the local Creole cuisine, experiencing the traditional Garifuna dance called Punta, and purchasing handicrafts and other items from local artisans.
Preserving the water and coastline of Belize is another key initiative. These waters host a diverse variety of marine life, from sting rays, sea turtles and sharks to colorful corals and sea grass beds. It draws scuba divers, snorkelers and water enthusiasts from all over the world. Belize has a reef protection and sustainable tourism program that includes a new wreck diving site allowing divers to explore the marine life and waters while reducing strain on these radiant reefs and eco-systems.
The country has also signed into law a moratorium on offshore oil exploration and drilling in the entirety of Belizean waters. The reef is an integral part of many Belizeans’ livelihoods, whether in the fishing industry or the tourism industry, and preserving that water is a key sustainability effort.
Ecotourism thrives in Belize with tour operators offering guided tours that reflect the diversity in activities, including hiking to waterfalls, horseback riding, ziplining through the canopies and visiting wildlife sanctuaries. Belize is a key bird-watching destination as well, drawing bird enthusiasts from around the world.
In addition to the fact that Belize is known to be devoid of chain restaurants, there’s an opportunity for travelers to pick their own ingredients, create their own meal, and eat sustainably.
Recently, eco-resorts in Belize have been hopping onto the farm-to-table trend, making sustainability the basis of all they do. Western Belize in particular is a hotspot for these experiences. Many resorts in the area offer the option of foraging fresh ingredients from their on-site gardens and farms. This hands-on experience allows travelers to practice more mindful travel through culinary choices.
Additionally, the Fish Right, Eat Right program was created to control illegal fishing and promote best practices in fisheries. Many restaurants, especially in Ambergris Caye, have signed up for the program and have been sourcing seafood responsibly. The program initially targeted restaurants and hotels, but is slated to include cooperatives, fish markets, supermarkets and other seafood purveyors.
|
These early Sheboygan County cheesemakers helped propel Wisconsin cheese past New York cheese: Throwback
Two brothers ignited the Sheboygan County cheesemaking industry with a factory in 1858.
SHEBOYGAN FALLS - Two brothers ignited the cheese industry in Sheboygan County with a farm factory and stand-alone operations in the 19th century.
Before the advent of a cheese factory, cheese was made in farm kitchens. That early cheese product was infrequently sold at market. It had several problems, which included collecting, temperature control, rennet enzyme application, sanitation, refrigeration and aging.
John J. Smith started his cheese "factory" in 1858 by taking in curd produced by his neighbors. Quality control was lacking as every curd-maker did things slightly differently and not to a uniform standard. The individual farmers' wives were many times responsible for curd production, according to information from Katie Riley of the Sheboygan County Historical Research Center.
Smith, after collecting the curds, would finish the cheese-making process. It was said that he had improvised facilities in his house, an existing shed or a corner in his barn. He packed the cheese in barrels for shipment.
Making cheese was one thing, but selling that early Sheboygan County cheese met stiff resistance in Chicago. In fall 1858, Smith barreled cheese and took the barrels to Chicago to sell. Because early Wisconsin cheese was of poor quality compared with the superior New York cheese, dealers refused to buy his cheese until Smith paid one of them to sample his cheeses. That entire lot was then sold for 8 cents a pound.
Cheese made in Sheboygan County was starting to mean something.
Smith's brother, Hiram Smith, started a bit later in the cheese business when in 1870 he bought the McKinnon Cheese factory from A.D. DeLand and Manning McKinnon, who had built that factory in 1867. Smith would own the factory for only six years, selling it in 1876 to brothers Ferdinand and Frank Mathers.
The Mathers brothers, who would be two of the first in the county to join the Sheboygan Falls Dairy Board of Trade in 1878, would crank up the reputation for Sheboygan County cheese. They won a gold medal as first prize for their cheese entry at the Wisconsin State Fair in 1879.
The word was getting around that Sheboygan County cheese was something special.
Hiram Smith was a large figure in Sheboygan County and the dairy industry. He was a member of the State Board of Regents for the University of Wisconsin and constantly advocated improvements in the agriculture programs at that institution. Many felt his contributions helped accelerate the growth of the dairy industry in Wisconsin in the 19th century.
With the early groundwork made by these cheese-making pioneers, Sheboygan County became so well known in cheese circles that Plymouth was picked as the location of the National Cheese Exchange, which set the commodity price of bulk cheese, a big deal at the time. The Exchange itself would later move to Green Bay and, in 1997, to the Chicago Mercantile Exchange.
Despite the departure of the Exchange, Plymouth would become known as a truly big cheese.
Today, Plymouth is home to cheese giants such as Sargento Foods, Inc., Sartori, Inc., Masters Gallery Foods, Inc., and Great Lakes Cheese Company, Inc.
RELATED - Sheboygan County’s early settlers flirted with Fourierism
RELATED - Sheboygan Falls site, today a bed and breakfast, was built in 1848
RELATED - Sheboygan County Fair was at one time in Sheboygan Falls
|
Australia's annual plastic consumption produces the same amount of greenhouse gas emissions as 5.7 million cars, analysis released by conservation groups suggests.
- Australia's plastic use is forecast to double within three decades
- A tool's been developed to calculate how it's contributing to greenhouse gas emissions
- It challenges estimates used by governments globally and says emissions are likely to be much higher
A report, commissioned by the Australian Marine Conservation Society (AMCS) and WWF Australia, says skyrocketing levels of plastic use are contributing to global warming and posing a significant threat to our ecosystems and wildlife.
Without any action, the report says emissions produced by Australia's plastic consumption will double by 2050.
It was produced by Blue Environment, a consultancy group with government and corporate clients, and which produces the federal government's annual National Waste Report.
Kate Noble, a policy manager with WWF-Australia, said the report highlighted how important it was to bring Australia's "plastic addiction under control".
"We can't rely on recycling solely to get us out of this mess — we need to drastically cut our plastic use and stop using virgin plastic made from fossil fuels," she said.
"Even if we recycle 100 per cent of the plastic we use, we'll still see emissions double to more than 34 million tonnes annually by 2050."
Associate Professor Nick Florin, a research director at UTS's Institute for Sustainable Futures, said the report's authors should be congratulated for highlighting the link between plastic production and the climate crisis.
"We need to promote more responsible stewardship of products and packaging," he said.
"This includes design for circularity, eliminating hazardous ingredients, using products and packaging more efficiently in use to preserve value, and dramatically increasing the rates of reuse and recycling."
How does using plastic create emissions?
Most plastic used in Australia is extracted and converted from fossil fuels, or created using methods powered by fossil fuels, which emit greenhouse gases.
Shipping and transporting plastic also create emissions.
Then, depending on which method is used, managing plastic waste also generates greenhouse gases.
What did the plastic report measure?
The analysis attempted to quantify the emissions produced over the life cycle of plastics used in Australia during the 2019-2020 financial year.
"We built a model to estimate the greenhouse gas emissions that result from plastic consumption in Australia, right from the time that oil and gas is extracted from the ground … through to how we treat it at the end of its useful life," Ms Noble said.
Its conclusion was that Australia's plastics use accounted for more than 16 million metric tonnes of greenhouse gas emissions in 2020.
The report reached this figure by modelling how much carbon dioxide — a greenhouse gas — was released by producing the plastic, transporting it, and disposing of it.
Plastics made using fossil fuels generated more than double the emissions of recycled plastic, the report said.
It also found dramatic differences in the emissions caused by different waste management methods, including recycling, landfill and incineration.
While recycling generated less carbon dioxide than any other method for dealing with unwanted plastic, it still produced emissions.
Recycling generated about 1,550kg of carbon dioxide for every tonne of plastic recycled. The biggest polluter, incineration, churned out 6,330kg of CO2 per tonne of plastic.
Because the report looked at plastic waste in 2020, it did not take into account more recent state and territory laws designed to phase out single-use plastics.
It found that emissions caused by plastic consumption are higher than previously thought, said Shane Cucow, the campaign director for AMCS.
"A big reason for that is that over the last few years we've had significant amounts of new research showing that methane from gas extraction and fossil fuel extraction is much higher than previously thought," he said.
"It's likely that even [previous modelling looking at individual plastic products] is unfortunately now out of date and that plastic is more emissions intensive than previously shown in those areas."
What can we do about this?
The report has made six recommendations:
- Reduce plastic production over the coming decade
- Rapidly transition to plant-based, recycled or CO2-based plastic and invest in the necessary infrastructure to manage these plastics
- Shift to a 100 per cent renewable energy system for the transport and manufacture of plastics
- Maximise recycling when products are no longer reusable or reparable
- Avoid incinerating plastics
- Support international regulation to reduce plastic consumption and transition to a circular economy
The report says if federal, state and territory governments take these actions, the total emissions caused by plastic consumption could be reduced by more than 70 per cent by 2050.
The federal government has set a target to reuse or recycle all plastic waste by 2040, and last month, along with the states and territories, agreed to develop new waste rules including mandatory packaging design obligations.
A spokesperson for Environment Minister Tanya Plibersek said the government was committed to reaching a circular plastic economy, and voluntary targets and design guidelines weren't working.
"More than 70 per cent of the environmental impacts of an item are locked in at the design stage, before anyone ever purchases a product, and well before reuse or disposal is considered," they said in a statement.
Australia is advocating for a legally-binding global plastics treaty to keep virgin plastic production at sustainable levels, the spokesperson added.
Australian Food and Grocery Council CEO Tanya Barden said plastic containers played an important role in reducing food waste, which also has a considerable environmental impact.
But she said the industry was taking its role in helping to reduce the use of plastic, and therefore emissions, "really seriously".
"We need to reduce our reliance on plastic, especially virgin plastic," she said.
"While recycling can play an important role, there's also a lot that can be done around redesigning packaging."
How achievable are those solutions?
Australia generates more single-use plastic waste per person than any other country except Singapore, according to a previous analysis by the Minderoo Foundation, the philanthropic organisation backed by Andrew and Nicola Forrest.
Just 12 per cent of plastics were recycled in Australia in 2020-21.
So, the report's proposals might be difficult to achieve.
Last year Australia joined a coalition of countries aiming to end plastic waste by 2040.
While environmental advocates say cutting plastic packaging for items such as food is achievable, the transition will be harder for some Australians.
Many people in the disability community rely on single-use plastics where recyclable alternatives wouldn't work. For example, the flexibility of plastic straws is important for some people with disability, who might also struggle to wash alternatives like glass or steel straws.
|
International | An open book
Open-source intelligence is piercing the fog of war in Ukraine
Social-media posts and satellite imagery provide a torrent of data, but can overwhelm and confuse
On May 29th 1982 Robert Fox had just witnessed 36 hours of intense warfare over Goose Green, a remote spot on the Falkland Islands, an archipelago in the South Atlantic then being fought over by Britain and Argentina. It was the decisive battle of the war and it had gone Britain’s way. Mr Fox, then a BBC radio correspondent, was keen to tell listeners. It took him ten hours to get to a satellite phone aboard a warship, he recalls. It took another eight hours to decrypt his text in London. The story was not broadcast for 24 hours. Television journalists had it worse, says Mr Fox. Their shots took ten days to reach home.
When the southern Ukrainian city of Kherson was liberated in November, it took just hours, if not minutes, for the news to flood out. Images circulating on Telegram, a messaging service popular in Russia and Ukraine, showed Ukrainian soldiers strolling into the centre of the city and Ukrainian flags lofted over buildings. A network of amateur analysts on Twitter tracked the Ukrainian advance, almost in real time, by “geo-locating” the images—comparing trees, buildings and other features to satellite imagery on Google Maps and similar services.
The rise of open-source intelligence, OSINT to insiders, has transformed the way that people receive news. In the run-up to war, commercial satellite imagery and video footage of Russian convoys on TikTok, a social-media site, allowed journalists and researchers to corroborate Western claims that Russia was preparing an invasion. OSINT even predicted its onset. Jeffrey Lewis of the Middlebury Institute in California used Google Maps’ road-traffic reports to identify a tell-tale jam on the Russian side of the border at 3:15am on February 24th. “Someone’s on the move”, he tweeted. Less than three hours later Vladimir Putin launched his war.
Satellite imagery still plays a role in tracking the war. During the Kherson offensive, synthetic-aperture radar (SAR) satellites, which can see at night and through clouds, showed Russia building pontoon bridges over the Dnieper river before its retreat from Kherson, boats appearing and disappearing as troops escaped east and, later, Russia’s army building new defensive positions along the M14 highway on the river’s left bank. And when Ukrainian drones struck two air bases deep inside Russia on December 5th, high-resolution satellite images showed the extent of the damage.
[Satellite image, Planet Labs PBC, taken on December 7th, two days after the attack: the Dyagilevo air base in Ryazan, south-east of Moscow, houses some of Russia’s long-range bombers, including Soviet-era Tu-95 and Tu-22M planes and an Il-76 transport plane. Scorch marks and fire suppressant can be seen on the ground where a Tu-22M bomber had been days before. Around ten Tu-22Ms appear to have been moved out of harm’s way, compared with photos taken before the attack.]
But whereas satellites were well-suited to cataloguing Russian battalions laid out neatly in open fields in January, it is harder to capture compelling images of small companies of men dispersed over a wide area and often ensconced in trenches or bunkers. The single most important repository of data during the war has been Telegram.
OSINT analysts scour Telegram channels such as Rybar, an account with over 1m followers, to harvest images of battle, testimony from the front line and the mood among troops. Rybar is not neutral—its founder once worked for the press service of Russia’s defence ministry, and reportedly once had links to Yevgeny Prigozhin, the head of the mercenary Wagner group—but it offers relatively accurate and timely accounts of battlefield movements, including Ukraine’s blitz through Kharkiv in September, and is often critical of Russian policy.
Telegram has become a platform for Russian ultra-nationalists, supportive of the war but dissatisfied with its conduct, to air their grievances against Russia’s military leadership. Popular accounts have circulated images of troops without basic equipment. During the Kherson offensive in early October, one panicked Russian account even used Telegram to make a desperate plea for air support. The first ten years of the Syrian civil war produced video footage running to 40 years, notes Matthew Ford of the Swedish Defence University. In the first 80 days of the Ukraine war, there was ten years’ worth of footage—an order of magnitude more, relative to the length of each conflict.
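As a rough check on that comparison (our own back-of-the-envelope framing, not the article's), converting both quoted figures to hours of footage per day of war shows why "ten years in 80 days" works out to roughly ten times the Syrian rate:

```python
# Back-of-the-envelope check of the "order of magnitude" claim above, using the
# quoted figures: 40 years of footage over 10 years of war in Syria versus
# 10 years of footage in the first 80 days in Ukraine. The per-day framing is
# ours, not the article's.

HOURS_PER_YEAR = 365.25 * 24

syria_hours_per_day = 40 * HOURS_PER_YEAR / (10 * 365.25)    # about 96 hours/day
ukraine_hours_per_day = 10 * HOURS_PER_YEAR / 80             # about 1,100 hours/day

print(f"Syria:   {syria_hours_per_day:,.0f} hours of footage per day of war")
print(f"Ukraine: {ukraine_hours_per_day:,.0f} hours of footage per day of war")
print(f"Ratio:   {ukraine_hours_per_day / syria_hours_per_day:.1f}x")
```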
For armies seeking to maintain operational security, this profusion of data is a nightmare. In 2019, after a series of blunders, Russia passed a law banning soldiers from uploading sensitive photos or videos. It began shutting down railway-tracking websites shortly before the war began, removing a valuable source of data. It has also attempted to obscure patches on soldiers’ uniforms and vehicle markings, to avoid giving away the position of whole units. In October the Kremlin began cracking down on prominent critics on Telegram, such as Igor Girkin, a hardline ex-spook who led Russia’s proxy war in Donbas in 2014. But they remain as garrulous as ever. After at least 89 Russian servicemen—possibly hundreds—were killed by a Ukrainian attack on New Year’s Day in Makiivka, a Russian-occupied town in the Donbas region, Mr Girkin lambasted the incompetence of Russian generals, describing them as “untrainable”.
Nor has Russia staunched the flow of information. “There’s a lot of lessons being learnt very slowly,” says Tom Bullock, an OSINT analyst at Atreides, an intelligence company, “but I think that’s on Telegram, where they know people are looking”. On VKontakte (VK), the Russian equivalent of Facebook, says Mr Bullock, “it’s basically just as bad as it always has been. There’s so many geo-tagged pictures of their bases just floating around at all times.”
This sloppiness can have lethal consequences. In December a Russian volunteer posted photos on VK of forces encamped in a country club in Sahy, an occupied part of Kherson province. His post included a geo-tag of the exact location. Ukrainian missiles later struck it, after which the volunteer posted yet again. This time he uploaded a video showing the extent of the destruction, in effect giving Ukraine a damage assessment from on the ground, noted Rob Lee of King’s College London.
|
The tech gamble in Biden’s new climate rule
The Biden administration is betting big on companies’ ability to snatch up greenhouse gases from power plants before they can warm the Earth.
EPA’s proposal to slash climate pollution from power plants is expected Thursday. It could set limits so stringent that coal- and gas-burning plants must either capture their pollution as it’s released or shut down altogether, as my colleagues have reported in recent weeks.
EPA says carbon capture is a market-ready technology, but only one power plant in the world is using it at scale — in Canada. And energy analysts have concerns about mass deployment, especially when it comes to capturing pollution from natural gas plants, which could be especially expensive, Brian Dabbs, Carlos Anchondo and Christa Marshall write.
Plus, the majority of captured carbon today is injected into oil fields to coax out more crude (a practice known as enhanced oil recovery), leaving environmentalists worried that the technology could perpetuate fossil fuel use.
The administration says it intends for the bulk of future captured carbon to be permanently stored underground — raising other potential environmental concerns, such as earthquakes.
A history: The sole large carbon capture effort operating at a U.S. power plant was the Petra Nova project near Houston. It shut down in 2020 due to a pandemic-related plunge in oil prices. Numerous other projects have been planned over the last 15 years, but none has gone anywhere.
One reason is cost. Plus, there’s no economic penalty for simply letting your carbon dioxide waft into the atmosphere.
“For a long time, it was a difficult industry because you were capturing something that was free to emit. It is always more expensive to capture CO2 than release it,” said Adam Goff, senior vice president for strategy at 8 Rivers Capital, a developer of carbon capture technologies. “There wasn’t really a business case.”
Changing the game: Billions of dollars in last year’s climate law, coupled with EPA’s upcoming rules — not to mention a smattering of pledges from fossil fuel companies to bring their net carbon emissions to zero — could change all that.
The U.S. has more than a dozen proposed capture projects, including several with target start dates before 2030 — though none have started construction yet.
Concerns remain: The challenge for carbon capture developers is not just the cost of retrofitting power plants with the technology, but also transporting the greenhouse gas — typically through a pipeline — to its ultimate resting place.
Siting pipelines is a massive regulatory undertaking, and communities are already resisting the idea of having carbon stored nearby. Landowners, also hesitant, are pushing for greater federal accountability in case injecting millions of tons of CO2 into the earth fouls their groundwater, triggers earthquakes or otherwise causes problems.
It’s Wednesday — thank you for tuning in to POLITICO’s Power Switch. I’m your host, Arianna Skibell. Power Switch is brought to you by the journalists behind E&E News and POLITICO Energy. Send your tips, comments, questions to [email protected].
POLITICO Energy Summit: Register here to attend POLITICO’s first-ever energy summit exploring how the U.S. is positioning itself in a complicated energy future and featuring guests such as Energy Secretary Jennifer Granholm and White House climate adviser Ali Zaidi.
Today in POLITICO Energy’s podcast: Allison Prang breaks down why President Joe Biden’s carbon capture plan is facing pushback from some environmental justice groups.
The Manchin Show: Revenge
Sen. Joe Manchin said he will block all of Biden’s pending nominees to EPA until the federal body agrees to “halt their government overreach,” writes Emma Dumain.
The West Virginia Democrat and chair of the Senate Energy and Natural Resources Committee is particularly frustrated with a recent EPA proposal to sharply reduce tailpipe emissions and hasten the transition to electric vehicles.
Manchin’s protest puts at least two agency picks in limbo in a narrowly divided Senate.
Gauge the voters
Republicans are casting Biden’s spate of climate rules as an attack on the fossil fuel industry that could hurt energy reliability, in the hopes that voters will turn against Democrats in upcoming Senate elections, writes Josh Siegel.
The bet hinges on the premise that voters’ sentiment will match the electoral mood of 2010, when Republicans wiped out Democrats up and down the ballot by charging that then-President Barack Obama was waging a “war on coal.” But many Democrats say the GOP is the party out of step with voters who want more action on climate change.
Huge price increases for flood insurance could cause hundreds of thousands of homeowners to cancel their policies and risk financial ruin, writes Thomas Frank.
FEMA, which runs the United States’ largest flood insurance program, recently published projections showing that its premiums are on track to jump by thousands of dollars a year in some areas.
In Other News
Energy futures: Microsoft is betting that fusion power is closer than many think.
A different take: The nationalist dark side of Biden’s climate policies.
A showcase of some of our best subscriber content.
New research has revealed an unexpected consequence of climate change. Some crabs are losing their sense of smell.
The Energy Department has launched a program to speed construction of high-capacity power lines linking urban areas to prime solar and wind energy resources.
The Senate Commerce Committee approved a bipartisan rail safety bill Wednesday, moving forward with just two Republican senators voting against the committee’s top Republican.
That’s it for today, folks! Thanks for reading.
|
Lake Maracaibo is being depleted, both by the exploitation of various commodities, from oil and coal to crab and shrimp, and by the construction of fluvial-lacustrine infrastructure in the service of continental extractivist megaprojects; all this, of course, with the complicit permissiveness of public institutions, which is to say, through the State's failure to meet its constitutional obligations on environmental matters.
Oil spills are daily events for the industry, and the resulting loss of environmental quality is a daily tragedy for millions of inhabitants. A decade ago, Petróleos de Venezuela, S.A. (Pdvsa) registered more than 9,000 spills annually in the area; today, even with crude oil production reduced by 80%, these accidents remain frequent news.
The origins of this story date back to the beginning of the 20th century, but Zulia’s socio-environmental tragedy worsened after the exploitation of all the resources of the Maracaibo Lake basin was mapped out in the Study for the Rational Use of the Natural Resources of the Zulia Region, sponsored by the Organization of American States in the mid-1970s. Since then, the majestic natural formations of the lake basin have suffered wounds that have yet to heal, and the population suffers from the ecocides it has witnessed, as well as from health problems and the loss of livelihoods in the rivers and the lake.
The infamous industry that extracts, transports and ships 8 million metric tons of coal per year through the lake ports of Mara, La Ensenada and El Bajo leaves a footprint of coal dust that not only causes fatal accidents on the roads but also penetrates the lungs, causing respiratory disease among mine workers and the inhabitants living alongside the roads and ports. The mining company has repeatedly expanded the Norte and Paso Diablo mines in the northwest of Zulia state without regard for the environmental impact, affecting the basins of the Socuy, Cachirí and Guasare rivers, which supply the two reservoirs that store drinking water for 3.2 million people in Zulia.
Another actor in this story is the multinational Shell, which held the coal exploitation rights for about 20 years. Nowadays, although the characters have changed, the story has not changed much: the mines are exploited by a state-owned company (Carbozulia), but the environmental damage continues, leaving a toll of affected communities and a few mourners, organized into environmental groups.
The diagnosis of the lake points to aging, contamination by residential and industrial wastewater and agrochemicals, and, in the last 10 years, intensive shrimp and crab fishing. Some researchers put the annual potential of these two fisheries at 6,100 and 11,500 metric tons, respectively. The model is the same as always: extractivism, meaning precarious working conditions for fishermen and breeders, low pay for the workforce, production destined for export, and juicy profits for private producers.
In this way, the Maracaibo Lake basin has been turned into a zone of environmental sacrifice and, by extension, a zone of social and economic sacrifice, with no account taken of worsening social inequalities and no measures taken to prevent or reverse the resulting environmental crisis.
|
Listen to one of the largest trees in the world
If you journey to Fishlake National Forest in Utah, you'll be surrounded by a high-elevation behemoth.
It's one of the largest life forms on the planet: a quaking aspen so colossal it has a name — Pando, which is Latin for "I spread."
You might mistake Pando for a swath of forest of thousands of individual trees. But in reality, it's all one tree connected by a single root system.
In a sense, Pando "redefines trees," says Lance Oditt, who directs the nonprofit Friends of Pando.
What started as one seed now spans 80 football fields and weighs some 6,000 tons. "They look like tree trunks to us, but stems is the proper scientific term," he says. "They go 80 feet into the sky."
Oditt is always searching for better ways to get his head around a tree this enormous. And he started wondering: "What would happen if we asked a sound conservationist to record the tree? What could a geologist, for example, learn from that, or a wildlife biologist?"
So about a year ago, Oditt invited sound artist Jeff Rice to visit Pando and record the tree.
"I just dove in and started recording everything I could in any way that I could," says Rice, who made his pilgrimage to the mighty aspen last July.
Rice says that sound recordings aren't just works of art.
"They also are a record of the place in time, the species and the health of the environment," he says. "You can use these recordings as a baseline as the environment changes."
In mid-summer, the aspen's leaves are pretty much at their largest. "And there's just a really nice shimmering quality to Pando when you walk through it," says Rice. "It's like a presence when the wind blows."
That's what Rice wanted to capture first — the sound of those bright lime green leaves fluttering in the wind.
He attached little contact microphones to individual leaves and was treated to this sound in return:
The leaves had "this percussive quality," he says. "And I knew that all of these vibrating leaves would create a significant amount of vibration within the tree."
Rice then set out to capture that tree-wide vibration in the midst of a thunderstorm. "I was hunkered down and huddling, trying to stay out of the lightning. When those storms come through Pando, they're pretty big. They're pretty dramatic."
All that wind blowing through the innumerable leaves offered Rice a sonic opportunity to record the tree.
"We found this incredible opening in one of the [stems] that I've dubbed the Pando portal," he says.
Into that portal, he lowered a mic until it was touching the massive tangle of roots below.
This was the result:
"As soon as the wind would blow and the leaves would start to vibrate," Rice says, "you would hear this amazing low rumble."
The vibrations, he says, were passing through Pando's branches and trunks into the ground.
"It's almost like the whole Earth is vibrating," says Rice. "It just emphasizes the power of all of these trembling leaves, the connectedness, I think, of this as a single organism."
He also captured the bark and, finally, the landscape.
Rice and Oditt are presenting these recordings at this week's Acoustical Society of America meeting in Chicago.
"This is the song of this ecosystem, this tree," says Oditt. "So now we know sound is another way we can understand the tree."
In fact, the recordings have given Oditt research ideas, like using sound to map Pando's labyrinth of roots. But above all, they're a sonic snapshot of this leviathan at this moment in time.
"We have to keep in mind," says Oditt, "that it's been changing shape and form for like 9000 years. I call it the David Bowie problem. It's constantly reinventing itself!"
And now, we've managed to turn up the volume to hear Pando as the baritone soloist it's always been.
Copyright 2023 NPR. To see more, visit https://www.npr.org.
|
For more than a year, we’ve been hearing a lot from lawmakers and officials about funding coming to Colorado from the bipartisan infrastructure bill, officially known as the Infrastructure Investment and Jobs Act.
But as Coloradans wait for those dollars to turn into new projects, many wonder what the real impact will be for the state. And it’s prompted some questions to CPR’s Colorado Wonders project, like why aren’t we seeing improvements already — like, say filling those pesky potholes? And how much money will the Western Slope get?
To get these answers, we need to go back to where it all started.
What does the law do?
The Infrastructure Investment and Jobs Act was passed by Congress and signed into law by President Joe Biden in November 2021. (While it was bipartisan in Washington, D.C., the Colorado delegation split along party lines. The state’s Democrats voted for the bill, while Republicans voted no.)
The bill reauthorized the Surface Transportation Act for five years, which includes funding for highway programs. On top of that, it had $550 billion in new infrastructure spending, including an additional $110 billion for traditional infrastructure like roads and bridges, $66 billion for rail, $39 billion for public transit, $65 billion for broadband, $55 billion for clean drinking water, and more.
So far, more than $3 billion from the law has been announced for Colorado, according to the White House, with more than a billion actually awarded over the last two years.
“The ultimate goal is progress. We want people to feel that their future is better, that they have something they can look forward to,” said Sen. John Hickenlooper. He was part of the bipartisan group of senators that negotiated the bill.
And as Hickenlooper explained it, that money is for all sorts of infrastructure projects, both big and small, traditional and more modern — such as broadband, electric vehicles, climate resilience, and clean drinking water.
“I think some people traditionally think of infrastructure as only roads and bridges. But a fish passage is infrastructure,” said Winnie Stachelberg, the Interior Department’s infrastructure coordinator. “Plugging and sealing orphaned oil and gas wells or abandoned mines, that's infrastructure too.”
She describes projects like these as “natural infrastructure,” including millions to shore up the landscapes of Colorado by restoring watersheds and building resilience after wildfires. “It's a broad definition of infrastructure that includes the natural infrastructure, not just built infrastructures.”
The money is coming in two main ways. The first is formula funding, which means each state gets a fixed portion of the dollars. The other is grant funding, which is something states and local governments compete for.
“The way we structured the bipartisan infrastructure bill was to deliver resources to states and to — as large an extent as possible — let them play the primary role in allocating and creating priorities,” Hickenlooper explained.
Colorado has eyes on the prize
“Our goal is to take advantage of every opportunity that becomes available to the state,” said Meridith Marshall, Colorado’s Infrastructure Coordinator. “We have eyes on every single program that we're interested in pursuing and how we can best take advantage of that opportunity.”
Part of Marshall’s job is coordinating with state and local officials, especially when it comes to grant funding, to make the strongest possible case.
Many see this infrastructure funding as a once-in-a-generation chance.
“We're shooting for as much as we can possibly get,” said Shoshana Lew, executive director for the Colorado Department of Transportation, with a laugh. “We're gonna shoot for the moon.”
In the first two years of the bill’s rollout, CDOT has gotten about $700 million annually, but the state is really eyeing the grant funding.
“What we're doing is really making our best effort with projects that are ready to go in the 10-year plan, to make the case to the federal partners why they need to give us competitive grants to get those projects done even faster,” she said.
The 10-year plan is a statewide list of priority transportation projects, such as I-70’s Floyd Hill project, which got a $100 million grant through the bipartisan infrastructure bill.
Dan Gibbs, executive director of the Colorado Department of Natural Resources, is focused on the natural infrastructure needs of the state “and the amount of money that it takes to really make sure that we are taking care of our lands for future generations.”
When it comes to Colorado forests, Gibbs said, “we know in the most critical locations in the state of Colorado that there's about a 700 million need for Colorado. But when you look at all of our lands, it’s about a couple billion dollars.”
He added that the state is expecting about $1.2 billion thus far for watersheds and water infrastructure.
One thing state officials all mentioned is that how the money is spent will be done through collaboration and conversations with local stakeholders.
As Gibbs explained it, many of the projects are coming from the ground up. “It's very localized, it's bubbling up from that local community and the state is becoming involved.”
Getting local projects across the finish line
Where all the talk of infrastructure funding becomes a reality is in local communities, like Alamosa.
The town received a $4.7 million grant from the infrastructure law to build a 320-foot pedestrian bridge connecting neighborhoods to the trail system along the Rio Grande River.
“It's taken over two decades for us to get to this point,” said Mayor Ty Coleman, “of actually making the dream of having this pedestrian bridge and the connectivity a reality in our San Luis Valley community.”
Part of what made this funding a game changer for the town is that the project is 100 percent covered by the law, and did not require any local matching funds.
Sure, it may not be filling potholes, as some are waiting for, but Coleman said it’s going to support a community value: increased access to trails and parks.
But it’s also an example of the main disconnect with infrastructure funding: people won’t see the impact right away. The money was announced, but groundbreaking isn’t expected to happen until 2026.
But once it’s built, Coleman said, “They'll see these federal dollars every time they wake up and walk around our community [unlike] some of the internal infrastructure things you never see. You never see where those monies and dollars are going. But this is something that's visual, which is a constant positive reminder for people in our community.”
He feels Alamosa got lucky in a way. He said his team at the city was proactive about going after the infrastructure dollars and got encouragement from Sen. Michael Bennet and his office.
Not all rural communities have the staff to both do their day jobs and fill out grant applications.
That brings us to another question asked by a listener: how will that money be distributed across the state? And how much will the Western Slope get? So far, more than $79 million has been allocated for Western Slope communities for all different types of infrastructure.
Glenwood Springs Mayor Jonathan Godes said as soon as his community heard about the infrastructure law they decided to go for it. He’s glad that congress made this trillion-dollar-plus investment “to help these projects that are so desperate to our communities, that we've been working on for so many decades to try to accomplish. I hope that it pushes a lot of these projects… over the finish line.”
And like Coleman, Godes hopes that the distribution of dollars will end up being “equitable to rural and urban Colorado.”
Glenwood Springs missed out on one infrastructure grant last year. Godes said the second time they thought regionally, teamed with other towns, and aimed for a project that aligns with the Biden administration’s goals. The end result was more than $24 million for the Westward Three project to expand Bustang service between Glenwood Springs and Grand Junction.
Godes is particularly excited because infrastructure investment is something Congress had put off for decades, despite past administrations’ attempts.
He added it’s not necessarily about dreaming big, or building new shiny things. Much of the money is just needed to catch up and take care of infrastructure investments made in the past.
Godes compares the situation to having a house with a foundation problem.
“It keeps getting a little bit worse, but I just can't ever find that money to do it because it just keeps getting more and more expensive and I don't know what to do and I'm feeling helpless,” he explained.
Even bigger dreams ... and potholes, too
But with three more years left of funding, there are plenty of bigger dreams out there, like perhaps working to make Cottonwood Pass a year-round alternative to I-70, a project that could cost hundreds of millions of dollars.
Even if the state doesn’t end up going that big, Godes said the law makes possible projects that “could unlock areas that could provide building paths for affordable housing. It could provide for safe drinking water. It could provide watershed restoration so that we have clean drinking water.”
And all those projects are also something politicians will continue to campaign on well into the future, as the members of Congress who voted for and supported the infrastructure law attend groundbreakings and ribbon cuttings in the years to come.
As for those pesky potholes, yes some of the infrastructure funding may trickle down to help cover them too. But Sen. Hickenlooper, a former Denver mayor who heard a lot about potholes in that job, said the best thing to do isn’t to wait for federal help, but instead, just contact your local government and advocate for getting them fixed.
CPR’s Andrew Kenney contributed to the data visualizations for this story.
For more than a year, we’ve been hearing a lot from lawmakers and officials about funding coming to Colorado from the bipartisan infrastructure bill, officially known as the Infrastructure Investment and Jobs Act.
But as Coloradans wait for those dollars to turn into new projects, many wonder what the real impact will be for the state. And it’s prompted some questions to CPR’s Colorado Wonders project, like why aren’t we seeing improvements already — like, say filling those pesky potholes? And how much money will the Western Slope get?
To get these answers, we need to go back to where it all started.
What does the law do?
The Infrastructure Investment and Jobs Act was passed by Congress and signed into law by President Joe Biden in November 2021. (While it was bipartisan in Washington, D.C., the Colorado delegation split along party lines. The state’s Democrats voted for the bill, while Republicans voted no.)
The bill reauthorized the Surface Transportation Act for five years, which includes funding for highway programs. On top of that, it had $550 billion in new infrastructure spending, including an additional $110 billion for traditional infrastructure like roads and bridges, $66 billion for rail, $39 billion for public transit, $65 billion for broadband, $55 billion for clean drinking water, and more.
So far, more than $3 billion from the law have been announced for Colorado, according to the White House, with more than a billion actually awarded over the last two years.
“The ultimate goal is progress. We want people to feel that their future is better, that they have something they can look forward to,” said Sen. John Hickenlooper. He was part of the bipartisan group of senators that negotiated the bill.
And as Hickenlooper explained it, that money is for all sorts of infrastructure projects, both big and small, traditional and more modern — such as broadband, electric vehicles, climate resilience, and clean drinking water.
“I think some people traditionally think of infrastructure as only roads and bridges. But a fish passage is infrastructure,” said Winne Stachelberg, the Interior Department’s infrastructure coordinator. “Plugging and sealing orphaned oil and gas wells or abandoned mines, that's infrastructure too.”
She describes projects like these as “natural infrastructure,” including millions to shore up the landscapes of Colorado by restoring watersheds and building resilience after wildfires. “It's a broad definition of infrastructure that includes the natural infrastructure,
|
not just built infrastructures.”
The money is coming in two main ways. The first is formula funding, which means each state gets a fixed portion of the dollars. The other is grant funding, which is something states and local governments compete for.
“The way we structured the bipartisan infrastructure bill was to deliver resources to states and to — as large an extent as possible — let them play the primary role in allocating and creating priorities,” Hickenlooper explained.
Colorado has eyes on the prize
“Our goal is to take advantage of every opportunity that becomes available to the state,” said Meridith Marshall, Colorado’s Infrastructure Coordinator. “We have eyes on every single program that we're interested in pursuing and how we can best take advantage of that opportunity.”
Part of Marshall’s job is coordinating with state and local officials, especially when it comes to grant funding, to make the strongest possible case.
Many see this infrastructure funding as a once-in-a-generation chance.
“We're shooting for as much as we can possibly get,” said Shoshana Lew, executive director for the Colorado Department of Transportation, with a laugh. “We're gonna shoot for the moon.”
In the first two years of the bill’s rollout, CDOT has gotten about $700 million annually, but the state is really eyeing the grant funding.
“What we're doing is really making our best effort with projects that are ready to go in the 10-year plan, to make the case to the federal partners why they need to give us competitive grants to get those projects done even faster,” she said.
The 10-year plan is a statewide list of priority transportation projects, such as I-70’s Floyd Hill project, which got a $100 million grant through the bipartisan infrastructure bill.
Dan Gibbs, executive director of the Colorado Department of Natural Resources, is focused on the natural infrastructure needs of the state “and the amount of money that it takes to really make sure that we are taking care of our lands for future generations.”
When it comes to Colorado forests, Gibbs said, “we know in the most critical locations in the state of Colorado that there's about a $700 million need for Colorado. But when you look at all of our lands, it’s about a couple billion dollars.”
He added that the state expects about $1.2 billion for watersheds and water infrastructure.
One thing state officials all mentioned is that decisions about how the money is spent will be made through collaboration and conversations with local stakeholders.
As Gibbs explained it, many of the projects are coming from the ground up. “It's very localized, it's bubbling up from that local community and the state is becoming involved.”
Getting local projects across the finish line
Where all the talk of infrastructure funding becomes a reality is in local communities, like Alamosa.
The town received a $4.7 million grant from the infrastructure law to build a 320-foot pedestrian bridge connecting neighborhoods to the trail system along the Rio Grande River.
“It's taken over two decades for us to get to this point,” said Mayor Ty Coleman, “of actually making the dream of having this pedestrian bridge and the connectivity a reality in our San Luis Valley community.”
Part of what made this funding a game changer for the town is that the project is 100 percent covered by the law, and did not require any local matching funds.
Sure, it may not be filling potholes, as some are waiting for, but Coleman said it’s going to support a community value: increased access to trails and parks.
But it’s also an example of the main disconnect with infrastructure funding: people won’t see the impact right away. The money was announced, but groundbreaking isn’t expected to happen until 2026.
But once it’s built, Coleman said, “They'll see these federal dollars every time they wake up and walk around our community [unlike] some of the internal infrastructure things you never see. You never see where those monies and dollars are going. But this is something that's visual, which is a constant positive reminder for people in our community.”
He feels Alamosa got lucky in a way. He said his team at the city was proactive about going after the infrastructure dollars and got encouragement from Sen. Michael Bennet and his office.
Not all rural communities have the staff to both do their day jobs and fill out grant applications.
That brings us to another question asked by a listener: How will that money be distributed across the state? And how much will the Western Slope get? So far, more than $79 million has been allocated for Western Slope communities for all different types of infrastructure.
Glenwood Springs Mayor Jonathan Godes said as soon as his community heard about the infrastructure law, they decided to go for it. He’s glad that Congress made this trillion-dollar-plus investment “to help these projects that are so desperate to our communities, that we've been working on for so many decades to try to accomplish. I hope that it pushes a lot of these projects… over the finish line.”
And like Coleman, Godes hopes that the distribution of dollars will end up being “equitable to rural and urban Colorado.”
Glenwood Springs missed out on one infrastructure grant last year. Godes said the second time they thought regionally, teamed with other towns, and aimed for a project that aligns with the Biden administration’s goals. The end result was more than $24 million for the Westward Three project to expand Bustang service between Glenwood Springs and Grand Junction.
Godes is particularly excited because infrastructure investment is something Congress had put off for decades, despite past administrations’ attempts.
He added it’s not necessarily about dreaming big, or building new shiny things. Much of the money is just needed to catch up and take care of infrastructure investments made in the past.
Godes compares the situation to having a house with a foundation problem.
“It keeps getting a little bit worse, but I just can't ever find that money to do it because it just keeps getting more and more expensive and I don't know what to do and I'm feeling helpless,” he explained.
Even bigger dreams ... and potholes, too
But with three more years left of funding, there are plenty of bigger dreams out there, like perhaps working to make Cottonwood Pass a year-round alternative to I-70, a project that could cost hundreds of millions of dollars.
Even if the state doesn’t end up going that big, Godes said the law makes possible projects that “could unlock areas that could provide building paths for affordable housing. It could provide for safe drinking water. It could provide watershed restoration so that we have clean drinking water.”
And all those projects are also something politicians will continue to campaign on well into the future, as the members of Congress who voted for and supported the infrastructure law attend groundbreakings and ribbon cuttings in the years to come.
As for those pesky potholes, yes some of the infrastructure funding may trickle down to help cover them too. But Sen. Hickenlooper, a former Denver mayor who heard a lot about potholes in that job, said the best thing to do isn’t to wait for federal help, but instead, just contact your local government and advocate for getting them fixed.
CPR’s Andrew Kenney contributed to the data visualizations for this story.
Juneteenth 2023: Western Massachusetts residents mark the day
Cities and towns across western Massachusetts commemorated Juneteenth Monday with events including a Black small business and artist vendor fair at the Wistariahurst Museum in Holyoke.
Juneteenth, a day marking the emancipation of enslaved people, became a national holiday in 2021.
Chelvanaya Gabriel is an artist and Holyoke resident. They participated in the vendor fair and reflected on what the day signifies for many Black Americans.
"Juneteenth is complicated, of course, but what it means to me is celebrating being with my QTBIPOC [Queer, Trans, Black, Indigenous, People of Color] family and celebrating the ways in which we are free. And also remembering the ways in which we are not, the ways in which we haven't been historically," they said.
With her jewelry displayed at a booth on the museum lawn, Trudy Monson, a lifelong Holyoke resident, discussed her family's history with enslavement.
"My dad was actually born in Marion, Alabama, on the plantation his grandfather was a slave on. We never knew that until we were almost teenagers. He never really shared it. We didn't talk about it," she said.
Monson said her father ended up becoming a teacher at John J. Lynch Middle School in Holyoke and then a professor at Holyoke Community College.
"Education was really important to him," she said, adding that Juneteenth is an opportunity for people to get educated on the history of enslavement in America.
"It's slow, but they're learning. It took me a while before I learned about it, so I'm hoping by doing stuff like this, other people will learn about it also," she said referring to Juneteenth events in the region.
Danielle Winters, an artist, musician and art teacher in Springfield, had several of her art pieces for sale at the vendor fair. She said her work is inspired by her experience as a young, Black, queer person.
"All of it for me comes from a place of existing in this matrix sort of world where they tell us that things are different from day one of slavery, when it's the exact same, just a different coat of paint," she said. "I deal a lot with feeling like... I'm kind of alone in the matrix. So it's really nice to have events like this and connect with other Black people who are having similar experiences and just find community, so you can stay sane."
Other activities, ranging from a concert at Symphony Hall in Springfield to a walking tour organized by the David Ruggles Center in Florence, were also held in the region.
'It feels like I'm not crazy.' Gardeners aren't surprised as USDA updates key map
A newly updated government map has many of the nation's gardeners rushing online, Googling what new plants they can grow in their mostly warming regions.
It's called the U.S. Department of Agriculture's "plant hardiness zone map," and it's the national standard for gardeners and growers to figure out which plants are most likely to survive the coldest winter temperatures in their location.
This week the map got its first update in more than a decade, and the outlook for many gardens looks warmer. The 2023 map is about 2.5 degrees Fahrenheit warmer than the 2012 map across the contiguous U.S., says Chris Daly, director of the PRISM Climate Group at Oregon State University that jointly developed the map with the USDA.
Daly says the new map means about half the country has shifted into a new half zone and half hasn't. In some locations, people may find they can grow new types of flowers, fruits, vegetables and plants.
Many of the nation's gardeners are not surprised by the change.
"I have been stating all year long, 'This needs updating!'," says Megan London, a gardening consultant in Hot Springs, Arkansas,in a video she posted on Facebook. London has been gardening for 26-years, and she's seen her region warming.
In the new map, London's region in central Arkansas has moved from zone 7b to zone 8a. What that means for her is that she's now considering growing kumquats, mandarin oranges, and shampoo ginger, a tropical plant.
But London says that the excitement she and other gardeners have to grow new things is tempered by another feeling: concern about human-caused climate change.
"We're excited, but in the back of our minds, we're also a little wary," London says. "In the back of our mind, we're like, ah, that means things are warming up. So what does this mean in the long run?"
The scientific community overwhelmingly agrees that humans burning fossil fuels like oil, coal and gas is the primary driver of global warming. The summer of 2023 was the hottest meteorological summer on record for the northern hemisphere, according to the National Oceanic and Atmospheric Administration.
Daly says he is hesitant to explicitly attribute the specific changes from the 2012 map to the 2023 map to climate change because of the volatility of the key statistic they used to create this map. They were mapping "the coldest night of the year, each year, over the past 30 years", Daly says, and it's a highly variable figure.
In an email, a press officer for the USDA says, "Changes to plant hardiness zones are not necessarily reflective of global climate change because of the highly variable nature of the extreme minimum temperature of the year."
But Daly says, in the big picture, climate change is playing a role in changing what grows where in the US: "Over the long run, we will expect to see a slow shifting northward of zones as climate change takes hold."
Still, for gardeners like Rachel Patterson, in Port St. Joe, Florida, the updated USDA map showing a warming region is validating, if not comforting. "It feels like I'm not crazy," she says.
Patterson moved to her new community two years ago to help rebuild after a hurricane. She now gardens with her three-year-old and his wheelbarrow, and has seen the impacts of climate change in her Florida gardening community.
"The sweet little grannies here are just heartbroken, they can't grow their tomatoes," she says, "It's so much hotter, the tomatoes burn."
Patterson has been helping her community adapt to the heat by planting varieties of heirloom tomatoes that are more resilient to fungi that spread more rapidly in warmer climates.
She says the updated map is a reminder of the need for climate action: "It's just going to keep getting hotter. So the government has to make policy changes to slow climate change down."
Copyright 2023 NPR. To see more, visit https://www.npr.org.
Trains are becoming less safe. Why the Ohio derailment disaster could happen more often
- Norfolk Southern, the company behind the Ohio chemical spill, fought against a new U.S. Department of Transportation safety rule that may have helped limit the impact of this month's derailment.
- As railroad operators have faced more competition with long-haul truckers, the major companies have worked to decrease costs, including by cutting the workforce.
- Overall train length and weight have both grown over the last decade partly in an effort by companies to be more efficient. But, when an emergency occurs, stopping quickly with heavier, longer trains is far more difficult.
In 2013, a train derailment and subsequent fire in Lac-Mégantic, Quebec, killed 47 people and required all but three downtown buildings to be demolished for safety reasons. The following year, a derailment in Casselton, North Dakota, spilled nearly 500,000 gallons of crude oil and caused $13.5 million in damage, prompting the Obama administration to push for a new safety rule to govern the transportation of hazardous materials, avoid environmental disasters and save lives.
The effort to create a new safety rule was fought by industry lobbyists, including Norfolk Southern Corp., the Atlanta-based company whose train derailed Feb. 3 in eastern Ohio and spilled chemicals, leaving residents in East Palestine worried about their air, soil and water quality.
The safety rule, issued in 2015, required electronically controlled brakes – which apply braking simultaneously across a train rather than railcar by railcar over a span of seconds – to be installed by 2023. However, the rule was narrowly crafted and only applied to certain “high-hazard flammable trains” carrying at least 20 consecutive loaded cars filled with liquids such as crude oil.
The Trump administration repealed the brakes requirement three years later, stating that its cost exceeded its benefits.
Efforts to reduce costs, including lobbying against costly regulation, increasing train lengths, reducing inspection times and making major cuts to the railroad workforce have made trains less safe, labor representatives and industry experts told USA TODAY. That has increased the potential for accidents like the one in Ohio to become more common.
Still, in general, major derailments leading to public evacuations, chemical spills or loss of life are relatively rare compared with the vast amounts of hazmat cargo railroads transport.
Had industry lobbying interests not prevailed on the 2015 rule, the Norfolk Southern Railway train involved in the Feb. 3 derailment may have been equipped with the better braking system, shown in studies to reduce the size of a derailment pile up when emergency braking is applied.
"ECP brakes would have avoided that monster pile up behind the derailed car," said Steven Ditmeyer, a former senior official at the Federal Railroad Administration. "In fact, depending on when the crew got the (error) notice from the wayside detector, applying the ECP brakes would have stopped everything very quickly.
"So I think it would have helped."
Norfolk Southern referred questions regarding lobbying against ECP brakes to the industry group the Association of American Railroads for comment.
Association of American Railroads spokeswoman Jessica Kahanek said in an emailed statement that several railroad operators have tested ECP brakes and found them to have a "significant" failure rate and lengthy repair time that makes them impractical.
When such electronically controlled brakes fail, she said, trains become immovable and it can cause major disruptions. So railroads instead space locomotives throughout a train, which can more quickly distribute a brake signal among cars than a single locomotive can, Kahanek said.
In a 2017 report, the National Academy of Sciences said it was unable to "make a conclusive statement about the emergency performance of ECP brakes" compared with other braking systems based on the results of the provided DOT testing and analysis.
What is precision scheduled railroading?
The rule-making saga and its ultimate repeal are emblematic of the politically and financially difficult task of making improvements to the nation’s railroad system, which has left the industry mostly stuck with post-Civil War-era technology in its braking systems, even as other new technologies meant to streamline operations are adopted.
Railroad operators have faced growing competition from long-haul truckers for transporting goods, so much so that over recent decades their executives have instituted a business philosophy known as precision scheduled railroading. The approach focuses on maximizing the use of each train by the individual carload, and it has led to longer, heavier trains crisscrossing the nation’s railroad tracks in the name of efficiency and better shareholder returns.
But heavier and longer trains also mean that when something goes awry, the consequences can be far more catastrophic.
"If you have a very small error of some sort, most often a mechanical failure, you can all of a sudden have a very expensive derailment," said Karl Ziebarth, a longtime transportation consultant who has contracted for the Federal Railroad Administration.
"All of these things (industry trends) together show the pursuit of lower operating ratio (or costs) may spin off in other directions and cause catastrophic failures," Ziebarth added.
The Norfolk Southern train that derailed in East Palestine was carrying flammable liquids, benzene and butyl acrylates, according to the U.S. Environmental Protection Agency. The tank car carrying butyl acrylates was breached and the entire load was lost in the spill and subsequent fire, according to EPA documentation.
The train also had five derailed tank cars of vinyl chloride, a flammable gas not captured by the Obama-era rule despite efforts by the National Transportation Safety Board at the time to have the agency adopt a broader definition for high-hazard flammable trains that would include those carrying flammable gasses.
The derailed Norfolk Southern train in Ohio resulted in large plumes of black smoke over the rural 5,000-person community, as crews did a “controlled release” of the hazardous materials on board to avoid an explosion. Residents were forced to leave their homes for days, and upon returning, complained about the smell in the air, burning in their eyes, and sick animals. Environmental authorities continue to monitor the air quality, and residents and business owners have banded together in a federal class-action lawsuit accusing Norfolk Southern of negligence.
In a Norfolk Southern 2015 lobbying disclosure, the company noted that it lobbied both Congress and the executive agencies working on the Department of Transportation rule, and “opposed additional speed limitations and requiring ECP brakes.”
Norfolk Southern’s vice president of government relations, Rudy Husband, told Pennsylvania lawmakers in June 2015 that while the rail industry would comply with the new rule, it has “serious concerns about the ECP brake requirements and the potential adverse impacts on the fluidity of the national freight network.”
'Very long trains' emerge amid efforts to increase profits
In the company’s 2021 annual report, Norfolk Southern Corp. told investors it had concluded its three-year plan to transform into a more “innovative and efficient railroad,” reaching record levels of productivity across its operations, including increasing average train weight by 21% and train length by 20%.
The Atlanta-based company’s railway subsidiary operates across 22 states and Washington, D.C., but it’s not the only company with trains that have grown in length.
Average train lengths in 2017 were between 1.2 and 1.4 miles, according to data provided by two major railroads to the U.S. Government Accountability Office, an increase of 25% since 2008. And the Association of American Railroads found that most trains have grown by roughly 2,700 feet, or 26 additional cars per train, over the past decade.
Norfolk Southern’s train in Ohio, at roughly 150 cars long, stretched nearly 1.9 miles, the company said. Preliminary indications are that a wheel bearing in the final stage of overheat failure occurred moments before the derailment and may have caused the crash, the NTSB said Tuesday. The train’s crew received an alarm from a wayside defect detector – a sensor often integrated into railroad tracks to detect problems along the way – shortly before the derailment indicating a mechanical issue and then an emergency brake application was initiated, according to the NTSB.
The labor union, the Brotherhood of Locomotive Engineers and Trainmen, has noted that very long trains can lead to interruptions in radio communications with crew members or wayside defect detectors. There are no regulatory standards for wayside detectors. The labor union also noted in a presentation last month that very long trains can impact braking performance, decrease time for thorough inspections and increase the likelihood of catastrophic derailments.
The Federal Railroad Administration does not place limits on freight train length, but the agency states in documents that “existing safety issues may be exacerbated as train length continues,” including insufficient time for human inspection of rail cars, losing communication with equipment and people, and wearing out equipment more quickly.
The National Academies of Sciences, Engineering, and Medicine is now studying the impacts of trains longer than 7,500 feet, with federal officials looking to see if new regulations are necessary. That study is expected to be completed in November, according to the Federal Railroad Administration.
Longer, heavier trains make it harder to brake in an emergency
When a train using conventional air brakes tries to stop, the air pressure signal is sent sequentially at a speed slightly slower than sound from railcar to railcar, generating increasing amounts of “in-train forces” because of individual cars pushing and pulling against one another, as cars at the front of a train begin to reduce speed before cars at the back.
The longer the train, the more difficult it is to skillfully stop, and the more likely it is an emergency braking scenario goes awry.
Over the past decade, as trains have grown longer and heavier, the total number of reported accidents and the percentage of accidents on major tracks involving trains with 150 or more railcars have both gone up, according to a January presentation by the Federal Railroad Administration.
Having fewer and longer trains means data about overall railroad accidents can make it appear as if there have been fewer accidents over the past decade. But a USA TODAY analysis of federal safety data by rate of train accidents per million train miles shows that the rate of accidents has been ticking up for Norfolk Southern progressively over the past decade. So too have the numbers around hazmat cars damaged or derailed, with 14 damaged or derailed in 2012, a peak of 117 in 2020 and 85 in 2021.
In 2017, a 121-car Norfolk Southern train derailed in Pell City, Alabama, leading to a release of hazardous material with minor evacuations. In 2020, a 230-car Norfolk Southern train with 78 hazmat cars derailed in Rocky Gap, West Virginia. Three of the hazmat cars were damaged, with the company chalking up the cause to railcars not being put in proper order.
Connor Spielmaker, a spokesman for Norfolk Southern, said the company's data on accidents along major railroad tracks, which impact the public more directly than incidents at one of the company's facilities, shows that accidents are on a flat or downward trend depending on the years selected for analysis.
But the rate of "main line" train accidents per million train miles also trended slightly upward over the decade ending in 2021, USA TODAY's review of federal safety data shows.
Railroads shed workers as safety incidents rose
The railroad industry has cut roughly 30% of its workforce, or about 45,000 workers, since 2015, as it has deployed precision scheduled railroading. Norfolk Southern has shed roughly 40% of its 30,456-person workforce. By the end of 2021, the company employed 18,100 workers, according to U.S. Securities and Exchange Commission filings.
Meanwhile, major railroad operators including Norfolk Southern have paid out $196 billion in buybacks and dividends since 2010, much more than the $150 billion spent on infrastructure improvements during that time, Martin J. Oberman, chairman of the Surface Transportation Board, said in a 2021 speech to the North American Rail Shippers Association.
“In our view, I don’t think you can separate the drastic reduction in workforce over the past seven or eight years, from the increase in accidents, the rate of safety incidents,” said Greg Regan, president of the Transportation Trades Department of the AFL-CIO, the union representing rail labor. “What people see on the ground is a really big amount of pressure on moving as fast as possible, as lean as possible, and generating as much profit as possible.”
Regan noted that because there is no minimum inspection time required for railcars, the time taken for workers to inspect cars has dropped from two minutes to 40 or 45 seconds.
There has also been an effort to introduce more technology in lieu of human workers or in place of a physical inspection, Regan said, but when such technology fails, the lack of eyeballs and workers to address an issue can lead to accidents.
FDA proposes lead limits in baby food products
The Food and Drug Administration on Tuesday proposed new maximum limits on how much lead can be present in food products intended for babies and young children.
Why it matters: Lead, a toxic element that can harm children’s health and development if they are exposed to even low levels of it, is just one heavy metal that has been consistently detected in baby foods.
How it works: Lead and other heavy metals generally get into baby food because they have been introduced into the soil in which the foods used in these products are grown, the FDA said.
- The metals are generally introduced into growing environments from contaminated water, pesticides and atmospheric deposition from industrial activities, though there can be natural sources as well, such as volcanic eruptions and rock weathering.
- Agricultural crops, like root vegetables, then take up the pollutants as they grow, or the pollutants may be deposited and present on plant surfaces, such as the leaves of leafy vegetables.
- Certain plants can absorb lead and other heavy metals more readily from the soil than other crops.
- Lead is toxic to people of any age or health status, but it can be particularly dangerous for children.
- It's been known to damage their brains and nervous systems, affect their growth and development, cause learning and behavior problems as well as hearing and speech problems, according to the Centers for Disease Control and Prevention.
By the numbers: The FDA's draft guidance would set levels that do not exceed 10 parts per billion of lead in most fruits, vegetables, mixtures, yogurts, custards, puddings and single-ingredient meats.
- It would also set levels that do not exceed 20 parts per billion in root vegetables and dry infant cereals.
- The limits would apply to these categories of food that are specifically produced for babies and young children less than two years old.
Yes, but: It would not be mandatory for producers to abide by the FDA's proposed limits, but the agency would be able to bring enforcement actions against manufacturers that produce products which exceed the limits.
- The limits only apply to lead, but other heavy metals, like cadmium, arsenic and mercury, have also been detected in foods for babies and toddlers.
What they're saying: The proposed draft guidance is intended "to help reduce potential health effects in this vulnerable population from dietary exposure to lead," the FDA said Tuesday in a news release.
- "The proposed action levels announced today, along with our continued work with our state and federal partners, and with industry and growers to identify mitigation strategies, will result in long-term, meaningful and sustainable reductions in the exposure to this contaminant from foods,” FDA commissioner Robert Califf said in a statement.
- "For babies and young children who eat the foods covered in today’s draft guidance, the FDA estimates that these action levels could result in as much as a 24-27% reduction in exposure to lead from these foods."
The big picture: The FDA in August 2020 set limits on the amount of inorganic arsenic that can be present in rice cereal for infants, and in April 2022 it proposed lead limits in juices.
- Studies have also shown that homemade baby foods do not have lower heavy metal levels than store-bought baby food.
What's next: The limits will be finalized by the agency after a 60-day period for public comment.
Laurie Santos launches new course aimed at teen well-being
After the enormous success of her original online psychology course and the announcement of her retirement as Silliman’s Head of College, Santos hopes to engage a new audience: middle and high schoolers.
Annie Yan, Contributing Illustrator
Professor Laurie Santos has revamped her popular “The Science of Well-Being” Coursera class in response to rising anxiety and depression rates within the teenage demographic.
The new course, which debuted earlier this month, has already garnered thousands of views. Santos homes in on teen-specific problems such as intrafamilial conflicts, school stressors and a variety of other emotional growing pains.
“In my Yale class, I quote the scary mental health statistics facing college students nationally today,” Santos told the News. Per her own numbers, over 40 percent of surveyed students are “too depressed to function most days,” and over 60 percent feel overwhelming anxiety.
Santos noted that when she first began teaching Psychology and the Good Life — a psychology lecture which was at one point the school’s most popular course — public attention grew. Almost immediately, she recalls, she began to receive requests from parents and educators asking for an accessible version of the class tailored toward younger audiences.
Santos believes many mental health issues start in high school and even middle school, peaking as students approach university and stressors accumulate. She hopes her course might work as a “preventative medicine,” stopping some of these stressors before they can grow.
“I really wish all my students were able to learn some evidence-based strategies for navigating stress before they come to Yale,” Santos told the News. “I feel like this would make them more prepared for the kinds of stresses they’ll face in college.”
The course aims to focus on the teaching of “sustainable well-being practices” that enable learners to consider and overcome their own cognitive biases, leaving them better-suited to deal with the stressors of the modern world as college students and young adults.
“I believe that especially with well-being activities such as engaging in social connection and breathing exercises, providing that knowledge at an earlier age will allow for better stress-relieving practices going forward which drew me to this project,” Karen Ayoub ’25, a research assistant on the project, told the News.
Ayoub believes in the importance of personalized learning strategies for individual students, and said that attentiveness to their socioemotional needs is crucial.
In order to ensure the course would be effective with its target age demographic, Santos hired Ayoub to join the venture. Ayoub, a psychology major, had taken Santos’s lecture course and had become increasingly interested in the intersection between education, self-care and stress.
Ayoub and other members of the team brought high school students to Yale’s campus and filmed interviews in which they asked about their concerns and experiences, hoping to further refine the course’s goals. The class also includes opt-in surveys. The surveys allow students to provide feedback and test the content’s efficacy — a strategy which Santos said proved successful in the course’s original adult version.
“The primary difference between courses is the ‘YOU.’ In the new version, Laurie speaks directly to teenagers and includes scenarios that are relevant to them,” explained Belinda Platt, a project manager and associate director of digital education at Yale’s Poorvu Center for Teaching and Learning.
In order to ensure the course would be able to reach a wide audience, Santos sought help from Platt and the Poorvu Center.
“Ideally, the course wins over the skeptics and equips teenagers with strategies to take ownership of their own mental health,” Platt told the News. “I wish I had this when I was younger.”
Platt added that she hopes the course will engage audiences that have thus far been unreachable.
Santos is excited about initial positive reactions to the course and the traction it is gaining; she noted that in the first few days of its existence, over 10,000 students enrolled. Over 170,000 have signed up for the adult version since it launched.
“I think that speaks to the fact that there’s a real need for better strategies for managing stress in our young people today,” she stated.
Santos began teaching at Yale in 2003.
|
Texas towns need money, technical help and compromise to save their water systems
WOLFFORTH — It wasn’t the blistering heat — unrelenting even after the colorful West Texas sunset — that was keeping Randy Hall awake last summer. Instead, he would stay up much of the night thinking about Wolfforth’s water tank.
Hall is the city’s water plant and operations manager. He noticed that, just about every day, the level of the city's 1.5 million gallon tank would fall 15 feet. The levels kept going down as more people were moving to Wolfforth: The small town’s population has boomed about 58% since 2011.
Wolfforth’s water system — dependent on nearly 80-year-old pipes in some areas — was already under strain to keep the water free of arsenic and fluoride, among other contaminants. As those issues combined with the growing population, water supply became yet another problem surfacing from below.
“When we see those trends of our tank levels coming down and down over several days, it does get to a point where it makes us uncomfortable,” Hall said.
For decades, the Lubbock suburb of about 6,000 people has had a reputation for lacking clean drinking water, and city leaders like Hall have long fought the stigma. But he also knew that when he got to work in the morning, the tank would be lower than the day before.
Suddenly, the city was confronted with a cold reality: There isn’t enough water, let alone safe water, for all the people who want to call Wolfforth home.
Many Texas water managers are destined for sleepless nights like this. Millions more people continue moving into a state that is, at the same time, losing more than 132 billion gallons of water every year just from leaky pipes and line breaks.
Texas water systems face three simultaneous threats: a surging population, climate change and deteriorating infrastructure that is largely decentralized. Texas is poised to invest billions of dollars to address the state’s growing water needs. However, that investment is not sufficient to fend off the increasing number of threats to the state’s water systems, water advocates say.
The costs to remedy Texas’ vast network of leaking pipes are substantial and continuously growing. Historic rates of inflation have increased the costs of labor and materials. And water systems have, over the years, been saddled with an increasing number of requirements to ensure water is safe for public consumption. Most recently, the EPA issued new regulations requiring public water systems to inventory, and eventually replace, lead service lines — all at a time when the state’s growing population demands a supply of water the state doesn’t yet have.
Dividing the water has long been one of the most contentious policy issues in the American West, let alone ensuring it is clean and the pipes are functioning properly. And communities with the greatest needs are often the least equipped to capitalize on funding opportunities.
Texas officials at the state and local level are exploring ways to ensure the state’s patchwork of pipes and treatment centers function properly. Among the strategies is adding money to fund repairs, providing technical assistance for smaller communities and encouraging collaboration among water systems.
Senate Bill 28, the nearly all-encompassing water bill authored by Republican Sen. Charles Perry of Lubbock, includes the Texas Water Fund, which would finance projects to develop more sources of water to cover 7 million acres of land by the end of 2033. The bill also addresses other worries, such as funds for infrastructure improvements.
But with water utilities under increasing pressure, experts say the state will need to get creative when it comes to addressing water issues.
Paying for upkeep isn’t easy
Kelley Holcomb is no stranger to Texas’ water struggles. Born in Cherokee County, the lifelong East Texan has been in the water business for four decades, working his way up to his current position as general manager of the Angelina and Neches River Authority. The governmental entity is responsible for preserving and distributing water from the state’s third-largest river basin.
Early in his career, Holcomb spent weeks driving across East Texas collecting water and sewer samples from public water systems to test for bacteria. The experience showed him just how poor water quality can be — and just how expensive it can be to remedy. In some cases, he watched those costs overwhelm a system to such a degree that the people running it needed help.
“It all starts with no money,” Holcomb said of the downward spiral some water systems find themselves in.
Getting more money is not straightforward. If water systems increase their rates, they often face backlash from residents frustrated with higher bills and no immediate fix to the frequent boil-water notices and water quality issues they have experienced for years. In small towns, those residents who complain about the water are neighbors or even family members.
Communities can also apply for highly coveted loans or grants from the state or the U.S. Department of Agriculture, but only if they know about those opportunities and have the resources to complete the paperwork.
To obtain a federal loan, a system generally has to demonstrate the ability to both repay the loan and maintain their system after the loan runs dry. Repaying the loan may require increasing water rates. Grants require detailed financial records and thorough planning, two things that small water systems with few employees struggle with.
“Sometimes we go into communities, and some of their financial records are in shoeboxes,” said Martha Claire Bullen, sustainability director for Communities Unlimited, a nonprofit that supports rural communities across seven states. “They haven’t digitized them, or they haven’t gotten them in a way that they can easily put them together for a grant application.”
Bullen was born and raised in Mississippi and moved temporarily to Nacogdoches in January to advance civic infrastructure in Deep East Texas through a partnership with the T.L.L. Temple Foundation. Part of the $3.1 million project aims to address the resource strains of small, rural water systems. Bullen’s team offers technical assistance to rural public water systems, which can include walking through a financial audit or even motivating the community to consider applying for a grant.
State lawmakers have recognized the need for support in rural communities. SB 28, the largest water bill this session, requires the Texas Water Development Board to establish and administer a technical assistance program to help public utilities in rural areas apply for grants and loans.
State Sen. Charles Perry, R-Lubbock, at right, talks with Kel Seliger, then a state senator from Amarillo, on the Senate floor on Aug. 10, 2021. Credit: Sophie Park/The Texas Tribune
The bill also creates two new pots of money — one focused on funding new water supply projects and a second, more flexible fund for projects including water infrastructure repairs in rural areas.
There is a small catch, though — Texas voters will have to agree that water is worthy of a multibillion-dollar investment. If the bill passes and is signed by the governor, this November’s election will include a constitutional amendment authorizing the Texas Water Fund.
Perry said he is confident voters will approve of the measure and that it will pave the way for further investment in water projects down the line.
“Anytime you get a constitutional amendment, it really changes the conversation for future budgets,” Perry said. “It sends a strong message that voters want this.”
A constitutional amendment also allows lawmakers to avert hitting the spending cap this year. The state constitution limits how much additional money the state can spend every two years.
In a recent poll, 89% of Texas voters said they were in favor of using $5 billion, or about 15%, of the budget surplus to fix aging infrastructure. Jeremy Mazur, senior policy adviser with Texas 2036, estimates that the real price tag for all of the state’s water infrastructure needs would be more than $150 billion. Mazur and other water advocates estimate that the fund would need to start with an initial investment of $3-5 billion.
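For a rough sense of scale, here is a back-of-the-envelope sketch in Python using only the figures quoted above; the implied totals are estimates, not numbers reported in the article. The poll figures imply a budget surplus of roughly $33 billion, and even the upper end of the suggested starting investment would cover only a small share of the estimated long-term need.

    # Rough scale check using only the figures quoted in this article.
    poll_amount_billion = 5.0                # "$5 billion"
    poll_share_of_surplus = 0.15             # "about 15%" of the budget surplus
    total_need_billion = 150.0               # Mazur's estimate of statewide water infrastructure needs
    initial_fund_range_billion = (3.0, 5.0)  # advocates' suggested starting investment

    implied_surplus = poll_amount_billion / poll_share_of_surplus
    print(f"Implied budget surplus: ~${implied_surplus:.0f} billion")  # ~$33 billion
    for fund in initial_fund_range_billion:
        share = fund / total_need_billion
        print(f"A ${fund:.0f}B starting fund covers ~{share:.1%} of the ~$150B estimated need")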
The size of the new water funds is yet to be determined. SB 28 passed the Senate unanimously last month with a $1 billion budget rider. The House has proposed $3 billion. It is also not yet clear how much of the proposed funding would go toward new water supply projects and how much would fund infrastructure repairs.
Perry said no amount of money allocated this year will be enough to address the magnitude of Texas’ water problems.
“We’re going to put in what we put in at this time knowing full well that this is a multigenerational problem,” he said. “The bigger picture is that we’ve started the process to have the conversation.”
Florida policy may help Texas’ smallest water agencies
If Texas is serious about keeping its water infrastructure stable, it must address the needs of thousands of small systems that provide services to residents across the state.
Carlos Rubinstein, a former chair of the Texas Water Development Board, has looked to Florida for a possible solution to small and rural communities’ water problems. There, a government entity has facilitated cooperation among small, struggling utility companies who lack the budgets or resources to handle the growing costs of providing safe drinking water to the public.
The Florida Governmental Utility Association began as an agreement between four counties and has since expanded to provide drinking water and wastewater services to about 250,000 customers across 14 counties, said Robert Sheets, who helped create the agency 24 years ago.
Sheets, a native Texan, testified in front of the Texas House’s Natural Resources Committee during an interim hearing last September. He said the association is the largest single-purpose government entity in Florida, focused exclusively on water and wastewater, and that a similar system could work in Texas.
“I’ve spent two years trying to assess the real needs here, and they are identical in so many ways to Florida,” Sheets told the Tribune. Both states have a large rural population and a proliferation of small water systems.
Under Texas law, utility agencies can already cooperate with one another. They are often hesitant to do so for fear of losing control over their water system.
Rubinstein spearheaded efforts to create a bill to allay those concerns. House Bill 2701, authored by Rep. Ryan Guillen, R-Rio Grande City, would allow public water and wastewater utilities to gain economies of scale by operating under a single public utility agency, Guillen said during a public hearing.
The bill seeks to address concerns about control by spelling out the fact that entities can join and leave the agency of their own accord.
“2701 does not mandate regionalization or consolidation,” Rubinstein said during a House Natural Resources Committee hearing. “It is strictly a mechanism for small and disadvantaged communities to come together to cooperate for a regional solution.”
Even if the bill passes, Sheets said, getting rural water providers to collaborate is not necessarily an easy sell.
“Every city and county out there, with a few exceptions, takes great pride in their water and sewer operation,” Sheets said. “They are very possessive of it.”
He continued: “Cities and counties are just not going to inherently give up or invite somebody in to run and operate their utilities unless there’s a real compelling reason to do so.”
Consolidation is already occurring in some parts of the state — largely out of crisis. At the Angelina and Neches River Authority, Holcomb has overseen three acquisitions of water systems in East Texas. And lawmakers have passed a bill that would allow the authority to take over a fourth failing water system from the city of Nacogdoches.
Typically, buyers and sellers of water or wastewater systems must obtain approval for the transfer from the Public Utilities Commission, an arduous process that can take years. Acquiring the system through legislative action can expedite the takeover. HB 2701 would allow water systems to collaborate without the need for legislative or administrative approval.
Holcomb said that the reason the river authority has taken on aging water systems is because they or their customers reach out.
“The recent surge in taking on these failed water systems is just because we kept getting phone calls — ‘We need help. We need help. We need help,’” Holcomb said.
Consolidation or regionalization may not be a solution for every water system, Rubinstein said, but it offers another option for small water systems.
“Is it a silver bullet?” Rubinstein said. “In water, we all know there is no such thing as a silver bullet. It is a viable extra tool that should absolutely be considered.”
How a West Texas city and suburb work together
Proactive collaboration among the state’s smaller cities and towns is happening even in areas where water supply is scarce. One partnership could provide a map for regionalization efforts on a bigger scale.
In the High Plains, where the climate is becoming drier and water supply is declining, Lubbock leaders seemed to have secured water for the future of their own community and then some with Lake Alan Henry.
Lake Alan Henry is an oasis in the otherwise arid region. Located in Garza County in the upper Brazos River Basin, about 65 miles southeast of Lubbock, the lake can hold 40 billion gallons of water and is one of Lubbock’s main sources of water because it refills quickly. Purchased for $215 million, it’s one of Lubbock’s largest investments.
“It was very expensive,” said Jarrett Atkinson, Lubbock’s city manager. “But generations past me are still going to be using that to supply our needs here.”
Securing Lubbock’s future water does more than just benefit the city of more than 260,000 people. The city has various water agreements with surrounding communities, including Shallowater, Littlefield, Ransom Canyon and Buffalo Springs. And, most recently, Wolfforth.
Beginning June 1, Wolfforth’s water supply will be bolstered by the ability to purchase 500,000 gallons of water a day from Lubbock, which increases to 750,000 gallons in 2026. Paired with another contract that will give the town an additional 2 million gallons, things are about to change for Wolfforth.
This time last year was already a scary situation for those in Wolfforth. With a crushing drought hitting more than 80% of the state, it was hard for the city to get more water from its nearby wells. Refilling the tank could take a day or two, depending on how much water the community was using in the meantime.
The water from Lubbock will be treated by the time it makes its way into Wolfforth’s faucets, alleviating some pressure on the town’s water system. Even though Wolfforth’s current water treatment plant is only 6 years old, the city is designing a bigger plant to handle all the water that will soon be flowing in.
“All of a sudden, Wolfforth has gone from this little community where water was scarce, to a community that isn’t going to be water scarce at all,” said Randy Criswell, Wolfforth’s city manager.
While Wolfforth’s future is taking a new direction, there are still long-term concerns for the Brazos. Many wonder if the water can keep up with increasing demands from urban, suburban and industrial growth. Over in Lubbock, Atkinson is confident that the city’s plans can keep up with future needs and the fluctuations of its water sources.
The city is also looking to create new streams of water, starting with an unlikely source — cheese. Leprino Foods is building a plant that will buy 1.3 million gallons of water a day from Lubbock. It will then recycle and treat about 2 million gallons of water daily and sell that back to the city.
“It’s a net gain of water that wasn’t even in the models,” Atkinson explained.
Rather than sticking to the rugged individualism mentality that Texas is known for, the city is willing and ready to spread the wealth when it comes to Lubbock’s water supply. In the case of Wolfforth’s agreement, Atkinson said even the future 750,000 gallons is roughly 2.1% of Lubbock’s daily water use. He doesn’t see the need for a contentious battle over water.
“It’s not a burden to the system, and it doesn’t require us to go out and develop additional supply in order to do this,” Atkinson said. “We’re in a position to do it, and it does not have to have a downside for the city of Lubbock’s water system.”
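Taken at face value, those two figures also give a rough sense of Lubbock’s overall demand. The following is a back-of-the-envelope calculation from the quoted numbers, not a figure reported in the article:

    # Rough check of scale using only the figures quoted above.
    wolfforth_contract_gpd = 750_000   # Wolfforth's contracted volume from 2026, gallons per day
    share_of_lubbock_use = 0.021       # Atkinson: "roughly 2.1%" of Lubbock's daily water use

    implied_lubbock_use_gpd = wolfforth_contract_gpd / share_of_lubbock_use
    print(f"Implied Lubbock daily use: ~{implied_lubbock_use_gpd / 1e6:.0f} million gallons per day")  # ~36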
The city is also willing to spare water because the two cities are growing closer and closer toward each other. With just a 10-minute drive between them, it’s common to find Lubbock residents at the Wolfforth Farmers Market or Wolfforth residents at Lubbock’s First Friday Art Trail.
“It’s an extremely tight and close relationship,” Atkinson said. “Ultimately, what’s good for Lubbock and what’s good for Wolfforth are going to end up being the same thing.”
Disclosure: Texas 2036 has been a financial supporter of The Texas Tribune, a nonprofit, nonpartisan news organization that is funded in part by donations from members, foundations and corporate sponsors. Financial supporters play no role in the Tribune's journalism. Find a complete list of them here.
This article is part of a series published by The Texas Tribune examining the state's deteriorating water infrastructure. For more in our Broken pipes series, click here. And join us May 9 for a live discussion about our report and hear directly from Texans working to solve the problem.
|
State advises residents not to eat fish from 20 area ponds, lakes, and rivers. Here's why
The state Department of Public Health has issued an advisory that warns against consuming fish from major bodies of water in MetroWest and Greater Milford, citing contamination in many of the region's rivers, lakes and ponds.
In a report published last month by the department's Bureau of Environmental Health, the department advised residents against consuming fish caught in 20 bodies of water in MetroWest and Greater Milford, including the Sudbury River, Cochituate Lake and Beaver Pond.
The contaminants cited in the report include mercury, dichlorodiphenyltrichloroethane (DDT), the pesticide chlordane and PFAS (toxic per- and polyfluoroalkyl substances). The advisories range in severity: some recommend that certain fish be consumed only once or twice per month, while others advise never eating fish from a particular body of water. The report also distinguishes the health risk for higher-risk populations (such as children under 12 and women who are pregnant or may become pregnant) from the risk for the general population.
One of the region's bodies of water from which the state advises that no fish be eaten is the Sudbury River, due to its high concentration of mercury. Alison Field-Juma, executive director of OARS, a nonprofit that oversees the watersheds of the Assabet, Concord and Sudbury rivers, said the mercury comes from different sources — some local and some not.
"Every watershed in the state has some level of mercury in it, because it is in the air," she said. "Coal-burning plants from other states burn coal, which contains mercury, and that gets in the air and drifts east into Massachusetts. In the case of the Sudbury watershed, there is a superfund site in Ashland, the Nyanza cleanup, which decades ago put a high concentration of mercury in the water. It's still there, so Sudbury has a much higher concentration than most rivers."
Consuming larger fish leads to higher risks
Public health concerns also depend on the type of fish. Eating a larger, predatory fish higher up the food chain poses a greater risk to humans because that fish has consumed other fish that may themselves be contaminated. According to the federal Food and Drug Administration, the risks of consuming fish with high concentrations of mercury or other chemicals include potential harm to the central nervous system and possible adverse effects on the cardiovascular system.
Many waterways with fish consumption advisories have posted signs to alert residents of the dangers of consuming the fish. But Field-Juma said despite the signs, she's confident people are still fishing and eating what they catch.
"I think a lot of people might not know, and if you are poorer, catching fish and eating it is a great source of protein," she said. "You also might have people who assume that they are fine, even with knowing the risks."
Bill Murphy, Framingham's director of public health, said that while catching fish in Framingham is safe, residents are strongly advised not to eat their catch.
"Getting this information translated to communicate these PFAS findings will be critical to reduce risk to the diverse population in Framingham," he said. "Fishing recreationally is fine, just don’t consume your catch."
Besides mercury, other contaminants can also be found in dangerous concentrations in MetroWest waterways, including PFAS, a class of chemicals frequently used in consumer products such as clothing, packaging and cosmetics for their water-resistant qualities. The chemicals typically end up in waterways after being improperly disposed of and passing through the wastewater system.
Legislation aims to reduce contaminants
While Massachusetts has relatively strong restrictions on PFAS, many products that are manufactured overseas and imported here contain high concentrations, which end up polluting the water.
State Rep. Kate Hogan, D-Stow, filed legislation earlier this year aimed at cleaning up PFAS in waterways. Hogan's bill would establish a PFAS remediation trust fund and restrict the use of PFAS in food packaging and consumer products. The bill, called the Mass PFAS Act, was filed after a review of the findings from a task force created to determine the impacts of PFAS in Massachusetts.
"The Mass PFAS Act is a foundational bill and what we hope to do is clean up existing PFAS contamination, but also prevent future contamination," Hogan said. "What we learned on the task force was that remediation is very important, but also equally important was the need to prevent future contamination in Massachusetts."
Hogan also said the trust fund addresses remediation projects in Massachusetts, and that hopefully the bill will be able to help control manufacturing and use of PFAS products in the state. That's challenging because the current understanding of what products contain PFAS isn't always clear.
"The aim is to include packaging and consumer products, we know that can be done because those products contain PFAS," Hogan said. "We hope all products that knowingly contain PFAS can be banned by 2030, and then we can hopefully begin to understand more products where PFAS hasn't been identified.
"We are trying to ensure that industries understand the regulation."
Due to issues with contamination stemming from coal-burning in other states and imported products with dangerous chemicals, Field-Juma said the best way for local residents to make a difference is to support legislative efforts such as Hogan's to help clean up local spots. Field-Juma also said people are responsible for protecting wildlife that is impacted by contaminants.
"We also need to keep in mind the impact these are having on wildlife," she said. "The chemicals could be impacting fish reproduction, so what happens if the fish population declines? Bald eagles were endangered, but their population has rebounded due to human efforts. What happens if the bald eagle eats contaminated fish? And a bald eagle can't read the sign telling them not to."
|
Some players in Southern Africa’s energy sector, including in Namibia and South Africa, are exploring the potential benefits of embracing a hydrogen economy built primarily on green hydrogen, a clean energy carrier. Recent interest arises from the global push for sustainable energy options.
Green hydrogen is produced by electrolysing water into hydrogen and oxygen using electricity from renewable sources such as the sun, water, and wind. The process is deemed “green” because it emits no greenhouse gases, and when hydrogen fuel is burned it produces water rather than the carbon dioxide released by burning fossil fuels. Of particular value in the move towards a sustainable energy system, green hydrogen can be used in hard-to-decarbonise areas like heavy industry and transportation.
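In chemical terms, these are the standard textbook reactions (stated here for reference, not drawn from the article): electrolysis splits water as 2 H2O → 2 H2 + O2, and combustion or fuel-cell use reverses it as 2 H2 + O2 → 2 H2O, releasing energy and returning only water.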
The abundant sunlight and Platinum Group Metals (PGMs) in Southern Africa, both of which are needed for producing green hydrogen, give the region a comparative economic advantage. Southern Africa is a prime location for solar energy production due to its high solar radiation levels. Most of South Africa receives more than 2,500 hours of sunshine per year, and the average daily solar energy received is between 4.5 and 6.5 kWh/m². The average year-round, 24-hour solar radiation in South Africa is about 220 W/m², higher than in much of the US (about 150 W/m²) and the European Union (about 100 W/m²).
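To put those irradiance figures in rough perspective, the short sketch below converts a 24-hour average irradiance into an annual electricity yield per square metre. The 20% panel efficiency is an illustrative assumption, not a figure from the article.

```python
# Rough annual solar electricity yield from a 24-hour average irradiance (illustrative only).
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def annual_yield_kwh_per_m2(avg_irradiance_w_per_m2: float,
                            panel_efficiency: float = 0.20) -> float:
    """Convert an average irradiance in W/m^2 into electricity in kWh/m^2 per year."""
    incident_kwh = avg_irradiance_w_per_m2 * HOURS_PER_YEAR / 1000.0  # W -> kWh
    return incident_kwh * panel_efficiency

for region, irradiance in [("South Africa", 220), ("US (typical)", 150), ("EU (typical)", 100)]:
    print(f"{region}: roughly {annual_yield_kwh_per_m2(irradiance):.0f} kWh of electricity per m^2 per year")
```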
Catalysts speed up the reactions at the anode, where water is oxidised to make oxygen and protons, and at the cathode, where protons are reduced to make hydrogen. Anode catalysts are usually made of iridium or ruthenium oxide, while cathode catalysts are made of platinum, all of which are PGMs. South Africa produces between 80 and 85% of the world’s iridium and 75% of the world’s platinum. Because producing green hydrogen requires both PGMs and plenty of sunlight, Southern Africa is an excellent place to invest in the hydrogen economy. Namibia is also taking advantage of this opportunity: in May 2023, it reached an agreement with Hyphen Hydrogen Energy on a deal worth $10-billion.
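For reference, the reactions those catalysts accelerate can be written as follows. This assumes an acidic (PEM-type) electrolyser, the design that typically pairs iridium-oxide anodes with platinum cathodes.

```latex
\begin{aligned}
\text{Anode:}   \quad & 2\,\mathrm{H_2O} \rightarrow \mathrm{O_2} + 4\,\mathrm{H^+} + 4e^- \\
\text{Cathode:} \quad & 4\,\mathrm{H^+} + 4e^- \rightarrow 2\,\mathrm{H_2} \\
\text{Overall:} \quad & 2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{H_2} + \mathrm{O_2}
\end{aligned}
```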
Hydrogen is a flexible form of energy that can be used in many different ways. Fuel cells that run on hydrogen combine hydrogen and oxygen to generate electricity, heat, and water, and are used in cars, power plants, cell phones, and computers. Hydrogen can also be burned in an internal combustion engine like petrol, producing only water vapour instead of the carbon dioxide that petrol emits, which makes it a potential direct replacement for that fossil fuel.
Hydrogen is also an energy storage medium. Electrolysis, the process that uses electricity to split water into hydrogen and oxygen, can absorb excess electricity from sources like hydropower plants, the wind, or the sun, producing hydrogen that can be stored and used when electricity demand is higher or when renewable sources are producing less. Hydrogen can also be used to provide heat and can be blended with natural gas to lower the amount of carbon dioxide released by heating systems.
Hydrogen burns cleanly, but it takes a great deal of energy to produce, and it is only renewable when that energy comes from renewable sources rather than from natural gas or other fossil fuels. Produced from renewables, hydrogen can be a stable energy source. Electrolysis has an efficiency of between 60 and 70%, although the exact figure depends on the technology and the power source used.
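As a rough illustration of what that 60 to 70% efficiency means in practice, the sketch below converts an amount of input electricity into kilograms of hydrogen using hydrogen's lower heating value of roughly 33.3 kWh per kilogram. The 1 MWh input is an arbitrary example, not a figure from the article.

```python
# Rough hydrogen yield from electrolysis at a given efficiency (illustrative only).
H2_LOWER_HEATING_VALUE_KWH_PER_KG = 33.3  # approximate energy content of hydrogen

def hydrogen_kg_from_electricity(electricity_kwh: float, efficiency: float) -> float:
    """Kilograms of hydrogen produced from a given amount of input electricity."""
    return electricity_kwh * efficiency / H2_LOWER_HEATING_VALUE_KWH_PER_KG

for efficiency in (0.60, 0.70):
    kg = hydrogen_kg_from_electricity(1_000, efficiency)  # 1 MWh of renewable electricity
    print(f"At {efficiency:.0%} efficiency, 1 MWh of electricity yields about {kg:.0f} kg of hydrogen")
```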
Japan and Southern Africa have distinct socioeconomic and geographical characteristics and have shown interest in the hydrogen economy. Highly industrialised Japan has led the world in developing and applying hydrogen technology. Several factors influence Japan’s interest in hydrogen as a primary energy source. Due to its limited domestic energy resources, Japan relies heavily on imported fossil fuels. This reliance makes Japan vulnerable to fluctuations in the global energy market. Additionally, Japan is committed to reducing its greenhouse gas emissions to become carbon neutral by 2050. The hydrogen economy offers a means to achieve its goals.
Japan’s plan for hydrogen includes producing, storing, transporting, and using it. Significant investments have been made in research and development, infrastructure, and public-private partnerships. In addition, Japan has advanced fuel cell technology, with Toyota leading the way in hydrogen fuel cell vehicles. The high cost of producing hydrogen, especially green hydrogen through electrolysis, is one of the biggest challenges. Another is that there is no good network for distributing hydrogen.
Strike while the iron is hot
Given its abundant natural resources and developing economies, Southern Africa presents different opportunities. The region has enormous potential for renewable energy, particularly solar and wind, which can be used to produce green hydrogen. Countries like South Africa have already initiated research into the hydrogen economy to improve energy security, reduce carbon emissions, and stimulate economic growth.
Southern Africa could be a significant exporter of green hydrogen because it has plenty of green energy sources and is strategically located. However, the region faces significant obstacles. At this point, there is a shortage of hydrogen production, storage, and distribution infrastructure. In addition, the Southern African region is hindered by the high initial investment required for hydrogen technologies.
Although Japan and Southern Africa are at different phases of economic development, these countries recognise the potential of the hydrogen economy. Japan is well-positioned to dominate hydrogen utilisation, especially in the transportation and residential sectors, due to its advanced technology and industrial capacity. Southern Africa has the potential to become a significant participant in the production of green hydrogen due to the abundance of the necessary resources. This convergence of interests presents a unique opportunity for mutual cooperation and economic expansion. Japan and Southern Africa can play a prominent role in defining the global hydrogen economy through strategic cooperation and a shared commitment to a sustainable future. DM
|
Contrary to what internalized stigma may tell you, a migraine isn’t just a headache.
In people younger than age 50, chronic migraines are the leading cause of disability, according to a 2018 study. But many people who live with the condition can have a hard time recognizing its seriousness and getting the medical care they need for their migraines, said Kylie Petrarca, a nurse and associate program director of the Association of Migraine Disorders.
“People don’t give it the recognition it deserves,” she added.
Many doctors don’t know enough about the nuances of migraine disorders, said Dr. Frederick Godley, an ear, nose and throat specialist and president of the Association of Migraine Disorders.
But there are professionals researching and treating migraine conditions. A new study that was published Wednesday in Neurology, the journal of the American Academy of Neurology, has shown that both cluster headaches and migraines have a connection to circadian systems, meaning time of day and sleep patterns might have a big impact on those conditions.
Understanding what migraines are, why you may get them and what you can do is a powerful tool to advocate for yourself and get the proper treatment, Petrarca said.
What are migraines?
What used to be called a migraine headache is now called a migraine attack to better capture how much further the condition extends beyond pain in the head, Petrarca said.
Migraines start deep in the brain, said Dr. Stewart Tepper, a neurologist based in Lebanon, New Hampshire. The condition is better understood as a complex neurological disease that affects many parts of the nervous system, Godley said.
To be diagnosed with a migraine condition, a patient has to have at least five attacks in their lifetime that each last four to 72 hours when left untreated. The condition must also meet at least two of four criteria: moderate to severe intensity, throbbing pain, worsening with activity, and occurrence on one side of the head, Tepper said. The attacks also must have at least one of two features: nausea and sensitivity to light or sound, he added.
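Those criteria amount to a short checklist of counting rules. The sketch below encodes them as described in this article purely for illustration; it is a paraphrase of the reported criteria, not a clinical or diagnostic tool.

```python
# Illustrative encoding of the migraine criteria as summarised above (not a clinical tool).
from dataclasses import dataclass

@dataclass
class AttackHistory:
    lifetime_attacks: int              # at least five attacks required
    typical_duration_hours: float      # 4 to 72 hours when left untreated
    moderate_to_severe: bool
    throbbing: bool
    worsens_with_activity: bool
    one_sided: bool
    nausea: bool
    light_or_sound_sensitivity: bool

def meets_reported_criteria(h: AttackHistory) -> bool:
    enough_attacks = h.lifetime_attacks >= 5
    typical_duration = 4 <= h.typical_duration_hours <= 72
    pain_features = sum([h.moderate_to_severe, h.throbbing,
                         h.worsens_with_activity, h.one_sided]) >= 2  # at least two of four
    associated_feature = h.nausea or h.light_or_sound_sensitivity     # at least one of two
    return enough_attacks and typical_duration and pain_features and associated_feature

# Example: five one-sided, throbbing attacks of about a day each, with light sensitivity.
print(meets_reported_criteria(AttackHistory(5, 24, True, True, False, True, False, True)))  # True
```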
Where do migraines come from?
It’s hard to say exactly, but there is a lot of evidence to show there are genetic and environmental factors that lead to people getting migraines, Petrarca said.
Migraines tend to be passed down in families, but can still present differently among various family members, Godley said.
Although genetics are a major cause, some migraines can also be caused by head injuries, he added. And hormonal changes, particularly in estrogen, can really impact migraine attacks.
It is important to note, however, that there is a difference between what causes someone to be prone to migraines and what triggers them, Tepper said.
What are the triggers?
Migraine triggers are different for everyone, so Petrarca recommends keeping a diary of when you get attacks so you can track what factors might correlate.
People usually start to get their migraines after puberty, but they can show up in younger children in forms like colic, motion sickness and dizziness, Godley said.
For many women, migraine attacks tend to go along with phases of the menstrual cycle as estrogen levels change, Petrarca said. But they can also be triggered by weather changes, stress, sleep disturbances, skipping meals, dehydration and alcohol, she added.
Unfortunately, both chocolate and wine can be a big trigger, Petrarca said. But each person who suffers migraines will have their own triggers.
What are the symptoms?
When people think of migraines, they often think of head pain. But a lot of people experience many more symptoms than that.
About a third of people will experience an aura leading up to their migraine, which is “a temporary perceptual disturbance that lasts from five minutes to an hour,” Petrarca said.
That can look like blurred vision, seeing spots, numbness in the extremities or changes in speech, she added.
The list of possible symptoms that go along with a migraine is long but includes: dizziness, vertigo, brain fog, tinnitus, mood changes, yawning, neck pain and food craving, Petrarca said.
“I think kind of taking it away from just the head pain, you know there’s many, many other symptoms that also go along with a migraine attack,” she said.
And some of the symptoms, including exhaustion, can happen before a migraine, after it or in between attacks, Tepper added.
What can you do about them?
There are many medications you can discuss with your doctor to treat migraine attacks or prevent them from happening, but it is important to find a doctor who understands migraine conditions, Godley said.
Because medical professionals who are well versed in treating migraines are in short supply, you may need to advocate for yourself and your pain to get a diagnosis and effective treatment, Petrarca said.
When you get those medications, have them ready and try to take them as soon as you start having symptoms rather than waiting until later, she added.
“It’s very hard to kind of stop it once it’s started,” she said.
But medication isn’t the only way to address migraine disorders. One of the first things a person can do is make modifications to their lifestyle, Petrarca said.
Cognitive behavioral therapy is often useful to help manage the stress that can trigger migraine attacks, and taking vitamins and supplements, such as magnesium and riboflavin, can be effective in reducing migraine frequency, she said.
But getting down to the basics of health can be extremely helpful — track your migraines, create a sleep routine, exercise regularly, eat well-balanced meals, manage stress and stay hydrated, Petrarca said.
|
Autistic children tend to struggle with poor oral health a lot more than their neurotypical peers. Everything from accessing dental treatments to tolerating clinical set-ups is an uphill battle because of sensory issues that make autistic children hypersensitive towards harsh fluorescent lights, loud sounds, smells, and physical touch.
But despite the fact that one in 36 children is autistic, there are no clinical dental protocols in place that accommodate their needs. To make matters worse, dentists are not trained to treat autistic children and, in many cases, are even unwilling to do so. This has resulted in countless autistic children feeling overwhelmed, anxious, and stressed out before, during, and even after visiting the dentist’s clinic. In a new study published in JAMA, researchers found that tweaking the environment of an average dental office to meet autistic children’s sensory needs can significantly reduce the anxiety and behavioral distress they experience while undergoing treatment.
The team enrolled 162 autistic children aged six to 12 years old in their clinical trial. More than 75% of them were white. While 83 children received dental treatments in a regular office, the other 80 went to a dental clinic that had a sensory-adapted environment.
Each one of them participated in two routine dental clean-ups that took place around six months apart. The group of autistic children who went to the sensory-adapted dental clinic had radically different experiences because all overhead fluorescent lights — including the dental operatory lamp — were turned off. Windows were covered with darkening curtains. The pediatric dentist instead used a glass-mounted headlight that specifically directed light into the child’s mouth and avoided their eyes. A projector created slow-moving visual effects chosen by the child, such as bubbles or fish on the ceiling. Classical music with nature sounds was played in the room through a small speaker.
Also, most of the children were given a butterfly wrap that was weighted with a pediatric dental radiograph vest. The butterfly wrap was placed around the dental chair so that the wings wrapped around the child’s shoulders and extended all the way down to their ankles, mimicking a deep hugging sensation. If any of the 80 children felt uncomfortable with any of the sensory stimuli, the dentists discontinued using it. More than 90% used the projector, music, and head lamp, and also preferred having the lights switched off. And at least 74% used the butterfly wrap without complaints.
The researchers video-recorded the children during their dental cleanings to observe their behaviors and stress levels. The children treated in the regular dental clinics, with no changes made to the set-up, showed severe distress and screamed, whimpered and/or cried during their clean-ups. Those who received sensory adaptations, by contrast, were much calmer, more relaxed and less anxious while undergoing their clean-ups.
“This finding supports previous studies reporting the efficacy of a sensory-adapted dental environment to decrease a variety of physiological measures of stress and anxiety in smaller sample sizes of autistic children, children with developmental disabilities and adults with intellectual and developmental disabilities,” the researchers wrote in the JAMA study.
They further highlighted that the sensory-adapted dental environment approach is highly scalable. “It requires minimal training to implement, is easily portable, does not involve renovations, requires only 5 to 10 minutes to set up or remove, and incurs a 1-time cost of less than $6000,” they added. “In practice settings, this equipment could remain set up indefinitely, with clinicians able to use (or not use) sensory-adapted dental environment adaptations for any given patient. Due to the simplicity and high scalability of sensory-adapted dental environment, it has outstanding potential to be readily implemented into dental clinics nationwide.”
“In fact, The American Academy of Pediatric Dentistry recently added this approach to their best practices for behavioral guidance as a potential technique for dental patients with anxiety or special health care needs,” the researchers wrote. “This is important because enhancing oral care is critical for autistic children; this intervention may also be beneficial for populations beyond autism.”
|
We'll need natural gas for years — but can start blending it with green hydrogen today, CEO says
- Produced using electrolysis and renewables like wind and solar, green hydrogen has some high-profile backers.
- While some are hugely excited about green hydrogen's potential, it still represents a tiny proportion of global hydrogen production.
- Today, the vast majority is based on fossil fuels, a fact at odds with net-zero goals.
From the United States to the European Union, major economies around the world are laying out plans to move away from fossil fuels in favor of low and zero-carbon technologies.
It's a colossal task that will require massive sums of money, huge political will and technological innovation. As the planned transition takes shape, there's been a lot of talk about the relationship between hydrogen and natural gas.
During a panel discussion moderated by CNBC's Joumanna Bercetche at the World Economic Forum in Davos, Switzerland, the CEO of energy firm AES offered up his take on how the two could potentially dovetail with one another going forward.
"I feel very confident in saying that, for the next 20 years, we need natural gas," Andrés Gluski, who was speaking Wednesday, said. "Now, what we can start to do today is … start to blend it with green hydrogen," he added.
"So we're running tests that you can blend it up to, say 20%, in existing turbines, and new turbines are coming out that can burn … much higher percentages," Gluski said.
"But it's just difficult to see that you're going to have enough green hydrogen to substitute it like, in the next 10 years."
Produced using electrolysis and renewables like wind and solar, green hydrogen has some high-profile backers.
These include German Chancellor Olaf Scholz, who has called it "one of the most important technologies for a climate-neutral world" and "the key to decarbonizing our economies."
While some are hugely excited about green hydrogen's potential, it still represents a tiny proportion of global hydrogen production. Today, the vast majority is based on fossil fuels, a fact at odds with net-zero goals.
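One point worth noting about the 20% blending figure Gluski mentioned: if it refers to a blend by volume, which the article does not specify, hydrogen's share of the delivered energy is much smaller, because hydrogen carries far less energy per cubic metre than natural gas. The sketch below illustrates the arithmetic using rough textbook heating values of about 10.8 MJ/m3 for hydrogen and 35.8 MJ/m3 for natural gas; neither number comes from the article.

```python
# Approximate energy share of hydrogen in a volumetric blend with natural gas (illustrative only).
H2_MJ_PER_M3 = 10.8   # rough lower heating value of hydrogen at standard conditions
NG_MJ_PER_M3 = 35.8   # rough lower heating value of natural gas at standard conditions

def hydrogen_energy_share(volume_fraction_h2: float) -> float:
    """Fraction of delivered energy supplied by hydrogen for a given volume fraction."""
    h2_energy = volume_fraction_h2 * H2_MJ_PER_M3
    ng_energy = (1.0 - volume_fraction_h2) * NG_MJ_PER_M3
    return h2_energy / (h2_energy + ng_energy)

print(f"20% hydrogen by volume supplies roughly {hydrogen_energy_share(0.20):.0%} of the energy")
```

That gap helps explain why blending is framed as a starting point rather than a substitute for natural gas.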
Change on the way, but scale is key
The planet's green hydrogen sector may still be in a relatively early stage of development, but a number of major deals related to the technology have been struck in recent years.
In December 2022, for example, AES and Air Products said they planned to invest roughly $4 billion to develop a "mega-scale green hydrogen production facility" located in Texas.
According to the announcement, the project will incorporate around 1.4 gigawatts of wind and solar and be able to produce more than 200 metric tons of hydrogen every day.
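Those two headline numbers are roughly consistent with each other. The quick order-of-magnitude check below assumes a combined capacity factor of about 30% for the wind and solar and roughly 50 kWh of electricity per kilogram of hydrogen; both are illustrative assumptions rather than figures from the announcement.

```python
# Order-of-magnitude check: can 1.4 GW of wind and solar support ~200 tonnes of hydrogen per day?
CAPACITY_MW = 1_400
ASSUMED_CAPACITY_FACTOR = 0.30      # average output as a share of nameplate capacity (assumed)
ASSUMED_KWH_PER_KG_H2 = 50.0        # electricity needed per kilogram of hydrogen (assumed)

daily_energy_kwh = CAPACITY_MW * 1_000 * 24 * ASSUMED_CAPACITY_FACTOR
daily_hydrogen_tonnes = daily_energy_kwh / ASSUMED_KWH_PER_KG_H2 / 1_000
print(f"Roughly {daily_hydrogen_tonnes:.0f} tonnes of hydrogen per day")  # about 200
```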
Despite the significant amount of money and renewables involved in the project, AES chief Gluski was at pains to highlight how much work lay ahead when it came to scaling up the sector as a whole.
The facility being planned with Air Products, he explained, could only "supply point one percent of the U.S. long haul trucking fleet." Work to be done, then.
High hopes, with collaboration crucial
Appearing alongside Gluski at the World Economic Forum was Elizabeth Gaines, a non-executive director at mining giant Fortescue Metals Group.
"We see green hydrogen as playing probably the most important role in the energy transition," she said.
Broadening the discussion, Gaines also spoke to the need for collaboration in the years ahead.
When it came to "the resources that are needed to support the green transition, and similar[ly] to the production of green hydrogen," she argued there was a need "to work closely with government and regulators."
"I mean, it's one thing to say we need more lithium, we need more copper, but you can't do that without getting the approvals, and you need the regulatory approvals, the environmental approvals," she said.
"You know, these things do take time, and we wouldn't want that to be the bottleneck in the energy transition, similar to the skills and resources that we need."
Kivanc Zaimler, energy group president at Sabanci Holding, also stressed the importance of being open to new ideas and innovations.
"We have to — we need to — embrace, we have to welcome, we have to support all the technologies," he said. These included both hydrogen and electric vehicles.
Expanding on his point, Zaimler spoke of the need for cooperation, especially when it came to hydrogen.
"We have to bring all the right people around the table — academicians, governments, private sectors, players around the entire value chain."
This included, "the manufacturing of the electrolyzer, the membranes, the green energy producers, the users."
|
<urn:uuid:656cf18e-23de-435c-9ed4-b2eb6d85d5e4>
|
{
"dump": "CC-MAIN-2023-14",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950363.89/warc/CC-MAIN-20230401221921-20230402011921-00474.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.964344322681427,
"pii_count": 0,
"score": 2.59375,
"token_count": 999,
"url": "https://www.cnbc.com/2023/01/23/energy-ceo-thinks-natural-gas-will-be-around-for-years-to-come-.html?__source=OTS%7Cfinance%7Crelated%7Cstory%7C&par=OTS&doc=107182221"
}
- Neurological issues like loss of taste and smell, difficulty concentrating, memory issues, and brain fog are common among long COVID patients.
- Now an odd new neurologic symptom has been documented for the first time: prosopagnosia.
- Prosopagnosia, a neurological disorder that makes distinguishing faces impossible, wasn’t identified in other long COVID sufferers interviewed by researchers. But similar issues were reported in other patients.
COVID was infamous in its early days for causing loss of taste and smell. Now, another odd neurological symptom has been documented: loss of the ability to recognize faces.
A report published this month in the journal Cortex describes the case of Annie, a 28-year-old woman who had no trouble recognizing faces prior to coming down with COVID in March 2020. Two months later, when her COVID symptoms returned, she began experiencing difficulty recognizing faces.
It’s the first documented report of prosopagnosia—a neurological disorder that makes distinguishing faces impossible—after COVID-19, according to the study’s authors.
“Previous studies of the long-term effects of COVID-19 have reported deficits in memory, attention, and concentration that substantially impair everyday functioning,” the authors write.
But “in addition to the well-known broad impairments, COVID-19 sometimes causes severe selective impairments like prosopagnosia.”
What’s more, researchers found similar symptoms—in the form of visual/perceptual and cognitive difficulties—in most other long COVID patients surveyed.
Annie recovered after an initial acute COVID illness in the spring of 2020, the authors write. But several weeks later, she began to feel disoriented, and that “something was off with faces.” At a family get-together in June of that year, she noted that she didn’t recognize her father, and that she couldn’t tell him apart from her uncle.
“My dad’s voice came out of a stranger’s face,” she told researchers, adding that she now relies heavily on her memory of strangers’ voices.
She also struggled on a test in which researchers asked her to recognize the faces of celebrities.
For Annie, the new and unusual symptom—apparently tied to long COVID—was especially disruptive: She works as a customer service representative and part time portrait artist. She now finds herself fully dependent on looking at photos of her subjects while drawing. Prior to COVID, she only had to reference a picture once every 15 or 30 minutes while working.
Now, “Faces are like water in my head,” she told researchers.
Annie now “equates looking at and then trying to remember faces to viewing a Chinese character without any knowledge of the language, and then being asked to reproduce it from memory,” the authors write.
But her newfound visual difficulties aren’t limited to faces. She also finds herself getting lost in the grocery store, forgetting where she parked, and driving in the direction opposite of where she intended to go, the authors add.
Annie has other symptoms of long COVID, the authors note, including relapses of COVID symptoms, fatigue, trouble concentrating, and brain fog. In November 2020, nine months after her first COVID infection, she began experiencing balance issues and frequent migraines, too.
To see if other long COVID sufferers were struggling with the same or similar issues, researchers surveyed a group of 54 others with the post-viral illness. They found that most reported major drops in their ability to identify people and objects, recognize voices, remember phone numbers, and understand what they read—though they didn’t find additional patients with the exact same condition.
With more than 200 symptoms identified—from lingering cough and fatigue to ear numbness and a sensation of “brain on fire”—long COVID is undoubtedly not one but multiple conditions, experts say.
True long COVID, some contend, is best defined as a chronic-fatigue-syndrome-like condition that develops after a COVID infection, similar to other post-viral syndromes that can occur after an infection with herpes, Lyme disease, and Ebola, among others.
Other post-COVID complications like organ damage should not be defined as long COVID and better fit into the larger umbrella category of PASC, some experts say. Also known as post-acute sequelae of COVID-19, the term is used to encompass a wide variety of COVID consequences, from chronic-fatigue-like symptoms and subsequent heart disease to lasting lung damage and odd new symptoms like urinary incontinence, itching, and skin lesions.
As of Jan. 16, 15% of U.S. adults reported having long COVID symptoms at some point in the pandemic, and 6% reported lingering symptoms, according to a Jan. 26 report by the Kaiser Family Foundation, citing data from the U.S. Centers for Disease Control and Prevention.
The percent of Americans who’ve experienced COVID and still report long COVID symptoms dropped from 19% in June to 11% in January, according to the report.
|
}
|
- Neurological issues like loss of taste and smell, difficulty concentrating, memory issues, and brain fog are common among long COVID patients.
- Now an odd new neurologic symptom has been documented for the first time: prosopagnosia.
- Prosopagnosia, a neurological disorder that makes distinguishing faces impossible, wasn’t identified in other long COVID sufferers interviewed by researchers. But similar issues were reported in other patients.
COVID was infamous in its early days for causing loss of taste and smell. Now, another odd neurological symptom has been documented: loss of the ability to recognize faces.
A report published this month in the journal Cortex describes the case of Annie, a 28-year-old woman who had no trouble recognizing faces prior to coming down with COVID in March 2020. Two months later, when her COVID symptoms returned, she began experiencing difficulty recognizing faces.
It’s the first documented report of prosopagnosia—a neurological disorder that makes distinguishing faces impossible—after COVID-19, according to the study’s authors.
“Previous studies of the long-term effects of COVID-19 have reported deficits in memory, attention, and concentration that substantially impair everyday functioning,” the authors write.
But “in addition to the well-known broad impairments, COVID-19 sometimes causes severe selective impairments like prosopagnosia.”
What’s more, researchers found similar symptoms—in the form of visual/perceptual and cognitive difficulties—in most other long COVID patients surveyed.
Annie recovered after an initial acute COVID illness in the spring of 2020, the authors write. But several weeks later, she began to feel disoriented, and that “something was off with faces.” At a family get-together in June of that year, she noted that she didn’t recognize her father, and that she couldn’t tell him apart from her uncle.
“My dad’s voice came out of a stranger’s face,” she told researchers, adding that she now relies heavily on her memory of strangers’ voices.
She also struggled on a test in which researchers asked to recognize faces of celebrities.
For Annie, the new and unusual symptom—apparently tied to long COVID—was especially disruptive: She works as a customer service representative and part time portrait artist. She now finds herself fully dependent on looking at photos of her subjects while drawing. Prior to COVID, she only had to reference a picture once every 15 or 30 minutes while working.
Now, “Faces are like water in my head,”
|
she told researchers.
Annie now “equates looking at and then trying to remember faces to viewing a Chinese character without any knowledge of the language, and then being asked to reproduce it from memory,” the authors write.
But her newfound visual difficulties aren’t limited to faces. She also finds herself getting lost in the grocery store, forgetting where she parked, and driving in the direction opposite of where she intended to go, the authors add.
Annie has other symptoms of long COVID, the authors note, including relapses of COVID symptoms, fatigue, trouble concentrating, and brain fog. In November 2020, nine months after her first COVID infection, she began experiencing balance issues and frequent migraines, too.
To see if other long COVID sufferers were struggling with the same or similar issues, researchers surveyed a group of 54 others with the post-viral illness. They found that most reported major drops in their ability to identify people and objects, recognize voices, remember phone numbers, and understand what they read—though they didn’t find additional patients with the exact same condition.
With more than 200 symptoms identified—from lingering cough and fatigue to ear numbness and a sensation of “brain on fire”—long COVID is undoubtedly not one but multiple conditions, experts say.
True long COVID, some contend, is best defined as a chronic-fatigue-syndrome-like condition that develops after a COVID infection, similar to other post-viral syndromes that can occur after an infection with herpes, Lyme disease, and Ebola, among others.
Other post-COVID complications like organ damage should not be defined as long COVID and better fit into the larger umbrella category of PASC, some experts say. Also known as post-acute sequelae of COVID-19, the term is used to encompass a wide variety of COVID consequences, from chronic-fatigue-like symptoms and subsequent heart disease to lasting lung damage and odd new symptoms like urinary incontinence, itching, and skin lesions.
As of Jan. 16, 15% of U.S. adults reported having long COVID symptoms at some point in the pandemic, and 6% reported lingering symptoms, according to a Jan. 26 report by the Kaiser Family Foundation, citing data from the U.S. Centers for Disease Control and Prevention.
The percent of Americans who’ve experienced COVID and still report long COVID symptoms dropped from 19% in June to 11% in January, according to the report.
Fortune‘s CFO Daily newsletter is the must-read analysis every finance professional needs to get ahead. Sign up today.
|
Past experiences made Judge Waite a 'quiet witness and warrior' in the Civil Rights era
Clyde Waite watched the black boots of Alabama State Troopers approach the overturned pickup stuck in a swamp after they ran it off the road. Trapped in the truck bed, the then 22-year-old college student did everything in his power to stay invisible.
Civil Rights activists warned Waite and the other students from Howard University, a historically black college, about the dangers facing people in the South registering Black voters in 1965.
“But I didn’t fully appreciate it,” the 78-year-old Waite said recently from his home in Wrightstown. “At that age, you’re kind of idealistic. You kind of feel like you’re on a religious pilgrimage.”
The story of the Civil Rights-era trip to Selma, Alabama is one that Rev. David Perkins had not heard about Waite, a member of the Pebble Hill Church, before asking him to speak at this year’s Rev. Dr. Martin Luther King Jr. service at the Doylestown Township interfaith church.
But Perkins had heard other stories about Waite's experiences during the more than 50 years he has lived in the predominately white county, where until last month Waite was the first, and only, Black judge to serve on the Bucks County Common Pleas Court.
“I know enough that I know I wanted him to come speak,” Perkins said. “In his own way he has been a quiet witness and warrior for social justice, positive social change and inclusion his entire life.”
Recently, Waite recounted his experiences as a Black man during the civil rights movement, first in Washington, D.C., and then on the campus of Yale University in Connecticut, during one of the most transformative and historic periods in U.S. history.
How typing and shorthand changed Clyde Waite's life
The day after his 1962 high school graduation ceremony, Waite packed his belongings into a suitcase secured with a piece of rope and bought a train ticket to Washington D.C. where he knew no one.
His secretarial skills and a high score on the civil service exam landed him a conditional job offer at the Library of Congress. After a second skill test Waite was hired on the spot.
“I couldn’t wait to get out of McKeesport,” he said of his western Pennsylvania hometown in Allegheny County, outside of Pittsburgh. “Any job was considered a good job.”
College was the furthest thing from his mind until several months later, when Waite was out looking for an apartment and passed a building outside of which he saw some of the most beautiful women he had ever seen.
It turned out he had stumbled upon the girls’ dormitory at Howard University. After learning more about the college, he decided to fill out an application.
While high school had prepared him for a government job, he lacked the knowledge necessary for the rigors of a college curriculum. Howard rejected him.
But Waite is no quitter. He took college prep classes at night school. In 1964, after submitting his third application, he was allowed to enroll at Howard on a probationary basis. In his first semester he maintained a "C" average and earned a spot in the Class of 1968. For the rest of his college career he made As and Bs, all while working full time.
Those early years living in the nation's capital Waite described as an electric time. Something was always happening.
If there wasn’t a demonstration in front of the Supreme Court building, there was one at the Capitol or a march down Pennsylvania Avenue. The Civil Rights movement and efforts to overturn segregation in the South was gaining traction nationally and Rev. Dr. Martin Luther King Jr. was making a name for himself as one of its leaders.
But King's name was not one Waite was aware of before entering Howard University. He was more familiar with Elijah Muhammad, the leader of the Nation of Islam, a militant civil rights group that was critical of King.
He was in his second year at Howard in 1965 when the Student Non-Violent Coordinating Committee (SNCC) was recruiting students to register Black voters in southern states.
Waite signed up for a two-week recruiting drive in Selma, Alabama. He had been registered to vote since his 18th birthday and knew its value for racial minorities.
“I felt I was obliged to do something even though it was something I knew was dangerous,” Waite said. “I really didn’t know how vulnerable and exposed I would feel until I’m there, recognizing that the authorities would not protect you. You had no protection from anyone.”
How a trip down South strengthened the character of a young Black man
The SNCC headquarters in Selma was across the street from the police station, an intentional location so police could not say they were not aware if violence erupted, Waite said.
Volunteers like Waite were warned not to carry sharpened pencils since they could be seen as weapons and result in an arrest. He remembers how the sheriff met the bus when it pulled into Selma and the 40 students aboard were all searched.
Outside, a few SNCC workers from the area met the bus, but they were far outnumbered by the hundreds of segregation supporters who surrounded them.
“It was something that was expected, but still shocking because you can expect something but you don’t know how psychological and emotionally vulnerable you would feel when it actually happened,” Waite said.
During the Selma trip, Waite and others were put up by people sympathetic to their cause who risked their jobs, and lives, housing the students. It’s why they never stayed in the same place twice.
“I remember being in one of these shanty-type houses built on stilts, and a trap door,” Waite said. “If someone came to the front door, you could go through the trap door and escape under the house.”
Every day, Waite and others left the SNCC headquarters in Selma and traveled 28 miles to rural Lowndes County, where 80% of the residents were Black and not one registered to vote, Waite said. The trips were made either before sun up or after sun down, when they knew farmers would be home.
The county was known as “Bloody Lowndes” because of the high number of Black citizens who died by lynching.
About halfway through the trip, Waite started sleeping at the SNCC offices after a disturbing incident at the home of a Black farmer.
Waite was among a group of volunteers who were meeting at the home, when the headlights from a caravan of cars headed up the driveway. The homeowner pulled out his shotgun and loaded it with three rounds as the caravan drew closer.
The abduction and murders of three young Civil Rights volunteers registering Black voters in Philadelphia, Mississippi, not even a year earlier was still a fresh memory.
“We were quite frightened,” Waite said.
The visitors turned out to be other SNCC members, but the experience was a prelude for a confrontation a few days later.
Before sunrise, Waite was among five volunteers in a pickup truck headed to Lowndes. The driver, a man, and three women squeezed into the cab and Waite tucked himself into a sleeping bag in the bed of the truck. At some point in the ride, he felt the pickup speed up. Then he heard sirens.
He felt the truck travel off the paved road onto the wet grass where it fishtailed, flipped over and slid down a hill into a swamp.
Waite was trapped in the overturned bed, but he could see what was happening through a gap. He had no idea if anyone was hurt, or what the troopers planned to do.
“I was terrified,” Waite said.
The driver was arrested on a charge of “defacing the highways” and hauled off to jail in Selma. The three women in the cab were not seriously hurt, but they were left behind.
After Waite dug himself out from under the truck, the four walked until they found a Black farmer with a tractor. The farmer drove them back to the pickup, righted it and hauled it back to the road.
One wheel was badly bent, but the vehicle started up. Waite was the only one with a driver’s license, but he didn’t know how to drive a stick shift. Somehow, they made their way back to the SNCC office, where they then went across the street to see about bailing out the driver.
The rest of the trip was uneventful, Waite recalled. Though before he and others returned to Howard University, Rev. King visited the SNCC office, shook their hands and thanked them for their efforts.
How the assassination of MLK led to a Yale Law School acceptance
While Waite admired King’s commitment to end segregation, he was not enamored with his philosophy of nonviolence.
“It didn’t feel right that you’re going to advance by letting someone beat you over the head," he said. “I feel either I'm going to wear a helmet, if someone is going to try and beat me over the head, or I’m going to try and push them away or something, I feel I have to take matters into my own hands.”
On the other hand, Waite also disagreed with the approach of the Nation of Islam and the Black Panthers, whom he considered activists who made a lot of noise but did not put their lives on the line for equality.
“I was somewhere in the middle where I wanted to be active, but not destructive,” Waite said. “I would not want to participate in the actions of people bent on anarchy and chaos. I’m more inclined to try and build bridges.”
His experience living through the Civil Rights movement and the D.C. riots shaped his decision to pursue a law career. He applied to a number of law schools, but did not hear from most of them.
Until five days after the King assassination in Memphis, Tennessee.
A letter from Yale Law School arrived. It said while the openings for the Class of 1971 had been filled, due to the “extraordinary circumstances” Waite was on a wait list.
A month later a Western Union telegram arrived telling Waite he got in. He was one of 12 Black students accepted, the largest number of Black law students in Yale’s history at that point.
To this day, Waite believes King’s death influenced his admission.
“I know damn right well I never would have gotten into Yale if it wasn’t for the assassination of Martin Luther King,” he said. “No way.”
After graduating from Yale, Waite and his first wife moved to Bucks County in 1971 where he volunteered with the Legal Aid Society and worked part-time as a public defender.
His experiences as a Black man living in a predominantly white suburban community opened his eyes further about how racial minorities were viewed, he said.
Waite had saved enough money to buy a house, but no realtors would show him houses in the Central Bucks area. It was a member of Pebble Hill Church, which Waite and his wife had joined, who helped connect him with a realtor who agreed to work with him.
He bought a small home on Swamp Road near the Moravian Tile Works without going to see it first.
“It’s a house, it has four walls, I’ll take it,” Waite said.
As a Bucks County resident, Waite developed a reputation as a successful lawyer, real estate developer and community activist, which culminated in his 2003 election to the Bucks County Common Pleas Court.
He remains the only Black judge in Bucks County history.
Despite a prominent career in the county, Waite said he still bumped against the occasional reminder of his outsider status.
Once, he was mistaken for a waiter while attending what was billed as a black-tie Bucks County Bar Association event dressed in a tuxedo. Early in his legal career in the county, a judge mistook him for a criminal defendant in court.
After winning his re-election in 2013, Waite was gathering up his campaign signs outside the Doylestown courthouse when he was mistaken for a janitor.
A guest staying at a neighbor’s home during Thanksgiving weekend in 2016 called police after seeing a Black man walk into the house next door at night. Police, who knew the judge lived there, surrounded the home and demanded the person inside come out.
His hands raised, Waite walked out and asked the officers what was going on. One officer immediately recognized Waite as a judge and told him that they had received a call about a home invasion, breaking glass and a commotion.
Waite let the police inside to look around, but they found no one else. Later, he found out what happened was a case of mistaken identity.
“The neighbors told (the house guest) that a judge lived next door and when they saw me, they didn’t see what a judge looks like,” Waite said. “A judge doesn’t look like me.”
He can laugh about the situation now. At the time, Waite said he was more confused about the police presence than frightened by it. He insists he never held ill feelings toward anyone involved.
Waite explained that he has come to accept that people, himself included, hold unconscious biases and judgments about others based on superficial traits, and that people are more comfortable around what is familiar to them.
“Your feelings are feelings. You can’t consciously be, I’m not going to feel this. If you feel it, it’s there,” he said.
What is important, Waite believes, is that people recognize and acknowledge bias exists, and find a way to manage it.
“The reason the U.S. is one of the greatest countries is because of the mixture of all different races, religions, energies, talents. No one has all the assets, and everyone has some assets,” he said. “You cannot require a one-legged person to run a 100-yard dash in 10 seconds, but you can have that one-legged person strategizing for that able-bodied person to do the best they can do.”
Former Bucks County Common Pleas Judge Clyde Waite will be the guest speaker Sunday at the Rev. Dr. Martin Luther King Jr. celebration service at Pebble Hill Church. The interfaith service starts at 10:30 a.m. and is open to the public. The church is located at 3230 Edison-Furlong Road in Doylestown Township. The service is also available online through Zoom (meeting ID 85886792229, passcode &ZuE3J).
|
In the early summer of 1976 the Teton Dam in Idaho failed.
Sugar City, Rexburg and Wilford were flooded as water poured out of the dam at the rate of one million cubic feet per second. The disaster killed 11 people.
In response, the U.S. Bureau of Reclamation created the national Dam Safety Program.
More than a decade later, in 1989, the Quail Creek Dike near Saint George failed.
“There were no fatalities, luckily,” said Teresa Wilhelmsen, state engineer and director of the Division of Water Rights. She delivered an update on statewide dam safety on Tuesday during a legislative interim meeting of the Natural Resources, Agriculture and Environmental Quality Appropriations Subcommittee.
But the failure at the Quail Creek Dike did “create and really propel” Utah’s State Dam Safety Program, Wilhelmsen said.
There are now roughly 6,500 dams on the state’s inventory, and inspectors check more than 300 operational dams each year. “We spend quite a bit of the summer out in the field, inspecting these dams with the owners,” Wilhelmsen said.
“We talk about adding infrastructure to the state and building additional,” Wilhelmsen said, “but I think it’s just as critical that we maintain the dams that we have in a safe condition because they provide a huge benefit.”
The Utah State Dam Safety program’s main focus is on high-hazard dams. A dam is classified as “high hazard” if its failure would likely lead to people dying, Wilhelmsen explained to legislators. That rating doesn’t have anything to do with the condition of the dam, but with the consequences. A dam is deemed “moderate hazard” if its failure would result in significant property loss but not death, and “low hazard” if it would cause only some property damage.
There are 223 high-hazard dams in Utah and 101 of those dams need to be upgraded to meet minimum standards. Of those 101 dams, 24 are already in some stage of either construction or design. That leaves 81 high-hazard dams that aren’t yet up to the state’s standards.
In the meeting, Wilhelmsen and Candice Hasenyager, director of the Division of Water Resources, told lawmakers that at current funding levels of about $3.8 million a year it could take about 120 years to update all the dams in the state. It might take even longer as more people move into areas near dams, pushing them into the “high hazard” category. Plus, inflation and evolving safety standards could make the repairs even more costly in the future.
“It is very sobering,” said the committee chair Sen. Scott D. Sandall, R-Tremonton.
On average it costs the state $4.5 million to upgrade one dam. Under Utah law, the state engineer cannot force a “mutual irrigation company or water users association” to upgrade their dam to meet minimum standards unless the Board of Water Resources offers to pay for 80% of the costs.
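A rough back-of-the-envelope check (my own arithmetic from the figures above, not a calculation presented at the hearing) shows one way the 120-year estimate pencils out: upgrading the roughly 101 substandard high-hazard dams at an average of $4.5 million each, funded at about $3.8 million a year, gives

$$
\frac{101 \times \$4.5\ \text{million}}{\$3.8\ \text{million per year}} \approx 120\ \text{years}.
$$

The actual pace would also depend on cost shares, inflation and the evolving standards noted above.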
Federal and for-profit privately owned dams don’t qualify for the state’s grant funding program.
A national issue
Dam safety is a concern across the country — in 2022 the Associated Press found more than 2,200 high-hazard dams “in poor or unsatisfactory condition” in the United States. The Association of State Dam Safety Officials estimated last year that it would require $24.04 billion to rehabilitate non-federal, high-hazard dams nationwide.
The development below dams is an added concern.
“When you’re the state engineer and you’re looking for a house the two top things on your criteria are: are you below a dam or below a canal?” Wilhelmsen said. “You don’t buy there. But people do. They live where they want to live.”
|
Astrobiology often conjures images of alien-laden exoplanets circling some far-flung star. But much of the European Astrobiology Institute’s recent biennial meeting in the Canary Islands was devoted to detecting evidence for either existing or past life within our own solar system.
To that end, the European Space Agency’s (ESA) ExoMars rover was a hot topic. Tentatively scheduled for launch in 2028, it’s due to begin exploring a wholly untouched area of the red planet early next decade.
In 2018, the ExoMars Landing Site Selection Working Group recommended that the ESA rover explore Oxia Planum, not far from the equator in the Martian northern hemisphere.
This area of Mars has not really been studied by landers on the surface, John Carter, a planetary scientist at France’s Paris-Saclay University, told me at the conference.
Geographically, the closest lander to the ExoMars chosen site would be NASA’s Mars Pathfinder mission; a very small rover technology tester which launched some 25 years ago and lasted 85 days at the surface.
Pathfinder landed in an area which is geologically very different from Oxia Planum, says Carter.
Volcanic activity may have covered Oxia Planum’s early clays and other aqueous deposits, offering preservation for biosignatures against the planet’s harsh radiation and oxidation environment, says ESA.
Oxia Planum has thin layers of material likely deposited with water to create phyllosilicate clay, says Carter. This is what we would call mud here, he says.
“Mars scientists love mud because if you have past life or organic matter that's mixed with the clay in some fashion, it tends to get stuck with it and can stay stuck for billions of years,” said Carter.
This allows for sedimentation in a way that can promote fossilization.
So, we want to go to these clay rich sites which have sediments which had water flowing some 4 billion years ago, says Carter.
The ESA rover will use solar panels to run its operations, so it is subject to dust storms and the Martian winter.
We certainly hope that within a couple of weeks or a couple of months, we'll be able to accomplish ExoMars’ primary science goals, says Carter.
The expectation is that the ESA rover will be able to land and then drive to a scientifically compelling site only a few meters away. ExoMars will be the first such lander to drill and collect material both from exposed geological outcrops and from some six feet below the Martian surface.
As for whether Oxia Planum borders an ancient Mars ocean?
From orbital observations, there is evidence for past ponding of water at Oxia Planum. But Carter says that reaching such water levels would require huge amounts of water, enough to fill part of Mars’ northern plains. We know that Mars had seas that were closer in size to the Mediterranean, says Carter.
“Maybe Oxia Planum is on the margin of an ancient ocean,” said Carter.
But Carter and colleagues are keeping their science goals in perspective.
It's easy to oversell Mars science and the idea that water and life were everywhere, says Carter. We honestly don’t know, he says.
Carter notes that the idea of Mars having had an ocean in its past is problematic, but that ExoMars should provide researchers with a decent test of this hypothesis. And even if the rover doesn’t find evidence for an ocean, there will likely be evidence for processes involving episodic flooding and surface weathering that Carter says may prove even more astrobiologically interesting than an ancient ocean.
Unlike NASA’s more recent rovers, ESA is using solar panels for ExoMars, which Carter says can remain in working order for a decade or more.
The hope is that within just a few weeks of landing, ExoMars will achieve what Carter calls the mission’s overarching goal: finding biological signatures of past life.
|
In the last post, we looked at the signs and symptoms of ADHD; you can catch up here. This time let’s explore treatment, medication and other support.
Medication is not a permanent cure for ADHD but may help someone with the condition concentrate better, be less impulsive, feel calmer, and learn and practise new skills. Some medicines must be taken daily, while others can be taken on school or work days. There are five types of medicine licensed for the treatment of ADHD divided into two groups. Small doses are prescribed initially and may increase gradually; regular check-ups are required to ensure the treatment works effectively and check for signs of any side effects or problems. Periodic treatment breaks are recommended to assess whether the medication is still needed.
| Stimulant medications | Non-stimulant medications |
| --- | --- |
| Stimulant medications are the traditional treatment for ADHD. For some people, these medications are the best option, but not everyone with ADHD tolerates stimulant medications well. Additionally, they can have harmful interactions with other prescription medications. | Non-stimulant medications are also an option for treating ADHD. This includes some antidepressants as well as non-stimulants specifically made for ADHD. Like stimulants, these medications aren’t the right choice for everyone with ADHD. |
| Medication | Common side effects |
| --- | --- |
| Methylphenidate is the most commonly used medicine for ADHD and may be offered to children over five years of age. It increases activity in the brain, particularly in areas that control attention and behaviour. | A slight increase in blood pressure and heart rate; loss of appetite, which can lead to weight loss or poor weight gain; sleep difficulties; headaches; stomach aches; changes in behaviour such as aggression, irritability and anxiety; depression |
| Lisdexamfetamine also improves concentration, focus and attention while reducing impulsive behaviour. It is a second-line treatment offered to children over the age of 5 if methylphenidate has not helped after six weeks. | Decreased appetite, which can lead to weight loss or poor weight gain; aggression; drowsiness; dizziness; headaches; nausea and vomiting; diarrhoea |
| Dexamfetamine is similar to lisdexamfetamine and works in the same way. It may be offered to adults, teenagers and children over the age of 5. | Decreased appetite; mood swings; agitation and aggression; dizziness; headaches; nausea and vomiting; diarrhoea |
| Atomoxetine works differently from other ADHD medicines. It is a selective noradrenaline reuptake inhibitor (SNRI), which increases the amount of noradrenaline in the brain to aid concentration and help control impulses. | A slight increase in blood pressure and heart rate; nausea and vomiting; stomach aches; trouble sleeping; dizziness; headaches; irritability |
| Guanfacine acts on part of the brain to improve attention, and it also reduces blood pressure. | Tiredness or fatigue; headache; abdominal pain; dry mouth |

Atomoxetine has also been linked to suicidal thoughts and liver damage.
As well as taking medication, other therapies can help treat ADHD and any additional problems, such as conduct or anxiety disorders, including
| Social skills training | Psychoeducation | Behaviour therapy |
| --- | --- | --- |
| Social skills training involves taking part in role-play situations and aims to teach children how to behave in social situations by learning how their behaviour affects others. | Psychoeducation can help children, teenagers and adults make sense of being diagnosed with ADHD to help them cope with its effects. | Behaviour therapy supports carers of children with ADHD. Usually, it involves a system of rewards to encourage your child to try to control their ADHD. |
Diet and supplements
Some people may notice a link between types of food and worsening ADHD symptoms. Keeping a simple photo diary can help identify patterns if this is the case. Please do not cut out foods before seeking medical advice. A GP may refer to a dietitian.
Some studies have suggested that omega-3 and omega-6 fatty acid supplements may benefit people with ADHD. Getting medical advice before taking supplements is advisable because some can react unpredictably with medicine or make it less effective. Also, some supplements, especially fat-soluble ones, should not be taken long-term, as they can reach dangerous levels in your body.
Learning about ADHD, how it affects your child, and which parenting approaches work best will help your child flourish and reduce family stress.
The ADHD Foundation Neurodiversity Charity
|
The winter sports season is in full swing. For professionals and amateurs alike, every minute of playing time offers a chance for athletic joy or glory — and the hope of avoiding the possibility of injury.
Our sports culture has largely moved beyond the “walk-it-off” school of injury response, particularly when a player sustains a blow to the head. Groups like Concussion Awareness Now, the Brain Injury Association of America, and Pink Concussions have been educating the public about how serious concussions can be — and about the fact that brain injuries can have both a short- and long-term impact on health.
But we now have new diagnostic tools that can support this growing awareness: blood tests to help doctors objectively assess a potential concussion quickly.
These tests utilize “biomarkers” — measurable physiological characteristics that tell us something about specific aspects of our health.
We’re all familiar with biomarkers like pulse, blood pressure and cholesterol. High levels of cholesterol, for example, are signs that a person might have an increased risk of cardiovascular disease. More recent biomarker discoveries — proteins found in the blood that can indicate things like damage to the heart or brain — are precise and specific.
Last year, the U.S. Food and Drug Administration approved the first core laboratory test that can screen for two biomarkers associated with brain injury, known as GFAP and UCH-L1. The new test, developed by Abbott and used for mild traumatic brain injury patients in people 18 and older within 12 hours of injury, can deliver results in 18 minutes. It can allow a doctor to rule out the need for a costly CT scan to screen for a brain bleed or more serious brain injury.
Biomarker tests like these are the future of diagnostic medicine. In recent years, we’ve seen the emergence of blood tests for Alzheimer’s disease, certain autoimmune conditions, and Parkinson’s. We’ve seen how genetic biomarker tests can dictate the optimal course of cancer treatment on an individual basis.
In the case of a possible concussion, the potential for improving the speed and accuracy of evaluation through this new test is extraordinary. Instead of just asking if someone feels “off” after a head or neck injury, or whether they can remember who the president is, a doctor can now assess blood levels of GFAP and UCH-L1. Such objective tools stand to radically help with assessment, treatment and recovery.
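To make the workflow concrete, here is a minimal, hypothetical sketch of how a rule-out blood test like this could slot into emergency-department triage. The cutoff values, names and decision flow below are illustrative assumptions only, not the assay's actual thresholds or a clinical protocol; real cutoffs are set by the manufacturer, and interpretation always rests with the treating clinician.

```python
from dataclasses import dataclass

@dataclass
class TbiBloodResult:
    gfap: float    # measured GFAP level (units as reported by the assay)
    uch_l1: float  # measured UCH-L1 level (units as reported by the assay)

# Placeholder cutoffs for illustration only -- not the real assay values.
GFAP_CUTOFF = 1.0
UCH_L1_CUTOFF = 1.0

def ct_scan_indicated(result: TbiBloodResult, hours_since_injury: float, age: int) -> bool:
    """Simplified rule-out logic: if the patient falls inside the test's described
    population and time window, and both markers come back below their cutoffs,
    the result supports skipping a CT scan; otherwise default to CT and clinical judgment."""
    if age < 18 or hours_since_injury > 12:
        return True  # outside the described population/window; evaluate conventionally
    negative = result.gfap < GFAP_CUTOFF and result.uch_l1 < UCH_L1_CUTOFF
    return not negative

# Example: a 25-year-old tested 3 hours after a hit, with both markers low
print(ct_scan_indicated(TbiBloodResult(gfap=0.2, uch_l1=0.4), hours_since_injury=3, age=25))  # False
```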
The process of developing these new blood tests that utilize biomarkers across a range of disease and injury has been methodical and rigorous. Unfortunately, once approved and made available, adoption doesn’t necessarily happen quickly.
Consider that scientists first discovered the biomarker troponin, a protein in the blood released during a heart attack, in 1965. However, it wasn’t until the 1980s — more than 15 years later — that a troponin blood test became available that could remove most ambiguity about whether a patient suffered a heart attack.
Novel diagnostic tools like this often face considerable obstacles to uptake even after they are approved or cleared and become available to health care providers. It can take a while to build clinical awareness of their value in helping patients — and awareness in the health care system as a whole. That’s in part why some states are beginning to require insurers to cover biomarker tests for things like cancer.
Though troponin tests were adopted gradually, they are now widely used to help clinicians diagnose heart attacks. Now tens of millions of patients who suffer a heart attack can benefit from the use of high-sensitivity cardiac blood tests, improving the lives of many.
There’s a lesson in that history. Adoption of FDA-cleared diagnostic tests has the potential to help many and provide significant cost savings to the health care system. Concussion patients and clinicians now have that option available to them.
Expert consensus in favor of the use of biomarker tests for concussion is developing rapidly. Groups such as the American Congress of Rehabilitation Medicine and the National Academies of Science, Engineering and Medicine released reports last year advocating for their use. Such guidance builds awareness, encouraging hospitals, clinics and doctors to deploy them in the evaluation of brain injury.
Support from the community — adults worried about elderly parents or young children who face an outsized concussion risk, the military, athletic associations — has also played a major role in ushering in a new standard of care. These groups deserve credit for their advocacy.
We have new tools to help objectively evaluate concussions. Now we must ensure everyone has access to them.
Beth McQuiston, M.D., R.D., is a neurologist, licensed physician and medical director for Abbott diagnostics.
|
<urn:uuid:b94dd241-e6ca-4cf7-8ffe-7bbd9e7c4b3c>
|
{
"dump": "CC-MAIN-2024-10",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474661.10/warc/CC-MAIN-20240226162136-20240226192136-00847.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9366785287857056,
"pii_count": 0,
"score": 2.921875,
"token_count": 972,
"url": "https://www.dallasnews.com/opinion/commentary/2024/02/09/a-breakthrough-in-concussion-testing-is-here-dont-make-patients-wait-for-it/"
}
|
Colorado state veterinarian warns of avian flu surge as spring bird migration begins
Cases of highly pathogenic avian influenza, the disease spreading among wild and domestic bird populations nationwide, are expected to surge as the migratory season begins in Colorado.
It’s been nearly a year since the first outbreak in Colorado, and while cases have slowed, Colorado state veterinarian Maggie Baldwin said the risk will go up as more flocks of birds pass through.
“[These wild birds] are bringing more virus, they're shedding more virus in the environment, and we're likely gonna see more spillover of that virus into our domestic poultry operations on both the commercial and the backyard side,” Baldwin said.
So far, about 6.4 million chickens have either been killed by the virus or put down to prevent outbreaks within a flock. Hundreds of wild birds, mostly geese and ducks, have also been killed by the virus. Death is all but guaranteed for birds that contract it, and symptoms include sudden fatigue, decreased egg production, and nasal discharge.
The avian flu has recently been linked to deaths in mammals that consumed infected birds.
A big impact on egg prices and commercial farms
The nationwide outbreak has driven up egg prices across the country. According to federal data, a dozen eggs cost an average of $4.83 as of January 2023, up from the average of $1.93 recorded a year prior.
“What we can likely expect is across the nation, we're going to see another increase in cases this spring, and that's really what led to consumer impacts was when we had a lot of our commercial egg laying populations impacted around the same time,” Baldwin said.
With the outbreak reaching “unprecedented” length, Baldwin acknowledges that fatigue may be setting in for commercial and backyard flock owners. However, she urges owners to keep up their biosecurity measures. When big commercial farms are hit, it takes months and millions of dollars to recover from a mass death event.
Baldwin, along with experts from Colorado Parks and Wildlife and Colorado State University, will host a webinar next week to share more information about the avian flu and how to keep flocks safe this spring.
Avian flu cases among humans are extremely rare, and they usually occur only when people are heavily exposed to infected poultry.
As crisis continues, federal officials weigh mass vaccination
The federal government is discussing the possibility of a large-scale avian flu vaccination program for poultry, and The New York Times reports a potential vaccine is already being tested.
“The USDA is really the first step in getting that approval process started because, if you vaccinate, there are potential trade implications,” Baldwin said.
In the meantime, agriculture officials and veterinarians are still searching for other solutions.
There’s “no clear end in sight” for this avian flu outbreak, Baldwin said. With no treatment available and the uncontrollable nature of bird migration coming back into play, it appears this strain of avian flu is here to stay.