Datasets:
Columns: meta (dict), text (string, lengths 224 to 571k)
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9782186150550842, "language": "en", "url": "https://bonsaifinance.com/ca-en/what-the-basic-income-experiment-tells-us-about-no-credit-check-loans-ontario/", "token_count": 1145, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.205078125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:c134b528-1163-487f-aa8a-53d6aac05c90>" }
If you’ve been following the news in and around Ontario, you have no doubt heard of the landmark basic income experiment that was carried out recently. Although it did not address lending directly, the results of this experiment had some interesting implications concerning the need for no credit check loans Ontario. Keep reading to learn more about what this experiment was and why it’s important to consider its findings in the context of the province’s lending market. What Was the Basic Income Experiment? Ontario’s basic income experiment began in 2017 and was intended to test the effectiveness of giving money to poor people directly as a way to reduce poverty in the province. The trial was open to residents of Hamilton, Lindsay and Thunder Bay, and an estimated 4,000 people participated overall across these three locations. Each person who participated received just under $17,000 per year, rising to about $24,000 if that person had a disability. Couples who enrolled in the program together got about $24,000 as well. These payments were split into monthly installments to better accommodate budgeting. The only requirements for signing up for this pilot program were that you had to be living in one of the pilot sites and living on a low income as well (as defined by the low income cut-off measure, or LICO, that is used in other parts of social policy). The idea was to evaluate whether or not the extra money could help people achieve positive outcomes in terms of improved mental and physical health, better food and housing security, increased participation in education and the labour market, and many other indicators of an improved quality of life. If it had shown promise in any of these areas, Ontario’s Liberal government had promised to consider implementing it on a province-wide basis. Though it was supposed to last a full three years in order to provide enough time for these potential effects to fully play out, Ontario’s new Conservative government prematurely ended the study in early 2019. What Did It Show Us About Ontario’s Budgets? The basic income experiment may not have lasted long enough to support any definitive conclusions about the things it was trying to study, but there are still many things we can learn from how it played out. It was intended to study the effects of poverty, or in other words, a lack of money – this is the exact problem that loans also attempt to solve in a different way. The fact that people were so eager to participate in something like this points to a deeply entrenched problem with the common person’s finances. Virtually all of the participants were deeply financially troubled; more than half of the participants had been behind on their bills for at least 2 consecutive months in the year prior to the experiment beginning. Now that the program has ended and the participants are sharing their stories, there is a burgeoning recognition that much of Ontario is in this same boat. People are having a lot of difficulty making ends meet in this province, and in that context, the groundswell of support for a program that gives out free money with very few conditions is perfectly understandable. People are stretched very thin, and they need some sort of solution that allows them to live their daily lives under less financial pressure than they are currently facing. Now that the basic income experiment has been cancelled and seems unlikely to be implemented on a wider basis, that relief seems to still be a long way away.
However, the people’s need is as acute as ever, and there’s no reason to think that that’s going to change any time soon. Where Do No Credit Check Loans Fit In? If there’s one thing that this whole experiment makes clear, it’s that Ontarians occasionally need some help getting on track with their expenses for the month. No credit check loans Ontario are not free government money like the basic income would have been, but they do allow you to grasp a bit more of your potential spending power now in exchange for giving up more of it later. They can help you to get through a difficult time financially by allowing you to cover some vital expenses right away – expenses such as overdue bills. You can get them no matter what your credit history might look like at the moment, and they can spare you some very unpleasant short-term suffering while you work on hopefully getting your finances to a better place. With this information to provide context for us, it is clear that no credit check loans Ontario are more important to have around than they have ever been. They are a handy, accessible potential solution to the problems that keep many people from leading their lives to their full potential. These products will never make a good long-term solution, but they are far better than nothing and may help you to bridge the gaps in a very uncertain situation. Judging from what came of the basic income experiment, there is a lot of demand, both open and hidden, for this kind of help for the budgets of residents of Ontario. It’s No Hassle To Find No Credit Check Loans Ontario While it would be nice to think that the government will some day intervene in the clear hardships that Ontarians are facing, right now we can only rely on ourselves. Products like no credit check loans Ontario are your best bet for stretching your money for now, and you can find multiple versions of them to choose from through Bonsai Finance. There’s no reason to ever choose a loan on your own again; we’ll gladly hold your hand through the whole process and help you to come out satisfied on the other side.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9650756120681763, "language": "en", "url": "https://charlottemuseum.org/bank-of-america/", "token_count": 868, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.41015625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:de597d2d-447e-4cba-a645-b2734fffac48>" }
When thinking of financial centers, cities like New York and London immediately come to mind, as do names like Citibank and Goldman Sachs. However, another city and another name should also come to mind – Charlotte and Bank of America. How did Charlotte, a small southern city, become the headquarters for one of the largest financial institutions in the world? The answer is an odd and interesting combination of history, industry and legal quirks. Bank of America can trace its roots to 1874 when, with only $50,000 in equity and $200,000 in deposits, the Commercial National Bank was founded in Charlotte. The bank filled the void left in the aftermath of the Civil War, during which every bank in the Carolinas failed. At that time, Charlotte had a population of about 4,000, but the city was quickly becoming a rail center and textile hub in need of a local bank. The first big steps in this amazing story occurred in the late 1950s, when Commercial National merged with Charlotte’s American Trust and Security National of Greensboro. Recognizing that the institution was no longer strictly a Charlotte bank, the banks became the North Carolina National Bank (NCNB). By the 1970s, these mergers, along with an aggressive growth strategy, made NCNB the largest bank in North Carolina. North Carolina National Bank was poised for further growth, but expansion was difficult. During this time, state laws throughout the country prevented banks from having branches in more than one state. Therefore, a North Carolina bank could not have a branch in another state and vice versa. However, this prohibition on interstate banking was not enough to stop the management of NCNB; they soon found a way. In 1982, the bank owned a small Florida trust company, which was not a bank and therefore, under state laws, could be owned by an out-of-state bank. NCNB used this loophole to purchase a small bank in Florida and, in accordance with the law, begin expansion within the state. With branches now open in Florida, NCNB officially became one of the first interstate banks. In 1983, a new leader, hungry for expansion, took the reins at NCNB. As the new CEO, Hugh McColl implemented an acquisition strategy that brought the bank to national and international prominence. McColl’s strategy was aided by the North Carolina Legislature, which in 1985 passed the Southeastern Regional Banking Compact, a law allowing North Carolina banks to own branches in other states, while forbidding northern banks from entering North Carolina. This act of protectionism allowed NCNB to continue expansion with little fear of acquisition by an outside bank. The field was now open for the next big merger. In 1989, First Republic, a Texas bank, failed during an oil bust and was taken over by the FDIC. Fortunately, Texas interstate banking restrictions did not apply when buying a failed bank from the FDIC, allowing NCNB to purchase First Republic. Just like in Florida, once NCNB owned a Texas bank it could expand within the state. These early entries into interstate banking put NCNB ahead of other banks struggling to get over the wall of interstate branch restrictions. In later years, as the legal walls came down, other banks crossed them only to find NCNB already there. Further mergers followed, and in recognition of its growing interstate presence, in 1991, NCNB changed its name to NationsBank. Another name change soon followed when NationsBank made its pinnacle acquisition, changing the bank from a primarily eastern institution into a truly national one.
In 1998, NationsBank acquired the San Francisco-based BankAmerica and became Bank of America. BankAmerica had a huge west coast presence and the purchase created the first coast-to-coast bank. What started as a small, local bank transformed into one of the largest banks in the United States. Today, Bank of America employs over 223,000 people and has revenues of over $85.1 billion. Although the era of large mergers and outsized growth ended with the recent recession, the long path from a small Charlotte bank to an international giant firmly placed Charlotte on the financial map. Sources: Rick Rothacker, Banktown (Winston-Salem, 2010); Greg Farrell, Crash of the Titans (New York, 2010).
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9375687837600708, "language": "en", "url": "https://elm-trading.com/our-trading-interests/wind-energy-renewables", "token_count": 360, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1279296875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:21b055c5-b778-4e85-9e6e-058c5639689f>" }
Background to wind energy The UK’s first commercial wind farm was built in 1991 in Cornwall and since then there has been significant growth in the industry. As the UK is the windiest country in Europe, it is not surprising that wind is now its largest and most cost-effective source of renewable energy. The UK Government has a binding commitment under the EU Renewable Energy Directive to source at least 15% of its primary energy from renewable sources by 2020. To help achieve this obligation, the Government offers various subsidies including the Feed-in Tariff (FIT) scheme and Renewable Obligation Certificates (ROCs). Qualification criteria for the FIT scheme and ROCs vary, but both offer RPI-linked prices for the energy generated through renewable sources, providing a secure income stream for shareholders. Participating in wind energy Elm Trading’s wind energy division has a portfolio of 34 operational wind projects, comprising 40 turbines with a generating capacity of more than 24.06 MW. The sites are located in North and South Wales, Scotland and Devon, and each benefits from Government-backed renewable energy incentives. Two projects qualify for ROCs while the other 32 sites qualify under the FIT scheme. Why wind energy is an attractive trading opportunity - The turbines themselves and the land represent tangible assets with a related value - FIT and ROC subsidies provide a secure income stream for up to 20 years, the level of which is known up front for each site - Stable income return - There is a strong market for mature, energy-generating assets which helps provide liquidity within the overall portfolio “The UK is the windiest country in Europe and could power itself several times over using wind.”
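As a rough illustration of the income mechanics described above, the sketch below (Python) estimates annual output and tariff revenue for a portfolio of this size. The capacity factor and tariff rate are assumptions made for the sake of the example, not Elm Trading figures.

```python
# Illustrative sketch: rough annual output and subsidy revenue for a small
# wind portfolio like the one described above. The capacity factor and
# tariff below are placeholder assumptions, not Elm Trading's actual values.

HOURS_PER_YEAR = 8760

def annual_energy_mwh(capacity_mw: float, capacity_factor: float) -> float:
    """Energy generated in a year given installed capacity and capacity factor."""
    return capacity_mw * capacity_factor * HOURS_PER_YEAR

def annual_subsidy_revenue(energy_mwh: float, tariff_per_mwh: float) -> float:
    """Revenue from a fixed per-MWh tariff such as a FIT payment."""
    return energy_mwh * tariff_per_mwh

if __name__ == "__main__":
    portfolio_capacity_mw = 24.06      # from the text: 40 turbines, ~24.06 MW
    assumed_capacity_factor = 0.28     # assumption: typical UK onshore wind
    assumed_tariff = 80.0              # assumption: pounds per MWh, illustrative only

    energy = annual_energy_mwh(portfolio_capacity_mw, assumed_capacity_factor)
    revenue = annual_subsidy_revenue(energy, assumed_tariff)
    print(f"Estimated output: {energy:,.0f} MWh/year")
    print(f"Estimated tariff revenue: £{revenue:,.0f}/year")
```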
{ "dump": "CC-MAIN-2020-29", "language_score": 0.8949439525604248, "language": "en", "url": "https://money.howstuffworks.com/stagflation2.htm", "token_count": 1263, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1376953125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:710c83e6-39bd-4915-8992-4454333caee8>" }
How to Prevent Stagflation Economist Milton Friedman was one of the first to predict the stagflation of the 1970s. Friedman understood that the Federal Reserve wields incredible power to increase or decrease inflation in the U.S. In Friedman's worldview, inflation happens when the Fed allows too much money to circulate in the economy. His formula for inflation is simple: "Too much money chasing too few goods." The dual mission of the Fed is to keep prices stable and maximize employment [source: Hobson]. The strategy for achieving this mission is called monetary policy. Modern monetary policy is heavily influenced by Friedman's theories. When the economy is growing, the Fed raises interest rates to limit the amount of money in circulation. When the economy slows, the Fed lowers interest rates to encourage borrowing and increase the amount of money in circulation. The goal is to strike a precarious balance where the economy grows at a healthy rate without allowing inflation to get out of control. In the 1960s, in an effort to maximize employment at all costs, the Fed lowered interest rates and flooded the economy with money. This led to increased demand for goods and services and rising prices. When it was clear in the 1970s that inflation was spiraling out of control, the Fed and the federal government took the erroneous approach of pumping more money into the system even as real economic output sagged. This fit Friedman's formula for inflation: too much money chasing too few goods. It wasn't until 1979, with the appointment of Fed chairman Paul Volcker, that the Fed put Friedman's monetary policy theory into practice [source: Orphanides]. Volcker raised interest rates, choking off the flow of money into the economy. It meant high unemployment and a significant recession in the early 1980s, but inflation returned to normal levels and the economy eventually stabilized. The threat of stagflation is greatly increased during a recession, when GDP is slumping and unemployment is on the rise. According to standard monetary policy, the Fed lowers interest rates during a recession to encourage borrowing and spending. The key to preventing stagflation is to avoid allowing too much money to enter the economy too quickly. To successfully avoid stagflation during a recession, Fed economists need to accurately predict both the short- and long-term performance of the economy. They have the difficult job of identifying the turning point -- when the country emerges from recession -- and slowly pulling money out of circulation. This requires impeccable timing. If the Fed raises interest rates too soon, it could kick the legs out from under the restarting economy. If it waits too long, the economy can become overheated with extra cash, causing prices to rise and inflation to soar [source: Gogoll]. Keep reading for lots more information about the economy, the Fed and currency.
Sources:
- Braham, Lewis. MoneyWatch.com. "5 Nightmare Scenarios for the Economy." October 28, 2009. http://www.cbsnews.com/blogs/2009/10/28/business/econwatch/entry5439301.shtml
- Burton, David R.; Conda, Cesar. The Washington Times. "Why stagflation is coming." June 28, 2009. http://www.washingtontimes.com/news/2009/jun/28/why-stagflation-is-coming/
- Cleveland, Harold van B.; Huertas, Thomas F. Foreign Affairs. "Stagflation: How We Got Into It - How to Get Out." Fall 1979. http://www.foreignaffairs.com/articles/32973/harold-van-b-cleveland-and-thomas-f-huertas/stagflation-how-we-got-into-it-how-to-get-out
- Energy Information Administration. "25th Anniversary of the 1973 Oil Embargo." http://www.eia.doe.gov/emeu/25opec/anniversary.html
- Gogoll, Ted. Businessweek. "What the Fed Could Do If Inflation Ramps Up." August 10, 2009. http://www.businessweek.com/investor/content/aug2009/pi20090810_463275.htm
- Hobson, Jeremy. Marketplace. "Bernanke tackles price stability, jobs." November 16, 2009. http://marketplace.publicradio.org/display/web/2009/11/16/pm-bernanke/
- Jubak, Jim. MSN Money. "Is '70s-style stagflation returning?" January 4, 2008. http://articles.moneycentral.msn.com/Investing/JubaksJournal/Is70sStyleStagflationComing.aspx
- Nielsen, Barry. Investopedia. "Stagflation, 1970s Style." http://www.investopedia.com/articles/economics/08/1970-stagflation.asp?viewed=1
- Orphanides, Athanasios; Williams, John C. The Federal Reserve Board. "The Decline of Activist Stabilization Policy: Natural Rate Misperceptions, Learning and Expectations." April 2004. http://www.federalreserve.gov/PUBS/ifdp/2004/804/ifdp804.htm
- Ryan, Paul D. The New York Times. "Thirty Years Later, a Return to Stagflation." February 13, 2009. http://www.nytimes.com/2009/02/14/opinion/14ryan.html
- Samuelson, Robert J. Newsweek. "The Specter of Stagflation." March 3, 2008. http://www.newsweek.com/id/114803
- Samuelson, Robert J. The Washington Post. "The Return of Inflation?" June 24, 2008. http://www.washingtonpost.com/wp-dyn/content/article/2008/06/23/AR2008062301830.html?opattr=The_Return_of_Inflation%3F
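Friedman's "too much money chasing too few goods" line quoted in the article above is often formalized with the standard equation of exchange, M x V = P x Q. The hedged sketch below (Python) shows the back-of-the-envelope implication: with velocity roughly stable, inflation is approximately money growth minus real output growth. The growth rates used are invented inputs for illustration, not historical data.

```python
# Rough sketch of "too much money chasing too few goods" using the equation of
# exchange, M * V = P * Q. Holding velocity (V) roughly constant, the implied
# inflation rate is approximately money growth minus real output growth.
# All growth rates below are made-up inputs for illustration only.

def implied_inflation(money_growth: float, output_growth: float,
                      velocity_growth: float = 0.0) -> float:
    """Approximate inflation rate implied by the equation of exchange
    (rates as decimals, e.g. 0.07 for 7%)."""
    return money_growth + velocity_growth - output_growth

if __name__ == "__main__":
    # Stagflation-style scenario: fast money growth while real output sags.
    print(implied_inflation(money_growth=0.10, output_growth=0.01))  # ~0.09 -> ~9% inflation
    # Balanced scenario: money grows in line with output.
    print(implied_inflation(money_growth=0.03, output_growth=0.03))  # ~0.0 -> roughly stable prices
```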
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9809523224830627, "language": "en", "url": "https://www.americanprogress.org/issues/women/news/2004/07/02/892/working-mothers-caught-in-a-bind/", "token_count": 1003, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.076171875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:7a22f507-27c9-44c3-ad5a-1e65dbb90c55>" }
For the past few months, the labor market has turned a corner and begun to create jobs at a growing rate, although its pace seems to have slowed again. As jobs are being created, more and more people who had completely given up looking for work are drawn back into the labor market. Although they are not the majority of new job holders, women still constitute a large proportion of them. And while women tend to be the primary caregivers of their children, the job opportunities that are expanding the fastest for women are those where child care benefits are rarest: part-time and service-industry jobs. And not only are employers becoming increasingly stingy in offering child care benefits, the government has been cutting back as well. Consequently, affordable child care will continue to be an important issue as the labor market continues to grow. Recent job creation has drawn back into the labor market many workers who had completely given up looking for a job. Over the past four months the economy has created 1 million jobs, including 112,000 in June, according to figures released today by the Bureau of Labor Statistics. The share of the population that was either employed or actively looking for work dropped from 67.1 percent in March 2001 to 65.9 percent in February of this year. Since then job growth has accelerated, and the labor force participation rate has stabilized, reflecting the fact that people are entering the labor force at the rate of population growth. In June alone an estimated 77,000 people entered the labor force. This also explains why, despite job growth, the unemployment rate has been unchanged at 5.6 percent in recent months. Employment growth has been fastest in jobs typically associated with women's employment, such as part-time and service-sector jobs. Since February 2004, when job growth accelerated, the majority of newly created jobs have been part-time jobs, and the service sector added 830,000 jobs compared to 194,000 jobs in the goods-producing sector of the economy. Thus, the sectors that generally offer women better employment opportunities are expanding faster than others. As women re-enter the labor market, they need to find affordable child care. However, the jobs that are expanding fastest – part-time jobs and service-sector jobs – are also jobs that pay lower wages and have fewer benefits. Inflation-adjusted wages in the service sector were about 13 percent less than in the goods-producing sector in May 2004, the last month for which data are available. And importantly, inflation-adjusted wages in both the service and goods-producing sectors have been falling for some time now, reaching their lowest level since March 2003 in May 2004. Further, today's figures showed a decline in weekly earnings of 2.4 percent in the service sector and 1.8 percent in the goods-producing sector in the last month alone, before inflation is even taken into account. Not only are women likely going into jobs that pay less and offer fewer benefits, but there is clearly no offsetting wage effect that would allow them to pay more for child care on their own. In addition, companies are actually reducing benefits that could help working parents to take care of their children. A recent report by the Society for Human Resource Management found that the share of companies that offered paid family leave dropped from an already low 27 percent in 2001 to 23 percent in 2003.
Also, a study by tax and business information provider CCH found that the share of companies offering the possibility of compressed work weeks, telecommuting or job sharing dropped precipitously from 2002 to 2003. In light of declining benefits in other areas, such as employer-sponsored health insurance and pension benefits in recent years, it is clear that employers have taken advantage of the weak labor market to reduce their costs by reducing access to benefits. That is, working mothers face rapidly rising costs as they reenter the labor market. The government is not filling the gap in child care, either. Many states have recently reduced access to child care to fix their budgets. The Bush administration has also proposed to reduce after-school programs and other child-care efforts in the face of large budget deficits. The Children's Defense Fund, for instance, estimated earlier this year that the Bush administration's proposed level funding in its 2005 budget would have left behind 1.32 million children who should have access to after-school programs. And the administration continues to propose cuts to child-care programs, cutting off access for hundreds of thousands of children each year. All of this ultimately makes it harder for parents, especially mothers, to enter or stay in the labor market as the costs of labor market reentry are soaring. While much of the public policy focus during the years of job decline was on helping workers who could not find a job, the recovering labor market raises additional concerns about whether working families have the support they need to make ends meet, even when they are working. Christian E. Weller is a senior economist at the Center for American Progress.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9810980558395386, "language": "en", "url": "https://www.fgcbolsa-fgcfinancialmarkets.info/2020/06/news-business-uk-economy-uk-inflation.html", "token_count": 343, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0245361328125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:3aa94406-f173-4a89-b9fe-35860bd3b005>" }
Source: BBC
The Consumer Prices Index (CPI) measure of inflation fell to 0.5% in May from 0.8% in April, the Office for National Statistics (ONS) said. Supermarkets were among the few shops allowed to open during the month and food prices rose. However, this was offset by a fall in clothing and footwear prices, as well as cheaper petrol, the ONS said. "The growth in consumer prices again slowed to the lowest annual rate in four years," said ONS deputy national statistician for economic statistics Jonathan Athow. "The cost of games and toys fell back from last month's rises, while there was a continued drop in prices at the pump in May, following the huge crude price falls seen in recent months. Outside these areas, we are seeing few significant changes to the prices in the shops." The ONS admitted that it had difficulty compiling inflation statistics for May, since many areas of the economy were completely shut down. For instance, inflation figures for holidays had had to be "imputed", it said.
What is inflation? Inflation is the rate at which the prices for goods and services increase. It's one of the key measures of financial wellbeing because it affects what consumers can buy for their money. If there is inflation, money doesn't go as far. It's expressed as a percentage increase or decrease in prices over time. For example, if the inflation rate for the cost of a litre of petrol is 2% a year, motorists need to spend 2% more at the pump than 12 months earlier. And if wages don't keep up with inflation, purchasing power and the standard of living fall.
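The petrol example above generalizes directly. The short sketch below (Python) computes a price after compounding inflation and the approximate change in purchasing power when wages lag prices; the pump price and wage growth figures are assumptions used purely for illustration.

```python
# Minimal sketch of the petrol example above: a 2% annual inflation rate means
# the same litre costs 2% more than 12 months earlier, and if wages don't keep
# up, purchasing power falls. All input figures are illustrative.

def price_after_inflation(price: float, inflation_rate: float, years: int = 1) -> float:
    """Price of the same item after compounding inflation."""
    return price * (1 + inflation_rate) ** years

def real_wage_change(wage_growth: float, inflation_rate: float) -> float:
    """Approximate change in purchasing power given wage growth and inflation."""
    return (1 + wage_growth) / (1 + inflation_rate) - 1

if __name__ == "__main__":
    litre_today = 1.20                                   # assumed pump price, in pounds
    print(price_after_inflation(litre_today, 0.02))      # ~1.22 after one year at 2%
    print(f"{real_wage_change(0.00, 0.02):+.2%}")        # flat wages, 2% inflation -> ~-1.96%
```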
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9381198883056641, "language": "en", "url": "https://www.iestxsolar.com/post/how-california-s-solar-initiative-impacts-us", "token_count": 707, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.00872802734375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:747fcb2b-0bc1-4caa-99d3-1b2a6250e543>" }
How California's Solar Initiative Impacts US You may not have heard here in Texas, but California recently made history by mandating that all new homes — and many apartments and condos — incorporate solar beginning in 2020. No matter how you feel about California's culture in comparison to life here in Texas, when it comes to green energy, California is one of the nation's forerunners in innovation. The state already comfortably leads all others when it comes to solar production. In fact, California has installed about five times as much solar as the second leading state, North Carolina. According to the US Energy Information Administration, the state also outranks all others in its production of energy from two other renewable sources: geothermal and biomass. In addition, California tops all others states on energy storage and leads the entire world in putting electric vehicles on the road. These rankings are particularly significant, given that California is the fifth largest economy on the planet. So when it backs a product, like solar, it has the ability to create economies of scale that drive down costs. For that reason, analysts expect cost benefits of the new solar mandate to bleed into other states, hastening what’s already been a dramatic drop in the cost to install solar — a 70 percent decline since 2010, according to the Solar Energy Industries Association. Moreover, it won’t be just the residential solar program spurring greater renewable development in the state. Outgoing Gov. Jerry Brown signed into law a series of new clean energy bills toward the end of his term in 2018, most notably a requirement that California use 100 percent zero-carbon electricity by 2045. Like most forms of renewable energy, solar power is zero carbon, so the requirement will increase its use and likely continue the virtuous cycle of reducing its cost—encouraging more people to adopt solar, which in turn reduces cost, and so on. Home solar is the new deal As Congress considers rebuilding US infrastructure and creating jobs under what’s being called a Green New Deal, California again serves as a guide. Since 2002 when the state began setting ambitious green energy goals, it has created 519,000 clean energy jobs and injected $49 billion in public and private clean energy economy investment, according to a report by Environmental Entrepreneurs. This is not to say that California’s programs cannot be improved upon. The home solar requirement, a change made by the California Building Standards Commission, is expected to increase the price of new homes $8,000 to $20,000 — and California is already a pricey real estate market. That worries housing activists. However, analysts expect energy savings from the solar panels to offset the price increase. Of course, not everyone can install solar panels on their roofs. Shade, for example, can get in the way of solar production. But the rule offers flexibility. Solar panels may be built on each individual roof or developers can create central solar systems shared by several homes, under the new rule. Special accommodations are made for areas with shade or other problems that impede solar generation. State energy policymakers — the chief drivers of energy innovation in the US — will be watching California as it institutes the new policy. It’s unlikely all 50 states will institute requirements that new homes include solar or set goals for 100 percent zero-carbon electricity. 
Time will tell, but chances are the adage will hold: As goes California, so goes the rest of the nation when it comes to clean energy innovation.
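One way to read the cost-versus-savings argument above is as a simple payback calculation. The sketch below (Python) divides the added construction cost by an assumed level of annual bill savings; the $1,000-per-year savings figure is a placeholder assumption, not an official estimate, and financing costs are ignored.

```python
# Back-of-the-envelope payback sketch for the mandate discussed above: added
# construction cost divided by annual energy-bill savings. The savings figure
# is an assumption for illustration, not a regulator's estimate.

def simple_payback_years(added_cost: float, annual_savings: float) -> float:
    """Years until cumulative savings cover the up-front cost (ignores financing)."""
    if annual_savings <= 0:
        raise ValueError("annual_savings must be positive")
    return added_cost / annual_savings

if __name__ == "__main__":
    for added_cost in (8_000, 20_000):   # cost range cited in the article
        years = simple_payback_years(added_cost, annual_savings=1_000)  # assumed $1,000/yr saved
        print(f"${added_cost:,} added cost -> ~{years:.0f} years to break even")
```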
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9781815409660339, "language": "en", "url": "https://www.piggington.com/history", "token_count": 1448, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": -0.2578125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:c6a74b15-8fb4-428c-ba3c-6d5462e59728>" }
San Diego Housing Market News and Analysis A History of the Housing Bubble Submitted by Rich Toscano on June 19, 2005 - 8:34pm If San Diego home prices do not reflect fundamentals, exactly how did they get to such rarefied heights, and what keeps them aloft? It's a fair question, and an important one: understanding how the bubble started will be crucial in identifying how and when the bubble ends. The purpose of this article is to provide a very brief overview of how we got here. Birth of a Bull Market Let's flash all the way back to 1997. The San Diego real estate market was fairly depressed, having weathered several years of adverse conditions: the fallout from the prior housing bubble exacerbated a local recession as the end of the Cold War sent thousands of defense and aerospace workers packing. By this time, as the following graph shows, San Diego homes were as undervalued as they'd been since 1985: But San Diego's economy was already undertaking a brisk recovery as the late-90s "Goldilocks Economy" got into full swing. As would be expected, the housing market finally began to recover and head back towards (and eventually beyond) fair value. As home prices steadily rose over the next several years, the negative sentiment towards real estate that was prevalent throughout the mid-90s was gradually replaced with optimism. By 2002, despite the fact that the Roaring 90s were over and the nation had just been through a recession, the home price to income ratio had reached its historical high point again: Given the relatively high expense of housing and the shakiness of the economy, one might have expected that home prices would begin to moderate at this point and make their way down to the bottom of the valuation channel, as they had typically done in the past. But one would have been mistaken. This time, as the saying goes, was different, and the difference could be described in one word: credit. The Credit Tsunami In an attempt to revive the faltering economy in the wake of a stock crash, a recession, and the 9/11 attacks, Alan Greenspan had recently embarked on a rate-slashing frenzy that would not stop until the Federal Funds Rate had gone from 6.5% down to a multi-generational low of 1%. The idea of such low short-term rates is to stimulate economic activity by encouraging borrowing and spending. Borrowing was stimulated, to be sure, but not exactly in the intended manner. Businesses, instead of borrowing money to rebuild their workforces, offshored enormous amounts of work and used the savings to clean up their balance sheets. Consumers, on the other hand, borrowed like there was no tomorrow, running up credit cards, getting into previously unimaginable amounts of mortgage debt, and generally taking the national savings rate to an all-time low. Meanwhile, a new paradigm was emerging in the currency markets that added even more fuel to the borrowing fire. Bear with me for a few paragraphs as I try to explain this rather complex and nuanced topic as briefly as possible. What was happening was that the export-heavy Asian economies were going to great lengths to keep their currencies low against the dollar, thus keeping their products cheap for Americans to buy. The primary mechanism for this currency intervention was as follows, using Japan as an example: Japan would receive into its banking system US dollars which came from Americans who were buying Japanese products.
The Bank of Japan did not want to sell these dollars in exchange for yen, because doing so would add to the supply of dollars for sale and add to the demand for yen, effectively strengthening the yen against the dollar and driving up dollar-denominated prices for Japanese goods. So they kept their money in US dollars by using it to buy US financial assets, primarily Treasuries and other debt instruments, thus avoiding weakening the dollar. The Asian central banks' heroic efforts to keep their currencies weak against the dollar, combined with their own incredibly low short-term rates, also encouraged their own private investors to buy US debt instruments. Using Japan as an example, again: a Japanese saver (and there are a lot of them—unlike here in the US, there is a huge cultural prerogative to save money in Asia) would be looking around for a place to invest money. Japanese government bonds would be yielding between 1% and 2%. US Treasuries would be yielding around 4%—not great, but better than Japanese bonds. And due to the Japanese central bank's obvious commitment to keeping their currency weak against the dollar, there was little chance for the private investor to lose money via currency exchange rates. The choice was obvious: buy dollar-denominated bonds. So we had Asian central banks lending money to Americans (buying a bond, after all, is the same as lending money to the bond's issuer), we had private Asian investors lending money to Americans, and we had American financial firms borrowing freshly-minted money from the Fed and lending it to other Americans. In short, there was literally more money being lent—and more eagerness to lend it—than the world had ever seen. Meanwhile, Back at the Ranch... OK, let's go back to San Diego circa 2002. Homes had gotten fairly expensive, and normally at this point in the housing cycle the lack of affordability would start to rein in home prices. But San Diego never got there because as home prices marched upwards the credit markets became flooded with ever more money. The resultant lowering of rates and lending standards allowed San Diegans to carry mortgages on homes at ever-higher prices. Having built up a nice head of steam thus far, San Diego home prices exploded into 2003 and 2004 on the wave of easy credit. The sense of invincibility and optimism was now absolutely ubiquitous, and people's willingness to take on more risk grew as fast as prices. Lending begot more lending as borrowed money drove home prices ever upwards and lending institutions, emboldened by the rise in the value of their collateral, started to lend more and more freely. By 2004, home prices were well beyond all rational measures of fair value, but that didn't matter. No price was too high to pay for the money-making machine that was a San Diego home. The market culminated with an early-2004 bout of panic-buying, featuring the kind of frenzied bidding wars that only take place when the hordes have abandoned their senses. Things have calmed down a bit since then, due partly to the Fed's leisurely tightening efforts, partly to the dawning recognition by many San Diegans that we are indeed experiencing a bubble, and to plain old exhaustion, as the market runs out of people willing to buy at these levels. Prices have pretty well flattened out since mid-2004. And so we wait. One cannot predict exactly how this bubble will play out.
But one can adopt the premise that we are indeed experiencing a housing bubble and can use that framework to properly interpret events and market data as they occur. Doing so is one of the primary goals of this website.
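The valuation argument above leans on the home price-to-income ratio. The sketch below (Python) computes that ratio and compares it with an assumed fair-value band; the band limits and the sample price and income figures are illustrative placeholders, not Piggington's actual valuation channel.

```python
# Sketch of the valuation yardstick used above: the ratio of median home price
# to median household income, compared against an assumed historical band.
# Band limits and sample inputs are placeholders for illustration only.

def price_to_income(median_price: float, median_income: float) -> float:
    return median_price / median_income

def classify(ratio: float, low: float = 6.0, high: float = 9.0) -> str:
    """Label a ratio against an assumed fair-value band [low, high]."""
    if ratio < low:
        return "undervalued vs. band"
    if ratio > high:
        return "overvalued vs. band"
    return "within band"

if __name__ == "__main__":
    ratio = price_to_income(median_price=480_000, median_income=52_000)  # illustrative mid-2000s figures
    print(f"price/income = {ratio:.1f} -> {classify(ratio)}")
```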
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9129361510276794, "language": "en", "url": "http://www.bluedolphinnambucca.com/NewSouthWales/size-of-new-south-wales", "token_count": 458, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.06591796875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:fdb0aef2-c4ae-4a15-81cc-0886409b77bf>" }
Size of New South Wales
New South Wales (NSW) is Australia's largest state economy, with 31% of the nation's GDP. The second largest state, Victoria, with its capital Melbourne, contributes 22%. With a population of more than 7.3 million, or nearly one third of the Australian total, NSW has a large and growing domestic market. Per capita GSP, at nearly $65,000, was the second highest of all states in 2012-13. In 2012-13, 33% of national household spending took place in the state, totalling $277.5 billion.
Private business investment
Almost $58.0 billion worth of private business investment was recorded in the state in 2012-13, representing real growth of 8.7% over the previous year. NSW has recorded real average growth of 7.0% per year in business investment over the 10 years to 2012-13. NSW has extensive links to international markets. In 2012-13, the state's exports of goods and services were valued at more than $62.7 billion on a balance of payments basis, representing 21% of national exports. Exports of goods and services grew by 3.4% in real terms in 2012-13, driven primarily by record high coal volumes. Services exports accounted for one third of New South Wales exports, compared with 17% for Australia overall, reflecting the diversified and advanced nature of the state's export base, which is less reliant on resources than other states.
Gross State Product (GSP) at current prices - Australian states and territories, 2012-13
| State/Territory | GSP ($ million) | GSP as per cent of Australian GDP |
| --- | --- | --- |
| New South Wales | 476,434 | 31.3 |
| Western Australia | 242,697 | 16.0 |
| South Australia | 95,123 | 6.3 |
| Australian Capital Territory | 35,088 | 2.3 |
| Northern Territory | 20,113 | 1.3 |
| Australia (GDP) | 1,521,465 | |
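The percentage shares in the table can be recomputed directly from the GSP figures. The short sketch below (Python) does that check; it uses only the numbers shown in the table and omits the states the table omits.

```python
# Quick check of the table above: each state's share of national GDP is simply
# its GSP divided by Australian GDP. Figures are the 2012-13 values shown in
# the table ($ million, current prices); omitted states are left out.

GDP_AUSTRALIA = 1_521_465  # $ million

gsp = {
    "New South Wales": 476_434,
    "Western Australia": 242_697,
    "South Australia": 95_123,
    "Australian Capital Territory": 35_088,
    "Northern Territory": 20_113,
}

for state, value in gsp.items():
    share = 100 * value / GDP_AUSTRALIA
    print(f"{state}: {share:.1f}% of GDP")   # e.g. New South Wales: 31.3%
```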
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9465696215629578, "language": "en", "url": "https://blog.luz.vc/en/What-is/accounting/", "token_count": 1687, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.032470703125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:f6207f4d-47fb-445e-8017-5d5461662494>" }
Accounting is the science that studies the movement of patrimony (assets, rights and obligations) in organizations. Through accounting, the economic and financial statements are generated that form the basis for various day-to-day and mandatory operations of any formally constituted entity, such as the payment of charges and taxes.
What are the Benefits of Accounting? How Do I Do My Accounting? Under federal law, any company, whether for-profit or not, must carry out its accounting. Law 10.406/2002 (New Civil Code), art. 1.179 - The entrepreneur and the company are obliged to follow a mechanized accounting system, based on the uniform bookkeeping of their books, in correspondence with the respective documentation, and to draw up an annual balance sheet and statement of economic results. In addition, the accounting needs to be done and signed by an accountant registered with the CRC (Regional Accounting Council). However, it must be understood that there are two branches of accounting: managerial and financial.
Management Accounting vs. Financial Accounting
Although the main methods, calculations and indicators are used universally in accounting, there are different applications of accounting that should be recognized:
Management: this is the approach usually used by managers themselves, since it has no commitment to the standards of the law. So, while still relying on accounting principles and tools, you can make customizations that give you more relevant data about your business.
Financial: this is the most traditional approach, performed by accounting offices. Although it also provides interesting data for management, it is tied to government and bank standards, so it has little flexibility.
Main Accounting Methods
I do not want to oversimplify the wealth of tools that exist in accounting, but you will commonly hear about these key tools:
Cash flow: the view of the financial inflows and outflows of your business on a cash basis. That is, it focuses on when money actually entered or left, regardless of the date that was previously expected.
Statement of Income for the Year: the view of the financial inflows and outflows of your business on an accrual basis. That is, it focuses on when money should have come in and gone out. It is thus the counterpart of the cash flow.
Balance Sheet: the snapshot of the financial situation and capital structure of a business at a given moment. It is basically focused on two pillars: Assets (goods and rights) and Liabilities (obligations). From the balance sheet it is possible to generate several accounting indicators that we will see later.
Intersection between Accounting and Law
One big question that exists around accounting is its boundary with the law. In other words, it is knowing in which situations to seek an accountant or a lawyer. The process of opening a company, carried out by accountants, has several legal elements, such as the articles of association. In small businesses, the accountant usually recommends and guides a standard contract, but in businesses with a more complex corporate structure, a lawyer is required. In addition, there are the issues of taxes and other levies. On ordinary occasions, the accountant knows how to act because it is already market practice and that will be enough. But if your company is innovating, you will need a legal opinion to understand its tax framework and also your risk.
Main Accounting Terms
Below is a brief list of terms you will come across as you delve deeper into cash flow, income statement (DRE), or balance sheet studies:
1. Amortization: the periodic paying down of an amount owed. For example, if you paid R$20 toward a R$100 debt, you have amortized R$20, or 20% of the debt.
2. Lease: the fixed-term rental of an asset. For example, you can lease a farm by paying a fixed amount to the owner while keeping all the profit you can generate from it.
3. Assets: the set of goods and rights of an organization. Goods can be real estate, vehicles, etc. Rights are usually future earnings guaranteed by contract.
4. Current assets: assets that can be converted into cash within one year. They include money in the bank, inventories, receivables, etc.
5. Permanent assets: in contrast to current assets, these remain fixed for more than one accounting year. Usually they are long-term investments.
6. Share capital: the amount defined in the articles of association or bylaws that determines the participation of the partners or shareholders of the company.
7. Cash and equivalents: any cash immediately available, such as cash on hand, bank balances, and checks awaiting collection.
8. Fiscal year: the 12-month period over which each organization must determine its results. It may or may not coincide with the calendar year (January-December).
9. Liabilities: obligations of the organization that will generate capital outflows.
10. Current liabilities: short-term obligations to be paid in less than one year.
11. Payable liabilities: obligations to third parties, who become creditors of the organization.
12. Non-current liabilities: obligations that come due only in later fiscal years.
13. Shareholders' equity (net worth): the value the owners have invested in the company; it includes capital stock, capital reserves, and retained profits.
The Main Accounting Indicators
Although there are dozens of indicators that can be drawn from the cash flow statement or income statement, such as profitability and operating profit, when we refer to accounting indicators we are talking about those extracted from the balance sheet.
1. Liquidity Ratios
a) Current Liquidity: evaluates the balance between current assets and current liabilities.
LC = Current Assets / Current Liabilities
The higher this result, the better, as it indicates that the company has more assets than liabilities.
b) Immediate Liquidity: shows how much the company has available immediately to cover its liabilities.
LI = Cash and Equivalents / Current Liabilities
Again, the higher the score, the better, as it indicates immediate ability to pay your bills.
c) General Liquidity: takes into account all assets and liabilities in both the short and long term.
LG = (Current Assets + Non-Current Assets) / (Current Liabilities + Non-Current Liabilities)
Like all liquidity ratios, the higher the better.
2. Indebtedness Ratios
a) Degree of Indebtedness: indicates the proportion of third-party capital relative to the company's equity.
GE = (Third-Party Capital / Shareholders' Equity) * 100
In this case, the less the company depends on third-party capital, the better.
b) Composition of Indebtedness: analyzes the ratio between short- and long-term liabilities.
CE = (Current Liabilities / Third-Party Capital) * 100
The smaller the better, as this indicates that your debts are spread over time.
c) Immobilization of Shareholders' Equity: indicates how much of the company's equity is invested in permanent assets.
IPL = Permanent Assets / Shareholders' Equity
The smaller, the better, because it indicates that the company's own capital is not tied up in fixed assets, allowing greater freedom in payments and negotiations.
3. Profitability Indicators
a) Return on Investment: one of the most famous indicators of all, it shows how much money has been returned for each monetary unit invested.
ROI = (Net Income / Total Assets) * 100
The higher, the better, as it indicates a greater return on invested capital.
b) Return on Equity: points to the company's return on the owners' invested capital.
RPL = (Net Income / Shareholders' Equity) * 100
The bigger the better, because it indicates a higher return.
c) Net Operating Margin: the percentage of net revenue that remains as operating result.
MLO = (Operating Income before Financial Income/Expenses / Net Revenue) * 100
The higher, the better, because it indicates greater profitability of the business.
Like the post? Would you like to do these calculations for your company? Check out our accounting spreadsheets.
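The ratios listed above translate directly into a few one-line formulas. The sketch below (Python) computes them from a toy balance sheet; the input figures are invented for illustration, and the formulas simply follow the definitions given in the text.

```python
# Minimal sketch of the balance-sheet indicators listed above, computed from a
# toy set of figures. Input numbers are invented for illustration; the formulas
# follow the definitions in the text.

def current_liquidity(current_assets, current_liabilities):
    return current_assets / current_liabilities

def immediate_liquidity(cash_available, current_liabilities):
    return cash_available / current_liabilities

def general_liquidity(ca, nca, cl, ncl):
    return (ca + nca) / (cl + ncl)

def degree_of_indebtedness(third_party_capital, equity):
    return 100 * third_party_capital / equity

def return_on_investment(net_income, total_assets):
    return 100 * net_income / total_assets

def return_on_equity(net_income, equity):
    return 100 * net_income / equity

if __name__ == "__main__":
    b = {  # toy figures, in R$ thousand
        "current_assets": 500, "non_current_assets": 800, "cash_available": 150,
        "current_liabilities": 300, "non_current_liabilities": 400,
        "equity": 600, "net_income": 90,
    }
    third_party = b["current_liabilities"] + b["non_current_liabilities"]
    total_assets = b["current_assets"] + b["non_current_assets"]

    print("LC :", round(current_liquidity(b["current_assets"], b["current_liabilities"]), 2))
    print("LI :", round(immediate_liquidity(b["cash_available"], b["current_liabilities"]), 2))
    print("LG :", round(general_liquidity(b["current_assets"], b["non_current_assets"],
                                          b["current_liabilities"], b["non_current_liabilities"]), 2))
    print("GE :", round(degree_of_indebtedness(third_party, b["equity"]), 1), "%")
    print("ROI:", round(return_on_investment(b["net_income"], total_assets), 1), "%")
    print("RPL:", round(return_on_equity(b["net_income"], b["equity"]), 1), "%")
```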
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9026719331741333, "language": "en", "url": "https://p21decision.com/the-new-power-plant/new-power-generation-technology/", "token_count": 944, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.041748046875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:d6c73bf0-fa1d-4de6-8ce8-bc923b1cf5f3>" }
We’re dedicated to producing energy in the most socially, economically and environmentally responsible ways we can. On 11/12/2012 the City of Holland voted to approve our recommendations to construct a new, state-of-the-art combined-cycle natural gas power plant. We are still in the early stages of the project, but check our news blog periodically to stay up-to-date with the latest information on the project. Our customers are sometimes surprised to hear how invested we are in a diverse array of power generation options. In fact, with our Power Purchase Agreements (PPAs), over 16% of our power comes from atypical energy sources like wind, biomass, and landfill gas, significantly more than the state-required amount of 10%. What’s a Power Purchase Agreement (PPA)? Power purchase agreements are federally regulated contracts between two power production entities. The Holland BPW engages in several power agreements that allow us to invest in and purchase power from alternative energy sources across the “grid” without having to build the infrastructure for it on our own land. How does our current power plant work? How did we decide on these options for powering Holland? Our recommendations for Holland’s future power are the result of a two-year-long study that involved an SROI, community input and consulting from other experts. Our 2012 Annual Report does a great job of explaining the SROI. Check it out here. Explore some of the diverse power options we’re involved in. What is Combined-Cycle Natural Gas? In a Combined-Cycle Natural Gas (CCNG) system, two turbines work in tandem to generate electricity. The first—in this case, a natural gas turbine—generates electricity, and exhaust heat from that combustion is used to turn a second turbine, which is typically a steam turbine. This reuse of excess heat provides much more efficient generation than a natural gas turbine alone. (Video courtesy of Siemens AG) What are the benefits of natural gas? - Natural gas is reliable, available and sourced in the United States. - Natural gas is one of the most cost-effective fuel sources available on the market today. - Natural gas is significantly more efficient, which makes it much more environmentally friendly than coal. It produces an average of 50% less CO2, lower levels of NOx, fewer particulate emissions, and virtually no SO2 and mercury emissions. If you’re up for an interactive experience, visit an entry from the 2014 Annual Report to see how all the components of a combined-cycle system work together. For a more technical experience, dive deep into our commissioning diagram. Do we use wind energy in Holland? The Holland BPW has Power Purchase Agreements (PPAs) with two wind-energy arrays: - A 16.8 MW array in Ithaca, MI - A 15 MW array in Elwood, IN. How does wind energy work? Most wind-based power generation setups involve an array of wind turbines (like the one pictured here) built in an area with consistent, year-round wind. Wind drives the turbine’s propeller-like blades, which in turn drive the electric generator inside. Power is then transferred down the shaft to a transformer at the base, where it is converted to be integrated into the grid. What are the benefits of wind energy? Although wind power is intermittent (when the wind’s not blowing, turbines don’t contribute to the grid), it can be an excellent supplement to base load energy production.
Wind energy, after the turbine is manufactured, produces no carbon emissions and has a very small ecological footprint, which makes it one of the most environmentally friendly energy sources. Landfill Gas Power Do we use landfill gas to power Holland? Since early 2010, the Holland BPW has contracted through PPAs with landfill gas power plants from Granger and North American Natural Resources (NANR), which together provide over 5.1 MW of generating capacity and should increase to 9.8 MW by 2018. What is landfill gas power? With hundreds of millions of tons of garbage going into landfills every year in the United States, waste can break down to produce an enormous amount of volatile gases, especially methane. Without safe ways of venting or repurposing this gas, methane buildups can cause dangerous explosions. A great way to solve this problem is to use the gas to produce electricity. Landfill gas power plants drill into landfills to release gases and safely burn methane to drive turbines.
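The efficiency advantage of pairing a gas turbine with a steam cycle, described in the combined-cycle section above, can be approximated with a simple heat-recovery formula. The sketch below (Python) uses generic textbook-style efficiency assumptions, not figures for the Holland plant.

```python
# Rough sketch of why a combined-cycle plant beats a simple-cycle gas turbine:
# the steam (bottoming) cycle recovers part of the exhaust heat the gas turbine
# would otherwise waste. Efficiency values are generic assumptions, not
# specifications for the Holland Energy Park plant.

def combined_cycle_efficiency(gas_turbine_eff: float, bottoming_cycle_eff: float) -> float:
    """Fraction of fuel energy converted to electricity when the bottoming
    cycle captures a share of the gas turbine's exhaust heat."""
    exhaust_heat_fraction = 1.0 - gas_turbine_eff
    return gas_turbine_eff + exhaust_heat_fraction * bottoming_cycle_eff

if __name__ == "__main__":
    simple_cycle = 0.38   # assumed gas-turbine-only efficiency
    bottoming = 0.30      # assumed effective HRSG + steam turbine efficiency
    print(f"Simple cycle  : {simple_cycle:.0%}")
    print(f"Combined cycle: {combined_cycle_efficiency(simple_cycle, bottoming):.0%}")  # ~57%
```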
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9412054419517517, "language": "en", "url": "https://www.financegeek.org/accounting/debits-and-credits/", "token_count": 667, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0546875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:2145f251-6084-4190-a683-73a44b9e35f4>" }
Recall the fundamental accounting balance sheet equation:
Assets = Liabilities + Shareholders' Equity
The left side of the equation is assets. The right side of the equation is liabilities and shareholders' equity. Remember this part. It is important to understand debits and credits.
Debit and Credit Explained
A debit is when:
-increase in the left side of the equation
-decrease in the right side of the equation
A credit is when:
-decrease in the left side of the equation
-increase in the right side of the equation
So the opposite of a debit is a credit. General rule to remember: a debit is an increase on the left side. Therefore an increase to an asset, such as an increase in cash, is a debit. It increases the left side of the equation. A decrease in an asset such as cash has the opposite effect and is called a credit. It decreases the left side of the equation.
What about for liabilities and shareholders' equity? It's just the reverse of the general rule above. Therefore an increase to a liability, such as an increase in debt, is a credit. It increases the right side of the equation. A decrease in a liability such as debt has the opposite effect and is called a debit. It decreases the right side of the equation.
Debit and Credit Examples
Let's walk through some examples.
Increase in cash: This is a debit since it increases the left side of the equation. Cash is an asset, so an increase in cash results in greater assets, which means an increase on the left side of the fundamental accounting equation.
Decrease in cash: This is a credit since it decreases the left side of the equation. Cash is an asset, so a decrease in cash results in lower assets, which means a decrease on the left side of the fundamental accounting equation.
Increase in accounts receivable: Accounts receivable is an asset since it is a balance of how much you are owed. Therefore an increase to an asset is a debit.
Decrease in plants, property, and equipment (PPE): PPE is an asset since it represents long-lived physical resources the business owns and uses. Therefore a decrease to an asset is a credit.
Increase in taxes payable: Taxes payable is a liability since it represents how much you owe in taxes. Therefore an increase to a liability is a credit.
How to categorize accounts in accounting? Is an account an asset, liability, or something else? What is an accounting asset? What is an accounting liability? What goes in shareholders' equity?
Definition: An asset is a future economic increase. It results in cash or a gain in the future. A liability is a future economic decrease. It results in a decrease in cash or a loss in the future.
Common Accounts in Accounting:
Cash
Accounts Receivable
Accrued Revenue
Deferred Expenses
Machinery and Equipment
Plant, Property, and Equipment
Accounts Payable
Accrued Expenses
Unearned Revenue
Wages Payable
Taxes Payable
Deferred Expenses, Accrued Expenses, Accrued Revenue, Unearned Revenue
This is a topic that confuses some people starting out in accounting. I'll try to explain it as simply as possible.
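The debit/credit rule spelled out above can be encoded in a few lines. The sketch below (Python) classifies a change as a debit or credit based on which side of the equation the account sits on; the account lists are illustrative examples, not a complete chart of accounts.

```python
# Small sketch of the rule explained above: whether a change is a debit or a
# credit depends on which side of "Assets = Liabilities + Equity" the account
# sits on and whether its balance goes up or down.

LEFT_SIDE = {"cash", "accounts receivable", "inventory", "ppe"}          # assets
RIGHT_SIDE = {"accounts payable", "taxes payable", "debt", "equity"}     # liabilities + equity

def debit_or_credit(account: str, change: str) -> str:
    """change is 'increase' or 'decrease'; returns 'debit' or 'credit'."""
    account = account.lower()
    if account in LEFT_SIDE:
        return "debit" if change == "increase" else "credit"
    if account in RIGHT_SIDE:
        return "credit" if change == "increase" else "debit"
    raise ValueError(f"unknown account: {account}")

if __name__ == "__main__":
    print(debit_or_credit("cash", "increase"))            # debit
    print(debit_or_credit("ppe", "decrease"))             # credit
    print(debit_or_credit("taxes payable", "increase"))   # credit
```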
{ "dump": "CC-MAIN-2020-29", "language_score": 0.949028730392456, "language": "en", "url": "https://www.myhomeworkhelp.net/finance/", "token_count": 1320, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.07666015625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:c2ea766b-72df-4be9-a92d-9c400f240cd4>" }
Types Of Finance Explained By Our Assignment Writers Finance is divided into three major categories, namely personal finance, corporate finance, and public finance. Our finance assignment writers look into each of these classifications in detail. - Personal finance: This type of financial planning involves evaluating the current position of individuals and formulating strategies and plans for future needs. Since every person has different financial needs, personal finance is also specific to each individual's activity and situation. It largely depends on the individual's earnings, goals, desires, and living requirements. It includes the purchase of financial products like insurance, credit cards, investments, mortgages, etc. - Corporate finance: This branch of finance is concerned with the financial activities related to the operations of a corporation. Usually, there is a department or division in place to oversee the running of financial activities. - Public finance: This area of finance includes budgeting, debt issuance, spending, and tax policies that affect the way a government pays for all of the services it provides to the members of the public. A government body is expected to ensure sufficient allocation of resources, economic stability, and distribution of income while preventing market failure. In addition, it should maintain a stable economy that allows citizens to save safely. To scoop the best grades in finance studies and become a legendary finance manager in the future, all the concepts covered in these areas must stay at your fingertips. My Homework Help, the hub for all academic writing, will assist you with your finance assignments so that you can spend more time mastering the subject. Also, if you encounter any challenges understanding the above branches of finance, you can seek assistance from us. We offer outstanding finance tutoring online that you can take advantage of to comprehend these areas. Topics Covered By Our Finance Assignment Help Experts When it comes to issuing finance assignments, there are plenty of areas that professors derive their questions from. These include: - Interest rates and spreads - Dividends and coupon payments - Financial statements - Cash flow - Rates of return (ROA, ROI, IRR) - Cost of capital - Creating value - Behavioral finance - Risk and return We have managed to provide trustworthy help with finance assignments derived from all of these topics, thanks to our amazing team of writers. We have equipped ourselves with the high-end research resources demanded by each of these topics to make sure that we are delivering satisfactory finance assignment solutions all the time. If you have any assignment issued from these topics or any other finance topic and would like us to offer some help, feel free to contact us. What Services Can You Avail From Our Finance Homework Help Platform? My Homework Help is the primary provider of online finance writing help. Students from the US, UK, Canada, Ireland, New Zealand, UAE, Australia, Malaysia, Singapore, Germany, and other parts of the world have grown to trust us with their assignments because they have seen what we can do for their grades. But the major reason why students prefer seeking help with finance homework is that we offer a wide variety of services. Thus, college goers can get quality solutions regardless of the topic their paper needs to cover.
Some of the services you can enjoy by just sending us a "Write my finance assignment" request include:

We provide the above services at the most affordable rates in the market. So if you are looking for cheap finance assignment help services that will deliver quality academic solutions, we are the best company for the job. Our online finance homework writing platform has been around for many years, meaning our experts have gathered enough experience to help you with any issue you may have regarding your assignment. Get in touch with them through our live chat, email or call services and get the best finance project help.

Reasons You Should Seek Our Finance Assignment Writing Help Services

Writing finance assignments can take a toll on students, which makes seeking help at such moments a wise move. Here are some of the reasons you may want to consider taking help with your finance assignments from My Homework Help:

- To create more time for other stuff. We know finance assignments are not the only thing on your plate. Taking help from us will enable you to accomplish other things in your life without stressing about the assignments.
- To score better grades than your peers or than you did last semester. The fact that our solutions are prepared with world-class research resources and by highly experienced experts says it all – entrusting your finance projects to us will always assure you of a good grade.
- To improve your understanding of the topic. The approach we use when writing your assignments guarantees that the solution you receive is legible and easy to understand. We simplify even the most complicated areas of your assignment for you. Our aim is not only to help you attain a good grade but also to ensure that you understand the solution in a manner that will not require you to seek help from an expert when working on similar projects in the future.
- To meet your deadlines. If you have plenty of assignments, it is better to seek professional help than to risk missing your deadline. My Homework Help provides assistance with finance projects and any other subject taught in institutions of higher learning. We are simply a one-stop solution for all your assignment needs.

Features Of Our Finance Homework Writing Help

- Affordability: We don't overcharge our clients, which makes us one of the cheapest finance homework help services in the assignment writing industry.
- Accuracy: We follow your instructions to the letter to ensure that we are writing your paper the right way. Moreover, we use the most up-to-date information sources to make sure the end solution is as accurate as possible.
- 24/7 availability: When you hire an academic writer, you want the person to be available whenever you need them to answer your queries. Our finance project writing services are operated round the clock to ensure convenience.
- Money-back guarantee: At My Homework Help, we work exceptionally hard to produce quality solutions and never have we had a client complain about substandard work. However, in the event you find that you didn't get what you paid for, we always provide unlimited revisions. But if the work doesn't impress you even after the fine-tuning, we are open to giving you a full refund. Nevertheless, your claim must be backed up by concrete evidence that we indeed didn't meet your expectations.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9374573230743408, "language": "en", "url": "https://www.plant.ca/sustainability/alberta-crude-production-and-reserves-way-up-regulator-says-103248/", "token_count": 698, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.271484375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:63cf6063-cb0c-4941-9b81-1ced73d1b443>" }
Alberta crude production and reserves way up, regulator says Crude oil production totalled 556,000 barrels of oil per day with a yearly total of 204 million barrels. CALGARY – Alberta’s energy regulator says higher oil prices and new technology have led to the largest increase in decades of both conventional oil production and reserves. In its latest report the Energy Resources Conservation Board records a 14% increase in production in 2012 and 9.5% increase in reserves over 2011 levels, due to the higher production rates from horizontal wells. Alberta’s crude oil production totalled 556,000 barrels of oil per day with a yearly total of 204 million barrels. “We saw the trend start in 2010, but with the price environment and with the successful use by industry of multi-stage fracturing technology in horizontal wells it’s definitely a great success story from that perspective,” said Carol Crowfoot, chief economist for the ERCB. Prior to that she said there had been a “flattening” in production because of the maturing of reserves in the oil basin. Hydraulic fracturing, or fracking, injects sand, water and chemicals to break apart rock and free the oil or gas inside. “In the province certainly natural gas prices have been in the doldrums for quite some time so industry is focusing on producing other commodities that bring a better rate of return obviously – crude oil being one of them,” Crowfoot added. The ERCB forecasts Alberta’s annual raw crude bitumen production will total 3.8 million barrels per day for a total of 1.39 billion barrels per year by 2022. It says since 1967 Alberta has produced about 8.8 billion barrels of raw crude bitumen from the oil sands and 16.7 billion barrels of crude oil since 1914. Crowfoot said the forecast is dependent on oil prices, access to capital and whether the market flattens out or is in decline. Alberta’s total remaining established crude bitumen and crude oil reserves amount to 169.6 billion barrels, consisting of 167.9 billion barrels of crude bitumen and 1.7 billion barrels of crude oil. “It’s in the range of about eight or nine years. That means if we don’t find more reserves and we keep production at current levels that will last us eight or nine years. But of course we find more reserves,” explained Crowfoot. “You have to remember we’re in a maturing basin and on the conventional oil side even though we’re seeing a definite change in trend from a decline in production – it’s a diminishing pie of remaining reserves.” Conventional natural gas reserves in the province stood at 33 trillion cubic feet in 2012 which is down 3% from 2011. Reserves of natural gas liquids were 1.6 billion barrels which is virtually unchanged from the year previous. Despite the diminishing pie of reserves Crowfoot said that doesn’t include what people commonly refer to as shale oil, shale gas, and shale natural gas liquids. “If development occurs on the shale side then there is tremendous resources. For shale, we have a treasure trove of resources under the ground, then of course we would have significantly more reserves than we do now.” ©The Canadian Press
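The "eight or nine years" figure quoted above follows directly from the numbers reported in the article: it is simply the ratio of remaining conventional crude reserves to annual conventional production. A quick back-of-the-envelope check (conventional crude only, excluding bitumen):

```python
# Back-of-the-envelope check of the "eight or nine years" figure quoted above,
# using only numbers reported in the article (conventional crude only, not bitumen).
remaining_reserves_bbl = 1.7e9   # remaining established conventional crude reserves, barrels
annual_production_bbl = 204e6    # 2012 conventional crude production, barrels per year

years_remaining = remaining_reserves_bbl / annual_production_bbl
print(f"Reserves-to-production ratio: {years_remaining:.1f} years")  # about 8.3 years
```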
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9785041213035583, "language": "en", "url": "https://www.wisegeek.com/what-is-the-smoot-hawley-tariff-act.htm", "token_count": 667, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.3828125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:4093c0a6-a3b5-4728-9b85-6162bd81bd33>" }
The Smoot-Hawley Tariff Act was a law passed in the United States in 1930 as an attempt to legislatively address the Great Depression and counteract its effects. The specific objective of the legislation was to greatly increase tariffs on thousands of imported goods in order to spur consumption of American-made products and protect American jobs. The act has historically been considered, at best, ineffectual and, at worst, a failure that significantly prolonged the Depression. It is commonly cited as a prime example of the policy known as protectionism. It was named for its authors, Senator Reed Smoot of Utah and Representative Willis Hawley of Oregon. Both men were Republican committee chairmen: Smoot of the Senate Finance Committee, and Hawley of the House Ways and Means Committee. At the time, both committees were very powerful, and their chairs accordingly wielded a great deal of influence. In the Smoot-Hawley Tariff Act, both men were making good on a 1928 campaign promise of President Herbert Hoover. Also a Republican, Hoover promised beleaguered American farmers that he would increase the price of foreign farm products to help them sell their goods domestically. With Republicans controlling Congress, this was a promise Hoover could keep.

Companion bills were introduced in both the House and the Senate around 1929. The House passed their version first, and the Senate theirs several months later, in March of 1930. The differences between the two bills were resolved in a negotiated Conference Committee, with many of the higher tariffs present in the adopted House bill. Though Hoover actually opposed the bill due to its likely negative impact on America's foreign relations, he signed it into law in deference to party pressure and the influence of various American captains of industry. Essentially, the act made it very expensive for Americans to purchase a wide variety of foreign-made goods, with the idea that they would instead buy domestic products. This predictably angered all nations involved in commercial trade with the United States. Countries around the world reacted to the Smoot-Hawley Tariff Act by raising their own tariffs. European countries and Canada, accounting for a large proportion of foreign consumption of American goods at the time, did particular harm to American exports by raising theirs. The tariff levels set by the Smoot-Hawley Tariff Act, along with the retaliatory ones around the world, remained largely in place until the demands of World War II prompted their abrogation in the 1940s.

Though opinions on the effect of the Smoot-Hawley Tariff Act differ, various statistics are often presented in support of or in opposition to it. In particular, when it was passed in 1930, the unemployment rate in the United States was less than 8%. Within three years it had more than tripled, approaching 25%. Supporters of the act, and of protectionism in general, claim that correlation in this case does not equate to causation, and that other factors were more to blame for the length and severity of the Depression. Critics argue the act provoked a kind of economic arms race, in which national governments ultimately did more harm than good to their economies by trying to artificially set the price of goods. The act has remained a symbolic bone of contention in modern policy debates among 21st-century economists and politicians.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9184879064559937, "language": "en", "url": "http://b2btycoons.com/technology/blockchain-in-the-internet-of-things-market/", "token_count": 373, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.09228515625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:515b4406-1f98-4fe7-87d5-f00395e23ff6>" }
What is Blockchain?

A blockchain consists of a growing list of records, each linked to the next using cryptography. Each block contains a cryptographic hash of the previous one, along with a timestamp and transaction data. Essentially, blockchain data cannot be altered, which makes it possible to build a permanent record of transactions. A blockchain is, in short, an immutable, time-stamped ledger of data that is distributed and managed by a group of computers.

Blockchain Technologies and the IoT

Since the Internet of Things is such an enormous, rapidly growing and sometimes loosely defined "network", security can be a challenge. Implementing blockchain technologies in IoT means that each element in the IoT environment can have a unique identity, which can be replicated across every other element, making it harder for an outsider to gain access. Blockchain adds a further layer of security because, even if a network is compromised, it is hard for an attacker to use a false identity within the system. It may take some time to fine-tune the uses of blockchain in this environment, but it will happen. In telecommunications, great progress has already been made, with a growing awareness of the benefits of blockchain, and basic-layer technologies already in place. The next step for CSPs (communications service providers) and others in this industry is to thoroughly evaluate available products and implement those that best meet their business needs. This includes analysis of the areas of operation that can be improved using blockchain technology, because not all areas will be suitable for all CSPs, and some should not be applied simultaneously in the same operation.

The Internet of Things itself refers to interrelated computing devices, mechanical and digital machines, objects, animals or people that are provided with unique identifiers and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction.
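To make the hash linking described above concrete, here is a minimal teaching sketch (not any particular production blockchain, and with no consensus mechanism or networking) showing how each block commits to the previous block's hash, so that altering an earlier record breaks the chain:

```python
# Minimal illustration of hash linking: each block stores the hash of the previous
# block, so tampering with any earlier block invalidates the whole chain.
# Teaching sketch only: no consensus mechanism, no peer-to-peer network.
import hashlib
import json
import time

def block_hash(block):
    # Hash a block's canonical JSON form.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def new_block(prev_block, data):
    return {
        "timestamp": time.time(),
        "data": data,
        "prev_hash": block_hash(prev_block) if prev_block else "0" * 64,
    }

def chain_is_valid(chain):
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = [new_block(None, "genesis")]
chain.append(new_block(chain[-1], "device A registered"))
chain.append(new_block(chain[-1], "device B registered"))

print(chain_is_valid(chain))   # True
chain[1]["data"] = "tampered"  # altering history...
print(chain_is_valid(chain))   # ...breaks the link to the next block: False
```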
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9419459104537964, "language": "en", "url": "http://data.sagepub.com/sagestats/document.php?id=15942", "token_count": 164, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.043212890625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:87c83e13-e909-43e5-9f06-c35ecf59320c>" }
* Percentages are calculated based on data published by the EIA.
Years Available: 2014–2018
Permanent Link: http://data.sagepub.com/sagestats/15942
General Notes: A "#" symbol indicates that the value is not meaningful due to a large relative standard error. "Distributed solar energy" or "Distributed Photovoltaic" includes all solar-powered electricity generation (including photovoltaic and thermal). That is, electricity that is produced at or near the point where it is used. Distributed solar energy can be located on rooftops or ground-mounted, and is typically connected to the local utility distribution grid. States, cities and towns are experimenting with policies to encourage distributed solar to offset peak electricity demand and stabilize the local grid.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9496603012084961, "language": "en", "url": "https://arnoldpeterweiss.net/everything-your-business-needs-to-know-about-ai/", "token_count": 419, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.3125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:6be4c032-5fb1-462b-a2f6-62a587e44266>" }
Modern technology continues to amaze the public in numerous ways. Artificial Intelligence (AI) helps drive sales through targeted marketing programs, saves lives via sterile robotic surgeries, and entertains masses through virtual reality experiences. This issue leaves supporters and detractors on both sides of the fence wondering: What is the ultimate potential? How can AI affect business? What are the potential benefits and consequences of this effect?

Contrary to popular opinion, there is little evidence that AI will result in any dramatic statistical changes in human resources. For most companies and industries, self-learning technologies will create new jobs. Internal processes will need updating, and the enhancements brought on by AI will require additional support from customer support and production teams. The number of machine operators and factory workers that may be replaced by AI or an otherwise robotic workforce simply does not warrant the mass panic that stood ready to erupt less than a decade ago. Experience has shown that even the most sophisticated technologies cannot replace humans for many of the simplest tasks. Even some of the staunchest opponents of AI advancements have now come to appreciate the many benefits of adopting this technology. One of those benefits is job creation – not a reduction in the workforce.

Leaders and managers are generally aware of potential changes that could result from the implementation of AI. However, most executives and operating boards are unsure about the best ways to navigate these changes. Streamlining internal and external business processes can take years to complete, even with the most efficient staff members. Utilizing AI technologies can speed up this process by capturing existing trends and creating algorithms that instantly provide a synopsis of current practices. AI can then manipulate that data, again instantly, and determine the potential outcomes of a variety of different hypothetical situations. The lure of AI for most corporations is the ability to quickly determine the pros and cons of a particular course of action. New ways of applying AI technologies in business are developed practically every day. It is nearly impossible to accurately track all the ways in which AI is, will be, or could be used to improve standard operating procedures.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9597401022911072, "language": "en", "url": "https://businessfirstfamily.com/usd-to-inr-exchange/", "token_count": 888, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.018798828125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:c4ce0cf4-2d43-4307-b719-c85108432c4a>" }
The USD to INR exchange rate has fluctuated significantly over the years. Over the past three years, specifically, the gap between these two currencies has gotten smaller. If you are interested in learning more about the USD to INR exchange rate, why it fluctuates and how this affects and reflects the state of the economy, keep reading below. You will be surprised to find that it is not much different from the relationship between AUD and USD.

What Is USD?

USD stands for United States Dollar. This is the currency used and backed by the American government. The dollar is the world's primary reserve currency. Reserve currency is the money held in significant quantities by governments and institutions as part of their foreign exchange reserves. Many other countries use this as their official currency as well. Rather than holding actual financial weight like silver and gold coins, the U.S. Dollar receives its value from the support of the United States government. This is very similar to PPI, for those familiar with the topic.

What Is INR?

INR stands for Indian Rupee. The Indian Rupee is the official currency of the Republic of India. However, rupees are also used as currency in a variety of other countries such as Pakistan and Nepal. The Indian Rupee can be subdivided into 100 paise, similarly to how the American dollar can be divided into 100 cents. Prior to the use of paper rupees, India utilized silver coins as its means of currency.

What Is An Exchange Rate?

The exchange rate is, essentially, the price of one nation's currency in comparison to another. There are several factors that influence the exchange rate including inflation, interest rates, political stability and market data regarding trade rates. Today, the exchange rate between the Indian Rupee and the United States Dollar is approximately 65 INR to 1 USD. This means that you would need approximately 65 Indian Rupees in order to have the equivalent value of 1 American dollar.

Exchange Rate For Remittances

When it comes to remittances, finding the best exchange rates for dollar to rupee is imperative. You can save yourself a lot of money, which means more money being sent back to your family. Currently, Transferwise has the best USD to INR transfer rate, at ₹ 64.85 for each $1. However, you should definitely check which provider has the best USD to INR transfer rate at the exact time you plan to send your remittances. The transfer rate is constantly fluctuating. These cents will add up over time.

Why Does It Fluctuate?

The exchange rate between any two currencies is bound to fluctuate. You do not need to understand the differences between finance vs accounting to understand why. This is primarily because the underlying factors behind the exchange rate, such as those listed above, are bound to change over time. Political stability and military power, for example, could change in times of war. Additionally, other less violent aspects of the exchange rate include trade. The amount that a country imports versus how much it exports will make a significant difference in its currency value. When a country exports more than it imports, it is more likely to have a stronger currency. All money is valued based on the profitability of, and trust in, a particular government.

When Will It Change?

The USD to INR exchange rate is not set in stone. Many economists have been able to approximate how they think the exchange rate will change over time.
The “forecast” can be found online, listing day-by-day how much the rupee is worth in comparison to the dollar. Overall, it is expected to stay about the same throughout this year. However, in the years to come it is expected that the rupee may rise in value. The USD to INR exchange rate is just one example of a complex world economy. It is important to understand the individual currency and respective governments when determining an exchange rate. This can certainly help bank reconciliation processes. Only then can you take into account the many factors involved. Photo from http://www.india.com/business/inr-to-usd-forex-rates-today-rupee-steady-at-66-74-vs-usd-in-late-morning-deals-1376603/
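As a simple illustration of why the quoted transfer rate matters for remittances, the arithmetic can be sketched as follows (the rates and fee below are made-up examples for illustration, not live quotes):

```python
# Simple illustration of how the transfer rate and fees affect a remittance.
# The rates and fee below are made-up examples for arithmetic only, not live quotes.
def rupees_received(usd_amount, rate_inr_per_usd, flat_fee_usd=0.0):
    """Convert a USD remittance to INR after deducting a flat USD fee."""
    return (usd_amount - flat_fee_usd) * rate_inr_per_usd

monthly_send_usd = 500.0
print(rupees_received(monthly_send_usd, 64.85))                    # 32425.0 INR
print(rupees_received(monthly_send_usd, 63.20, flat_fee_usd=4.0))  # 31347.2 INR

# A rate gap of 1.65 INR/USD on $500 a month is roughly 9,900 INR over a year
# (before fees), which is why comparing providers before each transfer pays off.
```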
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9373282194137573, "language": "en", "url": "https://cointelegraph.com/news/advancing-blockchain-act-the-us-ticket-for-blockchain-superiority", "token_count": 667, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.2177734375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:777f684b-2475-498e-badd-d93cab2e2816>" }
Governments worldwide are reaping the benefits of blockchain integration within a number of fields. As Dubai deploys blockchain as part of its smart city initiative, citizens of Georgia interact with the technology to register and transfer land titles. It takes three minutes to register a title, and the blockchain framework behind it enables security, longevity and transparency — qualities that go a long way in restoring people’s faith in their governments. Elsewhere in the world, South Korea’s customs service employs blockchain for the import and export of goods. The United Kingdom has piloted the technology to track the origin and distribution of cattle meat. Switzerland has tested blockchain identification and voting systems. China alone has registered hundreds of blockchain projects, with $1.6 billion in governmental funds set aside for blockchain initiatives. The list goes on and on. Around the world, governments are rapidly experimenting with and deploying blockchain to improve efficiency, secure platforms and promote transparency in areas such as supply chain management, identification, titles, bookkeeping, energy consumption, voting and more. The United States, however, represents one such government that continues to lag behind its innovative counterparts. Pushing forward via the Advancing Blockchain Act The Advancing Blockchain Act is a bill proposed by Rep. Brett Guthrie, a Republican from Kentucky. It is the third blockchain bill introduced by Guthrie, following the Blockchain Promotion Act of 2018 and the Blockchain Promotion Act of 2019, both of which stalled. If passed, the bill would initiate a comprehensive survey of blockchain technology and compile a report with legislative recommendations to promote the growth and adoption of blockchain, address regulatory barriers and advance the U.S. as a global leader in blockchain technology. While private-sector adoption of blockchain technology has been slowed by regulatory uncertainty, government agencies have explored it for innovative use cases. The Food and Drug Administration launched a pilot that utilizes blockchain for tracking and authenticity verification of subscription drugs, and the Air Force deployed a solution for supply chain security. There are five definitions of cryptocurrency in the U.S. alone and no distinct taxonomy for the various digital assets created by the cryptocurrency industry. This lack of regulatory clarity makes it harder to raise capital and forces startups to disproportionately spend what they do raise on legal expenses. Blockchain startups throughout the nation are moving offshore to countries with distinct regulatory clarity such as Singapore and Switzerland. Should the Advancing Blockchain Act pass through Congress, an expansive survey on the benefits of blockchain and the successes of other nations will be conveyed to it. The report would create urgency to provide clear regulations that will enable the U.S. to retain and grow its blockchain industry. Cryptocurrency firms have actively engaged with politicians and regulators for years to work toward clarity in law and effective consumer protections. The Advancing Blockchain Act is something the crypto world cannot afford to sit on the sidelines for. Without strong advocacy, this bill is likely to be denied a hearing. 
This bill represents a wake-up call and rallying cry for innovators in the public and private sectors to band together, not only to catch up with but also to succeed against their global counterparts in the “blockchain race.” The views, thoughts and opinions expressed here are the author’s alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9609049558639526, "language": "en", "url": "https://www.piggybankdreams.com/the-mistakes-people-make-with-saving-money/", "token_count": 684, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0311279296875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:ca2a3a14-d2a0-478a-b7f1-3af911c66ab3>" }
Saving money is one of the most common things people do when they're paid. You get your weekly or monthly salary, you start budgeting, then you reserve as much money as possible in your savings account. While this behaviour is completely normal and understandable, it's also questionable because it's not always the correct choice.

Always have a goal for your money

Let's face it, where does that money go when you save it? Unless you have a goal for the future, there's almost no point saving your money and, in some cases, planning for the future is better done by spending money instead of keeping it in your bank. Let's use your future as an example. If you plan to have children, then it's common to save up for their college tuition and plan around their expenses. You'll look up how much it costs to pay for college and schooling, then you'll work it out in your budget and set aside some money each paycheck to put towards your child's college. This is one of the better examples of saving money. In short, always have a goal or a plan if you're going to save your cash.

Build up an emergency fund and no more

When things go wrong, you need to be prepared with some emergency funds. This is one of the most basic uses for the money you've saved, but there comes a point when you can save up too much money. Building up an emergency fund is similar to investing. You're putting money aside to improve your future by dealing with problems that may or may not pop up. Essentially, you're gambling money that could be spent right now to improve your life and you're betting on the future. While this is acceptable for some people, others find that the money could be better used immediately to improve their lives and make them happier. Without an emergency fund, you'll need to rely on asking friends for help or seeking out payday loans to help you afford repairs and other needed expenses. Even though we try to avoid these situations, it's still useful to contact a business such as Cigno Loans for help with emergency expenses. However, once you learn how Cigno Loans can help, you might be keen on overspending money and relying on them as a safety net. This is a terrible idea and has to be avoided at all costs if you want to manage your money properly, and it's one of the mistakes people make.

Money is better spent

Let's face it, if it's a choice between spending money to improve your life now versus improving it in the future, you may as well increase your happiness while you can. Let's say you have a washing machine that constantly breaks down and requires minor repairs every couple of weeks. Most people might wait until the machine is completely broken before they replace it, but that would be the wrong solution. The best thing to do in this case is to actually replace it as soon as possible because not only are the repairs eating up your money, but you're also getting a less efficient machine that could potentially break when you most need it. Think of buying a new washing machine as a future investment that will ultimately end up being cheaper.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9313912987709045, "language": "en", "url": "https://www.thefreelibrary.com/Children+as+income-producing+assets%3A+the+case+of+teen+illegitimacy...-a020916034", "token_count": 20737, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:b433bb79-9e3a-4e0e-9271-931a5e0fc682>" }
Children as income-producing assets: the case of teen illegitimacy and government transfers. The assertion that economic considerations play a significant role in family formation and fertility decisions is neither new nor controversial. The observation that there is a systematic interplay between economic considerations and fertility dates at least to Malthus (1798). In his Essay on Population (1798) and Summary View on Population (1830), he provided a series of conjectures and empirical evidence in support of the view that agricultural productivity provided an overall restraint on the positive and negative influences on birth rates. Subsequently refined and debated, the classical theory of population was summarized by Blaug (1978) as the proposition that . . . the production of children, [is] not as a means of spending income on "consumer goods" to acquire satisfaction, but as a method of investment in "capital goods" for the sake of a future return. (Blaug 1978, p. 78) While this classical view has been adequate for loosely explaining population dynamics in agrarian societies, the modern economic theory of the family, due mainly to Becker (1991), views children as primarily a consumption, rather than an investment, good.(1) Undoubtedly, across most of the range of the income distribution in industrialized economies, the consumption view of children is the more suitable and powerful explanation. However, for individuals in poverty, various public cash and in-kind transfers create a series of economic incentives which, as we shall develop below, make the childbearing decision equivalent to the Malthusian analysis that children are income-producing assets as well as sources of utility. In the modern welfare state, it is the transfer system, rather than agricultural production, that creates income-producing opportunities. The growth in public transfers has been accompanied by a sizable empirical literature in sociology and economics on the interaction between various family formation decisions and the welfare state. In general, the empirical evidence, from studies that have used differences in welfare benefits across time and across states to test whether the welfare system encourages illegitimacy, is inconclusive.(2) One possible reason for the mixed results is that illegitimacy might affect per recipient benefits either directly, due to voters' concerns about illegitimacy, or indirectly, because it affects the size of the state's welfare population. Increases in the size of the welfare population increase the cost to voters of providing a given level of benefits and, therefore, might cause voters to reduce per recipient benefits. In either case, benefit levels and illegitimacy are codetermined. Once we control for this endogeneity, we find strong evidence that welfare has a large and statistically robust effect on illegitimacy. We find transfer elasticities with regard to teen illegitimacy rates on the order of + 1.3 and +2.1 for white and black teens, respectively. In addition, we find own wage elasticities with regard to teen illegitimacy rates on the order of -0.4 for white teens and that own wage elasticities are not significantly different from zero for black teens. We focus specifically on the effect that welfare has on out-of-wedlock teen fertility for several reasons. 
The first is that, although few Aid to Families with Dependent Children (AFDC) households are headed by teenagers (only 3-4%), a much larger proportion are headed by women who were teen mothers (around 40%) (General Accounting Office 1994). A second point is that teen mothers tend to be less educated and spend more time on AFDC than other participants.(3) Finally, about half of unwed teen mothers become welfare recipients within two years of the birth of their first child (General Accounting Office 1994). Overall, unwed mothers are likely to enter the AFDC program, and once this happens, they are in the program longer than older women. The organization of the paper is as follows. Section 2 presents a brief review of empirical modeling considerations arising from the literature and some stylized facts. Section 3 develops and explores a formal economic model of the dependency decision. A young, fertile teenager is viewed as facing the choice to (i) complete her education and seek work or get married or (ii) have a child and thus gain access to AFDC, food stamps, Medicaid, and housing and energy assistance. Section 4 discusses the data collected to test the model and econometric modeling considerations. Section 5 presents and summarizes the empirical results, while section 6 concludes. Also, Appendices A and B contain, respectively, data definitions and sources and a discussion of sample size in other studies of teen illegitimacy that use individual-level data.

2. Some Stylized Facts on Illegitimacy and Welfare

Much of the recent concern about the effect of welfare benefits on illegitimacy has been stimulated by the large increase in births to unmarried women since the end of the Second World War. The number of illegitimate births per 1000 single women of childbearing age has nearly tripled over the past 40 years and has more than tripled among teens (Figure 1, illustration omitted). It is commonly asserted that the increase in illegitimacy has been caused by welfare benefits encouraging, or at least allowing, single women to bear children out-of-wedlock. The objection to this conjecture is that, in real terms, the welfare benefits available to single mothers have not grown continuously over this period. One measure of the value of welfare benefits, the combined value of AFDC and food stamp payments to a family with no other income, grew slowly in the early 1960s and then faster through the mid-1970s.(4) However, since the mid-1970s, benefits have been flat or declining in real terms. Moffitt (1992) notes this stagnation makes it unlikely that changes in welfare benefits alone explain the rapid growth in illegitimacy. This does not mean that benefits have played no role in the increase in illegitimacy. Even if the real value of welfare benefits has not been growing continuously, it might have been growing relative to other economic opportunities available to the young woman. In A Treatise on the Family (1991, p. 16), Becker observed that . . . my analysis of the marriage market indicates that the incentive to remain single depends upon income while single relative to income expected if married. The real wage rate of young male high school dropouts and the lowest quartile of graduates has dropped by more than 25% over the past 15 years and these young men may have become less attractive marriage partners for other reasons as well.
Welfare might have interacted with other variables to cause the rapid growth of illegitimacy, even if it is not the only, or even the main, contributor to the growth. Looking at changes in illegitimacy and welfare benefits over time is one way of testing the relationship between the two. Another is to take advantage of the federal system where states set their own AFDC benefit levels.(5) As Murray (1993, p. 225) notes, this variation appears to ". . . provide a natural experiment for testing the proposition that welfare is linked to family breakup." If welfare were the primary cause of the increase in illegitimacy, then states with higher per-recipient benefits might have higher illegitimacy rates since women in those states might be more likely to give birth out-of-wedlock. Many studies have exploited differences in benefits across time or states, using discrete choice models to test whether welfare benefits affect the probability that an unmarried woman has an out-of-wedlock birth or to test the aggregate relation between benefit levels and the state's illegitimacy rate. However, empirical work exploiting these differences has not been conclusive. Some studies have found modest positive relations (e.g., Ozawa 1989; Caudill and Mixon 1993), many others have found mixed or statistically insignificant positive results (e.g., Duncan and Hoffman 1990; Lundberg and Plotnick 1990; Acs 1993), and others have even found negative correlations among their results (e.g., Ellwood and Bane 1985). In a recent paper, Rosenzweig (1995) finds that high AFDC benefit levels increase the probability that a woman will give birth out-of-wedlock before her 23rd birthday, especially for women who grew up in low-income households. Moffitt (1992) summarizes various studies written between 1982 and 1990 on the effects of welfare benefits and concludes that there is only "mixed evidence of an effect of the welfare system on illegitimacy." Murray (1993) and Acs (1993) examined other studies and reached the same conclusion. However, differences in per-recipient benefits between states and across time are not the result of a natural experiment. Ellwood and Bane (1985) and others note that the state's benefit level is not set independently of the social and political structure of the state.(6) First, as noted by Ellwood and Bane (1985), omitted or perhaps unmeasurable state attributes might affect both the benefits the state offers single parents and encourage, or discourage, single motherhood. If these traits are omitted, the estimated coefficient on benefits will be inconsistent; this criticism applies equally to individual and aggregate studies. If these traits are (more or less) constant for a given state over the period studied or (more or less) constant across all states for a given time period, then state (or time) dummy variables in a regression analysis will control for them. However, illegitimacy rates might also affect benefit levels directly, either due to voters' concerns about illegitimacy or because they affect the size or composition of the state's welfare population. The public choice literature on how states set welfare benefits (e.g., Orr 1976) shows theoretically that the size of the welfare population (relative to the number of taxpayers) increases the price of per-recipient benefits for voters. 
If changes in the price of benefits to voters affect per-recipient benefits, then benefits and illegitimacy rates will be simultaneously determined.(7) Benefit levels might also affect the composition of the state's welfare population (e.g., changes in the relative number of divorced and never-married women), and this might also affect voters' preferred levels of benefits if voters are more sympathetic toward certain subgroups of recipients. If so, in either case, there is a simultaneity problem, where benefits and illegitimacy both affect the other.

3. A Model of Teen Fertility

Recently, the literature on fertility has concentrated on the interplay between child quantity and quality and on inter-generational transfers (e.g., Cigno 1986; Barro and Becker 1988; Becker 1991; Hanushek 1992). In these models, the number of children, child quality, and consumption goods enter the family's utility function, which is then maximized with respect to some budget constraints. Leisure is not usually included in the utility function. In our simple model, a utility-maximizing woman faces a discrete choice between some combination of marriage and work on the one hand and welfare on the other. It is assumed that the woman has the number of children that she wants (i.e., there is no stochastic element to childbearing) and that her utility function is continuous and satisfies a nonsatiation condition. The price of consumption is normalized to one. Whichever choice she makes, she maximizes utility by choosing appropriate amounts of leisure, children, and market-consumption goods. For women who choose work and marriage, children are essentially a consumption good. However, for poor women on welfare, children also act as an income-producing asset. Table 1 contains variable definitions.

Children consume two different types of parental resources: (i) money - there is a financial cost, p_b > 0, associated with raising each child; and (ii) time - there is an additional time cost, t_b > 0, involved with raising each child. These costs are fixed and represent the minimum time and money investment the mother needs to bear and raise the child.(8) They are not substitutable - one cannot reduce the time commitment by increasing money expenditures and cannot reduce the financial commitment by increasing time expenditures.

Table 1. Variables in the Model

Variable   Interpretation
b          Children (babies)
c          Consumption
l          Leisure
I          Partner's income
t_b        Time cost of child
p_b        Money cost of child
w          Woman's wage
L          Woman's labor
m          Time endowment
V(b, L)    Cost of child care given b children and L hours of labor
g_1        Basic government welfare grant (guarantee)
g_2        Additional welfare grant per child

A woman who chooses marriage and work must divide her time between child rearing, work, and leisure and her income between child care if she works, child rearing, and consumption. Since her utility function has a nonsatiation property, the inequalities in the budget constraints are replaced with equalities. Since either partner's income I or hours spent working, L, could be zero, this case encompasses single mothers who work and married mothers who do not work, so that she solves

max U(b, c, l) such that
wL + I = c + p_b' b + V(b, L)
l + t_b' b + L = m,

where p_b' and t_b' denote her money and time costs per child. The first constraint is the financial constraint. The woman spends her labor income wL and her partner's income I (assumed not to be a function of the number of children), on consumption, bearing and raising children, and child care.
The second is her time constraint. She divides her time between leisure, child care, and labor. This constraint would hold in the same way for a single mother not receiving welfare, although her time cost for children may be different. If the woman chooses welfare instead of work and marriage, it is assumed that she does not work and cannot marry, perhaps due to program requirements.(9) Since 1990, states operating AFDC programs have been required to operate an Aid to Families with Dependent Children for Unemployed Parents (AFDC-UP) program.(10) However, since the primary breadwinner must be unemployed to receive AFDC-UP payments, spousal income would still be zero. Her maximization problem, if she chooses welfare, is therefore

max U(b, c, l) such that
l + t_b b = m
p_b b + c = g(b),

where g(b) is the welfare payment function, the money a woman receives from the state to support b children. The variables t_b and p_b are the time and money costs of having a child (not necessarily the same as t_b' and p_b', the time and money costs faced by a married woman). For simplicity, g(b) is assumed to take the form

g(b) = g_1 + g_2 b   for b > 0
     = 0             for b = 0.

Throughout, g_1 is referred to as the base welfare grant, the money all women on welfare receive, and g_2 is referred to as the additional grant for additional children. It is assumed that no consumption (of purchased goods) is extremely unattractive and that positive consumption is possible (i.e., for some b*, m - t_b b* > 0 and g_1 + (g_2 - p_b) b* > 0). That is, if a woman chooses welfare, she will have at least one child and will, therefore, receive a strictly positive payment net of raising the child.(11) Note that g_2 may be either less than or greater than the minimum financial cost of having a child (p_b).

Two results follow from nonsatiation. (i) Increasing I, spousal income, or w, the woman's own wage, pushes the financial budget constraint for married/working women outward. Nonsatiation implies that the woman's utility must improve since the original bundle can still be obtained. Increasing w only weakly increases utility since, if the woman chooses not to work at either wage, her utility will remain unchanged. (ii) Assuming no consumption is unacceptable, women on welfare will have at least one child (g(0) = 0). Therefore, increasing either g_1 or g_2 shifts her budget constraint outward and increases her utility. Note that changes in benefits affect only welfare and changes in partner's income and own wages affect only work/marriage. Therefore, they unambiguously affect the relative attractiveness of the choices. Increases in welfare payments increase the attractiveness of welfare and hence should be associated with more women choosing welfare, while an increase in the income of potential partners or in the woman's own wage unambiguously makes marriage and work more attractive. Even if a woman prefers more consumption to less, more children to fewer, and the additional welfare payment more than covers the minimum cost of an additional child, increases in the base welfare grant g_1 or the additional payment per child g_2 will not necessarily increase the number of children a mother on welfare will choose to have.
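To illustrate this last point, the following numerical sketch uses Cobb-Douglas preferences and purely hypothetical parameter values in which the additional grant more than covers the money cost of a child (g_2 > p_b); treating b as continuous and solving the welfare mother's problem on a grid, her chosen b falls as the base grant g_1 rises.

```python
# Illustrative only: a numerical check that, with Cobb-Douglas preferences and an
# additional grant that more than covers the money cost of a child (g2 > p_b),
# the welfare mother's chosen number of children falls as the base grant g1 rises.
# All parameter values are hypothetical; b is treated as continuous for simplicity.
import numpy as np

alpha, beta, gamma = 0.2, 0.5, 0.3   # utility weights on children, consumption, leisure
m, t_b, p_b, g2 = 100.0, 20.0, 50.0, 80.0

def optimal_b(g1):
    b = np.linspace(0.01, m / t_b - 0.01, 50_000)  # keep leisure strictly positive
    c = g1 + (g2 - p_b) * b                        # consumption implied by the grant
    l = m - t_b * b                                # leisure left after child rearing
    utility = alpha * np.log(b) + beta * np.log(c) + gamma * np.log(l)
    return b[np.argmax(utility)]

for g1 in (100, 200, 300, 400):
    print(f"g1 = {g1:3d}  ->  chosen b = {optimal_b(g1):.2f}")
# Chosen b falls from about 2.96 to about 2.40 as g1 rises, even though children,
# consumption, and leisure are all normal goods here.
```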
The argument is similar to the argument that increasing wages does not immediately encourage the individual to work more. Having more children means less time is available for leisure. Since she cannot have more of both, which one she chooses depends on her preferences. In the same way, increasing I (partner's income) or w (the woman's own wage rate) will not necessarily cause a woman who chooses work/marriage to have more children either. This result does not rely on obscure utility functions where children are not a normal good. If g_2 > 0, the number of children is a decreasing function of g_1 with a Cobb-Douglas utility function.(12)

In summary, the theoretical model makes the following testable predictions. (i) Increasing either the base welfare grant or the additional welfare grant per child increases utility from welfare and does not affect the utility from marriage and work. Hence, we would expect more women to choose welfare over work and marriage when welfare payments increase. (ii) Increasing spousal income or the woman's own wage will increase (weakly in the case of own wage) utility from work and marriage and does not affect utility from welfare. Hence we would expect fewer women to choose welfare as spousal income or women's wages increase. (iii) Both the base grant g_1 and the additional grant for extra children g_2 have ambiguous effects on the number of children that a woman on welfare chooses, and it is possible that changes in the base and additional grants could have opposite effects. An increase in the additional grant g_2 is more likely to have a positive effect since it is more likely to encourage the woman to have additional children.(13)

These points are important for two reasons. First, the illegitimacy rate is affected by both the number of women choosing welfare and the number of children women on welfare have. Therefore, benefit changes have an ambiguous theoretical effect on illegitimacy. Focusing on teens reduces this concern because most births to teens are first births. Because women must have at least one birth to qualify for AFDC, the effect on first births is unambiguous. Second, the empirical literature has often not distinguished between base and additional grants in regressions relating out-of-wedlock fertility to welfare payments. Going beyond the simple theoretical model presented in this section, g_1 may be more important when initially choosing welfare, especially if teens have high discount rates.

4. Data and Econometric Issues

Issues in Measuring Illegitimacy

The empirical estimation focuses on out-of-wedlock births to teenagers rather than out-of-wedlock births to women of all ages for three reasons. First, as noted earlier, women on welfare who give birth as teenagers tend to be on welfare for longer and tend to become more dependent on welfare than women who give birth when older. Second, we wish to focus on the choice between welfare and work or marriage rather than on the number of children that unmarried women have. The illegitimacy rate - defined as the number of out-of-wedlock births per 1000 women of childbearing age - could be affected by welfare in two ways. The base and additional welfare grants might affect both the number of women that give birth out-of-wedlock and the number of children the women have. This is a concern because, as noted in the theoretical section, welfare might affect these decisions differently (in both direction and magnitude).
Since most births to teens are first births, this distinction is less important for teens than for older women.(14) Finally, we focus on teens because women of different ages might be affected by welfare differently. It is easier to interpret the coefficients when the population is homogeneous. The theoretical model suggests that the left-hand-side variable should be either the illegitimacy rate or the observation of an out-of-wedlock birth.(15) It also suggests several right-hand-side variables that should be included in a reduced form model of the woman's decision. These include the value of the base and additional AFDC grants, the wage available to the woman if she does not have a child out-of-wedlock as a teen, the wage or income of the woman's prospective spouse if she does not have a child out-of-wedlock, and perhaps the availability of potential spouses. Beyond the issue of the appropriate dependent and independent variables, there are several conceptual and econometric issues. (i) Unobserved Differences Between States and Across Time. One issue frequently discussed in the literature on the effects of welfare on family formation decisions is the inclusion of fixed effects.(16) Ellwood and Bane (1985) note that omitted variables, such as unobserved state-level social and political characteristics, might affect both the probability of out-of-wedlock births and the AFDC benefit the state offers. For example, Ellwood and Bane (1985) suggest Minnesota's Scandinavian tradition might encourage both strong family ties and generous welfare benefits. In addition to inherently unmeasurable differences in attitude, one can also think of other omitted variables (potentially measurable and potentially unmeasurable) that might be correlated with both benefit levels and the prevalence of out-of-wedlock births. If these variables are (approximately) constant over time (for each individual state) or (approximately) constant across states (for each time period), then time and state dummies will effectively remove them from the regression, allowing unbiased estimation of other parameters. If the variables are not constant, then, in general, fixed effects estimation will not be consistent. This problem becomes more likely as the period studied becomes longer. Omitted variables that are similar when comparing 1983 to 1984 may be far less similar when comparing 1960 to 1995. (ii) Endogeneity of AFDC Benefits. AFDC benefit levels are the result of decisions made directly by state-level politicians, and therefore indirectly by voters, in each state. Beating this in mind, there are at least two reasons that AFDC benefits might be endogenous. First, as noted above, there may be omitted societal variables that affect both the collective decision regarding benefit levels and the individual decision of whether to give birth out-of-wedlock. Second, teen pregnancy rates might affect the benefit level directly rather than through other omitted variables, resulting in a classic simultaneity problem. This might occur if some voters, concerned about teen pregnancy, believe teenagers have children out-of-wedlock to receive welfare payments. These voters, then, might try to cut benefits to discourage out-of-wedlock childbearing. In addition, out-of-wedlock births to teens might (at least in the long run) affect either the size or the composition of the welfare population. This might, in turn, affect voters' perceptions of welfare and the generosity of benefits. 
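For concreteness, a reduced-form, two-way fixed-effects specification of the kind described above can be sketched as follows. This is illustrative only, with hypothetical column names rather than the actual data set, and clustering the standard errors by state is one possible choice rather than the original one.

```python
# Illustrative sketch (hypothetical column names, not the authors' code) of the
# two-way fixed-effects reduced form described above: state-year teen illegitimacy
# rates regressed on the benefit package, low-skill wages, and controls, with
# state and year dummies absorbing factors constant within a state or a year.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("state_year_panel.csv")  # hypothetical 1980-1990 state panel

fe_ols = smf.ols(
    "teen_illegit_rate ~ benefit_2person + wage_women + wage_men"
    " + incarceration + abortion_access + unemp + C(state) + C(year)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["state"]})  # clustering is one option

print(fe_ols.params[["benefit_2person", "wage_women", "wage_men"]])
```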
The public choice literature on state welfare policies shows, theoretically, that the size of the welfare population (relative to the number of taxpayers) is the fiscal price of per-recipient benefits to voters. The intuition is that, as the number of recipients increases, it becomes more costly to pay a given per-recipient benefit (Orr 1976). Hence, if the number of teens giving birth out-of-wedlock affects the size of the welfare population, there is a simultaneity problem. The composition of the welfare population might also be important. For example, voters may be more or less sympathetic toward widows or divorced recipients (when compared to never-married recipients). If changes in benefit levels change the relative numbers of women in each group (for example, if one group is more responsive than the other to benefits), endogeneity will be a concern. These problems occur in studies using both individual and aggregate data, although they can be handled in different ways. It is clear that omitted variables, correlated with both benefit levels and illegitimacy, are a concern in both types of studies. Therefore, including state and time dummies is likely to be important when using either aggregate or individual data. Endogeneity is an obvious concern for aggregate studies, such as this one, but could also be relevant for individual studies. If the error term (which may include omitted variables) is independently distributed across teens within the state, then endogeneity caused by the aggregate illegitimacy rate affecting benefit levels would not be a large concern. Any individual teen's decision would have only a small net effect on aggregate birth rates, and so the benefit level would be trivially correlated with the individual's error term. However, if there are omitted variables that are correlated across teens in the state (violating the assumption of independently distributed errors), the effect on aggregate illegitimacy rates might not be trivial. This makes endogeneity a concern whether the omitted variables affect benefit levels directly or not. If these omitted variables are constant for each state over the entire period (constant across states for each period), then including state (time) dummies will be sufficient. If not, then dummy variable estimation will not give consistent parameter estimates when using individual data. Individual Versus Aggregate Data In this study, we use aggregate state-level data rather than individual data. Although individual data have several advantages over aggregate data, there are some reasons to favor the latter.(17) The main problem with individual data is that relatively small samples, and the relative rarity of out-of-wedlock births to teens (especially white teens), make it difficult to estimate empirical models when state dummies are included in the regression.(18) For white teens, the main problem is that out-of-wedlock births are relatively rare. In a sample of 600 white, female teens from the National Longitudinal Survey of Youth (NLSY), only 44 births are observed in only 22 states (see Table B1 in Appendix B). This will obviously make it difficult to include 50 state dummies in the regression. If there are no births in the state, then including a state dummy removes all observations for that state from the regression, reducing variation in the AFDC variable and decreasing true sample size. For black teens, although birth events are more common, samples tend to be smaller and more geographically concentrated. 
This issue is discussed in greater detail in Appendix B. An additional point when using individual data is that it is important to be careful when trying to control for the other economic opportunities available to the teen. In particular, including individual characteristics in the regression might not fully control for wage differences across states and for changes in wage structure across time. For example, if wages for high school graduates are lower in Mississippi than in California, then even controlling for education level (which is not observed for teens) would not fully capture wage differences.(19) The data used in this study indicate that wages at the bottom of the income distribution are highly correlated with benefit levels. In 1990, women's and men's wages (for persons with a high school education or less, aged between 21 and 35) have respective correlations of 0.77 and 0.75 with combined AFDC, food stamp, and Medicaid benefits for a family of two. Therefore, it seems important to at least include aggregate measures of wages in individual regressions to avoid potential bias. The data used are aggregate state-by-state data for 1980 through 1990 and are documented in Appendix A. The dependent variable is the illegitimacy rate. An important question is whether births to all teens or only births to unmarried teens is a better numerator. As Acs (1993) points out, the fertility of married teens may also be affected by changes in welfare benefit levels if married teens see welfare as insurance against divorce. However, it seems unlikely that changes in welfare benefits would affect married teens to the same extent as unmarried teens. Furthermore, the marriage decision itself may be influenced by the welfare benefit level. For this reason one should interpret the results in this paper carefully; the effect of government benefits and wages on illegitimacy may be primarily due to effects on marriage rather than on fertility. A positive coefficient on government benefits might not imply that welfare encourages more teens to get pregnant, just that fewer teens get married when they do become pregnant. Finally, we note that out-of-wedlock births among teenagers might be thought to be a greater social problem than in-wedlock births in terms of welfare dependency and outcomes for children. The three main explanatory variables of interest are (i) the value of AFDC, food stamps, and Medicaid benefits for a family of two; (ii) the median weekly wage of women between 21 and 35, working full-time, with a high school diploma or less; and (iii) the median weekly wage of men between 21 and 35, working full-time, with a high school diploma or less.(20) The effect of the additional grant (the difference between the value of the benefit package for a family of two and a family of four) is also tested. The wage variables represent the value of work and marriage to women at the lower end of the income distribution. A potential problem with these regressors is that the three variables are highly correlated across states. This might make it difficult to interpret the effects of each variable separately with a high degree of confidence. An additional included variable is the incarceration rate. This is intended to control for the size of the pool of marriageable men (Garfinkel and McLanahan 1986; Wilson 1987).(21) Recent increases in illegitimacy may be partially due to a decline in the number of men available as potential marriage partners. 
The incarceration rate is likely to be correlated with other factors, such as high drug use and high mortality rates, also related to this concern. However, it is also possible that this variable may simply pick up increased juvenile delinquency. Because of this, a precise interpretation of the coefficient is difficult. In conclusion, the predictions from the theoretical model are that wages will be negatively correlated with the illegitimacy rate among teens, welfare benefits will be positively correlated with the illegitimacy rate among teens, and the incarceration rate will be positively correlated with the illegitimacy rate among teens. Additional variables are included as controls. The availability of abortions is proxied by the percentage of counties in the state with an abortion provider, an admittedly imprecise proxy. It is plausible that easy access to abortion may reduce the number of births to unmarried teens. Although easier access to abortion may encourage sexual activity among teens, it seems reasonable to suppose that only teens who would choose to have an abortion if pregnant would be encouraged to become sexually active. However, Akerlof, Yellen, and Katz (1996) note, in the context of a theoretical model, that it is possible that increased access to abortion might also increase sexual activity among individuals who would not obtain abortions if pregnant.(22) They write:

   Before the technology shock (the introduction of abortion or contraception) abstinence would be the norm for all women. After the technology shock those women who would use contraception or would be willing to obtain an abortion in the event of pregnancy or both engage in premarital sexual activity. However, those women who are not willing to use contraception or obtain an abortion will also engage in sexual activity, since they fear that if they abstain their partners would seek satisfaction elsewhere. The advent of contraception and abortion used by others may result in an unwanted increase in sexual participation for those who reject the new technology. (p. 296)

The unemployment rate and female unemployment rate are included as measures of the state of the labor market. Finally, the percentage of the population living in metropolitan areas and the infant mortality rate, a common variable in studies of fertility (see Shields and Tracy 1986), are also included.(23) An immediate problem is finding variables that can serve as instruments for the AFDC benefit level. This is difficult because many variables that would seem likely to affect the AFDC benefit level might also affect the illegitimacy rate. For example, demographic variables, such as the share of the population living in urban areas, might affect the AFDC benefit level but also might affect the illegitimacy rate. Because of this, it seems important to have more instruments than endogenous variables so that the overidentifying restrictions can be tested. Another reason that we want more than one instrument is that we would like to be able to separately test the effects of the base and additional grants. The public choice literature on welfare benefits suggests several possible instruments for AFDC benefits.(24) The main variables used in the public choice literature are income of the median voter and the price of benefits. The price of benefits is the recipiency rate multiplied by the state's share of costs (1 - federal matching rate).
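Because the estimation that follows leans on two-stage least squares with excluded instruments of this kind, a generic sketch of the mechanics may be useful. The function below is an ordinary linear 2SLS written with numpy; the argument names (and the idea of passing the matching rate, per capita income, and the share over 65 as the excluded instruments) are placeholders, not the authors' code.

```python
# Generic linear 2SLS sketch: one endogenous regressor, several excluded instruments.
import numpy as np

def tsls(y, X_exog, x_endog, Z_excl):
    """y: (n,) outcome; X_exog: (n, k) included exogenous regressors (constant,
    dummies, wages, ...); x_endog: (n,) endogenous benefit variable;
    Z_excl: (n, m) excluded instruments (e.g., matching rate, per capita
    income, percent over 65)."""
    Z = np.column_stack([X_exog, Z_excl])              # full instrument set
    # First stage: fitted values of the endogenous regressor.
    pi, *_ = np.linalg.lstsq(Z, x_endog, rcond=None)
    x_hat = Z @ pi
    # Second stage: replace the endogenous regressor with its fitted values.
    # (Correct 2SLS standard errors need the structural residuals; omitted here.)
    X2 = np.column_stack([X_exog, x_hat])
    beta, *_ = np.linalg.lstsq(X2, y, rcond=None)
    return beta                                        # last entry: benefit coefficient
```

With more excluded instruments than endogenous regressors (three against one here), the surplus moment conditions are what the overidentification tests discussed below can exploit.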
Even if benefit levels have no incentive effects (i.e., benefit levels do not directly affect the behavior of potential recipients), more people will qualify for AFDC as benefit levels increase, making the recipiency rate endogenous. Therefore, we use the federal matching rate rather than the recipiency rate multiplied by the federal matching rate as an instrument. Federal matching varies between 50% and 83% of the total benefit and depends on the state's per capita income.(25) Federal matching affects the cost of benefits to the state and therefore is likely to affect the state's benefit level but would not directly affect the choice of the teen. Median voter income (proxied by per capita income) might be an acceptable instrument if teens are mainly affected by movements of income at the bottom of the income distribution. Another plausible instrument is the percent of the population that is over 65. This might affect the distribution of public funds but should have little effect on the illegitimacy rate. Using all three instruments, we can test the overidentifying assumptions.(26) A potential problem is that per capita income, which is correlated with the wage measures, might be correlated with the benefit level even after controlling for wages at the bottom of the income distribution (e.g., perhaps due to nonlinear relations between wages and illegitimacy rates). Further, since the matching rate is a function of state per capita income (relative to national per capita income), this would make this instrument endogenous also. Although tests of overidentifying assumptions are included to ease concerns, we also present results using the percent of the population that is over 65 as the sole instrument and including per capita income as an independent variable.

5. Empirical Results

The basic model is $\text{illegitimacy rate}_{it} = \alpha_i + \gamma_t + \beta' x_{it} + \epsilon_{it}$, where $i$ indexes state and $t$ indexes time. There is an observation for each state for each year from 1980 through 1990.(27)

[TABULAR DATA FOR TABLE 2 OMITTED]

The error term consists of (i) $\alpha_i$, a state effect; (ii) $\gamma_t$, a time effect; and (iii) $\epsilon_{it}$, the remaining individual error for that observation. The model is estimated with standard panel data techniques. Previous research has noted that fertility outcomes for black and white teenagers are different, and so this study estimates separate equations for black and white teenagers.(28) As a first exercise, mainly for comparison with later results, columns 1 and 2 in Table 2 show results from a simple OLS regression omitting state and time dummies and treating benefits as exogenous. These preliminary results do not support the theoretical model. For white teens, the coefficient on AFDC benefits has a theoretically incorrect, but statistically insignificant, negative sign (indicating high benefits are correlated with low rates of illegitimacy). The median weekly wage for women aged between 21 and 35, working full-time, with a high school education or less (referred to as the female wage) is statistically insignificant. The median weekly wage for men between 21 and 35, working full-time, with a high school diploma or less (referred to in the tables as the male wage) has the expected sign and is significant at a 5% level.
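As a mechanical aside, the two-way fixed effects model just written can be estimated either by including the full sets of state and year dummies or, for a balanced panel, by double-demeaning the data. The sketch below takes the demeaning route; the column names are hypothetical and this is an illustration, not the authors' code.

```python
# Two-way "within" transformation: subtract state means and year means, add back
# the grand mean, then run OLS on the transformed data (equivalent to the
# dummy-variable regression when the panel is balanced).
import numpy as np
import pandas as pd

def two_way_demean(df, cols, state="state", year="year"):
    out = df.copy()
    for c in cols:
        out[c] = (df[c]
                  - df.groupby(state)[c].transform("mean")
                  - df.groupby(year)[c].transform("mean")
                  + df[c].mean())
    return out

def within_ols(df, y, xs):
    dd = two_way_demean(df, [y] + list(xs))
    X = dd[list(xs)].to_numpy()
    beta, *_ = np.linalg.lstsq(X, dd[y].to_numpy(), rcond=None)
    return dict(zip(xs, beta))
```

Because a few state-years are missing here (see footnote 27), the panel is not exactly balanced, and in that case the dummy-variable route is the safer of the two.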
The results for black teenagers (column 2) are even less encouraging - only the male wage and the incarceration rate have the theoretically expected signs, and the male wage variable is statistically insignificant. Further, the benefit variable has a theoretically incorrect, and statistically significant, negative sign. Columns 3 and 4 show results from basic two-way fixed effects or least squares dummy variable regressions, treating combined AFDC, food stamp, and Medicaid benefits as exogenous. For white teens, the coefficients on female wages and government benefits have the anticipated signs but are insignificant at a 5% level (although the coefficient on government benefits is significant at a 10% level). The coefficients are also quite modest - the implied elasticity of illegitimacy with respect to government benefits is 0.18 and the implied elasticities of female and male wages are -0.16 and 0.05, respectively.(29) For black teens, the coefficients on all the main variables have theoretically incorrect signs and the coefficient on female wages is statistically significant. Columns 5 and 6 of Table 2 show results when state and time effects are included and the benefits variable is considered endogenous (using the public choice instruments - the AFDC matching rate, per capita income, and percentage of the population over 65).(30) For white teens, the coefficient on the benefit variable becomes large in absolute value, significant at a 5% level, and has the expected sign. The implied elasticity with respect to benefits (estimated at the means of all variables) is 1.81. The coefficient on female wages is also larger in absolute value and is significant at a 5% level. The implied elasticity is 0.41. The coefficient on male wages is insignificant at conventional levels, small in absolute value, but has the expected sign. The results for black teenagers are also closer to those predicted by theory. The coefficient on the benefits variable is positive and is significant at a 5% level, with an implied elasticity of 2.66. The coefficient on male wages is negative but is only significant at a 13% level. The coefficient on female wages is statistically insignificant, although it does have the expected sign. Because these results vary greatly, a first question is which estimation technique is appropriate. For both OLS and 2SLS and for both black and white teenagers, the null hypothesis that the time and state dummy variables are jointly insignificant is rejected at a 1% level.(31) For white teens, the null hypothesis that the time dummies follow a simple trend is rejected in favor of the alternative hypothesis of time dummies. For black teenagers, the null hypothesis of a time trend can generally not be rejected in the 2SLS and generalized method of moments (GMM) regressions. However, the results are similar when a time trend, rather than time dummies, is included in terms of both size and statistical significance.(32) Overall, these results favor models with state and time dummies. Broadly speaking, there are at least three criteria to consider when selecting instruments: (i) Whether the moment restrictions imposed are valid. Tests of overidentifying assumptions might ease concerns regarding whether the instruments are exogenous. (ii) Whether the proposed instruments are correlated with the endogenous variable. 
In this case, the question is whether the extra instruments are correlated with the benefit level.(33) (iii) Whether instrumental variable estimation is needed at all (i.e., whether benefits are endogenous). If the benefit level is exogenous, then instrumental variables estimation will be less efficient than OLS, even when the instruments are valid. For both black and white teens, the public choice variables perform satisfactorily on all three counts. Hansen (1982) tests of the overidentifying restrictions fail to reject the null hypothesis that the instruments are uncorrelated with the error term for both white and black teens (the $\chi^2$ statistics are 0.62 and 0.94, respectively). The partial $F(3, 465)$ statistic on the three public choice variables from the first stage regression is 6.95. This is not as large as one might wish but does indicate that the instruments are highly correlated with benefits. To test whether the benefit variable is endogenous, Wu-Hausman tests (Wu 1973; Hausman 1978) are performed. The tests compare the OLS and 2SLS coefficients on the (possibly) endogenous benefit variable. Under the null hypothesis that the AFDC benefit level is uncorrelated with the error term, OLS is efficient and consistent while 2SLS is merely consistent. Under the alternative, that the benefit level is correlated with the error term, OLS is inconsistent while 2SLS remains consistent. The $\chi^2(1)$ statistics are 11.20 for white teens and 16.06 for black teens, rejecting the null hypotheses and indicating that 2SLS is appropriate.(34) These results confirm that the public choice instruments are reasonable and that OLS fixed effects estimation is not appropriate. Although the tests of overidentifying restrictions fail to reject the null hypothesis that the instruments are valid, as we noted earlier, there might be reasons to suspect that per capita income remains correlated with illegitimacy rates, even after controlling for wages at the bottom of the income distribution. However, including per capita income in the base regression and dropping the Medicaid matching rate as an instrument does not affect results for either black or white teens. Results from this regression are shown in columns 7 and 8 of Table 2.(35) For white teens, the coefficients on government benefits and female wages remain statistically significant with the theoretically expected signs.(36) For black teens, the coefficient on government benefits remains statistically significant and the coefficients on the wage variables remain statistically insignificant.(37) As an additional check for robustness, we also try a different set of instruments similar to variables used to explain AFDC benefits in Ribar and Wilhelm (1996).(38) The instruments are the difference between the percentage of the state's population and AFDC caseload that is black and administrative costs per AFDC family.(39)

[TABULAR DATA FOR TABLE 3 OMITTED]

The difference between the racial composition of the recipient population and the state population might affect benefit levels if voters are less sympathetic toward recipients of different races than their own. Administrative costs might affect benefit levels since they affect program costs and might affect the public's perception about program efficiency. Since these variables probably do not affect the teens' decisions directly, they might be plausible instruments. Results using these instruments are shown in columns 9 and 10 of Table 2.
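The logic of the Wu-Hausman comparison can also be seen in its regression-based (control function) form, sketched below: the benefit variable is regressed on all exogenous variables and the excluded instruments, and the first-stage residual is then added to the structural equation, where its t-statistic carries the test. The data frame df and the column names are hypothetical placeholders; this illustrates the test's logic rather than reproducing the authors' implementation, which contrasts the OLS and 2SLS coefficients directly.

```python
# Regression-based Wu-Hausman sketch (control function form).  `df` is assumed
# to hold the state-year panel with hypothetical column names.
import statsmodels.formula.api as smf

first = smf.ols("benefits ~ match_rate + pc_income + pct_over65 + fem_wage + "
                "male_wage + C(state) + C(year)", data=df).fit()
df["v_hat"] = first.resid                    # first-stage residual

aux = smf.ols("illegit ~ benefits + fem_wage + male_wage + v_hat + "
              "C(state) + C(year)", data=df).fit()
print(aux.tvalues["v_hat"])                  # large |t| -> evidence benefits are endogenous
```

With a single endogenous regressor the two forms test the same null, so a large statistic here corresponds to the rejections reported above.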
For white teens, the coefficient on government benefits remains positive and is significant at a 5.2% level, while the coefficient on female wages remains negative and statistically significant. The point estimates are larger than in columns 5 and 7. The results for black teens are less encouraging - the coefficient on government benefits becomes statistically insignificant but remains positive. Results for both white and black teens are similar when per capita income is included directly in the regression (using the AFDC program variables as instruments).(40) Tests of overidentifying assumptions fail to reject these instruments at conventional levels (see Table 2). However, the AFDC program variables are less attractive in at least one way - they are less highly correlated with benefit levels than the public choice variables. The partial F-statistic in the first stage regression is 2.95, with a significance level of 9.8%. Several variables have been consistently insignificant throughout the preliminary analysis. In particular, the female unemployment rate, the infant mortality rate, the share of the population living in metropolitan areas, and the incarceration rate are both singly and jointly insignificant for white teens (with an F[4, 467] statistic of 0.6184). For black teens, two of the variables are significant at a 10% level, although they are jointly insignificant (F[4, 467] statistic of 1.91). Keeping female unemployment and infant mortality in regressions for black teens similar to those in Table 3 does not affect the magnitude or the statistical significance of the coefficient on the benefit variable. However, the male wage variable's significance level drops below 10% when these variables are included. Excluding the four variables from the regression for white teens results in the coefficient on total unemployment becoming significant and negative. The negative sign (implying that high unemployment is correlated with low rates of illegitimacy) is hard to interpret in the context of the theoretical model. Reverse causality seems unlikely since most of the teens in the sample would not be in the work force even if not pregnant. Further, female teenagers do not make up a large portion of the work force and so small increases in illegitimacy rates would not affect the unemployment rate. Estimating the reduced regression in a GMM framework, allowing for heteroscedasticity of unknown type, gives similar results to the larger regressions in Table 2 (Hansen 1982).(41) For white teens, female wages and government benefits are significantly correlated with teen illegitimacy rates in the expected directions, and the coefficient on male wages remains insignificant (see Table 3, column 1). For black teens, only the coefficient on benefit levels is consistently statistically significant with the theoretically expected positive sign. The coefficient on the female wage variable remains statistically insignificant throughout the entire analysis, and, as noted above, the male wage variable is not robustly correlated with the illegitimacy rate at conventional levels (column 2).(42) In 1990, the simple correlation between male and female wages is 0.80, and across the whole sample, the simple correlation between the two is 0.74 - high enough to make multicollinearity a concern. However, dropping male wages from the regression has little effect on results for either white or black teens (see columns 3 and 4). 
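For completeness, here is a compact two-step linear GMM estimator of the kind referred to above, with a heteroskedasticity-robust weighting matrix and the Hansen J statistic. It is a generic numpy sketch, not the authors' code; X and Z are assumed to already contain the constant and the state and time dummies.

```python
# Two-step linear GMM with heteroskedasticity-robust weighting and Hansen J.
import numpy as np

def gmm_iv(y, X, Z):
    """X: all regressors (exogenous + endogenous); Z: all instruments
    (included exogenous regressors + excluded instruments), Z.shape[1] >= X.shape[1]."""
    n = len(y)
    # Step 1: 2SLS, i.e., weight matrix (Z'Z/n)^-1.
    W1 = np.linalg.inv(Z.T @ Z / n)
    b1 = np.linalg.solve(X.T @ Z @ W1 @ Z.T @ X, X.T @ Z @ W1 @ Z.T @ y)
    e = y - X @ b1
    # Step 2: efficient weight matrix built from step-1 residuals.
    S = (Z * e[:, None]).T @ (Z * e[:, None]) / n
    W2 = np.linalg.inv(S)
    b2 = np.linalg.solve(X.T @ Z @ W2 @ Z.T @ X, X.T @ Z @ W2 @ Z.T @ y)
    gbar = Z.T @ (y - X @ b2) / n
    J = n * gbar @ W2 @ gbar       # Hansen J; chi-square with Z.shape[1]-X.shape[1] d.f.
    return b2, J
```

The J statistic's degrees of freedom equal the number of overidentifying restrictions, which is the chi-square comparison behind the tests of instrument validity reported earlier.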
For black teens, it increases the significance of female wages, although the coefficient remains insignificant at conventional levels.(43) One question we have not discussed is the appropriate way of dealing with the proxy for abortion services.(44) The abortion proxy is the share of counties in the state that have an abortion provider. This variable is quite possibly endogenous because areas with low rates of teen pregnancy might have little demand for providers.(45) When the abortion variable is dropped, the results are similar to earlier results for both black and white teens (Table 3, columns 7 and 8). This regression is repeated treating the abortion variable as endogenous (Table 3, columns 5 and 6). This has little effect on the coefficient on government benefits for either white or black teens. The coefficient on female wages becomes insignificant for white teens. However, the partial F-statistic on the extra instruments in the first-stage abortion variable regression indicates the public choice instruments have little explanatory power for this variable.(46) Another question is what effects additional increments in government benefits for additional children have on illegitimacy among teens. The theoretical section indicates that additional increments are more likely to encourage second or later births but that increases in increments may also increase the chance of a first birth. This is because increases in increments affect the budget constraint when the woman chooses welfare. However, the level of benefits for a family of two may be more relevant if teens are not forward looking. Table 3, column 9 shows that the results for white teens are similar to earlier results except that the significance level of the base payment drops to a 10% level. However, the sign of the coefficient on incremental benefits is counterintuitive. The result for black teens is also similar (see Table 3). These results might be due to multicollinearity caused by the high correlation between base and additional payments. The partial correlation between the two variables is 0.83.

[TABULAR DATA FOR TABLE 4 OMITTED]

As a final exercise, we consider the effects of lagged illegitimacy rates.(47) It seems reasonable that changes in the prevalence of out-of-wedlock childbearing may affect attitudes and thus affect future rates. In a short panel, such as the one used here, it is not possible to test this hypothesis fully due to the limited number of time periods available. However, to test a limited form of this hypothesis, we include a single lag of the illegitimacy rate. As noted in Keane and Runkle (1992a), first differencing, rather than taking the fixed effects transformation, eliminates the individual state effects and allows instruments to be predetermined rather than strictly exogenous.(48) In particular, Keane and Runkle (1992a) propose a GLS estimator that eliminates possible serial correlation in the error term $u_{it}$ with a forward-filtering transformation and only requires the instruments to be predetermined.(49) For white teens, the results appear consistent with the earlier results, government benefits remain positive and significant, and the coefficient on female wages remains negative but is insignificant (see Table 4). Since the coefficient on lagged illegitimacy rates is small and statistically insignificant, column 2 drops the insignificant lagged illegitimacy rate.
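The forward-filtered GLS estimator of Keane and Runkle is more involved than can be reproduced here, but the basic device, first-differencing away the state effects and instrumenting the lagged dependent variable with a deeper lag that is only required to be predetermined, can be sketched in a few lines. The simplified Anderson-Hsiao-style estimator below uses hypothetical column names and omits the time effects and most controls; it is offered only to make the differencing idea concrete, not as the paper's estimator.

```python
# Simplified differenced estimator with a lagged dependent variable:
# instrument the lagged difference of y with the twice-lagged level of y.
import numpy as np
import pandas as pd

def diff_iv(df, y="illegit", xcols=("benefits", "fem_wage"), unit="state", time="year"):
    d = df.sort_values([unit, time]).copy()
    d["dy"] = d.groupby(unit)[y].diff()
    d["dy_lag"] = d.groupby(unit)["dy"].shift(1)       # correlated with the differenced error
    d["y_lag2"] = d.groupby(unit)[y].shift(2)          # predetermined instrument
    dx = []
    for c in xcols:                                    # differenced regressors (year effects and
        d["d_" + c] = d.groupby(unit)[c].diff()        # other controls omitted for brevity)
        dx.append("d_" + c)
    d = d.dropna(subset=["dy", "dy_lag", "y_lag2"] + dx)
    # First stage for the lagged difference, then the differenced structural equation.
    Z = np.column_stack([np.ones(len(d)), d[["y_lag2"] + dx].to_numpy()])
    pi, *_ = np.linalg.lstsq(Z, d["dy_lag"].to_numpy(), rcond=None)
    d["dy_lag_hat"] = Z @ pi
    X = np.column_stack([np.ones(len(d)), d[["dy_lag_hat"] + dx].to_numpy()])
    beta, *_ = np.linalg.lstsq(X, d["dy"].to_numpy(), rcond=None)
    return dict(zip(["const", "lagged d_illegit"] + dx, beta))
```

The key requirement, as in footnote 48, is that the instrument be predetermined: uncorrelated with current and future differenced errors, though not necessarily with past ones.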
For black teens, none of the coefficients on the independent variables are statistically significant when lagged illegitimacy rates are included in the regression. When the lagged illegitimacy rate is dropped from the regression, the coefficient on female wages becomes statistically significant with a theoretically inconsistent positive sign (see Table 4).(50) Overall, these results give no support to the hypothesis that increased illegitimacy among teens leads to changes in attitudes that then lead to further increases in illegitimacy. However, this might be due to the very limited form of hypothesis that is being tested.

Table 5. Estimated Elasticities at Means of Variables

                          Table 2,        Table 3,        Table 3, Column 3
                          Column 3        Column 1        (GMM, Omitting
                          (OLS-FE)        (GMM)           Male Wages)
White Teens
  Government benefits      0.18(a)         1.53(b)         1.28(b)
  Female wages            -0.16           -0.38(b)        -0.37(b)
  Male wages               0.05           -0.07
Black Teens
  Government benefits     -0.15            2.72(b)         2.08(b)
  Female wages             0.36           -0.16           -0.25
  Male wages               0.01           -0.31(a)

a Significant at the 10% level.
b Significant at the 5% level.

Summary of Estimation Results and Implications

The results of the econometric investigation are summarized below. Table 5 shows elasticities, computed at the means of all variables, for results from Table 2, columns 3 and 4; Table 3, columns 1 and 2; and Table 3, columns 3 and 4. The basic results from the empirical estimation are as follows.

(i) Including fixed effects in the regression, but otherwise treating AFDC as exogenous, leads to parameter estimates for white teens that are consistent with theory but imply very modest elasticities of illegitimacy with respect to the variables of interest (0.18 for benefits and -0.16 for female wages). For black teens, the parameter estimates are inconsistent with theory, although the government benefit variable's parameter is statistically insignificant at conventional levels. These results - small but positive elasticities for white teens and inconsistent and statistically insignificant results for black teens - are similar to results in other studies that have used state-level data and fixed effects estimation to study the effect of welfare payments on illegitimacy among teenagers (e.g., Jackson and Klerman 1994). However, Wu-Hausman tests strongly reject the null hypothesis that AFDC benefits are uncorrelated with the error term from the illegitimacy regression for both black and white teens. This strongly indicates that fixed effects estimation does not resolve all endogeneity concerns in these regressions.

(ii) The instruments used for the benefit variable are similar to those suggested in the public choice literature on the determinants of state welfare policy - per capita income, the Medicaid matching rate, and the percentage of the population that is over 65. Tests of overidentifying restrictions fail to reject these variables at conventional levels. These instruments are highly correlated with the benefit levels. One concern, despite the test of overidentifying restrictions, is that per capita income might not be an appropriate instrument. However, as shown in the results section, including per capita income in the regression and dropping the Medicaid matching rate as an instrument (since this is a function of per capita income) does not affect the results for either black or white teens.

(iii) For both white and black teens, the combined value of AFDC, food stamps, and Medicaid benefits for a family of two is positively correlated with illegitimacy rates.
Results are similar when just AFDC and food stamp benefits are used.(51) The point estimates of elasticities (see Table 5) are 1.28 and 2.08 for white and black teens, respectively.(52) This is larger than point estimates in studies that have not taken the endogeneity of benefits into account. However, the 95% confidence intervals are large ([0.47, 2.10] and [0.73, 3.43], respectively). This result is highly robust for white teens but is slightly less robust for black teens. In particular, for black teens, the coefficient on benefits becomes statistically insignificant when lagged illegitimacy rates are included in the regression and when an alternate set of instruments is used. The incremental benefit for additional children is insignificant for both black and white teens, but this may be because it is highly correlated with the benefits for a family of two.

(iv) The measures of male and female wages, used in this paper, are the median weekly wages for persons working full-time, aged between 21 and 35, with a high school diploma or less. For white teens, female wages are robustly negatively correlated with illegitimacy rates. The point estimate of the elasticity is -0.37, with a confidence interval of (-0.62, -0.12). Male wages are not statistically significantly correlated with illegitimacy rates. For black teens, male wages are statistically significant at a 10% level in some specifications, although this result is not highly robust. Female wages are consistently statistically insignificant throughout the analysis for black teens.

(v) There is no evidence that increases in illegitimacy lead to changes in attitudes that then lead to further increases in illegitimacy. However, we were only able to test a limited form of this hypothesis due to a lack of data.

(vi) Other control variables (including female unemployment rates, share of counties in the state with an abortion provider, infant mortality rates, incarceration rates, and share of the population living in metropolitan areas) are generally statistically insignificant throughout the analysis. The unemployment rate has a counterintuitive, negative correlation with illegitimacy rates, implying high unemployment is correlated with low illegitimacy rates. Reverse causation seems unlikely because female teenagers only make up a small portion of the workforce.

(vii) The independent variables do not fully explain the trends in illegitimacy over the period studied. For white teens, the coefficients on time dummies generally trend upward over the period, although they were fairly flat between 1982 and 1986. In contrast, the coefficients on the time dummies for black teens are largest between 1982 and 1984 and then generally trend downward.

[ILLUSTRATION FOR FIGURE 2 OMITTED]

Overall, the results for both white and black teens are remarkably consistent with theory and stress the importance of economic incentives on the choice between work and welfare. A 1% increase in welfare benefits appears to increase illegitimacy among both white and black teens by more than 1%. A 1% increase in female wages appears to have a more modest effect of about a 0.4% decrease in illegitimacy for white teens but does not appear to affect illegitimacy rates for black teens. The results also stress that fixed effects estimation alone does not appear to control for the endogeneity of benefits in aggregate data.
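Since the elasticities and confidence intervals just quoted are evaluated at sample means, the conversion from a regression coefficient is a fixed rescaling by the ratio of means. The snippet below shows that arithmetic with placeholder numbers (none of them taken from the paper's tables); it presumes, as is conventional, that sampling variation in the means themselves is ignored.

```python
# Elasticity at the sample means from a linear-model coefficient, with a 95%
# interval formed by rescaling the coefficient's bounds.  Placeholder inputs.
def elasticity_at_means(b, se, xbar, ybar, z=1.96):
    scale = xbar / ybar
    return b * scale, (b - z * se) * scale, (b + z * se) * scale

eta, lo, hi = elasticity_at_means(b=0.05, se=0.01, xbar=500.0, ybar=20.0)
print(round(eta, 2), round(lo, 2), round(hi, 2))   # 1.25 0.76 1.74 (illustrative only)
```

If the ratio of means were itself treated as random, a delta-method adjustment would change the interval somewhat; treating it as a constant is the usual convention for elasticities "at the means."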
Fixed effects estimation treating government benefits as exogenous yields much smaller estimates of elasticities for white teens and results that are inconsistent with theory for black teens. Several points should be made with regard to the results from the empirical estimation. The first is that the results are only for illegitimacy among teens and should not be extrapolated to other groups or to other questions. An increase in illegitimacy among teens may only have a modest effect on female headship, even in the long run. Furthermore, it is possible that, even though government benefits might affect illegitimacy among teens, the effect on other family decisions, such as out-of-wedlock childbearing among older women or divorce among married women, might be far smaller. Second, the results do not imply that benefits necessarily have as large an effect on teen childbearing as they do on illegitimacy. The positive correlation could be due to benefits discouraging marriage among pregnant teens rather than encouraging births. The observation that the U.S. welfare system might encourage fertility among poor single women by making children income-producing assets as well as consumption goods is not novel (see, e.g., Becker 1991). Nevertheless, past empirical work, which has used differences in benefit levels over time and across states to test this hypothesis, has found that the effect is weak, inconsistent, and often statistically insignificant (see Moffitt 1992 or Murray 1993). However, benefit levels in state welfare programs are not the result of a natural experiment - politicians and voters choose the benefits that their state offers. If voters' perceptions about welfare dependency or illegitimacy affect their preferred benefit levels, then benefits and out-of-wedlock births will be codetermined. If so, coefficients from fixed effects estimation (which is a form of ordinary least squares) will be biased and inconsistent. Hypothesis tests confirm that benefit levels are endogenous. Once we control for this, we find large and statistically significant results for both black and white teens. These results are robust to several different instrument choices. In summary, we find the following. (i) As noted by Ellwood and Bane (1985), many omitted, and potentially unmeasurable, state characteristics might affect both benefit levels and illegitimacy. If these characteristics are (roughly) constant for each state across the entire period, then including state dummy variables might allow unbiased estimation of coefficients in the illegitimacy regression. However, hypothesis tests suggest that including state (and time) fixed effects does not adequately control for endogeneity. For white teens, the coefficients from standard fixed effects estimation are consistent with theory and are statistically significant but small (elasticities of about 0.2). For black teens, results from standard fixed effects estimation are inconsistent with theory and are statistically insignificant. Once endogeneity is controlled for, welfare benefits are strongly and robustly related to teen illegitimacy for both white and black teens, with elasticities of around 1.3 and 2.1, respectively. (ii) Female wages, for women aged between 21 and 35, working full-time, with a high school education or less, are robustly correlated with illegitimacy rates among white teens, with an elasticity of around -0.4. Female wages are not significantly correlated with illegitimacy rates for black teens. 
Male wages (for the same group) do not appear correlated with illegitimacy rates among white teens. For black teens, male wages are statistically significant (with the expected negative sign) in some specifications, but this result is not highly robust. A number of theoretical and empirical questions remain open and deserve further investigation. Although the coefficients on the wages and benefits have the correct theoretical signs, the variables do not entirely explain the rapid growth in illegitimacy over the past 10 years. For white teens, time dummies are highly significant and increasing over this time period. Explaining this growth in a more satisfactory, and testable, manner than changes in attitudes would seem an important goal for future research.(53) Because the dependent variable is the illegitimacy rate, not the birth rate, it is important to note that a positive coefficient on government benefits (and a negative coefficient on wages) does not necessarily imply benefits encourage (or wages discourage) teens to get pregnant. The signs could be primarily the result of fewer pregnant teens getting married rather than more teens bearing children. Finally, extending the research back to the 1970s might be useful. However, this may be difficult because of changes in the food stamp program and abortion access during this time period.(54)

Appendix A: Data Definitions and Sources

Means and variances of all variables are presented in Table A1. The sources of the data are listed below.

Incarceration Rates. Number of sentenced prisoners in the state per 100,000 resident population. United States Department of Justice, Bureau of Justice Statistics, Sourcebook of Criminal Justice Statistics, 1991.

Abortion. Percent of counties in a state with an abortion provider of five or more abortions. Alan Guttmacher Institute, Abortion Factbook: 1992 Edition. Data for 1989 and 1990 are the number of providers in 1988.

Wages. Data for weekly wages for women and men are computed from data from the National Bureau of Economic Research's CPS Labor Extracts (2nd edition) on CD-ROM. They are the (weighted) median of weekly wages for persons with a high school education or less, working full-time (over 35 hours a week), aged between 21 and 35 of the relevant sex. Prices are inflated to 1991 prices by using the June consumer price index for urban consumers.

Share of Population Living in Metropolitan Areas. The percent of the population living in MSA's in that year. Data for 1981-1982 are interpolated. Statistical Abstract of the United States, various years.

Infant Mortality. Deaths per 1000 infants in the first year of life. Vital Statistics of the United States, various years.

Unemployment Rates (Both Women's Unemployment and Total Unemployment). Unemployment rate. 1980, Bureau of Labor Statistics bulletin "Geographic Profile of Employment and Unemployment." 1981, Bureau of Labor Statistics Bulletin 2175, "Handbook of Labor Statistics, 1983." 1982-1989, Bureau of Labor Statistics Bulletin 2340, "Handbook of Labor Statistics, 1989." Data for 1990 are from the "Geographic Profile of Employment and Unemployment," 1990.

Illegitimacy Rates. The number of illegitimate births per 1000 single females aged 15-19. The number of illegitimate births is from Vital Statistics of the United States, various years. The estimate of the number of females is calculated as follows.
The number of females in each state in the single year groups for 15- through 19-year-olds is from Bureau of Census, Current Population Division estimates. (These figures were corrected by the Bureau of Census to be consistent with the 1980 and 1990 census population counts.) To get the number of females by race, the percentages of females who are white and black in each year group are interpolated between census years. These percentages are multiplied by the total number of females in that year group, and the resulting numbers by year group are summed to get a total for females aged 15-19.

AFDC, Food Stamps, and Medicaid Benefits. The total monthly payment of AFDC and food stamps to a family with two members and no other income and average Medicaid expenditures for one adult and one child in the AFDC program. The incremental payment is the difference between the monthly payment for a family of four and a family of two. It is assumed the additional members are children. The food stamp payment is calculated assuming that the maximum shelter deduction and the standard deduction were used when calculating payments. AFDC and food stamp data were provided by the Congressional Research Service. Medicaid data were provided by the Division of Medicaid Statistics at the Health Care Financing Administration.

Per Capita Income. Per capita income. Statistical Abstract of the United States.

[TABULAR DATA FOR TABLE A1 OMITTED]

Medicaid Matching Rate. Federal matching rate for the Medicaid program. Characteristics of State Plans for AFDC, various years.

Percent of Population over 65. The percent of the population aged over 65. Data were provided by the Bureau of Census, Current Population Estimates Division.

Percentage of the AFDC Caseload That Is Black. The percent of the AFDC caseload that is headed by an individual who is Black. Data are from Committee on Ways and Means "Overview of Entitlement Programs" (various years). Data for 1981-1982 and 1983-1984 are interpolated. Data for 1980 are missing for several states and so these observations are dropped when this variable is used as an instrument.(55)

Administrative Costs per AFDC Case. State administrative expenditures per AFDC case. Data are from Committee on Ways and Means "Overview of Entitlement Programs" (various years).

Appendix B: Sample Sizes in Individual Studies

Individual studies of the effects of AFDC on teen illegitimacy tend to use data from two sources. The first is the National Longitudinal Survey of Youth (NLSY) and the second is the Panel Study of Income Dynamics (PSID). As Plotnick (1990, p. 738) notes, "differences in [out-of-wedlock childbearing] behavior can not be adequately captured by inclusion of race/ethnicity dummies since, as several studies have shown, the effects of explanatory variables tend to differ among the groups." As a result, samples are often separated for black and white teens.
Once divided, the typical sample size in published studies is often between 500 and 1000 individual teens.(56) However, out-of-wedlock births to white teens are relatively rare events, and although more common among black teens, births are still only observed in a few states since the black population is more concentrated and sample sizes tend to be smaller.(57) This complicates estimation since, when there are no out-of-wedlock births in a state, the state dummy can perfectly "predict" nonbirths by becoming arbitrarily, negatively large.(58) This reduces variation in the AFDC variable and decreases actual sample size since only teens from states where births are observed can be included in the regression. Table B1 shows births to white and black teens for a sample from the NLSY. It shows total observations (including observations in nonbirth states), total out-of-wedlock births, and the number of states where teens gave birth. This last column gives information on how many states would be excluded by including fixed effects in the regression. The teens in this sample are all the teens who turned 15 or 16 in 1979. This is essentially the whole sample that is observed during their entire (fertile) teenage years and is the sample used in Plotnick (1990).(59)

Table B1. Sample Sizes, Number of Births, and Number of States in NLSY Samples

                              Number of            Number of    Number of
                              Observations(a)      Births       States
White Teens
  Family income included      561                  44           22
  Base regression             688                  56           23
Black Teens
  Family income included      281                  94           23
  Base regression             322                  102          23

For women who gave birth out-of-wedlock as a teenager (before 20th birthday).
a Includes teens in states where no births are observed.

To take account of missing independent variables, we only count observations for which data on family income in 1978, mother's education level, and family structure at age 16 are available. As noted, 44 white teens from only 22 states gave birth before their 20th birthday in this sample (out of 561 teens).(60) Dropping family income (in general this variable is often missing even when interviews took place) increases the sample to 688 cases with 56 total births. However, even in this larger sample, the births occur in only 23 states. Including older teens, for example teens who turned 17 in 1979, increases sample size.(61) However, this means that many control variables will not be observed during the woman's teen years. Further, even in these larger samples, the number of states where births occur remains small. For example, when no other regressors are included, although sample size for white teens is increased to over 1000 individuals and 85 births, the births occur in only 25 states.

We wish to thank the editor, three anonymous referees, Marcus Berliant, George Boyer, Robert Cull, Sheldon Danziger, Stanley Engerman, Miguel Gouveia, Bruce Hansen, W. Lee Hansen, Eric Hanushek, Ronald Haskins, Robert Haveman, Catherine Jackson, Robert Klerman, Neal Masia, Robert Moffitt, Elaine Peterson, Wendell Primus, Karl Scholz, and participants at workshops at the University of Rochester, the University of Virginia, the University of Missouri at Columbia, the Public Choice Meetings, and the Department of Labor Economics at Cornell University for comments and suggestions on earlier versions of this paper. G.R.G.C. gratefully acknowledges financial support from the Alfred P. Sloan Foundation and R.P.S. gratefully acknowledges support from the Alex C. Walker Educational and Charitable Foundation.
Responsibility for any errors and opinions rests solely with the authors. All findings, interpretations, and conclusions expressed in this paper are entirely those of the authors and do not necessarily represent the views of the World Bank, its executive directors, or the countries they represent. This is a revised version of a Rochester Center for Economic Research Working Paper (July 1995). 1 Another interesting example of the investment view of children is the English Poor Laws in the 18th and 19th centuries. Boyer (1990) finds that child allowances, a common form of poor relief for able-bodied laborers, had a positive effect on birthrates. 2 See Moffitt (1992) or Murray (1993). 3 Women who enter the welfare system under the age of 22 at the time of their first spell spend an average total duration of 8.23 years on AFDC; women who are between 22 and 30 spend only 7.08 years; and women aged 31 to 40 spend only 5.15 years (Committee on Ways and Means 1992). 4 Since food stamps were introduced in the late 1960s, this measure is usually extended prior to this by only counting AFDC benefits. 5 This has led to benefits varying greatly between states - in 1991, the AFDC payment to a mother with one child and no other income varied between $120 per month in Mississippi and $694 per month in California. Including food stamps in the measure of total benefits reduces interstate differences, but differences still remain substantial. 6 In the same way, variations in benefits across time may depend on changing social values or political structure. 7 Empirical estimates that take into account the endogeneity of recipiency rates suggest that the elasticity of AFDC benefits with respect to the recipiency rate is probably not very large. For example, Ribar and Wilhelm (1996) and Shroder (1995) both conclude that their results indicate that the effects, if negative, are small and note that their results are sensitive to the estimation technique used. 8 Results in this section are similar if the time cost of children is assumed to be an increasing. but possibly nonlinear, function of the number of children. To ensure in the welfare case that the woman's budget set is compact, it is necessary to make the additional assumption that either there is a physiological maximum on the number of children that the woman can have, that [g.sub.2] [less than] 0, or that the time cost of an additional child is always greater than or equal to some [Delta] that is strictly greater than zero. 9 AFDC participants can work but, due to high child care costs or high marginal tax rates, over 90% do not. Committee on Ways and Means (1992) reports that, in 1990, only 8.2% of recipients earned any income. 10 Committee on Ways and Means (1992). AFDC-UP provides aid to needy children in families where the primary breadwinner is unemployed. 11 For example. assuming that u(b, 0, l) [less than] u(b[prime], c[prime], l[prime]), where b, l, b[prime] l[prime] [greater than or equal to] 0 and c[prime] [greater than or equal to] 0, will ensure this. 12 Proofs of these assertions are available from the authors on request. 13 Proof available from the authors on request. 14 In 1989, 77% of out-of-wedlock births to teens were first births (83% for white teens and 69% for black teens), whereas for women over 20, only 36% were first births. The percentage for teens varied between 77 and 79% over the decade. 
15 The illegitimacy rate, defined as the number of out-of-wedlock births per 1000 women, is only affected by the women choosing to have out-of-wedlock births. The illegitimacy ratio, defined as the ratio of out-of-wedlock to in-wedlock births is also affected by the number of women who have in-wedlock births. Since changes in male and female wages might affect the childbearing decisions of married women, it would not be possible to disentangle these effects using illegitimacy ratios. 16 See, for example, Jackson and Klerman (1994), Moffitt (1994), and Hoynes (1997). 17 For example, panel data that follow individuals across time, such as the NLSY (Plotnick 1990; Acs 1993; Lundberg and Plotnick 1994) and the Panel Study of Income Dynamics (PSID) (Duncan and Hoffman 1990), allow the researcher to control for a wide range of individual-specific, family background, and neighborhood effect variables (i.e., parental education levels, number of siblings, and birth order) in the estimation. In addition, they allow the researcher to address questions that cannot be easily addressed with aggregate data, such as the effect of childhood events (An, Haveman, and Wolfe 1993) or church attendance (Plotnick 1990). 18 This problem is more severe when examining the effect of welfare benefits on out-of-wedlock births to teens than when studying other questions, such as whether or not welfare benefits encourage female headship in the population (see Moffit 1994; Hoynes 1997). For these more general questions, sample sizes tend to be larger and the events more common. 19 Another point is that controlling for the teen's (and her potential partner's) other economic opportunities is harder for teens than for older women because available proxies are weaker. The most reasonable proxy for individual wages, completed education level, is not observed for teens. At age 15, most teenagers, whether they eventually complete only high school or if they go on to get a doctorate, will have similar levels of education. Duncan and Hoffman (1990) estimate predicted income at age 26 for a sample of black teenagers from the PSID. The women were teens between 1968 and 1985. They find that income at age 26 has a strong and statistically significant effect on childbearing and that the effect of AFDC is weak and statistically insignificant. However, as they note, most of the variables they use to predict age 26 income if the woman did not give birth as a teen could be included in the birth regression, resulting in linear dependence among regressors. Additionally, they do not include state or time dummies in either regression. if, as noted below, AFDC benefits are correlated with wages across states or time, this could be problematic. 20 Results using just the combined food stamp and AFDC guarantee are broadly similar to the results presented here. See footnote 51 for a full description of the differences between the two sets of results. Analogous tables with the AFDC and food stamp guarantee included in place of the AFDC, food stamp, and Medicaid guarantee are available from the authors on request. 21 We also test an additional variable, the ratio of male to female teens aged 15-19, to control for the pool of marriageable men. Results for the benefit and wage variables are similar in terms of both size and statistical significance when this variable is included in the regression. 
For black teens, the coefficient on this variable is consistently statistically insignificant, while for white teens it has a counter intuitive positive sign in most regressions (i.e., the number of male teens per female teen is positively correlated with the number of births). 22 Improved access to birth control might have an ambiguous effect on out-of-wedlock childbearing even if only teens who would use birth control are encouraged to become sexually active. Access to birth control might encourage teens who would give birth if they became pregnant to become sexually active, as well as teens who would have an abortion. As a result, the aggregate effect depends on the number of teens (who would not have an abortion if pregnant) who switch to more effective birth control methods and the number who switch from abstinence to less effective forms of birth control. 23 It has been suggested that infant mortality might be a consequence, rather than a cause, of teen fertility (see, e.g., Pampel and Pillai 1986; Cramer 1987: Bennet 1992). However, results in this study are similar in terms of size and statistical significance whether this variable is included or not. 24 See, for example, Orr (1976) or Moffitt (1990b). 25 In practice, the highest federal matching rate was 78.85%. 26 Besley and Case (1994) suggest using political variables, such as the composition of the state legislature. the governor's party, and whether the governor can run for re-election, as instruments for policy variables. They note that these variables make it quite clear where the variation in policy is coming from. However, it is not obvious that these variables are appropriate instruments for welfare benefits. If voters believe that welfare is an important issue, welfare and illegitimacy might directly affect the outcome of elections. However, as discussed in Besley and Case (1994), in this case also, the abundance of political variables means overidentifying assumptions can be tested. In practice, we found that state-level political variables performed very poorly as instruments. In particular, tests of overidentifying restrictions rejected the null hypothesis that the instruments were valid, and the instruments were only weakly correlated with benefit levels. 27 Arizona is omitted because it did not have a comparable Medicaid program over this period. Alaska and Hawaii. which have separate food stamp guarantees and food stamp income disregards, are missing data for 1980 and 1981. 28 See Plotnick (1990). In addition, since some states have few black residents, results excluding states with few black residents are available from the authors on request. Fourteen states with less than 2000 black teens aged between 15 and 19 are excluded from the regression for black teens in the reduced-sample estimates. However, we are unable to reject the null hypothesis that the estimated coefficients for the subsample with few black teens are different from the estimated coefficients from the estimation for the remaining states. Therefore, the results from the larger sample would appear preferable. We note in the text where results for the smaller sample differ significantly from results for the larger sample. 29 Elasticities are evaluated at the means of the respective variables. 30 Generalized method of moments results are similar, in terms of both size and statistical significance, to the 2SLS results in Table 2. 
31 F[58, 467] = 66.40 and F[58, 467] = 23.87 tests reject the null hypothesis that the dummies should be omitted in the OLS model with fixed effects for white and black teens, respectively. Results are similar for the 2SLS and GMM models. 32 The primary difference is that the significance level drops for male wages in the results displayed in Table 3. The coefficient on male wages becomes insignificant at even a 10% level in columns 2 and 4. 33 The issue is whether the instruments not included in the second-stage regression are correlated with the endogenous variable in the first-stage regression (see Staiger and Stock or Pagan and Jung ). 34 [T.sub.3] from Bowden and Turkington (1984). 35 GMM estimates are similar for this regression also. 36 However, this is not because the percentage of the population over 65 is driving the results when all three variables are used as instruments. Results for both white and black teens are similar to those in columns 5 and 6, in terms of both size and statistical significance, when the percentage of the population over 65 is dropped as an instrument (using per capita income and the matching rate as the instruments) and when per capita income is included in the regression with the matching rate serving as the sole instrument. 37 The coefficient on government benefits is statistically insignificant when states with few black teenagers are omitted from the regression for black teens (when the percentage of the population over 65 is the sole instrument). 38 In Ribar and Wilhelm (1996), both the percentage of the AFDC caseload that is black and administrative costs were statistically significantly correlated with benefit levels in some specifications when regional, rather than state, dummies were included in the regression. They found that high program overhead and largely black caseloads were significantly negatively correlated with benefit levels. 39 Ribar and Wilhelm (1996) use administrative expenditures as share of total program expenditures. However, this variable was not readily available for the entire period in our study (1980-1990). 40 When the percentage of the population over 65 and the AFDC program variables are used as instruments, the results are similar to those reported in columns 7 and 8 (when only the percentage of the population that is over 65 is used). For whites, the point estimate of the coefficient on government benefits is 0.08 and is significant at a 1 % level and the coefficient on female wages is -0.05 and is also significant at a 1% level. For blacks, the coefficient on government benefits is 0.36 and is significant at a 3% level. Once again, results for both black and white teens are similar when per capita income is included in the regression. Finally, when all the public choice and AFDC program variables are used simultaneously as instruments, the results are similar to those reported in columns 5 and 6 in terms of both size and significance. The only difference is that the coefficient on government benefits in the regression for black teens is smaller (0.22). In all regressions using the program variables as instruments (including regressions with additional public choice instruments), we continue to fail to reject the null hypothesis that the overidentifying restrictions are valid, at at least a 10% level. 41 Two-stage least squares results are similar for the reduced regression. 
42 These results are also robust to dropping per capita income and the Medicaid matching rate as instruments and including per capita income in the regression. 43 Excluding female wages from the regression for black teens does not have a large effect on the coefficient for male wages. The coefficient remains significant at a 10% level when the female unemployment and infant mortality rates are excluded from the regression and remains insignificant when they are included in the regression. 44 Jackson and Klerman (1994) and Kane and Staiger (1996) address the issue of the effect of abortion on teen childbearing. 45 Since many older women also have abortions, this does not necessarily follow. 46 The F[3, 474] statistic is 0.794. 47 Winegarden and Bracy (1997) explore this issue using aggregate U.S. data from 1973-1992. They find that lagged illegitimacy has a statistically significant effect on illegitimacy. They suggest that this variable is a control for cultural change. 48 Strict exogeneity requires that E(u_{it} z_{is}) = 0 for all s and t, whereas predetermined only requires E(u_{it} z_{is}) = 0 for s ≤ t. First differencing makes i.i.d. errors follow an MA(1) process. 49 This requires instruments from period t - 1 or earlier. We use lagged first differences of the exogenous variables and the public choice instruments and allow for the errors to follow a more general MA process (as described in Keane and Runkle 1992a). In general, the estimation method Keane and Runkle (1992a) propose is not as efficient as a GMM estimator proposed by Arellano and Bond (1991). However, the Arellano and Bond (1991) GMM estimator uses all lags of all predetermined variables (and all leads and lags of strictly exogenous variables) as instruments. This means there are literally hundreds of moment conditions. In this application, where there are only 423 observations, this estimation method is not practical (see Chamberlain 1992; Keane and Runkle 1992b). 50 Results are similar, but not identical, when this model is estimated in a GMM framework using first differences and lags of first differences of the exogenous variables and public choice instruments as instruments (and allowing the error term to follow a general MA process). For white teens, benefits remain significant, female wages become significant, and the coefficient on lagged illegitimacy rates remains insignificant and positive. For black teens, the lagged illegitimacy rate has a counterintuitive negative sign (indicating that high illegitimacy rates last year are correlated with low illegitimacy rates this year) and all other independent variables are insignificant. 51 The main differences are as follows. (i) For White Teens. When per capita income is included in the regression, using the percentage over 65 as the sole instrument, the significance level on female wages tends to drop. In particular, in Table 2, column 8, the coefficient on female wages is only significant at a 10% level. (ii) For Black Teens. The significance level of the coefficient on male wages tends to increase. In Table 2, column 6, it becomes significant at a 10% level and in Table 3, column 2, it becomes significant at a 5% level. In the smaller model, when per capita income is included in the regression, the coefficient on benefits drops to a significance level of 10%. (iii) For Both Black and White Teens.
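Written out in standard panel-data notation (not reproduced from the paper), the conditions in footnote 48 and the differencing step behind its MA(1) remark are:

```latex
% Strictly exogenous instruments:
E\!\left(u_{it}\,z_{is}\right) = 0 \quad \text{for all } s,\,t
% Predetermined instruments:
E\!\left(u_{it}\,z_{is}\right) = 0 \quad \text{for } s \le t
% First differencing an i.i.d. error gives an MA(1) process:
\Delta u_{it} = u_{it} - u_{i,t-1}, \qquad
\operatorname{Cov}\!\left(\Delta u_{it},\,\Delta u_{i,t-1}\right) = -\sigma_u^2 \neq 0
% Hence footnote 49's use of instruments dated t-1 or earlier: under
% predeterminedness, E(\Delta u_{it}\, z_{i,t-1}) = 0 is a valid moment condition.
```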
In the regression where abortion is treated as endogenous (Table 3, columns 5 and 6), the significance level of the coefficient on the benefit (AFDC and food stamps) variable drops to 10%. In the regression where the difference between AFDC and food stamps for a family of four and a family of two is included (Table 3, columns 9 and 10), the significance level of the coefficient on the benefit variable (AFDC and food stamps) increases to a 5% level. 52 The point estimate of the elasticity for the sample of states that excludes states with fewer than 2000 black teens is smaller (1.10). This falls in the 95% confidence interval for the full sample of states for black teens. As noted earlier, a Chow test fails to reject the null hypothesis that the coefficients are the same for states with many and few black teens. 53 See, for example, Nechyba (1997) for a theoretical model of illegitimacy and the AFDC program with changes in attitudes. 54 Prior to 1977, food stamp recipients were required to buy food stamps at a discount of their face value; this was abolished because of concerns that individuals were not always able to afford their allotments (Ohls and Beebout 1993). Roe v. Wade in 1973 and the Hyde amendment in 1976, which ended federal Medicaid funding of abortion, may also be important. 55 Results in columns 9 and 10 in Table 4 are similar in terms of both size and significance when interpolated points are dropped from the regression. 56 See Duncan and Hoffman (1990), Plotnick (1990), and Lundberg and Plotnick (1994). Using the Current Population Survey or the Public Use Micro Sample (PUMS) from the Census Bureau would increase sample size. However, this would mean much panel information would be lost. The data loss would be most severe for teens who give birth and then move out of their parents' household, since family background data, which in general provide the best available proxies for lifetime wages, will be lost. Omitting teens who moved out of their parents' households might exclude those most likely to have been influenced by the availability of government benefits. 57 For black teens, there are also several states where there are no women who did not give birth as teenagers. 58 For studies of the effect of total out-of-wedlock fertility (e.g., Rosenzweig 1995), it would be possible to follow the teens for longer periods (i.e., to age 23 or age 26). This would increase the number of births (both by increasing the length of time the women are at risk of having an out-of-wedlock birth and because out-of-wedlock births are more common among women in their early 20s than among women in their teens). However, as noted earlier, this is a slightly different policy question given that older women are less likely to become dependent on welfare. 59 Plotnick (1990) has sample sizes of 488 individuals for white teens and 230 individuals for black teens. Plotnick (1990) includes different explanatory variables in the regression, which might result in more missing values. 60 Sample sizes include teens who would be dropped from the regression if state dummies were included in the regression. 61 For example, Lundberg and Plotnick (1994) include teens who turned 17 in 1979.
References
Acs, Gregory. 1993. The impact of AFDC on young women's childbearing decisions. Unpublished paper, The Urban Institute. Akerlof, George A., Janet L. Yellen, and Michael L. Katz. 1996. An analysis of out-of-wedlock childbearing in the United States. Quarterly Journal of Economics 111:277-318.
An, Chong-Bum, Robert Haveman, and Barbara Wolfe. 1993. Teen out-of-wedlock births and welfare receipt: The role of childhood events and economic circumstances. Review of Economics and Statistics 75:195-207. Arellano, M., and S. Bond. 1991. Some tests of specification for panel data: Monte Carlo evidence and an application to employment equations. Review of Economic Studies 58:277-97. Barro, Robert J., and Gary S. Becker. 1988. A re-formulation of the economic theory of fertility. Quarterly Journal of Economics 103:1-25. Becker, Gary S. 1991. A treatise on the family. Enlarged edition. Cambridge, MA: Harvard University Press. Bennett, Trude. 1992. Marital status and infant health outcomes. Social Science and Medicine 35:1179-87. Besley, Timothy, and Anne Case. 1994. Unnatural experiments? Estimating the incidence of endogenous policies. NBER Working Paper No. 4956. Blaug, Mark. 1978. Economic theory in retrospect. Cambridge, UK: Cambridge University Press. Bowden, Roger J., and Darrel A. Turkington. 1984. Instrumental variables. Cambridge, UK: Cambridge University Press. Boyer, George R. 1990. An economic history of the English Poor Law, 1750-1850. Cambridge, UK: Cambridge University Press. Caudill, Steven B., and Franklin G. Mixon. 1993. A note on the effects of AFDC payments on birthrates. Rivista Internazionale di Scienze Economiche e Commerciali 40:379-84. Chamberlain, Gary. 1992. Comment: Sequential moment restrictions in panel data. Journal of Business and Economic Statistics 10:20-5. Cigno, Alessandro. 1986. Fertility and the tax-benefit system: A reconsideration of the theory of family taxation. Economic Journal 96:1035-51. Committee on Ways and Means, U.S. House of Representatives. 1992. 1992 green book: Overview of entitlement programs. Washington, DC: U.S. Government Printing Office. Cramer, James C. 1987. Social factors and infant mortality: Identifying high-risk groups and proximate causes. Demography 24:299-321. Duncan, Greg J., and Saul D. Hoffman. 1990. Welfare benefits, economic opportunities, and out-of-wedlock births among black teenage girls. Demography 27:519-35. Ellwood, David T., and Mary Jo Bane. 1985. The impact of AFDC on family structure and living arrangements. Research in Labor Economics 7:137-207. Garfinkel, Irwin, and Sara S. McLanahan. 1986. Single mothers and their children. Washington, DC: Urban Institute Press. General Accounting Office. 1994. Families on welfare: Focus on teenage mothers could enhance welfare reform efforts. Washington, DC: General Accounting Office. Hansen, L. P. 1982. Large sample properties of generalized method of moments estimators. Econometrica 50:1029-54. Hanushek, Eric A. 1992. The trade-off between child quantity and quality. Journal of Political Economy 100:85-117. Hausman, J. 1978. Specification tests in econometrics. Econometrica 46:1251-71. Hoynes, Hillary Williamson. 1997. Does welfare play a role in female headship decisions? Journal of Public Economics 65:89-117. Jackson, Catherine A., and Jacob Alex Klerman. 1994. Welfare, abortion and teenage fertility. Unpublished paper, RAND Corporation. Kane, Thomas, and Douglas Staiger. 1996. Teen motherhood and abortion access. Quarterly Journal of Economics 111:467-506. Keane, Michael, and David E. Runkle. 1992a. On the estimation of panel data with serial correlation when instruments are not strictly exogenous. Journal of Business and Economic Statistics 10:1-9. Keane, Michael, and David E. Runkle. 1992b. Reply. Journal of Business and Economic Statistics 10:26-9.
Lundberg, Shelly, and Robert D. Plotnick. 1990. Effects of state welfare, abortion and family planning policies on premarital childbearing among white adolescents. Family Planning Perspectives 22:246-51. Lundberg, Shelly, and Robert D. Plotnick. 1994. Adolescent premarital childbearing: Do economic incentives matter. Journal of Labor Economics 17:177-200. Malthus, T. R. 1798. An essay on population. Reprinted in An essay on population; Introduction by Michael P. Fogarty. London, UK: J. M. Dent, 1958. Malthus, T. R. 1830. A summary view of the principle of population. In Encyclopedia Britannica. Moffitt, Robert. 1990a. The effect of the U.S. welfare system on marital status. Journal of Public Economics 41:101-24. Moffitt, Robert. 1990b. Has state redistribution policy grown more conservative? National Tax Journal 43:123-42. Moffitt, Robert. 1992. Incentive effects of the U.S. welfare system: A review. Journal of Economic Literature 30:1-61. Moffitt, Robert. 1994. Welfare effects on female headship with area effects. Journal of Human Resources 29:621-36. Murray, Charles. 1993. Welfare and the family: The U.S. experience. Journal of Labor Economics 11:s224-62. Nechyba, Thomas. 1997. Social approval, values and AFDC: A re-examination of the illegitimacy debate. Unpublished paper, Stanford University. Ohls, James, and Harold Beebout. 1993. The food stamp program: Design tradeoffs, policy and impacts. Washington, DC: Urban Institute Press. Orr, Larry L. 1976. Income transfers as a public good: An application to AFDC. American Economic Review 66:359-71. Ozawa, Martha. 1989. Welfare policies and illegitimate birth rates among adolescents: Analysis of state-by-state data. Social Work Research and Abstracts 14:5-11. Pagan, A. R., and Y. Jung. 1993. Understanding some failures of instrumental variables estimators. Unpublished paper, Australian National University. Pampel, Fred C., and Vijayan K. Pillai. 1986. Patterns and determinants of infant mortality in developed nations. Demography 23:525-41. Plotnick, Robert D. 1990. Welfare and out-of-wedlock childbearing: Evidence from the 1980s. Journal of Marriage and the Family 52:735-46. Ribar, David C., and Mark O. Wilhelm. 1996. Welfare generosity: The importance of administrative efficiency, community values and genuine benevolence. Applied Economics 28:1045-54. Rosenzweig, Mark. 1995. Welfare, marital prospects and nonmarital childbearing. Unpublished paper, University of Pennsylvania. Shields, Michael P., and Ronald L. Tracy. 1986. Four themes in fertility research. Southern Economic Journal 53:201-16. Shroder, Mark. 1995. Games the states don't play: Welfare benefits and the theory of fiscal federalism. Review of Economics and Statistics 77:183-91. Staiger, Douglas, and James H. Stock. 1997. Instrumental variables regression with weak instruments. Econometrica 65:587-600. Wilson, William Julius. 1987. The truly disadvantaged. Chicago: University of Chicago Press. Winegarden, C. R., and Paula Bracy. 1997. Welfare benefits and illegitimacy in the U.S.: Reconciling contradictory trends. Southern Economic Journal 64:167-79. Wu, D. M. 1973. Alternative tests of independence between stochastic regressors and disturbances. Econometrica 40:733-50. 
Author: Strauss, Robert P. | Publication: Southern Economic Journal | Date: Apr 1, 1998
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9564547538757324, "language": "en", "url": "https://www.worldatlas.com/articles/the-leading-export-partners-of-albania.html", "token_count": 1106, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1767578125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:76b5ee2a-f4b1-4407-aa32-9fa00acdea5f>" }
The Republic of Albania had a population of 3.057 million residents as of July 1, 2019. The country is located in the southeastern part of Europe and is bordered by Montenegro, Greece, Kosovo, and North Macedonia. Albania occupies an area of 11,100 square miles. Its capital city, Tirana, is also the country's economic hub. Until 1990, Albania was a Communist nation with a centralized economy. Afterward, the country's economy transitioned into a free market. Today, Albania is among the world's leading producers and exporters of chromium, copper, nickel, and coal. As of December 2016, Albania's gross domestic product (GDP) was growing at 2.8%, the trade balance stood at -9.7%, and the unemployment rate was 14.7%. The economy of Albania is mainly driven by services (54.1%), agriculture (21.7%), and industry (24.2%). With regard to exports, Albania exported goods amounting to 7% of its overall GDP in 2018. Country-specific data for 2017 indicate that 89.9% of the goods exported from Albania ended up in Italy (53.4%), Serbia (9.4%), Kosovo (7.7%), Spain (5.6%), Greece (4.2%), Germany (4%), North Macedonia (3.1%), China (3.1%), Montenegro (1.8%), Romania (1.7%), the USA (1.3%), Bulgaria (1.1%), and Hungary (1.1%).
Leading Export Partners Of Albania
The leading export partners of Albania are Italy, Serbia, Kosovo, Greece, and Germany; specific details of their trading relations are discussed below.
Trade with Italy
Italy is the leading export partner of Albania. The two countries share close and long-standing historical and cultural ties. One of the ways they have maintained a flourishing trading partnership is through the establishment of an exclusive economic zone between them. Exports from Albania to Italy include footwear, clothes, electronic equipment, iron and steel, mineral fuels, construction metals, and aluminum. According to the United Nations' COMTRADE database on international trade, exports from Albania to Italy were worth US$1.38 billion in 2018. During this year, the highest-value exports to Italy, namely footwear, non-knitted apparel, and knitted apparel, amounted to US$461.4 million, US$212.78 million, and US$191.37 million respectively.
Trade with Spain
Statistics held by the United Nations COMTRADE database on international trade show that exports from Albania to Spain in 2018 amounted to US$223.94 million. Most of these exports were footwear, apparel, mineral fuels, oils, fish, meat, leather, iron and steel, raw hides and skins, oilseed, and fruits. Out of these products, the main export from Albania to Spain is oil, followed by footwear and apparel.
Trade with Kosovo
Albania and Kosovo have long historical ties, with Albanians making up around 90% of Kosovo's entire population. In fact, Albanian is the official language of Kosovo. Albania's exports to Kosovo include mineral products, machinery, appliances and electric materials, processed food, beverages and tobacco, metals, and chemical products. Out of these products, the highest-value exports were iron, fuels, and construction materials. The total value of exports to Kosovo in 2018 was US$245 million, a 30% increase from the value of exports in 2017.
Trade with Greece
There is a large number of immigrants from Albania living in Greece. Apart from being one of Albania's leading trading partners, Greece is the largest foreign investor in the country.
The most common goods exported from Albania to Greece are articles of iron or steel, whose value for 2018 has been put at US$2.23 billion by the United Nations COMTRADE database on international trade. Much as Albania and Greece enjoy relatively good trade relations, they have had a history of conflicts, such as the Cham issue and the religious freedom issues for the Greek minority living in Albania. Most recently, discussions of a "Greater Albania" have led to the country being seen as a threat in the economic region.
Trade with Germany
Albania sends approximately 4% of its total exports to Germany. In 2018, the country exported goods worth US$122 million to Germany, a 24% increase compared to 2017. Germany is also among the largest investors in Albania and is known for its critical role in supporting the accession of Albania.
Challenges Faced By Albania In Its Business Relations
Much as Albania is a favorite investment destination for many foreign countries, the country also contends with several challenges. The first challenge is widespread corruption, which has created difficulties for investors and made the country's investment climate unfavorable. Secondly, it is quite difficult to obtain land titles for construction projects in Albania due to a corrupt court system as well as unscrupulous actors in the real estate industry. Thus, companies that deal with construction materials are often affected by fluctuations in the pricing of products within the industry due to these adverse practices.
The Leading Export Partners Of Albania
Rank | Country | Share in total export (in %)
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9426136016845703, "language": "en", "url": "http://wiki-offline.jakearchibald.com/wiki/Waiver", "token_count": 1191, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1328125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:33f462c9-03b0-4595-9069-79467a6f3716>" }
Regulatory agencies or governments may issue waivers to exempt companies from certain regulations. For example, a United States law restricted the size of banks, but when banks exceeded these sizes, they obtained waivers. In another example, the United States federal government may issue waivers to individual states so that they may provide Medicaid in different ways than the law typically requires. While a waiver is often in writing, sometimes a person's words can also be used as a counteract to a waiver. An example of a written waiver is a disclaimer, which becomes a waiver when accepted. When the right to hold a person liable through a lawsuit is waived, the waiver may be called an exculpatory clause, liability waiver, legal release, or hold harmless clause. In some cases, parties may sign a "non-waiver" contract which specifies that no rights are waived, particularly if a person's actions may suggest that rights are being waived. This is particularly common in insurance. Sometimes the elements of "voluntary" and "known" are established by a legal fiction. In this case, one is presumed to know one's rights and that those rights are voluntarily relinquished if not asserted at the time. The following represent a general overview of considerations; specifics may vary dramatically depending on the jurisdiction. Key factors that some courts (depending on jurisdiction) may look at while determining the applicability of a waiver: - In some jurisdictions, one may not prospectively waive liability for some or all intentional activities - Waivers generally must be made voluntarily and with the full knowledge (or the ability to know) of the right being waived - The waiver should be unambiguous and clear to a reasonable person - In some jurisdictions (not including the United States), it may be necessary that the parties to the waiver have equal bargaining power - A waiver may have limited application where one contracts for an "essential service" such that it may violate public policy for liability to be waived - A waiver that the courts will not enforce because the purpose of the agreement is to achieve an illegal end constitutes an illegal agreement. In the case of Insurance Corp. of Ireland v. Compagnie des Bauxites de Guinee, 456 U.S. 694 (1982) the United States Supreme Court decided that when a court orders a party to produce proof on a certain point, and that party refuses to comply with the court's order, the court may deem that refusal to be a waiver of the right to contest that point and assume that the proof would show whatever the opposing party claims that it would. In that court case, the defendant had argued that the court lacked personal jurisdiction over it but refused a court order to produce evidence of this lack of jurisdiction. The defendant argued that, because the court lacked jurisdiction, the court had no authority to issue an order to show proof of the lack of jurisdiction. The Supreme Court rejected that argument and determined that the defendant's refusal to comply waived the right to contest jurisdiction, just as if it had never contested jurisdiction at all.
Furthermore, one cannot waive responsibility for violation of law, willful injury to a person or property of another, for fraud, or waive their residential tenant rights. - Financial Debate Renews Scrutiny on Banks’ Size. New http://mobile.nytimes.com/2017/08/02/us/politics/those-call Times. - Waivers. Medicaid.gov. - CAL. CIV. CODE § 1667: That is not lawful which is: 1. Contrary to an express provision of law; 2. Contrary to the policy of express law, though not expressly prohibited; or, 3. Otherwise contrary to good morals. - CAL. CIV. CODE § 1668: All contracts which have for their object, directly or indirectly, to exempt any one from responsibility for his own fraud, or willful injury to the person or property of another, or violation of law, whether willful or negligent, are against the policy of the law. - CAL. CIV. CODE § 1953: (a) Any provision of a lease or rental agreement of a dwelling by which the lessee agrees to modify or waive any of the following rights shall be void as contrary to public policy: (1) His rights or remedies under Section 1950.5 or 1954. (2) His right to assert a cause of action against the lessor which may arise in the future. (3) His right to a notice or hearing required by law. (4) His procedural rights in litigation in any action involving his rights and obligations as a tenant. (5) His right to have the landlord exercise a duty of care to prevent personal injury or personal property damage where that duty is imposed by law. (b) Any provision of a lease or rental agreement of a dwelling by which the lessee agrees to modify or waive a statutory right, where the modification or waiver is not void under subdivision (a) or under Section 1942.1, 1942.5, or 1954, shall be void as contrary to public policy unless the lease or rental agreement is presented to the lessee before he takes actual possession of the premises. This subdivision does not apply to any provisions modifying or waiving a statutory right in agreements renewing leases or rental agreements where the same provision was also contained in the lease or rental agreement which is being renewed. (c) This section shall apply only to leases and rental agreements executed on or after January 1, 1976.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9261429905891418, "language": "en", "url": "https://tiaonline.org/industry-priorities/cross-industry/artificial-intelligence/", "token_count": 1048, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.0289306640625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:5ff8a4d5-a57d-453c-8ea4-79ca1d6a4bf8>" }
Artificial intelligence (AI) will bring order to a world in which by 2020, 50 billion connected devices are expected to come online, all generating an unprecedented volume of data. How will we stay afloat in this flood of data? AI is throwing us a collective life raft, in the form of algorithms capable of parsing a huge quantity of information that would otherwise be lost. Combined with ultra-high speed networks and quantum computing, AI will deliver tremendous benefits to society, improving the economy and quality of life. As AI takes center stage in breakthrough areas such as self-driving cars and medical treatments, not to mention smart communities, ensuring the responsible use of networks will be essential. This is where the information and communications technology (ICT) industry will play a major role. AI is poised to help telecom carriers deliver the “fourth industrial revolution.” By allowing networks to respond to constantly evolving conditions, AI will help operators maximize the potential of 5G and IoT networks. Together with software defined networks (SDN) and network functions virtualization (NFV), AI will enable digital transformation. Already, AI is helping diagnose up to 70 percent of network faults before an incident occurs. SDN and NFV will allow networks to carry much more and diverse traffic, and will give providers the ability to collaborate to offer more sophisticated services, bundles of services and mashups. This will allow customers and operators to interact with the network in entirely new ways. However, SDN and NFV present a management adaptation challenge, because people will be unable to understand such a large and complex system. This is where AI comes into play. Self-optimizing network (SON) technology, for example, is advancing management, optimization and healing of mobile radio access networks. In addition to transforming networks themselves, AI and machine learning advances will radically change the usage of telecom networks. Advances in smart buildings and self-driving cars will hinge on analytics drawn from data in videos. Video traffic is expected to represent 80 percent of all internet traffic in 2019, presenting both challenges and opportunities for the telecom industry. Telecom companies will need to meet the growing demand for more video and at a higher quality. Fortunately, as AI introduces challenges, it also offers solutions. To accommodate the video demands brought about by AI, telecom firms will need to implement AI and machine learning to optimize and automate their networks. Network management requires products that help manage the systems themselves. How is AI changing the way networks are being designed for technologies like IoT, AR/VR, autonomous vehicles and swarm data? What are the characteristics of the AI-enabling network of tomorrow? Miguel Villarreal, CEO of Villa-Tech shares his perspectives on the role of AI in network virtualization. Enhancing the experience of sports fans is high on the list for developers of 5G. How will fans get an immersive experience? What role will AI play in giving fans a reason to cheer? Jonathan Levine, Managing Director of Business Development at Intel Sports, Emili Planas, CTO of MediaPro and Jason Elliott, 5G Market Development… How do you build a flexible digital platform that's open to co-creation, yet secure? 
Rod Naphan, CTO of Fujitsu Network Communications, shows us the company’s latest global digital solutions, platforms and services that prepare them for the digital transformation of the 5G future. How will network transformation impact advances in AI? Is the industry prepared or unprepared to take on these challenges? TIA NOW speaks with Arpit Joshipura, GM of Networking and Orchestration at the Linux Foundation and Manish Vyas, President of Communications Business & Chief Executive of Network Services at Tech Mahindra about preparing virtual networks for… Analysts from IDC including spoke with TIA NOW about the tough issues that the communications technology industry is currently tackling, which were covered at the TIA Connectivity Jam in Dallas, TX. These issues span from data management, edge computing, connected devices, network benchmarking and artificial intelligence. The big data and data science revolution is upon us as firms are currently spending an estimated $36 billion on storage and infrastructure, that will double the data mining sector by 2020. Here with us in the TIA NOW studio to talk about predictive analytics is Richard Boire, SVP of Environics Analytics. Betty Manetta, President and CEO of Argent Associates, spoke with TIA NOW about how artificial intelligence plays a big role in fraud detection and predictive analytics. Manetta talks further about the importance of finding your role in the always changing and burgeoning technology ecosystem. Ned Taleb, CEO at Nexius spoke with Abe Nejad about the move towards real world applications and the importance of network enablement of machine learning and data science. Taleb talks further about intelligent networks and how they support AR/VR and AI. Brian Higgins, VP and GM of Exponent, a Verizon company, talks to TIA NOW from MWC 2017 about their new initiative to help carriers around the world get quicker to market and become more robust with the biggest issues facing them: Big Data, AI, IoT, Media Services and more.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9645630717277527, "language": "en", "url": "http://gazelleindex.com/2012/05/04/may-employment-report-good-not-so-good-and-bad/", "token_count": 373, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1923828125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:725fa9ce-a2b7-4410-ac40-a62ec6a0b2d2>" }
The unemployment rate declined from 8.2% in March to 8.1% in April. This is a good thing - but! The decline was due largely to a decrease in the size of the civilian labor force, which declined by 342,000 workers. The lower unemployment rate was not the result of the economy adding large numbers of new jobs. In fact, the economy added only 115,000 jobs (less than half of what is needed and a smaller number than was expected). The implication is that there is much work to be done to strengthen the economic recovery. Government spending cuts reduced GDP growth by one percentage point during the first quarter of the year. GDP growth was 2.2%, after government expenditure reductions subtracted 0.92 percentage points. This means that Congress must be careful in how it goes about balancing the budget. Here's why we characterize the Jobs Report as Good, not so Good, and Bad.
Good
- The unemployment rate decreased from 8.2% to 8.1%.
- Black unemployment decreased from 14% to 13%.
- The number of jobs created in March was revised upward from 120,000 to 154,000.
- Long-term unemployment declined by 207,000 workers.
Not so Good
- The economy created only 115,000 jobs in April (the private sector added 130,000 jobs).
- Government sector employment declined by 15,000 jobs.
- The manufacturing sector created only 16,000 jobs.
- Teenage unemployment remained high at 24.9%.
Bad
- The size of the civilian labor force declined by 342,000 workers.
- The number of discouraged workers increased by 103,000 (i.e., those who have given up looking for work).
- The number of employed persons declined by 169,000.
- Construction employment declined by 2,000.
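The arithmetic behind the "good thing - but!" caveat is just the definition of the unemployment rate. The figures below are rounded, illustrative numbers chosen to mimic the situation described above; they are not the official BLS counts.

```python
# Unemployment rate = unemployed / civilian labor force.
# Illustrative figures in thousands of workers (not official BLS data).
labor_force = 154_000
unemployed = int(labor_force * 0.082)      # start from roughly an 8.2% rate

# Suppose 342 thousand people leave the labor force, most of whom had been
# counted as unemployed, while the number of employed barely changes.
leavers = 342
new_unemployed = unemployed - leavers
new_labor_force = labor_force - leavers

print(f"rate before: {unemployed / labor_force:.1%}")
print(f"rate after:  {new_unemployed / new_labor_force:.1%}")
# The measured rate falls by roughly 0.2 points even though no jobs were added.
```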
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9635369777679443, "language": "en", "url": "http://gazelleindex.com/2012/08/27/fiscal-cliff-is-economic-stimulus-in-reverse/", "token_count": 792, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:fa61aded-9073-4f0c-a954-fa08741a1396>" }
By February, a number of important stimulus measures will expire unless extended by Congress. The expiration of the Bush era tax cuts combined with automatic spending reductions are intended to decrease the budget deficit by $641 billion during the next fiscal year. However, if $641 billion is extracted from the economy, it would lead to a contraction in income and jobs by an amount that is much greater than the original spending cuts. That is, the stimulus multiplier will work in reverse and that is why there is so much talk about the fiscal cliff. To understand this, look at government policies that were designed to combat the “great recession”. The government injected billions of dollars in stimulus spending. The successive rounds of spending and income ultimately created an amount of new economic activity and new jobs that were much greater than the original injection of cash. Government stimulus spending has a multiplier effect on jobs and income because when money is injected into the economy, it becomes income to businesses and individuals who undertake more spending. This added spending creates even more income and spending in successive rounds and ultimately results in an amount of jobs and business activity much greater than the initial round of spending. However, the multiplier effect also works in reverse. That is, money extracted from the economy will reduce income and jobs and the effect will be just as powerful in the opposite direction. The fiscal cliff would mean a significant reduction in spending, which has the potential to drag the economy into a new recession. Keep in mind the economy is already weakened by the European debt crisis, the growing loss of domestic confidence and the unwillingness of investors and consumers to make major spending commitments before the election. If it takes a further blow by the fiscal cliff, that blow could be a recession catalyst. A large negative multiplier operating within an already weakened economy is a recipe for disaster! The Congress is currently in recess and when it returns it will have only 13 days remaining for legislation before the November elections. Unfortunately, there are no visible signs that the two parties are prepared to compromise and resolve the looming crisis. It may be hard to believe, but the economy has been in a formal expansion for the last three years and two months. That expansion has been punctuated by months of rapid growth followed by pronounced slowdowns. Nevertheless, the economy has crawled along and created jobs – even though not enough to reduce the high level of unemployment created by the last recession. The Congressional Budget Office (CBO) estimates that GDP will grow at 2.2% in 2012, unless we hit the fiscal cliff. If the latter occurs, growth is projected in 2013 at 1.1%. That rate is so low that any negative event would easily drag it into a recession. Since the 1940s, the average expansion has lasted five years before the onset of another recession. The US economy has been in the current expansion for over three years. With time running out and growth slowing down, the fiscal cliff could not happen at a worse time. Based on the Budget Control Act of 2011, automatic across-the-board cuts will occur unless a compromise is reached on a budget reduction plan. 
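The reverse-multiplier argument can be made concrete with the textbook spending multiplier. The marginal propensity to consume below is an assumed, illustrative value, and the frictionless formula overstates real-world multipliers, so the output indicates direction and rough scale rather than a forecast.

```python
# Textbook spending multiplier: a dollar withdrawn from the economy lowers
# total income by 1 / (1 - MPC) dollars after all rounds of reduced spending.
mpc = 0.75                     # assumed marginal propensity to consume (illustrative)
fiscal_drag = 641e9            # the $641 billion deficit reduction cited above

multiplier = 1 / (1 - mpc)
total_contraction = fiscal_drag * multiplier
print(f"multiplier: {multiplier:.1f}")
print(f"implied total fall in spending/income: ${total_contraction / 1e12:.1f} trillion")

# Round-by-round view: the initial cut, then successively smaller induced cuts.
rounds = [fiscal_drag * mpc ** k for k in range(6)]
print(", ".join(f"${r / 1e9:.0f}B" for r in rounds))
```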
The actions that are of concern include the following: - $100 million in across-the-board spending cuts will be made to non-security programs such as Medicare and in security programs such as Department of Defense, Homeland Security, and Veterans Affairs. - The Bush era tax cuts will expire at the end of 2012; this will raise taxes for 100 million Americans. - The payroll tax cut will expire at the end of February 2013. - Emergency unemployment benefits will expire at the end of the year. It is time for Congress to put aside political gaming and act in the best interest of the economy, to avert this looming catastrophe. If this does not happen, expect to see all the progress that has been made on unemployment over the last year reversed, and the unemployment rate increase above 9%.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9629119634628296, "language": "en", "url": "http://namanninh.com/", "token_count": 546, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.07958984375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:fe585412-ea77-45e5-a2aa-8a1a466973b8>" }
Finance is a vast field in which every function has sub-functions of its own. For those who don't know, finance is often confused with accountancy, yet the two are quite different: finance is all about managing money, while accountancy is about recording money transactions. Brokerage is a broad term, but with the development of the industry the different brokerage roles are now clearly separated, for example: financial broker, trusted mortgage broker in Wollongong, and so on. A financial broker is somebody who helps in buying and selling shares in an open market. The share market is a place where a person like you or me cannot simply go and start trading; there are certain protocols to follow. For example, a person must open a share account with a financial brokerage company, which assigns a broker who takes care of everything pertaining to share transactions. Share transactions are not that simple; they can only be handled well when one has ample knowledge of financial markets and companies, of which company is a dividend provider and which company is good for capital gains. The ups and downs of the financial market are also the cup of tea of a financial broker. What's in it for a financial broker: The obvious question is what the financial broker gets in return for this service. The answer lies in the percentage a broker earns on every buy and sell of a share. This means that whether the client is losing money or gaining, the broker still collects his or her fee. That percentage, which seems negligible, becomes significant when multiplied by the number of shares. This is what the financial broker earns on every single transaction. How to become a financial broker: Becoming a financial broker is a money-making job. A degree with a finance major, followed by a certification related to market knowledge, an MBA in finance, or perhaps the ACCA or CFA, can serve the purpose of becoming a financial broker. A finance broker must possess financial knowledge as well as market knowledge. Political scenarios are yet another area of concern from a broker's perspective. Not anybody can become a financial broker, because after the academic qualification it is important to seek practical experience; hence internships and unpaid jobs in brokerage houses matter a lot for a flourishing career as a financial broker. It takes a lot more than academic knowledge to be in this field, because making money is not an easy job and one has to take on a lot of responsibility. In a nutshell, a financial broker is a person who makes money on the money of others.
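The point that a seemingly negligible percentage becomes significant once multiplied across shares and trades is simple arithmetic. The commission rate, share price, and trade counts below are invented for illustration and are not quotes from any actual brokerage.

```python
# Per-trade brokerage fee: commission = rate x share price x number of shares.
commission_rate = 0.001        # 0.1% of trade value (hypothetical)
share_price = 50.0             # hypothetical price per share
shares_traded = 20_000         # one large client order

trade_value = share_price * shares_traded
commission = commission_rate * trade_value
print(f"one {shares_traded:,}-share trade: commission = {commission:,.0f}")

# The broker collects this whether the client gains or loses, and it adds up
# across many client trades in a month.
monthly_trades = 400
print(f"monthly gross commission: {commission * monthly_trades:,.0f}")
```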
{ "dump": "CC-MAIN-2020-29", "language_score": 0.927420437335968, "language": "en", "url": "http://www.sg-insight.com/index.php/en/global-news/51341-imf-predicts-all-regions-of-global-economy-to-experience-negative-growth-for-first-time", "token_count": 221, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": -0.033203125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:dbaac676-2ca0-4b66-a527-3138758203c4>" }
The global economy will shrink by 4.9 percent, the worst annual contraction since World War 2, the International Monetary Fund predicted Wednesday in its World Economic Outlook report. In its update to the WEO report released in April, the IMF lowered its global growth forecast due to underestimating the economic damage the coronavirus has had on economies. For the first time ever, the IMF projects that all regions will experience negative growth in 2020. The report predicts the gross domestic product in the United States will drop by 8 percent, worse than its April estimate of a 5.9 percent drop. For the 19 European nations that use the euro, the report says they will experience a 10.7 percent decrease in growth. Russia and Saudi Arabia, two of the world’s largest oil-producing countries, will contract by 6.6 percent and 6.8 percent, respectively, the IMF reports. Notably, despite the IMF’s outlook that all regions worldwide will experience a negative growth, it foresees China growing by 1 percent in 2020. Source: VOA news
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9562153220176697, "language": "en", "url": "https://fastdemocracy.com/news/", "token_count": 759, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.46875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:ff39822b-f1c4-4265-b172-4471a7f9a2fc>" }
As the COVID-19 crisis continues to play out across the world and people everywhere are looking to make sense of this unique, chaotic situation, many Americans are keeping their eyes glued to the news to stay updated on what the president and Congress are doing to keep us safe. The federal government has recently passed three bills in an effort to keep the Americans safe and blunt economic damage caused by the global pandemic. Most recently, President Trump signed the CARES Act, a bill worth $2 trillion to prevent a large-scale economic fallout. While action on a national level is undoubtedly necessary if the government is to provide safety for Americans and their savings, states are also working to make an impact through not only their gubernatorial actions but also the movement of bills through their legislatures. Here are some of the actions being taken across the United States to combat coronavirus. You can learn more about what your state legislature is doing to address the COVID-19 crisis by signing up for free email updates at FastDemocracy.com. 1. Waive K-12 requirements due to COVID-19 closures Because of the need to avoid large gatherings in the coming months, a number of states have been working to pass bills that will allow schools to conduct classes online, while other states have put a pause on instruction altogether until the coronavirus situation has slowed. In order to take pressure off both instructors and families, some states have brought up legislation that will reduce the mandatory number of hours of instruction that students must receive each week. 2. Mandating insurance companies provide testing without co-pays Given that the coronavirus has been deemed a national emergency, many states are taking action to make sure health insurance companies cover the cost of coronavirus testing. The driving idea behind this is twofold – first, to ensure that dealing with coronavirus is not cost-prohibitive, and second to stop the virus from spreading further across the larger population. 3. Temporarily change unemployment eligibility requirements As a result of mandatory quarantines and the inability of many businesses to continue turning a profit when customers are remaining at home, state legislatures across the nation are making it easier for workers who have been laid off to access unemployment benefits. Early access to unemployment benefits would assist impacted workers maintain financial stability. 4. Preventing eviction and foreclosure during COVID-19 Because many Americans are currently finding themselves unable to go to work, concern regarding the payment of rent and utilities is growing. As such, some state legislatures are ensuring that those affected by the economic implications of the virus do not end up without a home. This legislation does not necessarily waive rent, mortgage, or utility payments; it just prevents eviction and foreclosure or the threat of eviction or foreclosure while the COVID-19 crisis is ongoing. 5. Halt debt collection With unemployment claims rapidly increasing, so are workers’ concerns about their ability to meet financial obligations. To further assist workers impacted by the economic slow-down due to coronavirus, some state legislatures are considering legislation that would pause debt repayments in order to help Americans avoid bankruptcy. Throughout the COVID-19 crisis, state and local governments have been on the forefront of proposing policy solutions to help prevent the spread of the virus and assist affected workers and families. 
Because of the large impact that state legislatures can have in the fight against COVID-19 by passing bills like those listed above, it is now more important than ever that American citizens stay current on what is happening in government at all levels. You can learn more about what the state and federal governments are doing to address the COVID-19 crisis by signing up for free email updates at FastDemocracy.com.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9371846318244934, "language": "en", "url": "https://pdsaonline.org/supply-chain-security/", "token_count": 475, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.28125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:6d11a070-3741-48e8-950c-81d25f94d962>" }
The pharmaceutical distribution supply chain is the route a medication takes from the time it is manufactured to the time it reaches the pharmacy shelf. While consumers may not be aware of the distribution network, this path involves many steps prior to the time a consumer receives the medicine. U.S. consumers are fortunate to have a distribution supply chain that is among the safest in the world, yet increasingly sophisticated bad actors continue in their efforts to infiltrate the system. Implementation of the DSCSA will help strengthen the supply chain.
How the Supply Chain Works
When a finished drug or biologic completes the packaging process at a manufacturing facility, its journey to a patient is just beginning. For most products, the next leg of the journey takes them to a facility controlled by a wholesale distributor. Manufacturers may also contract with Third-Party Logistics Providers (3PLs) to coordinate the logistics. Wholesale distributors, including primary and secondary distributors, are responsible for maintaining the integrity of medicines from the manufacturer to the dispenser, which distributes the medicine to patients. Distributors are the critical link between hundreds of manufacturers and more than 200,000 dispensers throughout the country. Dispensers include hospitals, long-term care facilities, healthcare clinics, physician offices, and, of course, pharmacies, some of which may be small, independent drug stores, and others of which may be part of a chain of drug stores.
The DSCSA Established a Single, Uniform, and National Solution
On November 27, 2013, President Obama signed into law the Drug Quality and Security Act (DQSA), P.L. 113-54 (127 Stat. 587). P.L. 113-54 contains two separate titles: Title I addresses drug compounding and is known as "the Compounding Quality Act," and Title II is known as "the Drug Supply Chain Security Act" (DSCSA). The DSCSA enhances the security of the pharmaceutical supply chain by establishing a national system for tracing and serializing pharmaceutical products and by establishing national licensing standards for wholesale distributors and third-party logistics providers. The DSCSA is a huge step forward in reducing potential threats to supply chain security and patient safety associated with pharmaceutical distribution. For the full text of the DSCSA, please review the Public Law.
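As a rough sketch of what "tracing and serializing" a product means in practice, the code below models a serialized package and its chain of ownership as plain data records. It is a conceptual toy only; the field names and identifier values are invented and do not reflect the DSCSA's actual transaction-data formats.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TransactionRecord:
    """One change of ownership for a serialized package (illustrative fields)."""
    seller: str
    buyer: str
    date: str

@dataclass
class SerializedPackage:
    product_identifier: str    # GTIN-like code; the value used below is made up
    serial_number: str
    lot: str
    expiry: str
    history: List[TransactionRecord] = field(default_factory=list)

    def transfer(self, seller: str, buyer: str, date: str) -> None:
        """Record a change of ownership as the package moves down the chain."""
        self.history.append(TransactionRecord(seller, buyer, date))

    def trace(self) -> List[str]:
        """Reconstruct the package's path from manufacturer to dispenser."""
        return [f"{t.date}: {t.seller} -> {t.buyer}" for t in self.history]

pkg = SerializedPackage("00312345678906", "SN-0001987", "LOT42A", "2026-05")
pkg.transfer("Manufacturer A", "Wholesale Distributor B", "2024-01-10")
pkg.transfer("Wholesale Distributor B", "Community Pharmacy C", "2024-02-02")
print("\n".join(pkg.trace()))
```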
{ "dump": "CC-MAIN-2020-29", "language_score": 0.954758882522583, "language": "en", "url": "https://www.citeman.com/3171-revenue-and-cost-analysis.html", "token_count": 747, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": -0.0234375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:98baab84-d436-4551-b807-7fa234414e9e>" }
Revenues and costs associated with each segment in a business organization must be identified. The issue of appropriate segment costing has generated considerable debate in recent years. It is clear that most corporate accounting systems today are inadequate for detailed segment costing. Accounting systems that were originally developed for the purpose of valuing inventory and preparing financial statements do not provide accurate information for measuring the operational performance of channels, products, territories, or other critical operating divisions. Traditional accounting systems tend to collect cost data and aggregate them into so-called 'natural' categories. Natural expenses include such categories as salaries and wages, rent, utilities, supplies, and taxes, which describe the nature of the cost item or the object of the expenditure. It is common for many such natural cost categories to be broken down by business function or responsibility center. For example, management can distinguish between salaries paid to salespeople and wages paid to warehouse employees. One marketing expert suggests seven such functional categories that reflect marketing functions: (1) direct selling costs, (2) indirect selling costs, (3) advertising, (4) sales promotion, (5) transportation, (6) storage and shipping, and (7) order processing. Most companies perform this level of natural or functional cost identification relatively well. The major problem experienced, and the subject of considerable controversy, concerns the next step: identifying the costs associated with serving specific channels, territories, and/or products. Two approaches that have each received considerable attention are the contribution margin approach and the net profit approach. Contribution Margin: A pure contribution margin approach requires that all costs be identified as fixed or variable according to the behavior of the cost. Fixed costs are those costs that do not change in the short run (management salaries, for example). Variable costs are those that change in a predictable manner in relation to some level of activity during a time period (sales commissions, for example). Normally the level of activity is sales volume. An extended contribution margin approach requires further identification of costs as direct or indirect. A direct cost is one that is directly incurred by the segment under consideration. Indirect costs (frequently called joint costs) are those incurred due to the existence of more than one segment. Stated another way, direct costs are those that would no longer exist if a specific segment were eliminated. Indirect costs would continue to exist even if that segment were eliminated. Income statements in the contribution margin method of analysis can be prepared that identify profitability for each segment in terms of fixed, variable, direct, and indirect costs. Variable cost of goods sold is directly related to the product mix sold in each channel segment; it includes only direct labor, materials, and supplies. All factory overhead costs are treated as indirect fixed costs in the contribution margin approach. Variable direct costs include such items as sales commissions, discounts, and any other expenses that may vary directly with volume within a channel. The percentage of variable direct cost to sales may vary in each channel. Fixed direct costs include such items as sales salaries and expenses (if separate sales forces are utilized) and advertising media costs.
Indirect fixed costs include all expenses that cannot easily be traced to any specific segment. Net Profit Approach: The net profit approach to financial assessment of segments requires that all operating costs be charged or allocated to an operating segment. Proponents of this approach argue that all of a company's activities exist to support the production and delivery of goods and services to customers. Furthermore, most of the costs that exist in a firm are, in fact, joint or shared costs. In order for the true profitability of a channel, territory, or product to be determined, each segment must be charged with its fair share of these costs.
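A small worked example makes the difference between the two approaches concrete. The channel figures below are invented for illustration.

```python
# One sales channel, illustrative figures in dollars.
sales = 500_000
variable_cogs = 260_000        # direct labor, materials, supplies
variable_direct = 40_000       # commissions, discounts (vary with channel volume)
fixed_direct = 90_000          # channel sales salaries, channel advertising
allocated_indirect = 80_000    # allocated share of factory overhead, HQ costs, etc.

# Contribution margin approach: charge the channel only with costs that are
# traceable to it or that vary with its volume.
variable_contribution = sales - variable_cogs - variable_direct
segment_contribution = variable_contribution - fixed_direct
print("variable contribution margin:", variable_contribution)   # 200000
print("segment (direct) contribution:", segment_contribution)   # 110000

# Net profit approach: also allocate joint/indirect costs to the channel.
net_profit = segment_contribution - allocated_indirect
print("channel net profit:", net_profit)                        # 30000
# The channel looks healthy on a contribution basis but much thinner once
# allocated joint costs are charged, which is exactly the point of contention.
```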
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9473520517349243, "language": "en", "url": "https://www.pompeopontone.com/notes/rational-expectations-theory-and-quantitative-easing/", "token_count": 1272, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.259765625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:b65f35c1-f62b-4cf9-88a7-8c6c8d852b5d>" }
In order to explain fairly simply how expectations are formed, the theory of price movements based on the rational expectations framework (John F. Muth, 1961) advances the hypothesis that expectations are essentially the same as the predictions of the relevant economic theory. In particular, the hypothesis asserts that the economy generally does not waste information, and that expectations depend specifically on the structure of the entire system. In particular, averages of expectations in an industry are more accurate than naive models and as accurate as elaborate equation systems, although there are considerable cross-sectional differences of opinion. Also, reported expectations generally underestimate the extent of changes that actually take place. In essence, expectations, since they are informed predictions of future events, are essentially the same as the predictions of the relevant economic theory. We can therefore call such expectations "rational". In other words, expectations of firms (or, more generally, the subjective probability distribution of outcomes) tend to be distributed, for the same information set, about the prediction of the theory (or the "objective" probability distributions of outcomes). This is the case because: (i) information is scarce and not wasted by the economic system; (ii) the way expectations are formed depends specifically on the structure of the relevant system describing the economy; (iii) a "public prediction" will have no substantial effect on the operation of the economic system (unless it is based on inside information). Without getting too technical and, for the sake of clarity, without debating the adopted hypothesis of normally distributed random disturbances or the linearity of the equations of the system, there are, from a purely theoretical standpoint, good reasons for assuming rationality. First, it is a principle applicable to all dynamic problems (if true). Expectations in different markets and systems would not have to be treated in completely different ways. Second, if expectations were not moderately rational there would be opportunities for economists to make profits in commodity speculation, running a firm, or selling the information to present owners. Third, rationality is an assumption that can be modified. Systematic biases, incomplete or incorrect information, poor memory, etc., can be examined with analytical methods based on rationality. The literature shows many comparisons of some of the empirical implications of the rational expectations hypothesis with those of the cobweb "theorem". Although reported expectations often underestimate the extent of changes that actually take place, several studies have shown that the empirical findings are clearly inconsistent with the cobweb theory and generally consistent with the rational expectations hypothesis. Switching from academic debate to the "real" world, Ben Bernanke in 2014 stated that "the problem with quantitative easing is that it works in practice, but it doesn't work in theory". Bernanke was referring to Wallace Neutrality, a famous result from monetary theory that asserts that the size and composition of the central bank balance sheet has no effect on inflation or employment (Cohen-Setton and Monnet 2012).
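One standard way to put the rational expectations hypothesis sketched above into symbols (the notation is the conventional textbook one, not lifted from Muth's paper):

```latex
% Subjective expectations coincide with the model's own conditional expectation:
p^{e}_{t} \;=\; E\!\left(p_{t} \mid I_{t-1}\right)
% so expectational errors are unpredictable from available information:
E\!\left(p_{t} - p^{e}_{t} \mid I_{t-1}\right) \;=\; 0
% where I_{t-1} is the information set (including the structure of the relevant
% economic model) available when the expectation is formed.
```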
In a new paper (2016), Farmer and Zabczyk bridge the gap between practice and theory, showing how a central bank can influence both inflation and employment by intervening in asset markets and by using open market operations and trades in risky assets to insure those unable to insure themselves. But how can we try to capture the real effect of quantitative easing? In a 2018 Banque de France working paper (Penalver, Hanaki, Akiyama, Funaki, Ishikawa), the authors conduct a repeated experiment in which a central bank buys bonds for cash in a quantitative easing (QE) operation in an otherwise standard asset market setting. The experiment is designed so that bonds have a constant fundamental value which is not affected by QE under rational expectations, and to ascertain whether the key result – QE raises bond prices when in the rational expectations equilibrium it shouldn't – holds when participants are exposed to the same treatment three times. It is clear from the repeated benchmark treatment (without QE) that participants can learn that prices should not deviate from the fundamental price in this setting. In the Buy&Hold treatment, in which the central bank permanently removes some bonds from the market, prices rise, statistically significantly, well above the fundamental price and stay there, even after the central bank has stopped buying. In most markets, repeated exposure only strengthens the belief that prices should rise. An interesting finding is that the central bank considerably overpays relative to the fundamental price and the most recent market price in round 1. Rather than compete this effect away (as rational expectations would imply), participants come to expect it. Indeed, by round 3 the price path in the earlier rounds significantly conditioned their price expectations. It was also noticeable that the peak price effect occurs earlier in the later rounds, as participants start to anticipate higher prices from the beginning. Price dynamics in the Buy&Sell treatment are remarkably similar to those in the Buy&Hold treatment, particularly over periods 1 to 7. The main difference occurs thereafter, as prices tend to drop to the fundamental price as the central bank sells. Overall, the central bank makes considerable losses. The ECB has flooded the markets with trillions of euros in liquidity (money base) since 2015. Since 2015, when the ECB initiated its quantitative easing (QE) programme, the money base increased by 166 percent (from 1.2 trillion euro to 3.2 trillion euro), while at the same time the money stock increased by a mere 20 percent. This leads to the conclusion that most of the two trillion euros of money base created by the ECB failed to filter through to the real economy. The burden of business cycle stabilisation in the eurozone will therefore have to come from the fiscal authorities. The question then becomes how much leeway the fiscal authorities have to perform their stabilisation responsibilities, considering that these authorities have significant levels of outstanding debt which can quickly become unsustainable.
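To make the contrast drawn above between the cobweb "theorem" and rational expectations concrete, here is a small illustrative simulation in Python. It is only a sketch: the linear demand and supply curves and all parameter values are invented for illustration and are not taken from Muth's paper or from the Banque de France experiment. Under naive (cobweb) expectations, suppliers forecast next period's price as last period's price and so make systematic forecast errors; a rational forecaster would instead use the model's equilibrium price.

```python
# Illustrative sketch (invented parameters): naive "cobweb" forecasting versus
# the rational expectations benchmark in a linear market.
# Demand: Qd = a - b*P ; supply is chosen from the expected price: Qs = c + d*E[P].

def simulate_cobweb(a=100.0, b=1.0, c=10.0, d=0.8, p0=70.0, periods=8):
    p_rational = (a - c) / (b + d)      # equilibrium price a rational forecaster would use
    path = []
    p_prev = p0
    for _ in range(periods):
        expected = p_prev               # naive forecast: last period's realised price
        supply = c + d * expected
        p_real = (a - supply) / b       # price that clears the market for that supply
        path.append(p_real)
        p_prev = p_real
    return p_rational, path

if __name__ == "__main__":
    p_star, prices = simulate_cobweb()
    print(f"rational expectations price: {p_star:.2f}")
    for t, p in enumerate(prices, 1):
        print(f"period {t}: cobweb price {p:6.2f}, forecast error {p - p_star:+6.2f}")
```

In this toy market the naive forecast errors oscillate and shrink only gradually, which is exactly the kind of systematic, exploitable error that the rational expectations hypothesis rules out.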
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9467452168464661, "language": "en", "url": "https://www.projectcubicle.com/direct-costs-and-indirect-costs-cost-classification/", "token_count": 1152, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": -0.0113525390625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:32abd31a-15d3-4857-bc42-8d68699425d1>" }
Cost classification is an important concept in budgeting, accounting and project management. Cost classification and categorization of expenses help project teams to understand what kinds of costs will be incurred during the life cycle of their project. For example, while creating the baseline budget, a cost control engineer lists the direct costs and indirect costs of the construction project. Basically, direct costs and indirect costs are two different concepts used for budget planning and accounting operations, and there are some key differences between them. It is not always easy to draw up a definitive list of direct and indirect costs, because the split depends on the nature of the product and the business. Typically, direct costs are attributable to the product, goods or service itself; they are directly related to the product. Indirect costs, on the other hand, are those required to produce the product that are not directly related to it. This article answers the question: what are the primary differences between direct and indirect costs? It is important to understand the classification while purchasing material or creating a project budget. Assume that you are the project manager of a house building project and the client makes a change request to switch the perimeter walls from reinforced concrete to masonry. The first thing you should do is prepare a unit price analysis and classify the costs according to their type. Material, labor and machinery costs are direct costs and increase as the amount of work increases. Project management and operational costs, on the other hand, are indirect costs: they do not relate directly to the amount of work, but they increase as the duration of the project increases. For a better understanding, let's analyze each concept. A direct cost is a cost that can be entirely attributed to the production of specific products or services. Direct materials, direct labor and equipment are common direct costs. In some cases, it is possible to reclassify an indirect cost as a direct cost. For instance, the salary of a manager who controls multiple concrete batch plants would be considered an indirect cost for any one of those concrete batch plants. However, that manager's salary would be a direct cost for the department comprising all of those concrete batch plants. Direct costs are often variable costs: if the number of manufactured units increases, direct costs increase, because more units need more materials and resources. For example, suppose you will produce 1,000 m3 of concrete in the batch plant. You need 300 tons of cement to produce 1,000 m3 of concrete, and 1 ton of cement costs $100, so you need $30,000 to purchase cement. This is a direct cost, and it grows as the quantity increases. Indirect costs are those which affect the whole company, such as depreciation, accounting services, general supplies and board salaries; they are not tied to only one product. Overhead costs, ongoing costs, project management costs and operational costs are indirect costs. Indirect costs are often fixed costs, but they can also be variable. For instance, the rental cost of your head office is a fixed cost: the quantity of manufactured units doesn't affect your rent. An example of a variable indirect cost is your heating and cooling costs, which can change monthly. For cost-control purposes, many companies try to limit their indirect costs as a proportion of direct costs. Let's analyze the same batch plant example.
Assume that this month you will produce 1,000 m3 of concrete. If you increase production to 1,500 m3 of concrete, this does not change your head office costs or marketing costs. Examples of Direct Costs and Indirect Costs Below are some examples of direct costs and indirect costs. The following are a few examples of direct costs: - Laborers' wages - Wood, glass, cement, concrete, rebar, etc. - Handles, locks, hinges - Direct materials - Consumable supplies - Freight in and out - Sales commissions - Royalty payments to a patent holder The following are a few examples of indirect costs: - Advertisement costs - Project management costs - Operational costs - Manager's salary - Indirect costs related to transport - Administration costs - Indirect employees' salaries - Security costs - Office costs - Selling & distribution costs What are the Primary Differences Between Direct and Indirect Costs? Below are some differences between direct costs and indirect costs. - It is easy to determine direct costs by considering the product or service. It is not as easy to identify indirect costs; detailed analysis is required. - Direct costs are attributable to a specific product, department, good or service. Indirect costs, on the other hand, are attributable to multiple products or services. - Direct costs are variable costs that change based on the quantity of a product or service, whereas indirect costs are typically fixed costs. The cost classification process is very important in project cost management. It enables you to develop an effective cost control and profit planning system. If you don't know which cost is direct and which is indirect, you cannot perform cost control effectively. It is also helpful for decision making, so it is important to have a clear understanding of cost classification. During the procurement of goods or a service, you can compare the direct and indirect costs of each option to your project separately; one option may involve more indirect cost than the other, so you can select the alternative with the lower indirect cost. Classification of expenses also has an important impact on federal tax payments, and it affects a company's cash flow and financial health. Note that this cost classification is very important for claiming tax deductions.
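As a small illustration of the classification described above, here is a Python sketch that tags each cost item as direct or indirect and totals them for the batch plant example. Only the cement quantity and price come from the article; the labor, rent and project management figures are invented for illustration.

```python
# Illustrative sketch: classifying costs as direct or indirect and totalling them.
# Cement quantity and price follow the batch-plant example above; the other
# figures (labor, rent, project management) are invented for illustration.

from dataclasses import dataclass

@dataclass
class CostItem:
    name: str
    kind: str          # "direct" or "indirect"
    unit_cost: float   # $ per unit of the quantity below
    quantity: float

def total(items, kind):
    return sum(item.unit_cost * item.quantity for item in items if item.kind == kind)

items = [
    CostItem("cement (tons)", "direct", 100.0, 300),             # 300 t for 1,000 m3 of concrete
    CostItem("labor (hours)", "direct", 25.0, 400),               # invented figure
    CostItem("head-office rent (months)", "indirect", 5_000.0, 1),
    CostItem("project management (months)", "indirect", 8_000.0, 1),
]

direct_total = total(items, "direct")       # grows with the amount of work
indirect_total = total(items, "indirect")   # roughly fixed for the period
print(f"direct costs:   ${direct_total:,.0f}")
print(f"indirect costs: ${indirect_total:,.0f}")
print(f"indirect share of total: {indirect_total / (direct_total + indirect_total):.0%}")
```

Producing more concrete would raise the direct total line by line, while the indirect total would stay roughly the same for the month, which is the distinction the article draws.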
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9119728207588196, "language": "en", "url": "https://www.routledge.com/The-Rise-of-the-Corporate-Economy/Hannah/p/book/9780415489478", "token_count": 259, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.28125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:63301d75-6d04-4b90-b4f0-5e75a44e6faa>" }
First published in 1976, this much acclaimed book looks at the story of how today's large corporations have superseded the small competing firms of the nineteenth century. The long-run analysis confirms that the crucial periods in the formulation of the modern corporate system were the 1920's and 1960's. The merger wave of these decades was associated with a desire to improve the efficiency of Britain’s industrial organization, and the author shows that it was in a large measure responsible for the trend improvement (by historical if not international standards) in Britain's growth performance. Students of business, economic history and industrial economics will all welcome the return to print of a notable contribution to the continuing debate on the evolution and control of the corporate manufacturing sector. Table of Contents 1. Business: history and economics 2. The industrial inheritance; the growth of firms to 1914 3. The rationalization movement 4. Government: trustbuster or promoter 5. Capitalist ownership and the stock market 6. Management and the limits to growth 7. The rise of the corporate economy: dimensions 8. The rise of the corporate economy: directions 9. From the 1930's to the 1950's : continuity or change? 10. The modern corporate economy 11. The upshot for welfare
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9605460166931152, "language": "en", "url": "https://www.techwell.com.au/telstras-fixed-line-terminology-explained/", "token_count": 690, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.053466796875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:f5ebf04f-2a68-4da1-8680-d4f2cf478683>" }
With our ever-changing world, there are terms thrown around that not everyone understands: PSTN, ISDN, NBN, FTTP, FTTN and so on. This blog will help explain some of these terms and clear up the confusion associated with them. Phone lines have grown and changed with technology as the years have gone past, but they are still fundamentally the same as the old manned switchboards of the past; now the technology does the switching. This brings us to the first acronym. PSTN, or Public Switched Telephone Network, lines are the basic everyday phone lines we all use at home. They connect us to ADSL (Asymmetric Digital Subscriber Line) services and to our families across the world. Commonly referred to as analogue lines, they are the most basic iteration of phone lines we use. ISDN, or Integrated Services Digital Network, lines are also known as digital lines or BRI (basic rate) and were the next step in internet access after dial-up. They are normally supplied as BRI2, which is 2 lines connected through a white box referred to as an NT1 (Network Termination Type 1); it supplies 2 phone lines over a single copper pair, where a basic analogue phone line only gives 1. ISDN also comes in other derivations and has other features like DID (direct in dial) or 100 Number In-dial – giving customers a series of 100 numbers. The National Broadband Network (NBN) is the network the Australian government is financing, initially as FTTP (Fibre to the Premises); however, it is now being rolled out as FTTN (Fibre to the Node) as a cost-saving measure. The core difference between these services is that the fibre optics used on NBN are no longer run into the house but to a centralised point, with the connection then delivered into homes via existing copper cabling. Speeds are still fundamentally the same, but these services are now connected as VDSL (Very-high-bitrate Digital Subscriber Line) instead of fibre. Phone lines on NBN are supplied as SIP (Session Initiation Protocol) or VOIP (Voice Over Internet Protocol) services. With NBN rolling out at a rapid rate across the country, there has never been a better time to submit an Expression of Interest. One of our team will be in touch to let you know if NBN is available to you. Plus, you will go into our monthly draw to win a $350 Visa card. PSTN lines are quite often used as outgoing call lines, as most carriers have unlimited calling plans on these lines. ISDN lines are quite often used as incoming lines due to the ability to specifically program them for different functions. SIP and VOIP lines are internet-based lines and are an emerging technology that may just give us the best of both worlds. All in all, it comes down to this: all lines serve the same purpose for you and your business – they keep you connected with customers and suppliers. Like cars, there are a lot of different makes, models, engines, fuels etc., and we may not understand those differences, but they keep you in touch with your customers, friends and family. So let us do what we do best and take the hard work out of the lines, so you can do what you do best and focus your time and energy on making your business a success.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.97579425573349, "language": "en", "url": "https://www.cashinasnap.com/blog/personal-finance-basics-to-tackle-debt-an-overview", "token_count": 826, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.07763671875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:57f0602f-1260-4779-a401-f44a3bf1eb0c>" }
Personal finance is something that many need to have a grasp on in order to better manage their finances and debts. It is typically neglected because of the complicated work and confusing words involved. However, you can get your personal finances at its best by just simply following basic financial guidelines and properly tackle your debt without having to feel frustrated or angry. Here's an overview of personal finance basics to help lower that debt!Basics of Personal Finance In order to have a good understanding of personal finances and debt, it is good to know the terminology used by finance professionals. You're already aware of most of the wording they use because it is also used in tax papers and by CPA's such as liabilities and assets. Both of these terms apply to how much you are able to save compared to how much you spend. When someone accrues debt, this is considered a part of their liabilities and that can mean owing too much in the long run. So, when it comes to the first step of avoiding debt, there are few things to understand: the cause of debt, the effects debt has on your life, and how debts can be avoided. A simple tax equation really helps during this process and that is: Assets = Liabilities + Capital. To expand, if your liabilities are larger than your assets then you amass debt. It is also good to know the equation of Net Income = Gross Income - Expenses. When your expenses are exceeding your gross income then your numbers are going to dip into the negatives. Knowing these two equations are the basics of personal finance. Once you have exact numbers from these equations, you can begin paying off debts and start actually saving your money for those rainy days.How to Avoid Debt Now you know the basis of personal finances and two important equations for managing them.Now figure out how to avoid bad debt and start clearing the current debt you have. It will take time but it is absolutely doable with the right kind of budget and know-how.Here are few things to know about properly avoiding debt: It is best to stay away from things such as credit cards with high limits and purchases on accounts because they can immediately lead to higher debt. Having these limitless opportunities to spend means losing control of your finances and placing yourself back into a situation with debt. If you can, only have credit cards with lower limits and make sure to only use them when you absolutely know you can pay back the amount on time. Smaller purchases like gas, for example, are best for credit cards. And as for purchases on account, they should be avoided altogether if possible. Always remember that if you can't pay off something completely or accrue debt, it is going to have interest attached to it. The longer you wait to pay it off, the more that interest is going to add up. So whatever debt you have, it should be paid off as soon as possible. On that same note, paying off interest is best with cash or further credit since it is not as beneficial as completely paying off debt. And the most important one to remember is that paying off more debt than you can financially handle is not ideal. While it may seem like a good idea to pay off as much as possible in as little time as possible, it can cause a bad financial situation that leads to even more issues When you think you won't be able to pay something off in a timely manner, then it is best to try and avoid purchasing it or finding a new way to pay it off. 
Bill and tax collectors, for example, are willing to work with you in order to set up a payment plan that can work with your budget. Regardless, there are options for you out there so you can stop feeling the pressure of debt and get your personal finances in order. Another route to consider is also taking out a loan. You don't even have to leave your home in order to so anymore. But it is still wise to keep in mind the basics of personal finance and what you have learned about managing debt.Every little bit can help to take off the financial burden.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9524029493331909, "language": "en", "url": "https://www.risdall.com/our-thoughts/strategy-101-consumer-demand-dictates-your-business-strategy-not-sales/?shared=email&msg=fail", "token_count": 795, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.076171875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:395d8f30-7473-43fc-a155-8153410638f3>" }
Companies are under constant pressure of adapting their business or closing the doors for good. One company learned first-hand how avoiding change and ignoring market trends can lead to an unfortunate demise. Kodak: The Inventor of the First Digital Camera Founded in 1888, Kodak dominated the market for cameras, film, chemicals and paper for nearly a century. The company represented everything photography was in people’s minds – the fun, the memories, the technology and the Kodak moment. In 1975 Kodak was a pioneer in creating the first digital camera. Unfortunately, their team underestimated consumers’ desire for a convenient, easy-to-use camera that didn’t require film. Instead of releasing the digital camera and introducing new technology to the world, Kodak chose to keep the technology under wraps for fear that it would hurt their film sales. Film was Kodak’s most profitable product, so when the executive team made the decision to focus on film and keep the digital camera an internal project, they thought they were protecting the company’s future. However they learned the hard way that companies can’t ignore trends and consumer demand. If you don’t Do It, Your Competitors Will Around the same time Kodak was improving its digital camera technology, two other camera manufacturers happily stepped up to the plate and released digital cameras for sale to the public. Sony and Canon started selling cameras in the early 1980s. By the time the team at Kodak realized they needed to enter the digital age whether they wanted to or not, the competition was too fierce for Kodak to catch up. Kodak’s marketing myopia was the start of a downward slope for the company. Kodak has since attempted to regain its market share and restore its public perception, but its strategy misfire with the digital camera has left a scar that can’t be ignored. When Sales Get Tough, It’s Time for a New Business Strategy Kodak viewed itself as being in the business of film, so it wanted to protect the stable revenue from film sales. The company might have had a different experience had executives viewed themselves as being in the business of telling stories and providing customers with a way to share their memories. By asking the right questions and paying attention to consumer desires, companies can develop effective strategies that help navigate through uncertain times. When organizations are having trouble meeting sales goals, it’s wise to explore one of the below business strategies for growth: - Market penetration: Either increase market share by gaining more buyers, or increase product usage by encouraging current customers to buy more products. - Product development: Improve an existing product or extend an existing product line to encourage customers to make more purchases. - Market development: Offer current products to new markets by expanding geographically or targeting new audiences. - Diversification strategies: Create a completely new product targeted at a new audience. These strategies were created to help companies develop strong business models by aligning their company needs with consumer demand. If Kodak had embraced a product development business strategy to sell the digital camera to its current customer base, then it might still be the photography giant it once was. When trends change, pay attention to what your target market is saying and adapt your business accordingly. Kodak learned the hard way, but you don’t have to. 
If you’re seeing a plateau with sales or a shift in market trends, give us a call at (651) 286-6700 to talk about a business strategy that could be the difference between making it and breaking it for your company. - MASSolutions, “KODAK’S MARKETING MYOPIA“ - Forbes, “Kodak Failed By Asking The Wrong Marketing Question“ - Photo 1: Stockmonkeys.com
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9652137160301208, "language": "en", "url": "http://discoveringthenewamericandream.eiu.com/experiencing-the-dream/", "token_count": 298, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0181884765625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:18e315b2-a1fe-43c6-9437-8d3109920f26>" }
Experiencing The Dream Compared with the most popular definitions of the American Dream, such as “satisfactory standard of living” and “freedom to live on your own terms”, very few Americans believe the American Dream is defined by “educational achievement”. However, having had a college education has an impact on how fully people feel they are experiencing the Dream. Those with degrees are more likely to say they are fully living the American Dream (65%) and are financially satisfied (63%) than those with no college or only some college (28%, 30%). Today, nearly half (47%) of Americans say they have personally experienced educational achievement. That figure grows significantly with household income (31% in the lowest income brackets, and 69% in the highest). Indeed, America may be home to some of the world’s best universities, but average tuition fees have surged 40% in the past decade for full-time students at public four-year colleges, leaving many students graduating with considerable debt. Although educational achievement is not a central tenant of the American Dream, a majority of Americans (76%), regardless of age, ethnicity and income, agree that education is the catalyst for achieving the American Dream, and that education and skill development top the list of important needs to get ahead in the future. Sadly, the rising cost of education and burden of college debt is generally perceived to be the biggest limitation to the American Dream for future generations.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9390945434570312, "language": "en", "url": "https://housing.wiki/wiki/Filtering", "token_count": 2848, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.07275390625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:027f2cd2-66ee-41f5-980c-d9c8918cbdaa>" }
Filtering is the process of housing units changing price over time and therefore tending to be inhabited by people of different income/wealth than previously. A related but distinguishable concept is "chain of housing turnover", in which newly available homes (e.g. new construction) are moved into by households who formerly did or otherwise would occupy another existing local home; this home in turn is left available for another household to move into, etc. Housing turnover research such as [Kristof 1965] and [Mast 2019] suggest that new market-rate housing has a strong effect in creating new housing opportunity to other households far down the income scale. - 1 History of concept - 2 Kristof (1965) NYC "chain of housing turnover" study - 3 Rosenthal (2014) US-wide empirical study of filtering - 4 Mast (2019) chain-of-turnover study - 5 Alternate terms / way of explaining filtering - 6 Other ways new housing can affect affordability - 7 References History of concept Originally and most commonly, 'filtering' it refers to "filtering-down," whereby housing that was built for or occupied by a higher-income group becomes less expensive and occupied by households of lower income. Ratcliffe defined it as "the changing of occupancy as the housing that is occupied by one income group becomes available to the next lower income group as a result of decline in market price." However, Lowry analyzed filtering as change in value which could be up or down: "I propose to define 'filtering' simply as a change in the real value (price in constant dollars) of an existing dwelling unit. ... To analyze filtering as a market process, its causes and consequences, four basic constructs should be kept in mind: (1) An array of all dwelling units according to their real values ... (2) An array of all dwelling units according to their quality (by some quantifiable measure other than price). (3) An array of all households according to their real incomes ... (4) An array of supply prices of new dwelling units in each quality class...." later the concept of gentrification has been described as "filtering up." [Goetze 1979] At least back to the 1940s in the United States, the concept of filtering has been used to support the argument that private, or market-rate, housing development can meet housing needs of much of the population, even if it is often originally built for or occupied by higher-income residents. While often made, the argument mostly lacked good empirical proof until Rosenthal's study (see below). Kristof (1965) NYC "chain of housing turnover" study from Frank S. Kristof, Frank S. "Housing Policy Goals and the Turnover of Housing." AIP Journal (now JAPA: Journal of the American Institute of Planners), Volume 31, 1965, Issue 3, Pages 232-245: In the summer of 1963, a project was completed that illustrated the chain of housing turnover generated by people moving into newly constructed units.2l Starting with an interview sample of 64 initially-occupied new units, the survey required a visit to each housing unit left vacant by the household that occupied the new unit. The successor household (if there was one) was interviewed, and the characteristics of its present as well as that of its previous housing unit was obtained. Its previous housing unit was then visited and the new household occupying that unit was interviewed in the same manner. The chain was followed until it was broken. 
This occurred when a household in the sample had not left a unit vacant in the City or when the unit in the sample was found to have remained vacant, was demolished, or had been otherwise removed from the market.‘* Although no claims are made about the representativeness of the sample, the implications that could be drawn from the survey data were quite dramatic. It was found, for every 10 newly constructed units in the sample, 24 families were able to make voluntary and presumably more satisfactory adjustments in their housing circumstances-10 by moving into the new units and 14 by moving into existing units made vacant by the housing turnover that ensued. The data further indicated that the chain of housing moves generated through new construction resulted in an improvement of the housing status of nearly all the families involved. Rosenthal (2014) US-wide empirical study of filtering Substantial evidence for filtering effects was presented in (Stuart Rosenthal, 2014), "Are Private Markets and Filtering a Viable Source of Low-Income Housing? Estimates from a 'Repeat Income' Model." in American Economic Review. https://drive.google.com/open?id=1tDFiyW5rVEUtx9rceWZjTDJPODX3NeEO. He dmonstrated that filtering occurs widely, although significantly less so in recent decades in high-cost, supply-constrained areas such as the Bay Area and Boston. Comment from Dan Immergluck, 2 January 2017: "I read Rosenthal piece as showing that incomes decline annually 7 times as fast as rents -->rent-to-income ratio increasing over time. And given high rents of new units being produced, I am not sure how this can be viewed as providing "affordable housing" to lower-income folks." Comment from YIMBYwiki, 2 January 2017: "counterpoint to Dan: even if rent:income ratio or price:income ratio increases, the unit may still be affordable to the tenant. Higher-income/wealth renters typically spend lower % of income on housing, so the ratio might often increase a lot and still be affordable." Comment from YIMBYwiki, 2 January 2017: also, fwiw, the households occupying down-filtered units are affording them at least in the sense of being able to move into them. They could be cost-burdened, but otoh the fact they could get the unit suggests they met some income:rent standard of the landlord. Comment from Dan Immergluck, 2 January 2017: "In most cities the bulk of folks under 50% AMI cost burdened. That means most landlords are accepting high rent to income ratios. Often over 50%. Look at recent work by @jenny_schuetz. Esp at what this means in terms of residual incomes." Mast (2019) chain-of-turnover study Mast, Evan. "The Effect of New Market-Rate Housing Construction on the Low-Income Housing Market." Upjohn Institute Working Paper 19307. July, 2019. https://www.dropbox.com/s/zuzxvupdbqcvhql/Mast%20Luxury%20Housing.pdf Alternate terms / way of explaining filtering "Taking pressure off the existing housing stock is normally how I phrase it. " - Laura Clark, YIMBY Action. 7 Dec 2017. Sightline "Musical Chairs" concept and video. Other ways new housing can affect affordability Filtering is not the only way in which new market-rate housing may address affordability for other income groups, however. Other reasons are 1) fungibility/substitution, 2) adaptation; 3) helping to subsidize other housing. 1) Fungibility / substitution (see also "chain of housing turnover" above) The degree to which different housing units, or any good, may equivalently serve someone's needs is sometimes called fungibility, or substitutability. 
See Wikipedia: fungibility. In the case of market-rate housing, it is likely that often there is enough fungibility between available housing units that the buyer/renter, if that MR housing had not been developed, would have sought out another unit in the area, excluding or potentially displacing another, probably lower-income person, from that unit. 2) Adaptation Second, housing may be used differently or converted from its original intended use, possibly serving other and lower-income residents. Houses and apartments become shared between multiple tenants, either informally as house- or apartment-shares, or possibly by being divided into multiple legal residences. This can occur even with the first tenants of new housing, for example in San Francisco where vacant new "luxury" apartments are sometimes divided and shared between many tenants. 3) Funding for other housing New market rate housing projects may include or generate funds for affordable housing through Inclusionary housing programs, through Impact fees, or more generally by increasing city and state tax bases to allow funding for other housing. - Cortright, Joe. Urban myth busting: Why building more high income housing helps affordability. City Observatory. 20 Feb 2017. [note, article states "the median value of rental housing declines by about 2.2% per year" citing [Rosenthal 2014], this seems to be a mistake. Dan Immergluck notes "Rosenthal paper says that 2.2% per year decline is in INCOMES of the tenants, NOT price/rent of unit. The decline in RENTS is closer to 0.3% per year." - Goetze, Understanding Neighborhood Change, 1979. Apparent first use of "filtering up" as a term or definition for gentrification. - Gray, Nolan. "How Luxury Units Turn Into Affordable Housing: Building more high-end apartments doesn’t sound like a quick fix for the affordable housing crisis. But maybe you just have to look harder." Citylab, June 5, 2019. [discussing Mast 2019 paper]. https://www.citylab.com/perspective/2019/06/housing-supply-debate-affordable-home-prices-rent-yimby/591061. - Hertz, Daniel. "What filtering can and can’t do." City Observatory, 10.11.2015. http://cityobservatory.org/what-filtering-can-and-cant-do/. - Kristof, Frank S. "Housing Policy Goals and the Turnover of Housing." AIP Journal (now JAPA: Journal of the American Institute of Planners), Volume 31, 1965, Issue 3, Pages 232-245. https://doi.org/10.1080/01944366508978170. PDF: https://drive.google.com/open?id=1xaM4ODiP9KUJH-Gjgqr53Xzl2tWhrlRg. - Lehner, Josh. “Why Housing Supply Matters.” Oregon Office of Economic Analysis (State of Oregon). December 14, 2017 “Last week, I was a panel member for the YIMBY (Yes In My Back Yard) breakout discussion at the Oregon Leadership Summit. The group overall discussed the lack of supply, the importance of affordability, some regulations, market conditions, public policies and the like. It was a wide-ranging and informative session, if I may say so myself. Today I want to recap a few things, and take another stab at visualizing the importance of housing supply and the role of filtering.” “Housing does filter. New construction is always expensive and always aimed at the upper third or so of the market. That said, over time as housing depreciates, it does become more affordable. This filtering does not happen overnight. It is a long-run process. Filtering is also the major way to provide reasonably priced workforce housing for those making in and around the median family income. 
There is not nearly enough public money to fund the affordability gap, “Finally, what follows is another effort to show how filtering works in the real world. What the charts below show is the current housing stock in the Portland metro based on the recently released 2016 American Community Survey data. The first chart shows the housing stock divided into thirds based on home values or monthly rents*. Then we look at when these units were built by decade.” - Lowry, Ira S. "Filtering and Housing Standards: A Conceptual Analysis." Land Economics, Vol. 36, No. 4 (Nov., 1960), pp. 362-370 - Mast, Evan. "The Effect of New Market-Rate Housing Construction on the Low-Income Housing Market." Upjohn Institute Working Paper 19307. July, 2019. https://www.dropbox.com/s/zuzxvupdbqcvhql/Mast%20Luxury%20Housing.pdf - Ratcliff, Richard Updegraff. "The Economics of the Housing Problem: Preliminary Draft." (1945) - Ratcliff, Richard Updegraff. Urban Land Economics (1949). - Rosenthal, Stuart S. . "Are Private Markets and Filtering a Viable Source of Low-Income Housing? Estimates from a 'Repeat Income' Model." American Economic Review, 104(2): 687-706 (February 2014). DOI: 10.1257/aer.104.2.687. Preprint (2013): http://faculty.maxwell.syr.edu/rosenthal/recent%20papers/Is_Filtering_a_Viable_Source_of_Low-Income_Housing_%206_18_13.pdf.https://drive.google.com/open?id=1tDFiyW5rVEUtx9rceWZjTDJPODX3NeEO. - Zuk, Miriam, and Karen Chapple. "Housing Production, Filtering and Displacement: Untangling the Relationships." Berkeley Institute of Government Studies - Research Brief, May 2016.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.8609521389007568, "language": "en", "url": "https://investinganswers.com/articles/calculating-internal-rate-return-using-excel-or-financial-calculator", "token_count": 783, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": -0.051025390625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:7798732b-37cd-4b34-b899-f44191d446ec>" }
Calculating Internal Rate of Return Using Excel or a Financial Calculator For both examples, we'll use the following data set: Assume Company ABC wants to know whether it should buy a $500 piece of equipment. It projects that it Year 1, $200 in Year 2, and $300 in Year 3. Now you can calculate the IRR of the proposed project.increase profits by $100 in #-ad_banner-#Using a financial calculator: : the steps in this tutorial outline the process for a Texas Instruments BA II plus financial calculator. 1. Enter theopen the cash flow register. The calculator should read CF0=, which tells you to enter the cash flow for time 0.values for each period into the calculator's cash flow register. This is done by pressing the Cash Flow [CF] key to 2. Next enter the cash flow values for the subsequent periods. This is done by hitting the down arrow once. The calculator should read CF1=. Type in the amount for the first cash flow, 100, and hit [ENTER]. The calculator should now say C01= 100. To enter cash flow from Year 2, hit the down arrow twice. The calculator should read CF2=. If it says F1=, hit the down arrow one more time. Type in the second year's cash flow, 200, and hit [Enter]. The calculator should read CF2= 200. Hit the down arrow twice again and do the same thing for the third cash flow period, CF3. If the data set has more periods, follow the same procedure for C04 and so on. 3. Once the cash flow values have been entered into the calculator you are ready to calculate the IRR. To do this press the [IRR] key. The screenread IRR= 0.000. To display the IRR value for the data set, press the [CPT] key at the top left corner of the calculator. If you have followed this process correctly, the calculator display the correct IRR. For our example, the IRR is 8.208264%. Using Microsoft Excel: Finding the IRR using Excel is fairly straighforward. 1. First, type the intial cash flow into any cell on the spreadsheet. Keep in mind this initial investment has to be a negative number. Using our original example, type -500 into the A1 cell of the spreadsheet. 2. Next, just like the calculator, youtype the subsequent cash flow values for each period into the cells directly under the initial investment amount. Following our example, type 100 into cell A2, 200 into cell A3, and 300 into cell A4. 3. Finally you are ready to calculate the IRR. To instruct the Excel program to calculate IRR, type in the function command "=IRR(A1:A4)" into the A5 cell directly under all the values. When you hit the enter key, the IRR value, 8.2%, should be displayed in that cell. This same procedure can be followed for any data set if the cash flow values are listed one after another in a column directly under the intial investment amount. You would thenthe range of cells in between the parentheses of the IRR command function. Personalized Financial Plans for an Uncertain Market In today’s uncertain market, investors are looking for answers to help them grow and protect their savings. So we partnered with Vanguard Advisers -- one of the most trusted names in finance -- to offer you a financial plan built to withstand a variety of market and economic conditions. A Vanguard advisor will craft your customized plan and then manage your savings, giving you more confidence to help you meet your goals. Click here to get started.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.965180516242981, "language": "en", "url": "https://olduvai.ca/?tag=milton-friedman", "token_count": 367, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.455078125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:7575cfd6-a249-4214-a127-350b95fd6fb0>" }
Whenever the so-called economy shows signs of weakness most experts are of the view that what is required to prevent the economy sliding into recession is to boost the overall demand for goods and services. If the private sector fails to increase its demand then it is the role of the government to fill this void. Following the ideas of Keynes and Friedman, most experts associate economic growth with increases in the demand for goods and services. Both Keynes and Friedman felt that the great depression of the 1930’s was due to an insufficiency in aggregate demand and thus the way to fix the problem was to boost aggregate demand. For Keynes, this could be achieved by having the federal government borrow more money and spend it when the private sector would not. Friedman on the other hand advocated that the Federal Reserve pump more money to revive demand. There is however never such a thing as insufficient demand as such. We suggest that an individual’s demand is constrained by their ability to produce goods. The more goods that an individual can produce the more goods he can demand i.e. acquire. Note that the production of one individual enables him to pay for the production of another individual. (The more goods an individual produces the more of other goods he can secure for himself. An individual’s demand therefore is constrained by his production of goods). Observe that demand cannot stand by itself and be independent – it is limited by production. Hence, what drives the economy is not demand as such but the production of goods and services. In this sense, producers and not consumers are the engine of economic growth. Obviously, if he wants to succeed then a producer must produce goods and services in line with what other producers require ie. consume. …click on the above link to read the rest of the article…
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9628401398658752, "language": "en", "url": "https://omcsandiego.org/high-school-college-prep-tips/recent-change-to-the-financial-aid-process/", "token_count": 763, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0306396484375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:645808f4-962d-480e-8eb0-3855707740b4>" }
A recent change to the financial aid process should make things a little easier for parents in the coming years. It may also change how parents manage their finances starting on January 1st, 2016. That’s only two weeks away! Families of high school seniors often find themselves having to estimate their taxes to meet impossibly early financial aid deadlines. If their estimates are wrong, their aid package can end up being worse than they expected or even disappear. In a great development for parents, the federal government will soon make the process easier by requiring parents to use two-year-old income tax returns rather than the latest (and often unfinished) returns. Take a quick look at the change below and consider whether it will impact you and your family. In February of a student’s senior year, parents were required to enter the prior calendar year’s tax information (Jan 1st of junior year to Dec 31st of senior year) into a FAFSA form in order to gauge how much financial aid (if any) they were eligible for. This was a nightmare. By February, most families didn’t have the tax information they needed to fill out the FAFSA form accurately (after all, the federal tax deadline is in mid-April). Some parents were forced to submit paperwork with estimates of their final tax numbers. Not only did this timing cause a lot of angst and running around, but it left families with an incomplete picture of how much aid (if any) they would receive to pay for college. Now, parents can use what’s called their “prior-prior year tax return” to fill out the FAFSA forms. That is, they can use the calendar year that straddled their child’s sophomore/junior year versus the junior/senior year. Here’s an article that takes a deeper dive into the new rule and its implications: https://www.insidehighered.com/views/2015/09/17/essay-prior-prior-ppy-year-data-free-application-federal-student-aid-fafsa What does this mean for parents with freshman and sophomores in high school? When this year’s sophomores (Class of 2018) apply to college in their senior year, parents will submit information from the calendar year 2016 tax return. When this year’s freshmen (Class of 2019) apply to college in their senior year, parents will submit information from the calendar year 2017 tax return. Why might you care? It used to be that January of the child’s junior year would be the earliest time that colleges could “look back” at your financial situation to assess financial aid eligibility. In other words, if a family planned to “manage their income” in order to present a more “needy” profile, they started in the middle of their child’s junior year. Now, they must do so in the middle of their child’s sophomore year. For instance, if you decided that it wasn’t worth it for both parents to continue working because it generated too much AGI (adjusted gross income) that effectively went directly into the college coffers, then one parent should stop working in the middle of their child’s sophomore year (not junior year), because that is the year that will be evaluated on the FAFSA form. If you think this development might impact your financial planning, please seek expert advice from a college financial aid counselor. The rules can be complicated and every family’s case is unique. Never stop preparing,
{ "dump": "CC-MAIN-2020-29", "language_score": 0.8452954292297363, "language": "en", "url": "https://podbay.fm/podcast/575817086", "token_count": 1892, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0771484375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:4bb3377d-5af8-4e13-8531-32d091f82869>" }
Law students are routinely taught how to gather facts, interpret legal texts, and apply legal rules to established facts. These techniques are and will always be important to all lawyers, regardless of their position and practice. However, they do not suffice. Lawyers in almost every area of practice (litigation, corporate, government, public interest, etc.) must be familiar with analytical concepts and methods that go beyond fact gathering and interpretative legal skills in order to be successful. This course covers the basics of the methods. It will provide you with a fundamental understanding that allows you to apply them when necessary in legal practice. This course is designed to be fully accessible to those with no prior quantitative training or background in the subjects covered. However, a high school level of mathematics is required. Vorlesung Methodenlehre, Contracting - Example: Your next tenancy agreement; Why exchange goods and services?; The Edgeworth box: Tea or coffee?; Why contracts?; Complete contracts; Contracts and the law; Adverse selection vs. moral hazard; Adverse selection: The reaction cycle; Moral hazard; Resolving disputes; Contract checklist; Production contracts; Examples for flat-free and cost-plus contracts; The hold-up problem; Sale and lease of property; Loan contracts; Principal-agent contracts; Example: Bonuses for executives. Vorlesung Methodenlehre, Game Theory - Decision theory vs. game theory; A typical legal application of game theory; Some applications of game theory in legal settings; Representation of games; Normal-form games; The prisoner's dilemma; Prisoner's dilemma in litigation; Solution concepts for the prisoner's dilemma; Discoordination game: Matching pennies; A clever takeover bid; Dominant strategy equilibrium; Iterated games; The iterated prisoner's dilemma; Social benefits or social losses?; Achieving the cooperative outcome; N-person prisoner's dilemmata and common goods; Coordination games; Extensive-form games; The ultimatum game. Vorlesung Methodenlehre, Individual Finance - What is finance?; Individual Finance; Finance and Lawyers; The time value of money; The time price of money;Compound interest; The effect of compound interest; Example: An investment decision; Example: Refined litigation risk analysis; Inflation; The Rule of 72; Net present value of an investment; The concept of Annuities; Internal rate of return; Risk and return; Mortgages as means of individual finance; The concept of a mortgage; Student debt in the US; Financial distress: Personal bankruptcy; Financial innovations before the crisis; The financial crisis - how come?; Global CDO Issuance Volume 2000 - 2012; Legal responses to the finacial crisis; Too big to fail?. Vorlesung Methodenlehre, Decision Theory - Fundamental concepts; Expected value; Cumulative Probabilities; Serially cumulative probabilities; Decision trees; TreeAgePro and other software for litigation risk analysis; Difficulties in building the tree; Discounting and risk preferences; Risk aversion; Sensitivity analysis; Challenges to the rationality assumption. Vorlesung Methodenlehre, Due to software failure this podcast covers only the first part of unit 4; we are currently working on restoring the second part. 
Accounting - Accounting and Lawyers; Why Accounting?; Financial Statements; Financial statements in the media; The fundamental accounting equation; The structure of a balance sheet; San Francisco Coffee Company GmbH; The income statement; Sample income statement; Alternative design of the income statement; Income statement Siemens AG; Cash flows; Sample cash flow statement (IAS 7). Vorlesung Methodenlehre, Corporate Finance - What is corporate finance?; The fundamental accounting equation; Corporate finance and lawyers; The spectrum of financial products; Why do firms emerge?; The theory of the firm; Insourcing or outsourcing - make or buy?; Insourcing at General Motors; Why is there limited liability?; Separation of ownership and control; Shareholder coordination; Managers as agents for the stockholders; The case of Ernst Lieb; Diverging interests; Principal-agent theory; The case of Rajat Gupta; Countermeasures for principal-agent problems; The market for corporate control; Agent costs of debt; The efficient-market hypotheses; Behavioral finance; The dot-com bubble; Company valuation. Vorlesung Methodenlehre, Law and Economics I - The basic idea of Law and Economics; Positive and normative Law and Economics; History of Law and Economics; Importance for jurists and lawyers; Property rights; Bargaining and efficiency; The externality problem; Solutions to the externality problem; Which legal measure to take?; The Coase theorem; The Coase's invariance thesis; Coase's efficiency thesis; Pareto efficiency and Kaldor-Hicks efficiency; Coase's assumptions; Challenges to the Coase theorem; Transaction costs; Consequences of high transaction costs; The (political) program of Law and Economics. Vorlesung Methodenlehre, Microeconomics II - Microeconomics II - Imperfect consumer information; Adverse selection: The reaction cycle; Government responses; Example: Practice of mediation; Monopoly and related market behavior; Relevance for lawyers; Types of monopolies; Monopoly and IP; How a monopolist sets prices; The monopolist's calculation; Price discrimination; Government policy and monopoly; High profile cases; Natural monopolies and price regulation; Pricing in a duopoly; Oligopoly and monopolistic competition; Public goods; Welfare economics. Vorlesung Methodenlehre, Microeconomics I - Microeconomics in the news; What is microeconomics about?; Microeconomics vs. macroeconomics; Microeconomics and lawyers; Individual decisions on the consumption of goods; Basic demand theory; Elasticities; Consumer choice; Opportunity costs; Supply and demand curves; Supply and demand; Competitive markets; Social welfare; Consumers' surplus and producers' surplus; Why competition?; Total surplus as a measure for social welfare?; Government intervention; Price floor and price ceiling; Commodity taxation leading to a deadweight loss; Distributive justice; Lessons for lawmakers. Vorlesung Methodenlehre, Law and Economics II - Information productivity; Information duties; Complete contracts; Incomplete contracts; Contractual risk allocation; Risk allocation schedule; Utility function of a risk-averse individual; Contractual risk allocation; Risk allocation and legal practice; Sanctions for breach of contract; Efficient breach of contract; Specific performance and efficiency; Restitution damages and efficiency; Reliance damages and efficiency; Expectation damages and efficiency; Sanctions for breach of contract; Contract renegotiation. 
Vorlesung Methodenlehre, Statistics I - Statistics and lawyers; What do statisticians do?; Descriptive vs. inferential statistics; Terms used for descriptive statistics; A multivariate date set; Building a histogram; Numerical descriptors; Inferential statistics; Variables used in inferential statistics; Deviations and distributions; The standard normal distribution; The Gaussian distribution; Z-scores and z-table; Calculating with a z-table; Hyothesis testing; Error types; Estimation; How to perform empirical research; The difference between coincidence and correlation; Negative Correlation; The difference between correlation and causation. Vorlesung Methodenlehre, Statistics II - Methodology of econometrics; Regression analysis; OLS regressions; Minimizing ordinary least squares; Bivariate and multivariate statistics; Regression analysis in practice; Event studies; Cumulated average abnormal returns; Survey data and validity; Stratified random sampling; Misleading averages: Gesell's norms; Percentages and percentage points; How to tune up a graph; Statistical misinterpretation; Marketing with statistics; Suspicious wording; Statistical Don'ts. Vorlesung Methodenlehre, Law and Economics IV - Dispute resolution and legal practice; Economic perspective on litigation; Corrective policies; Settle or litigate?; The English rule; The American rule; The zone of possible agreement; Effects of risk aversion; Prisoner's dilemma in litigation; The rationality assumption; Optimistic overconfidence; A hypothetical value function; Other challenges to the rationality assumption; Alternative dispute resolution; Methods of alternative dispute resolution; Dispute resolution clauses; Private and public effects of ADR; Settlement statistics. Vorlesung Methodenlehre, Law and Economics III - Tort law and legal practice; The crisis of tort law; Costs of the US tort system; The history of liability law and economics; Liability standards; Tort law as a complement to contract law; Factors influencing the efficiency of liability standards; Damage costs; Liability (only) for damages intentionally caused; Negligence liability; The Hand rule; Strict liability; Identifying the optimal precaution level; Contributory vs. comparative negligence; Practical problems with the negligence rule and the strict liability; Which liability rule to choose?; Optimal liability standard; Liability and administratives costs.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9535526037216187, "language": "en", "url": "https://thehedgefundjournal.com/portfolio-management/", "token_count": 2247, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0498046875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:81f40353-64fc-4ff1-836f-0c3e5d080a55>" }
Behavioural finance is the study of how asset prices can be wrong, in the sense that the actual price of an asset can be above or below the fundamental value. Behavioural finance stands in opposition to the efficient market hypothesis, which states that asset prices are always equal to their fundamental value. Behavioural finance has emerged in the past 20 years as a competing branch of academic research. Behavioural finance consists of two key components. First, there has to be something that causes prices to be wrong in the first place. For example, perhaps some investors are irrationally gloomy about IBM, and thus value it at less than its true value. Second, there has to be something that prevents other, more rational investors from correcting the mistakes made by these irrational gloomy investors. This second component is called 'limits to arbitrage'. The first component, the original causeof the mispricing, could be either a specific feature of investor psychology, or it could be some sort of institutional effect. Much of behavioural finance has focused on investor psychology. Briefly, a large body of evidence shows that investors make systematic errors in evaluating information. They may over-react to dramatic news and under-react to mundane news. So, for example, if Apple has an exciting new Ipod product, investors become over-enthusiastic about Apple and it becomes overpriced. Meanwhile, investors may undervalue IBM despite its solid but boring fundamentals. There are many other ways in which some investors seem to behave irrationally. Both emotion and simple cognitive errors play a part. There are several limits to arbitrage that prevent rational arbitrageurs from quickly pushing prices back to fundamentals. These include various trading costs as well as risk. The risks associated with correcting mispricing include both sentiment risk and fundamental risk. The classic example of sentiment risk is the tech stock bubble of 19992000. NASDAQ stocks were overpriced in early 1999, but got even more overpriced by early 2000. A rational arbitrageur attempting to correct this mispricing faced huge sentiment risk. A variety of different evidence supports the idea that investors make mistakes and that these mistakes affect asset prices. One type of evidence comes from the behaviour of individuals, both in lab settings and when making actual investment decisions. For example, many studies have shown individuals make basic errors in choosing how to diversify, what types of evidence to consider, and how much to trade. A second type of evidence comes from asset prices themselves. According to the efficient market hypothesis, it should be impossible to predict returns or to earn profits from trading, yet a variety of robust patterns have been identified that allow one to construct profitable trading strategies. A third type of evidence comes from connecting the behaviour of different classes of investors with asset prices. Under the efficient market hypothesis, the trades of any one class of traders should not predict future returns. In fact, recent research indicates that some investors are smart money, and are able to earn excess returns. Other investors are dumb money, and consistently hurt themselves through their trading. The smart money appears to be professional investors (such as short sellers) and the issuing firms themselves, while the dumb money appears to be individual retail investors. 
When you see, for example, retail investors frantically buying tech stocks in 1999, and tech companies frantically issuing stock, that's a good sign the dumb money is buying and the smart money is selling. Behavioural finance produces three categories of investable strategies. First, there are under-reaction strategies. This group of strategies reflects the tendency of stock prices to under-react to specific events. For example, when bad news (say, an unexpectedly low earnings announcement) occurs for a specific stock, the stock price immediately falls, but does not fall enough. On average, it will continue to go down in the next few months. An under-reaction strategy would hold long positions in stocks that recently had good news, and short positions in stocks that recently had bad news. The second category is value-type strategies, reflecting mispricing that persist for many months or years. The value effect is the fact that over long periods of time, value stocks (measured by price/book or some other valuation ratio) outperform growth stocks. The value effect has been studied by academics since the 1980's and has been shown to hold true in many different countries and time periods. It reflects the fact that sentiment causes some stocks to be overpriced (growth stocks) and some to be underpriced (value stocks). One can use many signals in addition to valuation levels. Value-type strategies go long on firms that are repurchasing stock, have high quality earnings, and have high free cash flow. The strategies go short on issuing, low quality firms that seem 'speculative'. Value-type strategies are contrarian. They tend to buy stocks that investors don't like and sell short stocks that investors love. These different measures of mispricing – issuance, valuation ratios, earnings quality – all tend to be correlated across stocks. This fact suggests that these various measures are all reflecting the same underlying phenomenon, namely mispricing. In thinking about this cluster of attributes associated with mispricing, it is useful to consider an example that captures many of these common elements. Telecom stocks in 1999 had many of the attributes correlated with overpricing. They had huge valuation ratios and were issuing large amounts of shares. Not surprisingly, telecom stocks had low returns subsequent to this episode. The third category is technical strategies, which are a variety of different strategies based on variables such as volume, events and short-term price trends. An example of a technical strategy is the 'earnings announcement premium'. It is a striking fact that, as first noted in a 1968 accounting journal, all stocks tend to rise around their scheduled earnings announcement date. That is, some stocks rise on earnings news, some stocks fall, but on average the winners systematically outweigh the losers. The accompanying graph (Returns to Anticipated Announcement Strategy, Jan 1973 – Nov 2005) shows the twelve month moving average returns from the following simple strategy. Buy every stock on the first day of the calendar month in which you expect an earnings announcement, and sell it at month end. For example, if you expect IBM to announce quarterly earnings on 15 July, buy it 1 July and hold until 31 July. We will hedge this long portfolio by shorting all stocks not expected to announce this month. Thus for every stock we will go long four times a year and short the other 8 months. 
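To make the mechanics of that calendar-month strategy concrete, here is a minimal Python sketch. It is only an illustration of the rule described above (equal weighting, no transaction costs, hypothetical data), not the author's actual backtest.

# Illustrative sketch of the announcement-month long/short strategy described above.
# Equal-weighted, ignores transaction costs; all data below are hypothetical.
def announcement_premium(monthly_returns, expected_announcers):
    """monthly_returns: {month: {ticker: monthly return}}
    expected_announcers: {month: set of tickers expected to announce that month}
    Returns {month: return of long announcers minus short non-announcers}."""
    spread = {}
    for month, returns in monthly_returns.items():
        announcers = expected_announcers.get(month, set())
        long_leg = [r for t, r in returns.items() if t in announcers]
        short_leg = [r for t, r in returns.items() if t not in announcers]
        if long_leg and short_leg:                      # need both legs to form the hedge
            spread[month] = (sum(long_leg) / len(long_leg)
                             - sum(short_leg) / len(short_leg))
    return spread

# Hypothetical two-month example: IBM is expected to announce in July.
returns = {"Jul": {"IBM": 0.04, "XYZ": 0.01, "ABC": -0.02},
           "Aug": {"IBM": 0.00, "XYZ": 0.03, "ABC": 0.02}}
announcers = {"Jul": {"IBM"}, "Aug": {"XYZ", "ABC"}}
print(announcement_premium(returns, announcers))

Averaged over many stocks and months, these monthly spreads are what the graph of the announcement premium summarises.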
The graph shows that on an annual basis, the announcing stocks outperform the non-announcing stocks by six percent per year (these calculations do not include transactions costs, which would of course lower the portfolio returns). What explains this effect? One explanation from behavioural finance is the attention-grabbing effect. When IBM announces its quarterly earnings, this news is noticed by unsophisticated investors. These investors tend to buy IBM when it is in the news, and this wave of buying pushes up stock prices. Studies have shown that individual retail investors tend to buy on any news, whether it is good or bad. One might expect that these various strategies would stop working as more and more investors become aware of them. Yet this doesn't appear to have happened. For example, the graph seems to show that the earnings premium is fairly constant over time, even though several academic studies have addressed it since 1968. The value effect has worked amazingly well in the past five years, just as well as it has in the previous 70 years. Why hasn't this smart money eliminated these patterns? One answer to this question is that these strategies keep working on average precisely because they only work on average. Not every value stock goes up. One can only detect and exploit these patterns by looking at very diversified portfolios consisting of hundreds of stocks. In addition to working only in large groups of stocks, you also need to look at long time periods consisting of many years. For example, many observers wrongly concluded that value investing had 'stopped working' as of 19992000, only to see value stage a huge comeback in the next few years. As long as these strategies are hard to detect and difficult to evaluate, they can continue to generate profits on average. Another answer to this question is that it is hard to judge which out of the thousands of possible investment strategies will work going forward. There are many voices claiming to have a way to beat the market, so the item in short supply is not information, ideas, or even IQ, but rather wisdom. These judgment calls require knowledge of the underlying behavioural theory, experience in evaluating empirical evidence, statistical sophistication and formal training in using financial modeling to construct portfolios. These attributes are scarce commodities, but one place they can be found is in individuals with academic finance backgrounds. Access to cutting edge research is particularly valuable right now, since financial economics is in an exciting state of transition. Scientific understanding of financial markets is changing rapidly as behavioural finance researchers make new discoveries. Having identified various patterns that have been uncovered by researchers in behavioural finance, the next step is to design a system to exploit these patterns. The proper way to do this is with a systematic approach. As suggested by the limits of arbitrage discussion, one major issue is risk. What empirical studies have shown is that, for example, on average, stocks with good earnings news beat stocks with bad earnings news. But for any individual stock, there is a lot of random noise. Thus to properly exploit the systematic pattern, one needs to construct a well diversified portfolio that minimizes idiosyncratic risk. In addition to diversifying across stocks, it is also useful to diversify across strategies. 
That is, a portfolio that combines both value-type strategies and under-reaction strategies is far superior to a portfolio with only one of those categories. The reason is that value-type strategies tend to have returns that are negatively correlated with returns from under-reaction strategies. After all, value strategies tend to buy firms with falling prices while under-reaction strategies tend to buy firms with rising prices. Thus putting these two types of strategies into a combined portfolio results in a substantial decrease in risk. A properly constructed systematic approach can produce returns that are consistent over time. A key element to engineering consistency is a sufficient historical period with which to test the various substrategies. Generally, the academic studies mentioned here use long historical sample periods to test for patterns. These studies typically go back to 1963, and in some cases back to 1926. Using these long periods to simulate various strategies allows one to be confident that the resulting strategies are robust to different market conditions. It also allows one to estimate how the different substrategies move together over time. Behavioural finance says that prices can be wrong, and it is possible to exploit this mispricing to earn high returns. By using patterns uncovered by rigorous scientific studies, combining a wide variety of strategies that exploit various behavioural effects and superimposing risk management tools, one can construct portfolios that deliver consistent profits over time.
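The diversification point made above can be illustrated with a back-of-the-envelope volatility calculation using the standard two-asset variance formula; the numbers are invented, and only the direction of the effect matters.

# Volatility of a 50/50 mix of two strategies with equal stand-alone volatility.
def combined_vol(vol1, vol2, correlation, w1=0.5, w2=0.5):
    variance = (w1 * vol1) ** 2 + (w2 * vol2) ** 2 \
               + 2 * w1 * w2 * correlation * vol1 * vol2
    return variance ** 0.5

vol = 0.10  # assume each strategy runs at 10% annualised volatility (hypothetical)
print(round(combined_vol(vol, vol, correlation=0.5), 3))   # ~0.087 if positively correlated
print(round(combined_vol(vol, vol, correlation=-0.5), 3))  # ~0.050 if negatively correlated

With negative correlation between the two families of strategies, the combined portfolio runs at roughly half the risk of either strategy on its own, which is the basis of the claim above.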
{ "dump": "CC-MAIN-2020-29", "language_score": 0.7527377605438232, "language": "en", "url": "https://tri-articulation.info/conferences-formations/archives/item/161-land-sharing-vs-land-grabbing", "token_count": 726, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1337890625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:7a3c319d-e826-4719-8634-f401937d8a12>" }
Proposals for land preservation, new land use and management practices 3rd June 2013 - Brussels Venue: European Economic and Social Committee Rue de Trèves, 74 - Bruxelles Land Sharing vs. Land Grabbing What is Land Grabbing? Land grabbing commonly means large-scale land acquisitions (buying or leasing) by private or public entities, without due regard for local communities' decisions, needs, rights and protection. The term is usually applied to phenomena occurring in Southern countries since the 2007-2008 crisis in commodity prices. So can we talk about land grabbing in Europe? Some trends and recent evolutions of farmland management in Europe take similar forms and/or have consequences similar to those of land grabbing in the South. The diminution of agricultural land and land concentration, rising sale and rental prices, the disconnection between agricultural land's use value and its price, insufficient renewal of farmers' generations, competition between food, fibre and agrofuels for agricultural land use, and massive financial investments in farmland are all a source of concern. In this respect, Europe is no exception to the global context of increasing pressure on agricultural land and food production, and of financial concerns taking precedence over community choices. What is Land Sharing? A number of citizen-led initiatives have developed to provide easier land access to local, ecological forms of agriculture geared towards matching the needs of their communities. They are of different shapes and sizes: some are centred on one or two farms, others have a regional or national scope. They engage in different ways with consumers, local inhabitants and other local stakeholders, but all include some form of involvement of the communities in land use and management. Many of these initiatives have already been very successful and bear testimony to the interest and readiness of the public to get involved in favour of ecological, local food production and the preservation of vibrant rural areas. Although they still form a loose movement, these initiatives pave the way for inventing new ways of owning and managing land as a common good. They (re)place farmers as part of a long chain of good land stewards, develop a long-term perspective on land use and environmental protection, and try to reconnect land with its intrinsic and use value rather than its market price. They have many challenges ahead, but also experiences and reflections to share with all those concerned with the future of European agriculture, food and countryside. A list of articles and videos related to the theme of a metamorphosis of private property, in particular that of agricultural land (titles in French):
> 09 Conséquences sur le travail humain, la propriété privée et l'allocation des moyens de production (2012) - Mouvement pour la triarticulation sociale
> 10 La propriété privée au service de la collectivité : le transfert du droit d'usage des moyens de production (2012) - Mouvement pour la triarticulation sociale
> 11 La terre n'est pas une marchandise (2012) - Mouvement pour la triarticulation sociale
> Pistes concrètes pour une métamorphose du droit de propriété privée • Une étude de cas. (2013) - Stéphane Lejoly
> Chante Terre : vers un nouveau droit d'usage de la Terre ? (1994) - Stéphane Lejoly
{ "dump": "CC-MAIN-2020-29", "language_score": 0.97017902135849, "language": "en", "url": "https://workintown.com/how-many-cryptocurrencies-in-the-world/", "token_count": 361, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.470703125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:36e67df2-0870-4f1f-a1ac-bfc565b8187c>" }
Money rules the world. Some may say that this is a controversial saying, but that does not make it any less true. In the modern world, money has gone so far as to reach into the ever-developing digital world. Most of you have already heard about cryptocurrency or Bitcoin, but not everyone has a clear understanding of what it is and how it is going to affect the world as we know it. To begin with, it needs to be explained what cryptocurrency is and how many types are available. Cryptocurrency is what most people refer to as digital money. Most of it comes as coins or tokens. The fact is that these days there are more than 2,500 different cryptocurrencies that you can use. According to trustworthy ICO Rating resources, 5 of them are the most used ones. They are: - Stellar Lumens There are also other cryptocurrencies that are widely used on the market; in total, 25 out of more than 2,500 cryptocurrencies see wide use. Such an amount of crypto is certainly impressive, and many people may wonder whether they should convert their money into its digital equivalents. However, it needs to be pointed out that many cryptocurrencies are still unstable for a number of reasons. For instance, your virtual vault may get hacked, or your digital account can be deleted because of a computer crash. All these things scare regular people. The majority do not convert their currencies into crypto. For the most part, they do not know how to use it or what benefits such a currency may come with. Several issues need to be fixed before the majority will use cryptocurrency, and this process will take some time to finish. This leads to the conclusion that some popular cryptocurrencies may be forgotten while new ones emerge.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9464262127876282, "language": "en", "url": "https://www.educba.com/financial-analysis-example/", "token_count": 1125, "fin_int_score": 5, "fin_score_model": "en_fin_v0.1", "risk_score": -0.02978515625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:503d4b86-61e2-451c-9f33-435d1e782066>" }
Introduction to Financial Analysis Example Financial analysis is the investigation of business results and financial reports with the aim of understanding the performance of an entity. The analysis covers the facets of profitability, liquidity, and solvency of the business and, in turn, helps in making decisions about investing, setting policy, or determining the future course of action. The analysis can take place in corporate finance or in investment finance. Corporate finance analysis deals with the NPV and IRR calculations for a prospective project, whereas investment finance analysis deals with understanding the comparative advantage of investing in one of a slew of competing firms from an investor's point of view. Financial analysis exists in various forms, and some of these forms are discussed below. Examples of Financial Analysis (With Excel Template) Let us take a few examples to understand how financial analysis is carried out. #1 Financial Analysis Example – Liquidity Ratio Analysis This is a measure of the timeliness with which an entity would be able to clear its imminent liabilities. The creditworthiness of an entity depends on the amount of liquid assets it possesses. An unfavorable ratio would mean uncertainty about the fulfillment of external liabilities, thereby raising questions about the entity's future. This ratio analysis should, however, take into account the payment cycle of the entity and seasonal fluctuations. For example, if the payment cycle is in progress, the cash held by the entity would naturally be low, and the ratio would not give a correct picture of the financial situation. The ratios could be of the following kinds: Cash Ratio compares the amount of cash to the immediate short-term liabilities; if the business were to be dissolved today, would the cash be adequate to cover the short-term liabilities it has at that point? Quick Ratio is the measure of cash plus the future cash to be received (receivables from debtors) against the current liabilities of the firm; quick assets are assets that can be converted into cash within 90 days, and this ratio indicates the ability of the firm to cover its liability obligations without resorting to its long-term assets (the higher the ratio, the better the firm's ability to cover itself against foreseeable liabilities). Current Ratio measures the current assets that a firm has against its current liabilities, where "current" means either convertible into cash within the next year or due to be repaid within the next year; it is one of the most important ratios for judging the liquidity of the concern. An illustrative calculation of these ratios for a hypothetical Entity A at a particular point in time is sketched at the end of this article. #2 Financial Analysis Example – Trend Analysis This tool plots the performance of a given variable over a period of time in order to identify its main features, predict the future course of action, and build plans around it, assuming such a trend continues in the near future. For example, if the profit of a concern is decreasing every year by around 5%, there is cause to check the factors that are influencing such a movement. It could be due to external factors like a change in market conditions, or it could be driven by internal factors like a cost increase or a decrease in revenue. First, the trend analysis will tell us the cause, and then it will indicate whether such a movement is likely to continue in the future as well.
If, after the analysis, it is determined that the internal factors have very little to do with the movement and that it is beyond the control of the firm, then measures have to be taken to ensure that the unfavorable movement is kept to a minimum. This could involve expenditure on certain new assets and/or changes to existing processes. Generally, trend analysis is depicted with line graphs, which are a good visual medium for understanding the changes happening period over period. #3 Financial Analysis Example – Rate of Return Analysis This is generally used in capital purchase decision-making. The rate of return is a measure of the increase in returns that a new asset will provide over the cost incurred on it. This analysis can be performed at two stages. Pre-purchase: this indicates the expected returns that an asset would bring over a period of time; if the returns, discounted at a decided rate of return, are greater than the cost incurred, then it is worth investing in the asset (a small worked sketch follows at the end of this article). Post-purchase: after the asset is put into production, management may want to do a post-facto analysis of how the asset is yielding and compare it to the expectations they initially had for the asset; in case the yield is not up to the mark, management could decide to sell it at the current market price and come up with an alternative solution that could generate better returns. Financial analysis is important for decision making, be it for management or for potential investors. It helps in understanding the current health of the entity and simplifies comparison between entities in the same industry. It also allows forecasts to be made that help management take decisions. The analysis is, however, subject to the time period at which it is done: an entity may be going through a temporary crisis, and the analysis at that point will be skewed unfavorably. Moreover, how the entity has performed in the past is not necessarily the best indication of how it is going to perform in the future. This is a guide to the Financial Analysis Example. Here we discuss the introduction and practical examples of liquidity ratio analysis, trend analysis, and rate of return analysis, along with a detailed explanation and a downloadable Excel template.
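The two computations referred to above, the liquidity ratios of Example #1 and the pre-purchase rate of return check of Example #3, can be sketched as follows. All figures are invented placeholders, since the original Excel workings for Entity A are not reproduced here.

# Example #1: liquidity ratios for a hypothetical Entity A at one point in time.
cash, receivables, inventory = 50_000, 80_000, 70_000
current_liabilities = 100_000
current_assets = cash + receivables + inventory

cash_ratio = cash / current_liabilities                      # 0.50
quick_ratio = (cash + receivables) / current_liabilities     # 1.30 (quick assets only)
current_ratio = current_assets / current_liabilities         # 2.00
print(cash_ratio, quick_ratio, current_ratio)

# Example #3 (pre-purchase): the asset clears the hurdle if the present value of
# its expected returns, discounted at the decided rate of return, exceeds its cost.
def npv(cost, expected_returns, required_rate):
    pv = sum(cf / (1 + required_rate) ** year
             for year, cf in enumerate(expected_returns, start=1))
    return pv - cost

print(round(npv(cost=100_000, expected_returns=[30_000] * 5, required_rate=0.12)))
# about +8,100, so this hypothetical purchase is worth considering at a 12% required rate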
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9391071200370789, "language": "en", "url": "https://www.intechopen.com/books/efficient-decision-support-systems-practice-and-challenges-from-current-to-future/determination-of-effective-policies-for-ecological-agriculture-development-with-system-dynamics-and-", "token_count": 4924, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.064453125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:ca88c55c-c010-4e82-a514-7f73943c0efe>" }
Agricultural activity, beyond its primary function, can also shape the landscape, provide environmental benefits such as land conservation, sustainable management of renewable natural resources and preservation of biodiversity, and contribute to the socioeconomic viability of many rural areas (Majkovič et al., 2005). One way of emulating the prevailing EU policy reform trends is also to support and encourage organic farming, which is gaining in importance in Slovene agricultural production. Contemplated as a whole, any sound agricultural reform would entail not only the necessary positive shifts in economic efficiency levels concerning the production and processing of food, but should specifically address some key socio-economic issues that are at the core of preserving and maintaining the ecological balances in the Slovene countryside, with biodiversity becoming an increasingly important agricultural policy concern (Ivančič et al., 2003). In terms of multifunctionality, organic agriculture is the most environmentally valuable agricultural system (Rozman et al., 2007a, 2007), and it has a strategic importance at the national level that goes beyond the interests of the agricultural sector. This alternative agricultural paradigm may provide the link between the objectives of sustainable resource use and sustainable regional development. The consequences of policies are long term and irreversible. In this light, a conceptual methodological approach for the evaluation of development policies for organic farming must be developed. Organic agriculture represents a complex system at the national level (Shi and Gill, 2005), and different modeling approaches have been described in the literature (farm level, regional level and national level). Technological-economic simulation at farm level and multicriteria decision analysis are also often used for decision support at the farm level (Rozman et al., 2005; Pažek et al., 2006). Boorsma (1990) distinguishes three approaches to modelling the behaviour of the farmer: econometric modelling (based on linear regression equations fitted to a data set); mathematical programming; and modelling decision processes based on decision rules. At the national and regional level we often encounter econometric models that can efficiently reflect the situation in agricultural systems and can also be used for forecasting policy consequences (Akinwumi et al., 2000; Turk, 1998). Although econometric models have great methodological value and forecasting capabilities, the modeler must ensure relatively long and consistent data series, which are rarely available. Mathematical programming is frequently applied in farm planning. It allows determination of an optimal allocation of land, labour and capital, given a set of goals (e.g. maximisation of income and leisure and minimisation of risk) and constraints (e.g. labour and land). Bontkes and Van Keulen (2003) argue that the study of agricultural systems requires the use of non-linear dynamic models that allow simulation of the system in a qualitative way, based on a description of the underlying processes. Their approach is illustrated with a regional model that has been developed to simulate agricultural development in the Koutiala region in the south-western part of Mali.
However, attempting to consider the complex interactions of all factors in a single model is not a productive approach. Hence, the authors (Kaufmann et al., 2009) adopted the approach of isolating parts of a system and examining them under the assumption that all other things are equal. The diffusion of organic farming practices is modeled by a generic agent model (Borshchev and Filippov, 2004) based on the theory of planned behavior for understanding and modeling the farmers' decision-making process. System dynamics (SD) methodology (Forrester, 1961) can be and has been used as an alternative to econometric and mathematical programming approaches (Bockerman et al., 2005; Elshorbagy et al., 2005). An SD model is, in its essence, a continuous model because it is presented as a system of non-linear differential equations (Munitič and Trosić, 1997). There have been many important SD applications in the field of agriculture recently. Shen et al. (2009) present a system dynamics model for sustainable land use and urban development in Hong Kong. The model is used to test the outcomes of development policy scenarios and make forecasts. It consists of five sub-systems including population, economy, housing, transport and urban/developed land. A similar approach is presented by Weber et al. (1996). However, the most important work in the field of simulation of development policy scenarios is presented by Shi and Gill (2005), who developed a system dynamics based simulation model for ecological agriculture development for Jinshan County (China), and by Kljajić et al. (2000, 2001, 2002, 2003), with an integrated system dynamics model for the development of the Canary Islands in which the main interactions between agriculture, population, industry and ecology were taken into consideration. Preliminary results of an SD simulation of organic farming development were presented by Rozman et al. (2007) and Škraba et al. (2008). That model incorporates the key variables affecting organic farming systems and is used in the identification of the main reasons why the strategic goal (15% of organic farms) has not been achieved. Yet this research did not incorporate the full aspects of the food market and the consumer factor (Rozman et al., 2007). However, consumer concerns are inherently dynamic because they respond to difficult and complex societal and technological situations and developments. For example, because of the rising concern with global warming, carbon dioxide absorption of crops is now attracting public attention, which means that new requirements are being proposed for the environmentally friendly production of crops (Korthals, 2008). In this light, Rozman et al. (2008) and Rozman et al. (2010) upgraded the model with the inclusion of an organic market development factor. This paper presents a system dynamics model for the development of organic agriculture in Slovenia in order to identify the key reasons and propose a development policy to achieve the strategic goals set in the ANEK (Majcen and Jurcan, 2006). The paper is organized as follows: first we present the state of the art of organic agriculture with its system analysis and identify the key variables, main flows and feedback loops in the system. The results section presents scenarios (different policies in organic farming) and their evaluation using the developed SD model. Main findings and suggestions for further study conclude this article. 2. Model development 2.1. Study area We selected the Republic of Slovenia as the study area in order to develop and employ the SD model.
Most of Slovenia's agriculture is located (with the exception of the eastern flatlands with their intensive field crop production) in hilly, unfavourable areas. Within the European space, Slovenia belongs to the countries with the most unfavourable conditions because of its diverse and mountainous relief and its high proportion of karst areas. Recent studies have also shown deficiencies of organic products on the market (Pažek et al., 2006). Thus organic agriculture has been identified as one of the developmental opportunities. There are approximately 80,000 farms in Slovenia, conventional and organic. In the year 2006 only 1,728 farms were in the organic farm control system. Even though a subsidy has been offered to the farmers (recent research has shown that the correlation between the subsidy level and the number of organic farms is too low; Rozman et al., 2007), the proportion of organic farms is still low, not higher than 5%. The short-term strategic goal is to reach a 10% or 15% ratio by the year 2015. This is determined by the state action plan ANEK (Majcen and Jurcan, 2006). Although the number increased to 2,000 in 2007 and 2,067 in 2008, the strategic goal (15%) is still underachieved. In Slovenia up to 440,349 hectares are defined as less favoured areas (LFA). These are hilly and mountainous areas, areas with karst features or other factors that limit the possibilities of intensive farming. The relatively high share of less favourable areas makes Slovenia suitable for less intensive, sustainable production systems, such as organic agriculture. The system analysis of organic agriculture In order to provide a proper systemic solution to the described problem, a simulation model should be built which represents the structure with its key elements. The simulation model should consider the key variables that influence the development of organic farming, such as: the number of conventional farms; the number of organic farms; the promotion of organic farming (marketing, market development, education); the organization of the general organic farming support environment; system self-awareness; and the delay constants of process changes. The key variable in the model is the number of organic farms. These are the farms that are in the control system at one of the control organizations. The growth of the number of organic farms was initially (in 1998) almost linear; however, in the years 2003-2005 the growth moderated to approximately 4% despite an increase in subsidies of 20%-30%. In developing the causal loop diagram (Fig. 2) as the first step of the development of the SD model, the following key variables were identified: the number of potential candidates (farms) for conversion to organic farming; the number of farms converted to organic farming; and the flow between (1) and (2), the conversion rate (transition). There is a delay mark between "Promotion and Market Development" and "Self Organization Resources". Here, longer delays should be considered, since a significant amount of time is needed to promote the organic farming idea and the marketing channels which would support organic farming. "Support Resources" are significantly dependent on the government "Subsidy". The more "Support Resources" there are, the higher the "Organic Farming Goal" is set, meaning that a larger number of organic farms can be supported. If the "Organic Farming Goal" increases, the "Conversion" increases above the level it would otherwise have reached.
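The core of this feedback structure can be sketched in a few lines of code. The following is a deliberately stripped-down, illustrative stock-and-flow sketch of the loop just described (support resources set a goal that drives conversion, and converted organic farms feed back into the resources). The parameter names follow the scenario descriptions later in the text (subsidy, promotion factor, delay), but the functional forms and the 0.05 adjustment rate are invented for illustration; this is not the authors' 29-variable Powersim model.

# Minimal, illustrative stock-and-flow sketch (monthly Euler steps) of the
# conversion loop described above. Invented parameters; not the authors' model.
def simulate(months=180, subsidy=1000.0, promotion_factor=0.8, delay=12.0):
    conventional, organic = 77_000.0, 1_728.0
    support = 0.0                                   # "farms supportable" by resources
    for _ in range(months):
        # support resources adjust, with a delay, towards subsidy + self-organisation
        support += (subsidy + promotion_factor * organic - support) / delay
        goal = support                              # organic farming goal
        # conversion occurs only while the goal exceeds the current number of organic farms
        conversion = max(0.0, min(conventional, 0.05 * (goal - organic)))
        conventional -= conversion
        organic += conversion
    return organic

for s in (1000, 2000, 3000):
    print(f"subsidy resources for {s} farms -> ~{simulate(subsidy=s):,.0f} organic farms")

Even this toy version illustrates the qualitative point made in the results below: with a weak self-organisation term, growth stalls far short of full conversion, and raising the subsidy alone mainly shifts the plateau rather than changing the character of the response.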
The interconnections described above form a reinforcing feedback loop. The system dynamics model structure is shown in figure 3. The model consists of 29 variables and 51 links. There are two level elements applied in the upper part of the model. The variable "conventional_farms" represents the number of conventional farms. Through the flow "transition", the "conventional_farms" become "organic_farms". This structure is commonly known as a market absorption model. The "conversion" is dependent on the "organic_farming_goal". The goal is set by the available "support_resources", modeled as a level element. The conversion can only be achieved if there are enough "support_resources" present to make a "conversion". The "support_resources" are not only financial means; societal support is also considered here, for example education should foster positive attitudes towards organic farming. Market development as well as demand should also be considered in this category. However, at present the "support_resources" depend mainly on the subsidies from the government. The important variable "self_organization_resources" is driven by the impact of the policy and society support, which intensifies with the number of "organic_farms". This represents the application of the reinforcing feedback loop which should be augmented. The "development_limit" represents the function which considers the variable consumption of the resources: if the resources are scarce, the usage is lower than in the case of abundance. Resources are consumed by the "organic_farms". The prosperity of the "organic_farms" therefore depends on the "support_resources", which are not only financial means; here the social impact of organic farming represents the supportive environment which should sustain an activity that is counterintuitive in a world of consumption (Forrester, 1961). Figure 4 shows examples of the model equations. There are 77,000 conventional farms initially and 1,728 organic farms. The model is realized in Powersim. With these equations the model could easily be transformed to other SD tools such as Vensim, iThink, Stella etc. In our research the agent-based approach has also been considered as a possible way to analyze the dynamics of the transition to organic farming. In this way, one can compare both methodologies, System Dynamics and Agent-Based modeling. In the agent-based model built with AnyLogic (Borshchev and Filippov, 2004), shown in figure 4, we define the agents as farms. The model is represented by two agent states: 1) Conventional Farms (red) and 2) Organic Farms (green). The transition between the states is determined by the promotion of organic farming and the spread of information. Contact with farms already in the organic state is also considered. This approach is promising since it makes it possible to model the whole agricultural sector, where each particular farm is taken into account. Initially one initializes a particular number of agents, in our case 2,000, since this is the number of potential farms for transition. The model is based on the Bass diffusion agent-based model. The number of farms is set to 2,000 since an agent model with 20,000 farms would take too much time to run. Initially all the agents are painted red since all the farms are conventional. During the simulation agents transform from conventional to organic farms, which can be observed in the graphical view; the agent turns from red to green.
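Before turning to the scenario results, here is a stripped-down agent-style sketch of the same transition process in the spirit of the Bass diffusion set-up just mentioned (conversion through promotion plus word-of-mouth contacts). It is an illustration only: the state and border colours and the actual transition rules of the AnyLogic model are not reproduced, and all probabilities are invented.

# Minimal Bass-style agent sketch: each farm is either conventional (False) or
# organic (True). A farm converts either through external promotion or through
# contact with farms that are already organic. Invented parameters.
import random

def run_abm(n_farms=2000, months=120, promotion_prob=0.005,
            contacts_per_month=3, adoption_per_contact=0.02, seed=1):
    random.seed(seed)
    organic = [False] * n_farms
    history = []
    for _ in range(months):
        share_organic = sum(organic) / n_farms
        for i in range(n_farms):
            if organic[i]:
                continue
            if random.random() < promotion_prob:          # conversion via promotion
                organic[i] = True
                continue
            for _ in range(contacts_per_month):           # conversion via informal contacts
                if (random.random() < share_organic
                        and random.random() < adoption_per_contact):
                    organic[i] = True
                    break
        history.append(sum(organic))
    return history

h = run_abm()
print("organic farms after 1, 5 and 10 years:", h[11], h[59], h[119])

The characteristic growth pattern described for the agent-based scenarios below arises from exactly this interplay: promotion seeds a few early adopters, after which contact-driven conversion accelerates.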
Since the agents could transform from conventional farms to ecological in two ways there are two different border representations. If the agent performs transition on account of the promotion, the border of the agent turns yellow. If agent performs transition on account of other causes, the border turns blue. In this manner one could easily estimate how many agents performed transition in particular was as well as how fast particular transition occurred during simulation. Table 1 shows the parameter values for the eight scenarios performed on the developed system dynamics model. SC1 is the initial scenario where the initial amount of the subsidy is provided (1000). This would mean that there are some resources provided by subsidy to support 1000 organic farms. Figure 5 shows results of eight different simulation scenarios. One of more important findings is, that the system is sensitive to the changes in demand. If one observe scenario 7 and 8, where the population is changed only for 50k and 100k, once could observe, that the conversion to the organic farming would be jeopardized. |Scenario||Subsidies||Self-supply coefficient||Delay||Promotion factor||Population| As the mean of concept validation the results of agent-based model are shown in the following section. Tab. 2. shows list of parameter values for Agent Based Model for four different scenarios which are performed as the demonstration of how future Agent Based Model should be implemented. Figure 7 show the results of first simulation scenario SCA1. At the beginning the transition is started with low gradient until, on account of promotion, the gradient increases as well as number of agents. Informal information contributes to more intensive conversion until the proportion of conventional farms is low and informal communication loses its power. As one could observe, the farms that are unchanged are on the outskirt of the system due to remoteness and lover intensity of communication with other farms. Such farms are consequently not given the same amount of informal promotion. |Scenario||No. of farms||Effect of support||Transition factor||No. of contact| Figure 8 shows second scenario SCA2, where informal information flows are considered, here it is put to minimum. Increase of the model is almost linear; here the promotion of organic farming dominates in its influence. Here the importance of informal communication could be observed as well as impact of certain promotion actions, which is tied to particular number of organic farms. The proper influence of promotion is also confirmed. In the real world, such situation would occur in the case of very isolated farms, which have no proper contacts with other farms and limited access to support resources despite the fact, that the support level might be high. Figure 9 shows third scenario SCA3, here the successfulness of promotion is lowered to minimum. On the contrary, the level of communication and transition intensity is increased as the consequence of communication i.e. promotion. As one could observe, after the initial starting time, significant increase in transition occur. One could conclude, that promotion »infects« few initial agents, which, due to the high level of communication contribute to the explosion of transitions. In the real situation this would mean, that the agents have low level of susceptibility for promotion however, they are strongly interconnected and demonstrate large level of interpersonal trust. 
Figure 10 shows the fourth simulation scenario SCA4, where the key role is played by the communication between agents. The level of the communication is increased on the highest value. The parameters of promotion and the intensity of transition are lower than in the third scenario. However, the transition is exceptionally fast. One could observe, that several agents become isolated, those, who have less intensive contacts. In the real world, such situation would occur if the cooperation among agents (farms) would be very strong with strong contacts. This could be achieved via internet and other means of e—communication, personal contacts etc. Here the technology as well as support action in the field of communication should be considered. Promotion factor represents the policy to promote organic farming and self-organizational resources. That would mean the development of organic-farming marketing, production etc., which would contribute to better demand. This value is set to 0.8 initially which means, that each new organic farm rises the resources (not only financial) for 0.8 additional organic farm by adding, e.g. to the better development of the organic marketing and prodution. The delay represents the number of months in order to spread the effect of the additional support resources in the system. Initially we consider, that this delay is short, in our case 1 month. “1” marks the response of first scenario, SC1. Self-supply coefficient represent the proportion to which the country should be self content regarding the food supply. This factor determines the food demand. 1.3 would mean, that the desired food production shoud be 30% higher of normal production. The coefficient of self-supply determines the demand which also depend on the Whole production of the agriculutral sector. Here it is important, that in the case of higher prices, the food production capacity would play a key role and influence the possible negative transitions (back to conventional farming). Population is considered as 2 million which determines the food demand. If one compares it to the scenario “2” where the subsidies are rised to 3000 the more intensive transitions are observed. However, the observed number of organic farms is far from desired meaning, that the subsidies by themselves would probabbly not be enough. In the scenario SC3, the impact of decreasing self-supply coefficient is considered as well as decrease in subsidies. If one decreases Self-supply coefficient, the demand/supply delay ratio would be better, influencing on the better demend for organic farming products. This would in turn compensate the lower Subsidies and provide the highest conversion so far. In scenario SC4 the subsidies are rised to 2000 which gives the best results regarding the response of the system and the limit value of the organic farms, which is approximately 17,000. This would mean, that the right political choice would be, to increase demand for the organic farming products by lowering the self-supply and provide larger amount of subsidies. However, this could be risky in the condition of higher food prices. SC5 considers higher delay at the establishement of the self-supporting resources, which is set to 36 months. This is more realistic since the establishment of self-supporting resources takes some time. The consequence is, that the rise in the number of farms is much slower. 
This would mean, that it is very important, to quickly establish self-support resources for organic farming if we want to achieve fast transitions. SC6 shows the impact of lowering the delay in establishing self-support resources. Here the delay is put to the 12 months giving much better response and achieving the limit value of the organic farms, which is approximately 17,000. SC7 shows the impact of larger food demand in the case that the population would increase. This would have for the consequence larger food demand and rise in prices. It such situation, the transition would be slower and less farmers would choose to switch to the organic farming due to higher food prices. SC8 shows even worse situation if the population would have an additional increase meaning, that the demand for food would be even higher. In that case, the transition to organic farming would be even slower. One of the important questions is »How could the subsidies be replaced?« As the model shows, the main leverage is the organic farming promotion and market development. In this manner, the self-supporting resources are established which further promote the transition to the organic farming. This is the counterpart of direct subsidies which should be converted to the actions that support self organization component in the system. The presented combined methodological framework (SD) for the analysis of development of organic farming could provide additional information support to agricultural policy makers, bring additional clarity to the decision, and could therefore play an important role in further development of organic farming, in particular as assistance and advisory in policy planning. In this paper an attempt was made to employ system dynamics model in order to simulate the development of organic agriculture. The presented SD model enables simulation of different policies and this kind of model is comprehensible to a wide range of users in the decision making process. After performing several simulation scenarios the following findings could be abstracted: Conversion to the organic farming relies on subsidies which provide the main source of conversion from conventional farming to organic farming. Subsidies are not the only driving force in the system; even more important are other activities that promote organic farming. Subsidies could not be provided in sufficient amount in order to complete conversion from the conventional to organic farming. Feasible strategy to achieve complete conversion should consider reinforcing feedback loop between resources, number of organic farms and supportive actions which are bounded to the number of organic farms. Current output parameter i.e. number of organic farms, is caught in an unwanted equilibrium value due to the domination of balancing feedback loops in the system. Important factor is self-organization of the organic farming environment which includes market development and general public awareness. Due to the large systemic delays in the system the anticipative value of the system control plays an important part. Important factor that influences the transitions to the organic farming is demand on the market which is largely driven by the politics and the self-supply principle. The agent based model shows that it is possible to build an agent-based model which would enable to monitor each particular farm and its transition. The tool AnyLogic has been identified as the proper tool for such modeling task. 
Further strategic actions should consider the dynamic response of the system and the feasibility of stated system target values. Consideration of the interaction of four main feedback loops indicated in the system which determines the system performance provides the means for proper control strategy definition. The presented combined methodological framework (SD) for the analysis of development of organic farming could provide additional information support to agricultural policy makers, bring additional clarity to the decision, and could therefore play an important role in further development of organic farming, in particular as assistance and advisory in policy planning. Further research is needed in the field of SD modeling in order to properly evaluate the applicability of the proposed model. Especially the market development of organic food should be additionally considered as proposed by Rozman et al., (2008). The SD model should be further verified and correspondingly improved. The agent-based model should be developed which would enable precise monitoring of each particular farm. The model structure and its results should be evaluated by relevant expert group.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9581393003463745, "language": "en", "url": "https://www.mcplegal.com/insights/contract-defined-terms/", "token_count": 1074, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.015869140625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:7aadbea6-5d43-42cf-819f-f27801b2fb08>" }
July 16, 2013 Whether lengthy and extremely sophisticated or short and straightforward, contracts are simply the expression of two or more parties. In many cases, a skilled contract drafter can make even the most complex provisions of a contract understandable. However, some contracts can seem daunting and complex to parties that are not exposed to them regularly. In fact, if drafted poorly, even for attorneys some contract language is archaic and difficult to understand; assuming it makes enough sense to even be understood at all. Generally, however, many people who are unfamiliar with reading or interpreting contracts struggle and find them difficult to understand. Oftentimes this comes from basic misunderstandings about the way that contracts are intended to be read, which can often lead to poorly drafted agreements by inexperienced parties or attorneys who do not fully understand what the parties are trying to express in writing. The first thing to understand is that in a contract defined terms reduce length and ambiguity in a contract by replacing lengthy language or an explanation, usually a definition, with a single term or short phrase. Defined terms should be capitalized and should remain capitalized in a contract when used in that context. For example, if “Services” are defined to mean and refer to something specifically, such as ‘computer programming and IT support,’ then for the duration of the contract, in place of ‘computer programming and IT support,’ “Services” should be used with a capitalized “S.” Without such capitalization, ‘services’ could be read ambiguously or with plain meaning and may not capture the expression of the parties. Therefore, it is important to know what the definitions are in a contract, so that you can understand the meaning of the provisions and how that particular definition or similar language is being used. It is equally important for a contract drafter to use appropriate definitions where applicable and to use the language consistently. A skilled drafter or contract attorney will use these definitions as a tool to tell the contract’s story and to keep everything clear. However, contract drafters can quickly make a contract provision complex with many defined terms used in a single expression. For example, “only Sales made in the Territory by Builder or Sub-Contractor for Services or Goods Delivered will Costs be deducted.” With so many defined terms used at once the sentence could easily have complex results and restrictions based on the structural combination and use of those particular terms. Therefore, it is incumbent on the drafter or contract attorney to fully understand the implications of defined terms and how they are being used, especially if combined with other defined terms. The second important thing to understand is that if the term is not specifically defined, then, pursuant to Virginia contract law, “words that the parties used are normally given their, usual, ordinary, and popular meaning.” Preferred Sys. Solutions, Inc. v. GP Consulting, LLC, 284 Va. 382, 732 S.E.2d 676 (2012). Consequently, if the word is not defined within the agreement, you can read the contract to understand it exactly as the word would normally be used. Many people are worried that the language or the words themselves contain pitfalls or traps outside of their normal understanding in a contract. This is generally not the case when there are no specifically defined terms in the agreement. 
Conversely, this means that the words will not take on specific meaning if you do not define them to mean that. Great care must be taken by contract drafters to be certain that they are fully capturing the understanding of the parties or their client’s wishes. If the parties are using the words in an unusual way, then it may be best to specifically define them. It is also important to understand that under Virginia contract law, typically the contract is bound by the four corners of the document, meaning, no outside language or other documents, including email exchanges, will be used to interpret or explain the parties’ intentions, especially if the document contains a superseding clause. This is why it is so important that the contract capture the parties’ understandings fully. However, the ‘four corner’ approach along with the superseding clause is a lengthy topic with some exceptions that we will address in a future blog post when discussing Virginia’s ‘parol evidence rule.’ A third important issue to understand about reading or drafting a contract is that pursuant to Virginia contract law, “[n]o word or clause in a contract will be treated as meaningless if a reasonable meaning can be given to it, and there is a presumption that the parties have not used words needlessly.” Id. This means that not understanding a clause does not necessarily render it meaningless. Therefore, it is extremely important to understand the provision, because there is a presumption that those words were not needlessly added to the contract and will be interpreted to have significance and meaning as a result. Additionally, it highlights the dangers of inexperienced drafters using duplicative language and provisions. Such inexperienced drafters may attempt to provide a “concrete” expression or be “extra cautious,’ but they may inadvertently provide added significance or meaning to the provision or clause unnecessarily, which could ultimately change the parties’ intentions.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.921639084815979, "language": "en", "url": "https://www.nreionline.com/finance-amp-investment/price-right-consider-replacement-cost", "token_count": 1022, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": -0.027587890625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:36da24e4-4077-436b-a600-101820f27d8b>" }
With property sales at a minimum and accurate capitalization rates difficult to ascertain, investors are increasingly looking at replacement cost as a method of establishing pricing parameters. The comparison of sales price to estimated replacement cost is a common component of property analysis. A sales price below replacement cost is generally considered one potential indicator of attractive pricing for the buyer, but given the current market conditions nearly every property is expected to trade at some discount to replacement cost. We believe that a consistent approach to estimating replacement cost is a necessary starting point in order to evaluate the discount a given price represents. In addition, an examination of the relationship between current rents and replacement rents (the rent necessary to justify new construction) is a helpful tool to estimate the potential for future rent growth. A basic principle of real estate economics holds that new development occurs only when rents justify the costs of development, including a reasonable return to compensate for development risks. This pressure to develop occurs at the point in the market cycle when demand exceeds existing supply such that market rents rise above replacement rents. Development then occurs until the excess demand is satisfied. Typically, the addition of new space to the market overshoots demand, causing vacancy to increase and putting downward pressure on rents. Market rents fall back below replacement rents, removing the incentive for additional new construction until demand again outpaces supply. Calculating replacement cost The practice of preparing a reliable cost estimate for new development is complex, requiring both an array of data and an understanding of the technical aspects of construction. This type of detailed analysis is typically not realistic in the context of making an investment decision for an existing asset, however. Instead, replacement cost numbers are typically generated based on rough estimates of costs per square foot suggested by local brokers or contractors, often with minimal regard to the specifics of building types, construction methods, or locations. The challenge of estimating land prices, particularly in the current market, adds to the complexity of the task. In order to provide a more consistent approach, we suggest requesting two or more estimates from reputable developers and/or consultants familiar with the market and product type. In developing these estimates, it is important to provide a definition of what costs should be included, so that apples-to-apples comparisons are possible. We divide the various components of replacement costs into the general categories summarized in Figure 1. FIGURE 1: APPLES-TO-APPLES COMPARISONS REQUIRE DEFINING THE COMPONENTS OF REPLACEMENT COST Once a reasonable conclusion of replacement cost is determined, additional adjustments are necessary to assess the impact of depreciation on an older asset relative to new construction. The impact of depreciation may be estimated based upon a comparison of the premium for rents in new buildings versus older buildings in the local market, adjusting for differences such as amenities and location Calculating replacement rents Market rent levels encourage or discourage new development based on the relationship between those rents and development costs. Once a reasonable estimate of replacement costs is defined, then we can also determine the rent necessary to justify that new development. 
Replacement rent is a function of replacement cost, the required return on costs (or risk premium), and an adjustment to acknowledge some level of expected vacancy and expenses. A simplified example of the calculation is shown in Figure 2. For illustration purposes, we assume an absolute net lease so that no adjustment is necessary for expenses. FIGURE 2: REPLACEMENT RENTS REFLECT COSTS, RISKS AND VACANCY ASSUMPTIONS Based on these assumptions, a developer seeking a 10% return on cost and an expected stabilized vacancy rate of 5% would need to be comfortable with the ability to achieve average rents of $31.58 per sq. ft. in a stabilized building in order to proceed with a project with an estimated total cost of $300 per sq. ft. This calculation then allows for a comparison between replacement rents and current market rents. Given the steep decline in rents over the past two years, market rents are generally well below replacement rents. We can then use the difference between current market rents and current replacement rents as one tool in estimating future rent increases. Even assuming a traditional 3% increase in construction costs over time, market rents will have to increase at a faster pace at some point to return to parity with replacement costs. We believe that a scenario that includes a return to replacement rents may be an appropriate consideration in investment analyses, likely calling for a more substantial spike in rents at some point in the future when demand again outpaces existing supply. This approach would be most appropriate for the strongest markets and submarkets where there are reasonable expectations for increasing demand, especially when combined with a relatively high level of supply constraints. Of course, the timing and size of that rent spike will vary by market and submarket, and must be estimated based on a thorough analysis of local supply and demand characteristics. David Lynn is managing director and head of U.S. research and investment strategy with ING Clarion based in New York.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9436312913894653, "language": "en", "url": "https://www.preventionweb.net/news/view/72269?a=email&utm_source=pw_email", "token_count": 1380, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.058349609375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:d599bd86-eaa0-4765-8349-a2fafbc1b8bf>" }
The COVID-19 crisis demonstrates the need to invest much more in pre-disaster risk reduction and preparedness for a range of risks, including climate change and its accompanying hazards such as flooding. But what, asks Swenja Surminski, does this kind of investment need to consider in practice – and why is ‘resilience’ not already widely taken into account when making policy and investment decisions? Current public health and environmental crises are illuminating that the inherent systems we have in place around the world to deal with emergencies need to be re-evaluated. There is an urgent need to adjust financial flows towards increased investment into ways to reduce the risk from a hazard event before it has happened (known as ‘pre-event’ or ‘ex-ante’ disaster risk reduction). However, for policymakers or investors the old adage that ‘prevention is better than cure’ does not always hold water: preventative measures tend to be seen as a cost, with uncertain or distant rewards, and they lose out to more immediate action. As a result, decision-makers still undervalue investment in pre-disaster resilience due to its political unattractiveness, even though evidence shows that strengthening resilience is hugely cost-effective and can generate multiple benefits. This has caused a major imbalance in disaster funding, with significantly more spent on recovery and repair than on risk reduction and increasing resilience. The idea of resilience has been promoted for a long time. In terms of global commitments, this has manifested through the UN’s Hyogo Framework and its successor, the Sendai Framework for Disaster Risk Reduction 2015–2030, the Sustainable Development Goals and the Paris Agreement on climate change. Yet it is often unclear what the term means and how to capture it when making policy or investment decisions. Different disciplines apply different concepts when assessing resilience – from ‘robustness’ to ‘bouncing back’ and ‘bouncing forward’ in the face of shocks. A commonly used definition is the one provided by the United Nations Office for Disaster Risk Reduction (PDF): “the ability of a system, community or society exposed to hazards to resist, absorb, accommodate to and recover from the effects of a hazard in a timely and efficient manner, including through the preservation and restoration of its essential basic structures and functions”. Importantly, resilience needs a holistic understanding of risks and risk drivers, taking into account how risks interact and what this means for aims and ambitions of individuals, companies or countries. Resilience can also have a transformational aspect when we consider future risks and how to reduce and prepare for these. The Grantham Research Institute is currently carrying out flood resilience work at the community level in cities in Germany and Eastern England, and works with partners across several developing countries to develop local resilience concepts in the face of rising risk levels. In that context, we consider resilience as a holistic strategy (PDF) to help communities move ahead in a sustainable way – that is, by pursuing social, ecological and economic development goals while managing the risk of flooding over time in a way that mutually reinforces these goals. As such, achieving resilience is not just a matter of selecting one strategy – for example, in the flooding context, building a dyke. True resilience can only be achieved through a strategy that employs financial, human, natural, physical and social capitals. 
This is particularly important in the context of climate change, where we know that today's decisions will determine tomorrow's risks. A lack of regard for future risk can lead to expensive lock-ins: where and how we build today's infrastructure, housing or community structures will shape the lives of current and future generations. Once something is built it becomes costly to adjust, move or upgrade. Disregarding future risk means that desperately needed investments will only have short-lived benefits. Therefore, climate resilience needs to be an essential component of current and future planning and decision-making to ensure that previous gains in poverty reduction and economic prosperity are not wiped out by adverse climatic impacts. However, as our recent analysis of risk governance shows, there is still a prevailing focus on post-event response and recovery strategies and a lack of recognition of the importance of investing in risk reduction strategies proactively. Our research with ODI and the World Bank, and work with IIASA and partners in the Flood Resilience Alliance, has demonstrated why ex-ante action is so important via a 'triple dividend of resilience investment' framework. These three dividends are: avoided losses and saved lives when disasters strike; unlocked development potential, because lower background risk encourages investment, innovation and longer-term planning; and the wider social, environmental and economic co-benefits that resilience investments generate even if a disaster does not occur for many years. This mindset of preparedness particularly applies to the recovery process from the COVID-19 pandemic: any short-term emergency measures and long-term stimulus spending must aim to create a greener and more resilient future. This can be a catalyst to 'build back better', incorporating both resilience and preventative thinking in post-event action strategies. Furthermore, a holistic approach is important for avoiding silo thinking, which is dangerous in both public health and natural disaster resilience strategies. We are facing complex challenges and will only succeed if we understand how we can cope with interconnected and compounding risks. So far, research and management approaches on natural disaster resilience have primarily focussed on strengthening the resilience towards singular events. Yet, with the impacts of COVID-19 expected to last from several months to years, the occurrence of compounding and consecutive disasters is more likely. This requires urgent action from researchers to develop and provide evidence and guidelines for policymakers on how to build multi-risk resilience while responding to the current COVID-19 crisis. Importantly, this also needs to move beyond the traditional view of relying on 'hard' engineering and infrastructure solutions only. Human, social and indeed natural capital are hugely important for building resilience, but often overlooked when designing risk strategies, as our research has found (PDF). A 'building back better' approach should guide any recovery effort, regardless of the context or discipline, as it can guide us onto a healthier and more sustainable path for future generations. It is encouraging to see 'resilience' reaching the headlines again and being identified as a goal for recovery and stimulus by national governments and international financial organisations. Now it is essential to fill this with real meaning, and to protect lives and livelihoods, alongside promoting effective ex-ante (preventative) efforts to be undertaken by governments, businesses and communities. Our triple dividend of resilience mindset coupled with post-disaster adaptation support is a strong strategy to reduce our world's ever-increasing vulnerability to the effects of extreme weather events and public health crises.
The views in this commentary are those of the author and do not necessarily represent those of the Grantham Research Institute.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9563717246055603, "language": "en", "url": "http://thespoke.earlychildhoodaustralia.org.au/childcare-package-neither-bold-or-sustainable/", "token_count": 1351, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.35546875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:87c208a3-5ead-4143-95a3-43ef0a0bb927>" }
The government’s new childcare package, which will not come into full force until July 2017, tinkers around the edges. The package will simplify the benefits system, increase subsidies for those on lower incomes, and extend subsidies to include nannies. Subsidies are now linked to “benchmark” prices in an effort to reduce inflationary pressures, a decision that could actually be detrimental to parents in the longer term. Bold policies such as universal early childhood education and care (ECEC) or a radical improvement in the quality of ECEC are not the order of the day. The government says its objective is to help parents who want to work or work more. The relationship between childcare policies and women’s labour supply is an area I and the University of Canberra’s Xiaodong Gong have extensively researched. There are a number of reasons why governments might subsidise childcare, not as a handout, but as a sensible policy tool. One reason put forward is that it will pay for itself — subsidies to childcare result in more women working and the increased tax revenue will more than offset the cost of the subsidy. A second reason is that investment in ECEC yields improved educational outcomes for children, resulting in better-educated, more productive and happier citizens. If credit constraints or lack of information lead to under-investment in education, then there is a role for government intervention. A third reason is, in a society where women have traditionally borne the burden of care for young children, childcare subsidies help women enter and remain in the work force. So, childcare is a crucial pillar in any set of policies designed to enhance gender equality in society. Do these arguments stand up in Australia? Will childcare subsidies pay for themselves? No. Based on Australian data from the late 2000s, each dollar of subsidy returns only about $0.14 of tax revenue. So a dollar of subsidy, after taking account of increased work hours of mothers, costs about $0.86. And this excludes government costs of program administration. These estimates are based on small changes to the currently existing program of childcare benefit (CCB). Might responses to radical changes to the childcare system be really different? Probably not, but it’s not impossible. Without data to inform, the hopeful can continue to speculate. Is subsidising childcare worth it? Determining whether money spent on childcare is “worth it” in terms of educational outcomes and women’s equality is harder to answer. While these outcomes have value, they don’t come with a price and reasonable people can disagree about their worth. We know from overseas that participation in childcare appears to help educational outcomes, particularly for children from lower income families. For middle- and upper-middle class families, the results are mixed with some studies saying that childcare attendance can actually be harmful for future educational outcomes relative to children staying at home with a well-educated carer. In Australia, there are no broad, representative studies. The few existing, observational studies do not convincingly address the problems that arise from unobserved differences between those children who attend childcare and those who do not. Some studies find small positive effects of ECEC whereas others find either no effect of childcare attendance or a slight negative effect on teacher assessments of children’s knowledge or year 3 NAPLAN tests. We need more evidence in this area. What about gender equality? 
This is difficult to quantify satisfactorily, but what is clear across developed countries is a general pattern where countries with free or heavily-subsidised ECEC have higher fertility rates and higher labour force participation by women. It’s hard to identify the “childcare” effect, as such policies are often bundled with a wide set of policies aimed at supporting gender equality and broader choices for both women and men when it comes to balancing work and family. The likely impact of the new policy Women from lower-income families tend to respond more to childcare subsidies than those from wealthier families. This is not surprising, as childcare represents a larger share of the household budget for poorer families. So positive labour supply effects should ensue from tilting subsidies towards those who are less well off. While simplification sounds good, childcare is an area where having two policy levers is valuable. Our research shows that responses to subsidies (like CCB) are very different than responses towards tax rebates — like the original childcare tax rebate (CCTR), which actually was a tax rebate, not a subsidy under a different name. Tax rebates are less expensive (a dollar of rebate only costs around $0.73) and produce larger labour supply increases. However, the wealthier (with taxable income and higher marginal tax rates) benefit more from tax rebates. This suggests a policy where subsidies are targeted at lower-income families and an across the board tax rebate is (implicitly) targeted at wealthier families. It is unclear what effect the linking to benchmark prices will have. We do not know much about the supply side of childcare – to what degree do subsidies simply flow on as higher prices? How will these benchmark prices be adjusted over time? Childcare prices have increased at a faster rate than inflation over the last 15 years, so failing to index the benchmark prices or indexing them only to a broad measure of inflation will, over time, result in substantial real decreases in support to families. Nannies are not the answer It is difficult to see how a policy of subsidising nannies fits into this. This part of the policy will not pay for itself for the same reasons that no childcare subsidy pays for itself. The research about ECEC benefiting children is based upon classroom interactions and integration into the educational process not segregation with a nanny. We do not know whether children cared for by nannies do better or worse at school than those who have stayed home with a parent; nor how their performance rates against those who have been in long day care. Australian ECEC continues to suffer from quality issues, despite the National Quality Framework. The typical childcare worker has qualifications and teaching skills well below those of the typical preschool or primary school teacher. The typical nanny will have no job-specific educational qualifications and less training than the typical childcare worker. Will we allow long day care centres to hire less qualified workers than they now do, call them “nannies” rather than “childcare workers” and still have access to subsidies? Subsidising nannies looks more like an appeal to a particular demographic rather than a well-thought out attempt to improve outcomes for children.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9591833353042603, "language": "en", "url": "http://www.opentextbooks.org.hk/ditatopic/32713", "token_count": 232, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.0274658203125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:b37a253a-f7d7-4e54-ab8a-f6831ae6bc1e>" }
After a pay system has been developed, we can begin to look at specific methods of paying our employees. Remember that when we talk about compensation, we are referring to not only an actual paycheck but additional types of compensation, such as incentive plans that include bonuses and profit sharing. We can divide our total pay system into three categories: pay, incentives, and other types of compensation. Pay is the hourly, weekly, or monthly salary an employee earns. An incentive, often called a pay-for-performance incentive, is given for meeting certain performance standards, such as meeting sales targets. The advantage to incentive pay is that company goals can be linked directly to employee goals, resulting in higher pay for the employee and goal achievement by the organization. The following are desirable traits of incentive plans: - Clearly communicated - Attainable but challenging - Easily understandable - Tied to company goals Table 6.3 "Types of Pay" illustrates the three types of compensation. Most organizations use a combination of pay, incentives, and other compensation, as outlined in Table 6.3 "Types of Pay" , to develop the total compensation package.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9618045091629028, "language": "en", "url": "https://cryptotips.eu/en/knowledge-base/what-is-cryptocurrency-mining/", "token_count": 1480, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.050048828125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:f2721bc0-a29e-465e-9b48-f2d9e8101f0b>" }
What is cryptocurrency mining? If you are interested in buying Bitcoin, you can easily become a digital currency owner. There are many different brokers that offer Bitcoin and other cryptocurrency, to a wide audience all over the world. However, these coins have to come from somewhere because the amount of coins keep growing every day. Not every crypto has the same properties, but with a large number of coins, new coins come into circulation with cryptocurrency mining. Miners use special equipment to verify and process transactions on the blockchain. In return, they receive a reward of new cryptocurrency. This is only the case with Proof of Work cryptocurrency, but more on that later. No central bank but there are still checks Many different cryptocurrencies are known to be decentralized systems. The purpose of these cryptocurrencies is to create a payment method in which central banks don’t play a role. The central bank that you know from our regular bank account, performs checks on all transactions that we do. This way we know for sure that everything is going well. Because cryptocurrency does not have a central party that verifies transactions and security, another system has been invented, it is called the blockchain. The blockchain is a large ledger that contains all transactions performed with a certain crypto coin. These transactions are bundled in blocks and then added to the blockchain one by one. As the name blockchain suggests, it is a long chain of blocks containing transactions. Within the network of a digital coin like Bitcoin, each node in the coin’s network has a copy of the blockchain. This node can simply be a user’s computer with special software. When a new block is added to the blockchain, the blockchain can be compared to all other copies. If the copies match, the transactions are secure and approved. When something differs between the copies, it is immediately clear that something is not right. This is a pretty simple explanation of the blockchain, but the blockchain system is also in reality not even that complicated at all. You can read more about blockchain in our knowledge base. Miners add new blocks to the blockchain Now that you know how the blockchain works, it is good to know how these blocks end up on the chain. A block can contain many different transactions. When you, as a user, add a block to the blockchain, you get a reward in cryptocurrency. This sounds very good, because it sounds like free money, but adding a block to the blockchain is easier said than done. When you want to add a block to the blockchain, you first have to solve a mathematical problem. These calculations need a lot of computing power. These mathematical problems are solved by so-called miners. Miners are people who have special computers to solve these problems as quickly as possible. If you are the first one with the right solution, you may add the block to the blockchain and you will get your reward. A simple laptop is by far not strong enough for these calculations. Many miners put thousands of euros/dollars in hardware to build the fastest cryptocurrency miner. With Bitcoin, a new block is added to the blockchain every 10 minutes. So every 10 minutes, a miner finds a new block and can get his reward. Receiving free cryptocurrency sounds very attractive to many people. So, that is why there are thousands of miners who try to be the fastest in solving the mathematical problem. 
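In Bitcoin's case, the "mathematical problem" is a hash puzzle: miners repeatedly hash the candidate block with different nonce values until the result falls below a difficulty target. The toy sketch below illustrates the idea only; it is far simpler than the real Bitcoin protocol.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Find a nonce so that SHA-256(block_data + nonce) starts with `difficulty` zero hex digits."""
    target_prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target_prefix):
            return nonce  # proof of work found; the block can now be added
        nonce += 1        # otherwise keep guessing

winning_nonce = mine("block containing bundled transactions", difficulty=4)
print("winning nonce:", winning_nonce)
```

The only way to win is to try an enormous number of guesses, which is why specialised hardware and raw computing power matter so much.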
To ensure that blocks don’t get added to the blockchain faster than 10 minutes, the system adjusts the difficulty of the calculation. This has the advantage that the security of the network is also increased. For example, if you want to hack into Bitcoin’s system, you’ll need more than half the computing power of the whole network combined. This is almost impossible, that is why attacks on Bitcoin are almost impossible. There are many miners at the moment and the difficulty level is very high, it is almost impossible for one party to find a block (the mathematical problem to be solved). As a result, mining is done in a mining pool. Miners offer their computing power to a mining pool and get a percentage of the yield. The percentage depends on how much computing power is offered to the pool. In this way, all miners join forces and are guaranteed to be paid out. Without a pool, a lot of computing power could be added to the network with the chance that the problem will not be solved. That means high costs, but no rewards. Proof of Work The question is why a calculation needs to be solved when a new block is found on the blockchain. This is a piece of security within the network of a cryptocurrency. The computing power of a computer must be high enough to be able to process all transactions in a block. These transactions must pass through an algorithm. By solving the calculation you prove that your computer is powerful enough to process everything properly. This provides security within the network. Because there is no central party that controls everything, users must be able to trust each other. So this test helps with that. Proving that you have enough computing power is called Proof of Work. If you want to know more about Proof of Work, you can read more about it on our website. There are many supporters and opponents of this system. Every 10 minutes Bitcoin has a miner who gets his hands on new, fresh coins. But you will earn less and less Bitcoins per block as time goes by. Every 4 years Bitcoin halves the reward for adding a block to the blockchain. At the launch of Bitcoin in 2009, you received 50 Bitcoins per solution of the calculation. Nowadays, that’s a lot less (6.25 BTC). When all Bitcoins are mined, there will no longer be a halving. So, the reward for adding a block will also disappear. However, the mining will remain popular, because you will also get the transaction cost per block as a reward. Should you start mining yourself? If you want to mine cryptocurrency yourself, you should take a good look at which coin is the most profitable. For mining you can’t just use a laptop, you will have to invest seriously in both hardware and power. Besides, a computer that is turned on day and night consumes a lot of processing power and is not very energy efficient. For Bitcoin to be mined, the investments for a starting miner are way too high. So, take a good look at the different mining options and various equipment. By the way, we don’t recommend mining on your own with the goal of making a profit. It might be a fun hobby project, but it is going to be very difficult to make a profit out of it. To be in the plus, you will need to earn back your investment of the equipment. Besides, energy costs are relatively expensive in most countries. If you have an abundance of energy from your solar panels, for example, then it will be a lot more interesting, but still very difficult to get your investment out of it. 
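To make the reward schedule concrete, here is a small sketch of the halving arithmetic described above, together with how a pool member's proportional share works (the 2% contribution is just an example):

```python
def block_reward(halvings: int, initial_reward: float = 50.0) -> float:
    """Bitcoin block reward after a given number of halvings."""
    return initial_reward / (2 ** halvings)

for n in range(4):
    print(n, "halvings ->", block_reward(n), "BTC per block")
# 0 -> 50.0, 1 -> 25.0, 2 -> 12.5, 3 -> 6.25 (the current reward mentioned above)

# A miner contributing 2% of a pool's computing power earns roughly 2% of each
# block reward the pool wins, before any pool fee is deducted.
print(0.02 * block_reward(3), "BTC")  # -> 0.125
```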
In recent years, the mining difficulty has been increasing rapidly for virtually all cryptocurrencies. Because of this, your actual profits may well turn out lower than you calculated beforehand. Also, keep in mind that mining hardware often makes a lot of noise, so it is not convenient to place it in the living room or bedroom.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9633602499961853, "language": "en", "url": "https://solarlove.org/50-say-solar-power-is-important-to-the-future-of-energy/", "token_count": 424, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.1015625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:08e52151-e400-4d1c-98e1-11ed7cfffd6f>" }
SolarCity and CleanEdge teamed up recently to ask Americans their opinions on solar energy. According to Green Tech Media, the survey shows more than 50% of respondents say solar power is important to the future of energy. Wind power was slightly behind at 44%. Cost is still the biggest issue for homeowners thinking about their energy future. Despite the price of photovoltaic solar panels falling dramatically over the past 10 years, a complete solar power system, including an inverter and all the software necessary to interface properly with the traditional electric grid, can cost from $15,000 to $30,000 dollars, depending on where the home is located and the amount of electricity a particular family needs. When it comes to investing in clean energy for their own home, saving money is more important than environmental concerns. More than 80 percent of 1,400 respondents identified financial factors as their top reason for considering alternatives. “Returns trump sustainability,” said Ron Pernick, lead author of the report and managing director of Clean Edge. In most cases, consumers preferred products with a relatively low upfront cost. LEDs were the most popular choice, with about one-quarter of respondents saying they planned to purchase at least five LED bulbs in the next year. When it comes to larger clean energy purchases such as rooftop solar, prices will need to fall even further in order to spark growth says SolarCity CEO Lyndon Rive. He says the sweet spot for a 20-year solar lease is 15 to 20 percent less than retail electricity rates. If the savings are less than that, most people aren’t interested in switching to solar power. It will be interesting to see how storage batteries will affect the number of homeowners considering a solar panel system. Tesla is expected to unveil its new storage battery for residential use this week. Other companies like Aquion are preparing products for the home storage market as well. Will home battery prices fall as rapidly as solar panel prices did over the past 10 years? That could be just the economic incentive needed to convince a lot more homeowners to install a solar system on their homes.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.925319492816925, "language": "en", "url": "https://www.cloudcomputingoutlook.com/news/overcoming-cloud-computing-security-challenges-nid-501.html", "token_count": 564, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.154296875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:1521f01a-1ea0-4403-b08d-f1ddd32ac7d6>" }
Fremont, CA: In terms of management, access, and scalability of the business, the cloud offers significant benefits over the traditional platforms. However, the cloud also brings along certain security risks. Traditionally, these risks are associated with denial of service, data loss, malware, and system vulnerabilities. Recent studies have shown that the new threats in the cloud environment are centered around decisions on cloud strategy and implementation. Data breaches are cybersecurity incidents or simple attacks where sensitive or confidential information is viewed, stolen, or used by an unauthorized individual. This can damage the reputation of the company as a result of mistrust from customers and partners. It can also lead to the loss of Intellectual Property (IP) to competitors, impacting the release of a new product. In the long run, data breaches can have a significant impact on the company's brand and market value. Organizations need to define the business value of data and the impact of their loss. Protection of data starts with the question of who has access to it. Data accessible through the internet is the most vulnerable. Encryption techniques can help protect data, but also make the interface less user-friendly. Implementing a robust and well-tested incident response plan that considers the cloud provider and data privacy laws can help data breach victims recover from the impacts. Misconfiguration and Inadequate Change Control Setting up computing assets incorrectly leaves them vulnerable to malicious activity. Unsecured data storage elements or containers, unpatched systems and logging or monitoring left disabled, unchanged default credentials and configuration settings, excessive permissions, standard security controls left disabled, and unrestricted access to ports and services are some examples of asset misconfigurations. The impact of these misconfigurations is dependent on the nature of misconfiguration and the time taken to detect and resolve it. Cloud-based resources can be dynamic and complex and are also challenging to configure. Traditional change management controls and approaches are inadequate for the cloud. Organizations should look to embrace automation and technologies that can continuously scan for misconfigured resources and remediate problems in real-time. Inadequate Cloud Security Architecture and Strategy Implementing proper security against cyber attacks is one of the most significant challenges organizations face while migrating IT infrastructure to the public cloud. Migrating to the cloud is not as simple as the traditional lift and shift methods. Secure movement, deployment, and operation in the cloud are dependent on proper security architecture and strategy. Successful cybersecurity attacks can lead to financial loss, reputational damage, legal repercussions, and fines.
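As a sketch of the kind of automated misconfiguration scanning recommended above, a script can sweep an inventory of cloud resources and flag the problem patterns listed in this article. The resource records and field names here are hypothetical placeholders; a real implementation would pull the inventory from the cloud provider's APIs.

```python
# Hypothetical inventory records; in practice these would come from the provider's APIs.
resources = [
    {"name": "customer-db", "public": True, "default_credentials": False,
     "logging_enabled": False, "open_ports": [22, 3306], "permissions": "admin-for-all"},
    {"name": "web-frontend", "public": True, "default_credentials": True,
     "logging_enabled": True, "open_ports": [443], "permissions": "least-privilege"},
]

def audit(resource):
    """Return a list of misconfiguration findings for a single resource."""
    findings = []
    if resource["default_credentials"]:
        findings.append("default credentials unchanged")
    if not resource["logging_enabled"]:
        findings.append("logging/monitoring left disabled")
    if resource["public"] and 3306 in resource["open_ports"]:
        findings.append("database port exposed to the internet")
    if resource["permissions"] == "admin-for-all":
        findings.append("excessive permissions")
    return findings

for r in resources:
    for finding in audit(r):
        print(f"{r['name']}: {finding}")
```

Running checks like these continuously, and wiring the findings into automated remediation, is what closes the gap left by traditional change management controls in dynamic cloud environments.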
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9627665281295776, "language": "en", "url": "https://www.principlesofaccounting.com/chapter-10/equipment-leases/", "token_count": 363, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": -0.0107421875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:89d08ceb-811b-4ddd-abde-24768025c8cf>" }
Many businesses acquire needed assets via a lease arrangement. With a lease arrangement, the lessee pays money to the lessor for the right to use an asset for a stated period of time. In a strict legal context, the lessor remains the owner of the property. However, the accounting for such transactions looks through the legal form, and is instead based upon the economic substance of the agreement. For leases generally exceeding one year the applicable accounting rules dictate that the lessee account for a leased asset as though it has been purchased. The lessee records the leased right as an item of property, plant, and equipment, which is then depreciated over its useful life to the lessee. The lessee must also record a liability reflecting the obligation to make continuing payments under the lease agreement, similar to the accounting for a note payable. Such transactions are termed financing leases. Note that the basic accounting outcome is as though the lease agreement represents the purchase of an asset, with a corresponding obligation to pay it off over time (the same basic approach as if the asset were purchased on credit). Short-term leases are known as operating leases. Rent is simply recorded as rent expense as incurred and the underlying asset is not reported on the books of the lessee. Why all the trouble over lease accounting? Think about an industry that relies heavily on financing lease agreements, like the commercial airlines. One can see the importance of reporting the aircraft and the fixed commitment to pay for them. To exclude them from the financial statements would fail to represent the true nature of the business operation.

Did you learn?
- Who is a lessee, and who is a lessor?
- Cite some possible advantages of a lease.
- Describe general principles of accounting for leases.
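Returning to the financing-lease treatment described above, a minimal sketch of the initial measurement is shown below: the lessee records the leased asset and a matching liability at the present value of the committed payments, then depreciates the asset over the lease term. The payment amount, term, and discount rate are illustrative assumptions only.

```python
def lease_initial_measurement(annual_payment, years, discount_rate):
    """Present value of the lease payments, recorded as both the leased asset and the lease liability."""
    return sum(annual_payment / (1 + discount_rate) ** t for t in range(1, years + 1))

pv = lease_initial_measurement(annual_payment=10_000, years=5, discount_rate=0.06)
print("Leased asset and lease liability:", round(pv, 2))         # ~42,123.64
print("Straight-line depreciation per year:", round(pv / 5, 2))  # ~8,424.73
```

Each subsequent payment is then split between interest expense and a reduction of the liability, much like payments on a note payable.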
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9586783051490784, "language": "en", "url": "https://www.wallstreetmojo.com/adverse-opinion/", "token_count": 1158, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.17578125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:32b06ab8-0a6f-47e9-925d-fddf3a4a5d35>" }
What is Adverse Opinion? An adverse opinion provided by the statutory auditor in his audit report denotes that the financial statements of the company do not show a 'true and fair' view of the business practices of the organization and have been misrepresented or misstated. The statutory auditor is responsible for giving his view on the truth and fairness of the financial statements prepared by management at the end of the financial year, which are meant to reflect the business practices of the organization. Upon performing his audit procedures, the auditor tries to obtain sufficient and appropriate audit evidence to verify the data provided in the financial statements of the entity. After collecting the audit evidence, the auditor forms his opinion on the fairness of the financial statements provided by the entity.

Example of Adverse Opinion
In the financial year 2018-19, a company faced an extraordinary event (an earthquake) in which a large part of its business activity was destroyed. These circumstances indicate material uncertainty about the company's ability to continue as a going concern, and it may therefore not be able to realize its assets or pay off its liabilities during the normal course of its business. The financial statements and the notes to the financial statements do not disclose this fact. The auditor is required to draft his opinion accordingly. In this case, the failure to disclose the destruction of business due to the earthquake clearly means that the financial statements do not provide a true and fair view of the organization. So the auditor needs to provide an adverse opinion in his audit report for the financial year 2018-19, and it would be worded as below:

In our opinion, because of the omission of the information described above, the financial statements do not provide a true and fair view as per the requirements and also do not provide the information required to be reported under the applicable accounting principles:
- In the case of the balance sheet, the state of affairs of the company as on 31st March 2019
- In the case of the profit & loss statement, the profit/loss for the year ended 31st March 2019
- In the case of the cash flow statement, the cash flows of the company for the year ended 31st March 2019

Why is Adverse Opinion Important?
- When the statutory auditor obtains the evidence required for the audit and comes to know that there are misstatements, he asks management to rectify them. If management rectifies those misstatements, he gives an unqualified opinion; but if management does not rectify them and the misstatements are so significant that he cannot merely give a qualified opinion, he gives an adverse opinion.
- If he identifies fraud in the organization in which management is also involved, and he asks management to disclose it in the financial statements but management refuses, and the matter is so significant that he cannot simply qualify the report, he should give an adverse opinion.
- It is important for the stakeholders of the company. Shareholders are the owners of the company and need to know its financial situation because they have invested their money in the organization. Banks need to know the actual condition of the organization and whether the company is in a position to repay the loan and interest amount.
The government needs to know that the company is following all the rules and regulations and paying its statutory dues on time. Since all stakeholders have some interest in the organization, if the auditor comes to the conclusion that the financial statements do not give a true and fair view, or are not prepared in accordance with the applicable laws and regulations, he should give an adverse opinion.

Difference Between Adverse and Disclaimer
- Adverse Opinion – As explained, if during the audit the auditor obtains information and documents showing that there is a material misstatement or fraud, and management is not willing to rectify the information or disclose it in the financial statements, or the company's internal control is weak, or management tries to restrict the scope of the audit and is not willing to lift the restriction, the auditor should first communicate this to upper-level management. If upper-level management also does not lift the restriction, he should communicate it to those charged with governance and give an adverse opinion. In his audit report, when he gives an adverse opinion, he writes that he has obtained sufficient and appropriate evidence and that, on that basis, in his opinion the financial statements do not give a true and fair view or are not prepared in accordance with the applicable law.
- Disclaimer – If during the audit the auditor is not getting information from management, management restricts him from obtaining evidence from outside parties, and he cannot get sufficient evidence from any source, then where there may be a material misstatement for which he lacks sufficient and appropriate evidence, and the possible effect is so significant that he cannot merely qualify the opinion, he gives a disclaimer of opinion. In his audit report, he writes that he was not able to obtain sufficient and appropriate evidence and is therefore unable to give an opinion on the financial statements.

In short, when the financial statements do not provide all the required information, and the statutory auditor, after conducting the audit and on the basis of all the evidence collected, comes to the conclusion that the financial statements do not give a true and fair view, he discusses this with management and those charged with governance and, after that communication, gives an adverse opinion. This has been a guide to what an adverse opinion is. Here we discuss the types of opinion and why it is important, along with an example and the differences between adverse and disclaimer opinions.
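The distinction can be summarized as a simple decision rule. The sketch below is only an illustration of the logic described above (using 'pervasive' for misstatements or possible effects so significant that a qualified opinion is not enough); it is not a substitute for the auditing standards themselves.

```python
def audit_opinion(evidence_obtained: bool, severity: str) -> str:
    """severity of the (possible) misstatement: 'none', 'material', or 'pervasive'."""
    if evidence_obtained:
        if severity == "none":
            return "unqualified opinion"
        # Misstatement exists and management will not correct or disclose it.
        return "adverse opinion" if severity == "pervasive" else "qualified opinion"
    # Sufficient appropriate evidence could not be obtained.
    return "disclaimer of opinion" if severity == "pervasive" else "qualified opinion"

print(audit_opinion(True, "pervasive"))   # adverse opinion
print(audit_opinion(False, "pervasive"))  # disclaimer of opinion
```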
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9450212121009827, "language": "en", "url": "https://aucgroup.net/bridging-the-wastewater-treatment-investment-gap/", "token_count": 133, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0081787109375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:bf690f05-77fc-43a0-8964-a2363815aba7>" }
According to the ASCE’s 2017 Infrastructure Report Card, there are 14,748 wastewater treatment plants in the U.S. and by 2032, more than 56 million new users will be connected to centralized wastewater treatment systems. The investment gap for drinking water and wastewater infrastructure between 2016 and 2025 is estimated to be around $105 billion. Additionally, water services receive less than 5 percent of the federal government funds when compared to the other major infrastructure categories. With the limited amount of federal funding in the wastewater industry, it is critical to have alternative financing options to install, upgrade and repair wastewater facilities. After all, wastewater treatment is critical to protect public health.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9321699738502502, "language": "en", "url": "https://ilsr.org/rule/land-use-policy/", "token_count": 515, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.482421875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:b24f18f2-45af-498b-960b-0facfd617b94>" }
Largely a post-World War II phenomenon, the word sprawl describes what its name evokes: formless, spreading, inefficient consumption of land. A "sprawling" landscape generally has no center and few public spaces where people congregate. Many Americans feel that sprawling development has accrued too many costs: The environment has suffered as Americans make more and more vehicle trips, new houses gobble up farmland and scenic countryside and new sewer lines and septic tanks damage the water supply in many areas. Civic participation also suffers as we spend more time stuck in traffic, know fewer of our neighbors, and inhabit a privatized landscape with few public squares or "third places". In addition, as varying ethnic groups and social classes live in isolation from each other, there is less of a sense of unity and shared fate. The sprawl model also negatively affects small locally owned stores. When permissive zoning laws allow large megastores to locate on the outskirts of town (with generous tax breaks often thrown into the deal), money is siphoned away from the local businesses, further undermining a sense of place and community. (See New Rules Project's Retail Sector for more about this problem. Also see Stacy Mitchell's book The Hometown Advantage: How to Defend Your Main Street Against Chain Stores and Why It Matters.) This section offers several policy measures that encourage a more efficient use of land that fosters civic participation and social interaction. The state of Vermont uses a Land Gains Tax to protect rural land from short-term speculation. First effective in 1973, the tax imposes very high taxes on sales of land held a short time and sold for a large profit. The land gains tax is imposed on the gain from the sale or exchange of Vermont land that was held less than six years, and the land is not part of the first ten acres beneath or contiguous to the seller's principal residence. Can a land tax reduce sprawl and strengthen urban economies? The evidence is persuasive though not conclusive. Political economist Henry George first proposed a land value tax over 100 years ago, as a way to eliminate land speculation and make more land available for production. Today, some observers hail it as a way to curb sprawl. Current property taxes are based on the value of property, reflecting both the land and structure value, in a proportion determined by local property assessors. Decisions to reinvest or remodel currently result in higher assessment valuations and thus higher taxes.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9498363733291626, "language": "en", "url": "https://searchsecurity.techtarget.com/tip/Information-risk-management-Defining-the-scope-methodology-and-tools", "token_count": 1669, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.033935546875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:29477274-e02c-4fc5-be51-90ad0ad07bf2>" }
In this installment of the Risk Management Guide, learn how to define the scope of the IRM team's responsibilities, the difference between qualitative and quantitative risk analysis, and the tools used to carry out risk analysis. Once you have solidified management's support, developed an IRM policy and established an IRM team, you need to define the scope of the IRM team's responsibilities and the risk assessment methodology the team will follow, and review some of the tools that the IRM team may be able to use. Before a risk assessment and analysis is started, the team carries out project sizing to understand what assets and risks will be evaluated. Most assessments are focused on physical security, technological security or personnel security. Trying to assess all of them at the same time can be quite an undertaking, so you should determine which issues the group will be addressing and those that it will not.

Risk analysis methodology
There are two types of approaches to risk analysis: quantitative and qualitative. Quantitative risk analysis attempts to assign real and meaningful numbers to all elements of the risk analysis process. These elements may include safeguard costs, asset value, business impact, threat frequency, safeguard effectiveness, exploit probabilities and so on. When all of these are quantified, the process is said to be quantitative. Quantitative risk analysis also provides concrete probability percentages when determining the likelihood of threats. Each element within the analysis (asset value, threat frequency, severity of vulnerability, impact damage, safeguard costs, safeguard effectiveness, uncertainty and probability items) is quantified and entered into equations to determine total and residual risks. Purely quantitative risk analysis is not possible, because the method attempts to quantify qualitative items, and there are always uncertainties in quantitative values. For example, if a severity level is high and a threat frequency is low, it is hard to assign corresponding numbers to these ratings and come up with a useful outcome. In most instances today, qualitative risk analysis is used. A quantitative method uses percentages, formulas and monetary values. The most commonly known and understood formulas are the single loss expectancy (SLE) and the annualized loss expectancy (ALE) methods. The formulas are:

Asset value x exposure factor = SLE
SLE x annualized rate of occurrence = ALE
This means that if the company does not make sure that it abides by the California privacy law by implementing a monitoring countermeasure and a process of telling customers of a potential compromise, the company could lose approximately $200,000. The IRM team goes through these steps to determine the potential loss that can be endured so that they know how much can be spent on mitigating this specific risk as it pertains to this one asset. The qualitative method The reason that a qualitative method is more commonly used than a quantitative method is because of the difficulty of assigning monetary values to assets, calculating the percentage of damage that could be endured (exposure factor) and deriving the probability of frequency of a threat becoming realized (ARO). Inserting a value, quantitative or qualitative, for probability is a difficult and potentially dangerous move. Most risk methodologies use some type of probability value to represent the likelihood of a threat being realized; i.e. a vulnerability being exploited. NIST uses the following categories and definitions: CobiT uses High, Medium and Low ratings, as does the Octave approach to risk management. Another approach to qualitative risk assessments is the Australian/ New Zealand approach which uses a percentage to represent probability, as shown in the following graphic. The difficulty in using metrics that represent the probability of a threat being realized is that it is very subjective and complex. In our ALE example, how would the team decide that the company would most likely experience the issue of not being compliant with the California law two times a year and not zero times or 15 times? There are many constantly changing variables that would have to be considered to properly forecast the correct likelihood of a vulnerability being exploited and the frequency of this taking place. For example, what is the probability of our company --StuffRUs -- not abiding by the California privacy law and reporting an exposure to its customers that live in California? This one question leads to many other questions; what is the probability of someone hacking into our database? What is the likelihood of someone hacking through our firewall, not being detected by our IDS, hacking through our access controls and encryption on the database? What is the probability of this type of incident going unnoticed? What is the likelihood of an internal employee or contractor doing this type of activity? How would we know that our California customers' data was accessed? And so on. This one question can easily take the risk analysis team up to three hours to answer, but even after all of this effort, how do they know they are right? As the environment changes and the threat agents change, the probability of this threat being realized changes. There are many complicated and complex ways of trying to assess the probability of a vulnerability being exploited, but in the end it is mainly guesswork. The team can review past performance data and review the patterns that correlate with this type of threat, but most companies do not have this type of past performance data to pull from. The team could look to the industry and see what other companies have experienced and try and use that as a baseline, but the different companies most likely use different types of technologies, processes and people so this is not necessarily a fair comparison. 
So most organizations use a qualitative approach, but the industry as a whole is trying to define metrics that can be used for quantitative analysis. Almost everyone would like to use a quantitative risk analysis approach since the exercise is carried out to basically figure out where to best spend the company's finite security budget. In this series I will be covering some risk management tools that use a quantitative approach and explain the pros and cons of these tools. So far we have developed our IRM policy, created our IRM team, defined the team's scope, and decided on if the team will be using a qualitative or quantitative approach to risk analysis. In the next article we will go through the steps of an actual risk analysis. The following are some resources that you can review to help accomplish the items addressed in this article: Risk management guide Introduction: Understanding risk An overview of the risk management process How to define an acceptable level of risk How to write an information risk management policy How to implement an effective risk management team Information risk management: Defining the scope, methodology and tools How to conduct a risk analysis How to deal with risk About the author: Shon Harris is a CISSP, MCSE and President of Logical Security, a firm specializing in security educational and training tools. Shon is a former engineer in the Air Force's Information Warfare unit, a security consultant and an author. She has authored two best selling CISSP books, including CISSP All-in-One Exam Guide, and was a contributing author to the book Hacker's Challenge. Shon is also the co-author of Gray Hat Hacking: The Ethical Hacker's Handbook.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9591279029846191, "language": "en", "url": "https://tehcpa.net/economy/tobins-q-is-the-tax-cuts-and-jobs-act-working/", "token_count": 1274, "fin_int_score": 5, "fin_score_model": "en_fin_v0.1", "risk_score": -0.02099609375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:b10fb87c-e173-42e2-9370-ef60d72f65a0>" }
Tobin's Q - Is the Tax Cuts and Jobs Act Working?

Determining the Economy's Health with Tobin's Q
One indicator that offers a reason to be hopeful, however, is called Tobin's Q. Tobin's Q, or the Q ratio, is named after James Tobin, renowned economist and Nobel laureate. Tobin's Q can be determined at a macro level (the economy as a whole) and at a micro level (a single company). Tobin described it as: "One, the numerator, is the market valuation: the going price in the market for exchanging existing assets. The other, the denominator, is the replacement or reproduction cost: the price in the market for newly produced commodities. We believe that this ratio has considerable macroeconomic significance and usefulness, as the nexus between financial markets and markets for goods and services."

Calculating Tobin's Q
Tobin's Q for a single company is calculated by taking the market value of that company and dividing it by the aggregate of what all that company's assets are worth. Should Tobin's Q be larger than 1, it follows that the market value of the business is larger than the total value of its respective assets. Essentially, we can safely assume that the whole company is worth more than the sum of its respective parts. This suggests that a sensible business move would be to expand the business by purchasing more assets. If the company can put new assets into use in the same profitable way as the existing ones, the net value of the company will rise. Incidentally, precisely the same theory can be used as a tool to analyze the economy in its entirety. Even the Federal Reserve includes a commonly used measure of Tobin's Q in its annual flow of funds report.

What is Tobin's Q Telling Us?
Now that we know the Fed diligently relies on its estimate of Tobin's Q, we can attempt to answer many questions, including why the United States is in the mess it is in, and how the TCJA may or may not come to the rescue. Remember that, when Tobin's Q is larger than 1, this indicates an economy that is in an expansion phase.

Is History Repeating Itself?
In the late 1990s, Tobin's Q skyrocketed while the market valuation of technology companies rapidly and astronomically outpaced their net assets. At the time, most considered this state a purely irrational bubble. In hindsight, however, we were witnessing a fundamental turn in the landscape of the American economy, which, to this day, is still dominated by technology firms. After the unprecedented technology expansion, however, our economy experienced an extended period of business investment that left much to be desired. Even in the face of extraordinary efforts by the Federal Reserve, Tobin's Q stayed below 1 for over a decade. Investment was overwhelmingly flowing into homebuilding, which was financed by brand new and horribly misunderstood debt instruments. The implosion of those debt instruments, combined with the Great Recession, left the Federal Reserve with a dilemma. On one side, unprecedentedly low interest rates could cause disturbingly unstable and unmanageable booms and, more importantly, busts. On the other side, raising interest rates drastically above zero could seal the deal for severe asset price crashes and a steep decline in investment.
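The ratio itself is simple to compute. A minimal sketch, using placeholder inputs rather than actual Fed or market data:

```python
def tobins_q(market_value, replacement_cost_of_assets):
    """Tobin's Q: market valuation divided by the replacement cost of the underlying assets."""
    return market_value / replacement_cost_of_assets

q = tobins_q(market_value=1_200_000, replacement_cost_of_assets=1_000_000)
print(round(q, 2))                                   # 1.2
print("expand" if q > 1 else "hold off on investment")
```

A reading above 1 suggests that putting new assets to work should create value, which is why a rising Q is read as a pro-investment signal.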
A very similar scenario played out when the Fed first attempted to normalize monetary policy by modestly raising interest rates in 2015. Global conditions at the time were alarming, domestic and international financial markets fell sharply, and the United States came close to a recession. In terms of Tobin’s Q, policy was putting too much strain on investors to actually invest; discouragement was widespread. Lawmakers were left puzzled: how were they going to normalize monetary policy while also encouraging investment?

Ramifications of Tax Reform on Monetary Policy

And then came tax reform. Analysts on the right argued that by lowering the aggregate tax rate on companies and permitting the immediate expensing of capital investment, tax reform would raise the return on investment and, with it, Tobin’s Q. Interestingly, following President Trump’s election the economy improved on the expectation that tax reform, one of his primary campaign promises, was on the horizon; sure enough, Tobin’s Q also rose. The Federal Reserve quickly resumed its normalization policy: interest rates rose and the balance sheet shrank. Add the rising Tobin’s Q factor, and that remains the state of affairs in the United States. There are, however, some red flags, most prominently the shake-up in the Executive Branch and insurmountable friction in Washington. Global conditions are also unfavorable: export growth is falling around the world, stock markets are troubled, and financial conditions are starting to tighten. Before long, the Federal Reserve may have no choice but to lower rates. To answer the main question: is the TCJA working? Most economists agree that the recent expansion is closely tied to the TCJA. It is unlikely that things would have gotten this far without tax reform both stimulating GDP growth in the short term and bolstering Tobin’s Q over the long term. Thomas Huckabee, CPA of San Diego, California, recognizes that many options exist when it comes to choosing the right CPA, and the right CPA is essential to guide your business through all the new legislation that has been introduced. Operating a full-service accounting firm, Tom also guides clients through the complicated question of how the new legislation affects your business.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9510144591331482, "language": "en", "url": "https://www.encyclopedia.com/social-sciences-and-law/education/education-terms-and-concepts/resource-allocation", "token_count": 6150, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": -0.042724609375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:61ca1116-09bc-4628-a7ba-efab84313e77>" }
Resource Allocation in Higher Education RESOURCE ALLOCATION IN HIGHER EDUCATION Institutions of higher education–be they large public universities or small private colleges–are not homogeneous organizations. Because of differing missions, goals, programs, histories, traditions, laws, and explicit procedures, they obtain and expend revenues, or financial resources, in myriad ways. Therefore, there is no universal model about the best way to allocate financial resources within higher education. Nevertheless, there is general consensus within the U. S. higher education community about the meaning of certain terms pertaining to resource allocation, as well as a general consensus about certain methods and processes for channeling financial resources into specific programs and projects. Budgetary Concepts and Terms For the layperson, the terms budget and allocation are often confused. Although the two terms are certainly related, and often synonymous, there are differences that one should be aware of in order to gain an appreciation of the resource allocation process. Broadly interpreted, the term budget represents both an institution's revenue sources and its expenditures. For public institutions, this side of the coin is usually comprised of legislative appropriations; tuition based on the number of credit hours and level of courses taken; contracts and grants–which comprise revenues received from external sources for research and certain types of off-campus program development; auxiliary operations–which refer to on-campus operations that are self-supporting, for-profit enterprises (such as the campus bookstore, cafeteria, and laundry); and local funds. Local funds, particularly within public universities, refer to those revenue sources not kept within the state treasury, but within local banks. Local funds may be comprised of fees and assessments charged against students for the support of campus-wide student activities, intercollegiate athletics revenues, concessions, and financial aid monies. Taken together, these revenue sources make up an institution's operating budget. They represent the totality of monies required to finance the institution's normal and recurring expenses (its core operations). However, this is not the complete picture, for the operating budget does not include fundraising revenues, which are monies donated to the institution by private donors, usually for specific purposes (such as endowed academic chairs, athletic scholarships, or a new academic program that is acceptable to the institution and a priority of the donor). Although fund-raising revenues have become ever more critical for institutional operations, they are rarely considered part of the traditional operating budget. Expenditures represent the most common understanding of the term budget. In this sense, the budget formally represents the institution's strategic priorities and associated costs. That is, the budget is a detailed plan for expending revenues for various institutional purposes. Moreover, these purposes are, or should be, focused on long-term strategic imperatives that parallel and support the accomplishment of the institution's most critical needs and aspirations. Traditionally, expenditures for the operating budget fall into certain main categories that apply to both public and private institutions. Certainly, the largest slice of the budget pie is earmarked for instruction and research (I & R)–the core activities of any college or university. 
At Florida State University, for example, approximately 70 percent of the operating budget is designated for I & R purposes. Other large slices of the budget pie include administrative support services, such as centralized computing and accounting services; student services, such as the registrar's office and financial aid; plant operations and maintenance, including grounds, building services, and utilities; and libraries. Expenditures from the operating budget are generally unrestricted. That is, there is some flexibility in allocating resources within and between the various categories that make up the operating budget. However, there are also restricted budgets, both within and external to the operating budget. Restricted means just that–monies can only be expended for strict, narrowly defined purposes. For example, within I & R, a public university could receive a restricted legislative appropriation to fund a Title IX (gender equity) program. Likewise, restricted budgets outside the operating budget may include monies earmarked for sponsored research or financial aid monies received from external sources, such as the federal government. For most core operations, whether financed by unrestricted or restricted budget expenditures, one should be aware of exactly how the monies are earmarked within the major expenditure categories. Generally, the monies fall into three main activities: (1) salaries and benefits, which are certainly the most costly activities; (2) capital outlay, which refers to major purchases of expensive equipment, such as computer systems; and (3) expense items, which include less expensive items and continuing costs such as office furniture, service contracts, expendable supplies, and travel. One critical budgetary category that is not considered a part of the traditional operating budget is fixed capital outlay, which comprises the monies earmarked for major construction and renovation projects. The auxiliary budget, also kept separate from the core, concerns the receipt and expenditure of monies obtained from revenue producing campus enterprises (e.g., a bookstore). Institutions with medical schools and teaching hospitals often have separate budgets for these purposes. Some institutions have service-center budgets, which refers to certain centralized services such as photography, printing, and copying. These services are not financed by operating budget expenditures. Rather, units under the umbrella of the service-center budget are reimbursed for their services by charging operating budgetary units, which, in turn, pay the service-center unit from operating budget expenditures, usually from the expense category. Allocation Concepts and Terms For the purpose of understanding the differences (and nuances) between the concepts of budget and allocation, one could say that the formal budget is the architecture (or basic plan per category) of how monies will be expended. Allocation, however, refers to the actual funneling of dollars to various units within an institution. In some instances, allocation flows will exactly mimic the expenditure categories. However, were this always the case, the descriptive analysis of budgets and allocations would end here. Rather, allocations often do (and should) have an element of flexibility built within them to reflect changing environmental conditions–including both internal and external environments, such as political circumstances, economic exigencies, and the strategic direction of the institution. 
Although most institutions do permit some flexibility within their allocation decisions, many eminent higher education leaders, such as Dr. James J. Duderstadt, the former president of the University of Michigan, have publicly noted that far too many allocation decisions have become overly mechanistic. This has become particularly true within large, public institutions, which have also publicly expressed their collective concern over the ineffective and inefficient ways that monies are allocated. In addition, the National Association of College and University Business Officers (NACUBO) has also publicly expressed concern about the deficiencies currently inherent within internal allocation systems and processes. Before discussing normative issues concerning how such deficiencies may be corrected, one must first understand the basic processes of allocations, particularly within the unrestricted I & R category. Historically, both academic and administrative units have relied upon incremental budgeting for determining allocations. Incremental budgeting simply means that the unit will sum the dollars contained within its current (annual) salary and benefits, capital outlay, and expense activities, and then increase the sum by a percentage to cover inflation and other expected cost increases. Incremental budgeting certainly simplifies the allocation process and facilitates accounting. With limited exceptions, incremental budget requests are accepted as forwarded to the central budget authority, funds are allocated according to the three major activities, and the unit lives within the allocations. At some institutions, academic and administrative units, with approval of a central budget authority, are able to transfer a minimal percentage of funds among salaries and benefits, capital outlay, and expense–if critical exigencies so demand. Nevertheless, this type of allocation system remains basically static. The problem with static allocation systems is that they are inherently unable to anticipate change. Duderstadt duly notes that within large public universities, legislative appropriations, in terms of real dollars, have continuously diminished since the 1970s. Diminishing public appropriations, coupled with the opportunities and threats posed by a volatile environment, limit an institution's ability to adapt. During extreme economic situations, static allocations based upon incremental budgeting could actually spell the death of a public institutions' major academic offerings. Another allocation process, often coupled with incremental budgeting, is formula-based allocation. This can be more flexible than simple incremental budgeting, because such formulas are usually based upon total credit hours or full-time head count per academic unit. This type of allocation process rewards those academic units that are most popular with students, and therefore does provide flexibility to fund those programs that are most in demand. Conversely, if an academic program is critical to a university's mission, but does not attract large numbers of students, it is automatically punished by formula-based allocations. In short, this is a market-based allocation process. While a for-profit organization can and should allocate its resources into the maintenance and expansion of its most profitable offerings, higher education institutions are striving for both tangible and intangible successes that may not necessarily be popular among students. 
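The arithmetic behind the two mechanisms just described is simple enough to sketch. The unit names, credit hours, uplift rate and pool size below are invented purely for illustration; they do not come from this entry or from any particular institution.

```python
def incremental_allocation(current_budget: float, uplift_rate: float) -> float:
    """Incremental budgeting: last year's total plus a flat percentage uplift."""
    return current_budget * (1 + uplift_rate)


def formula_allocation(total_pool: float, credit_hours: dict) -> dict:
    """Formula-based allocation: divide a pool in proportion to credit hours taught."""
    total_hours = sum(credit_hours.values())
    return {unit: total_pool * hours / total_hours
            for unit, hours in credit_hours.items()}


# Hypothetical academic units and teaching loads (invented figures).
units = {"Business": 48_000, "Engineering": 30_000, "Classics": 2_500}
pool = 60_000_000.0

print(f"Incremental: ${incremental_allocation(5_000_000, 0.03):,.0f} "
      f"(3% uplift on a $5,000,000 base)")
for unit, share in formula_allocation(pool, units).items():
    print(f"Formula-based share for {unit}: ${share:,.0f}")
```

The sketch makes the drawback visible: a low-enrollment but mission-critical unit such as the hypothetical Classics department receives only a sliver of a credit-hour-driven pool, however central it may be to the institution’s mission.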
Colleges and universities, recognizing the inadequacies of incremental and formula-based budgeting, have enacted certain allocation adjustments to enhance flexibility and the quality of certain programs. At one university, for example, a 1 percent flat tax was charged against the allocations to all academic units to replenish a central reserve fund and enhance certain graduate programs. However, according to a report by that university's provost, this type of allocation mechanism proved itself insufficient to meet most challenges facing the institution. In order to meet institutional objectives, and depending upon the authority granted to an institution by its governing board or its state legislature, an institution may be required to reduce allocations in one area to cover allocation demands in another. In order to meet the salary needs of the faculty, for example, resource allocations may be significantly diminished for libraries, computing systems, or facilities maintenance. Flat taxes and other short-term options, such as hiring adjunct faculty or downgrading positions, can only operate at the margin, however, because not enough financial resources are generated, particularly on a long-term basis, to solve problems resulting from a lack of allocation flexibility. Similarly, wholesale raiding of funds from one allocation category to fill the coffers of another, if permitted, can only serve to weaken the entire university structure over time. Whether the allocation process is incremental, formula-based, or stopgap in nature, such processes focus only upon short-term, year-to-year allocations. In 1999, Drs. Edward Ray and William Shkurti, the provost and senior vice president for finance, respectively, at Ohio State University, succinctly stated the problems accruing to that particular institution as a result of allocation inefficiencies: - Current practices were not supportive of the instructional mission. - Current practices were not supportive of the research mission. - Current practices did not provide sufficient incentives to reduce costs and/or generate additional revenues required to address academic priorities. - Current practices did not provide sufficient accountability for the costs of individual unit decisions that impact the entire university. Achieving Normative Consensus The problems inherent within traditional budgetary and allocation processes indicate the need for a new approach. Notwithstanding the fact that public institutions are further hampered by legislative mandates, private institutions also face the same problems inherent within incremental and formula-based allocations. The challenge facing higher education is to embrace new philosophies and outlooks that take a long-term, wide-ranging view of what the institution is, what it should be, and how it can move from what is to what should be. Appropriate, sufficient, and equitable resource allocation processes simply can no longer be based on what worked in the past. In this sense, most colleges and universities have embraced strategic planning–a long-range, holistic examination of what the overall mission of the institution should be; in other words, a vision. To better define this vision, one must further ascertain the specific goals that should be set to accomplish the mission, and what environmental factors exist–internally and externally–that can either enhance or inhibit the accomplishment of the vision. 
Specific questions need to be asked, such as: What does the university plan to accomplish over the next several years? How does the university plan to accomplish its goals and objectives? What resources are needed to carry out this plan? What are the funding sources from which the institution can obtain the necessary financial support? To best answer these questions, institutions should first examine their decision-making structures. Colleges and universities are not pyramidal, hierarchical structures ruled by an autocracy at the top that transmits decrees downward through the chain of command. Conversely, colleges and universities cannot be anarchistic organizations where decision-making is randomly conducted by individual units. The problem, thus, is to create a decision-making structure that seeks consensus through participation. At one Eastern university, for example, allocation decisions remain the basic prerogative of university executives, such as the president, provost, vice president for finance, and the deans. Nevertheless, to reach its highest-priority strategic objectives, faculty and staff members from colleges and departments are invited to submit their own ideas on how best to achieve the institution's overall mission, long-term strategic initiatives, and specific goals–all within the context of maintaining and enhancing the quality of priority programs identified by strategic planning. Specifically, faculty and staff members are requested to review the allocation and adequacy of resources vis-à-vis the quality of programs relative to peer institutions, the centrality of programs to the university's mission, and the cost-effectiveness of programs relative to the best practices of higher education and the private sector. To facilitate and direct this endeavor, a university-wide committee, the Strategic Plan Advisory Committee (SPAC) was formed. SPAC not only identified allocation problems in detail, it helped develop a long-term, multiyear plan that will enable the university to respond to special opportunities and eventually solve the most basic and continuing allocation problems. Similarly, at the small, private-college level, Wheaton College in Massachusetts has set up a formal group–the Budget Advisory Committee–similar to the SPAC. Wheaton's committee, consisting of faculty and staff members, reports directly to the college president, and operates with the long-term view that allocations should be treated as strategic investments, not simply as annual costs. Hence, it has determined that allocations should regularly include reallocations from lower priorities to higher priorities, and that cost savings should be actively pursued in order to increase the college's strategic flexibility. In short, if realistic and successful allocation processes are to be developed and accepted throughout the institution, structural arrangements must be designed to facilitate the participation of stakeholders and attainment of consensus. Once consensus on basic allocation-decision parameters is achieved, a second consideration includes the formal allocation structures and processes that might be adopted. To help identify these means, decision-makers and participants in the decision-making process should be provided with feasible and workable alternatives. One alternative, as suggested by Duderstadt, is an institution-wide, integrated resource-allocation model he calls Responsibility Center Management. 
Resource-allocation decisions are shared between academic units, administrative units, and the central administration. After determining strategic priorities, this alternative allows critically-important units to keep the resources they generate, makes them responsible for meeting costs they incur, and then levies a tax on a unit's expenditures to provide a central pool of resources for supporting central operations and facilitating flexibility funding. This alternative has the potential to reduce some of the inequities and inefficiencies inherent within formulaic or incremental allocation processes. Another alternative is substitution, or the elimination or reduction of noncritical activities to release allocations for more critical, strategically oriented activities. This alternative not only reallocates resources to those programs deemed most critical for strategic purposes, it also alerts the public and the institution's stakeholders that the college or university has taken cost effectiveness very seriously. Other structural and process alternatives for resource allocations include: differential tuition rates based upon program popularity; using foundation allocations to replace traditional allocations; permitting the carry-over of surpluses from one year to another; and permitting the most productive research units to retain a large portion of the overhead (indirect) costs assessed against their research awards. The point is that viable and reasonable alternatives should be presented at the start of the analysis in order to preclude time being wasted. Traditional budgetary and resource allocation procedures that have been utilized for decades in America's colleges and universities are rapidly losing their functionality. Indeed, reliance upon their continued use can cause irreparable damage to the system of higher education. Budgets and resultant allocations are complicated subjects. Because of their complexity and a reliance on the fact that they worked well enough in the past, inertia exists. However, in light of the volatile higher education environment of the early twenty-first century, the increasing inequities and inefficiencies of current systems and processes, and greater demands for accountability by legislative bodies and institutional stakeholders, structures and procedures for budgeting and allocating financial resources must be re-examined. The task is not easy–the problems are complex, and consensus about what should be done is difficult to attain. Nevertheless, to ignore the problem can, and will, have a negative impact upon public and private higher education systems. See also: Accounting Systems in Higher Education; Finance, Higher Education. Callan, Patrick M., and Finney, Joni E., eds. 1997. Public and Private Financing of Higher Education: Shaping Public Policy for the Future. Phoenix, AZ: Oryx Press. Meisenger, Richard J., Jr., and Dubeck, Leroy W. 1984. College and University Budgeting: An Introduction for Faculty and Academic Administrators. Washington, DC: National Association of College and University Business Officers. Ray, Edward J., and Shkurti, William J. 1999. "University Goals and Resource Allocation." <www.rpia.ohio-state/Budget_Planning/html>. Schwartz, John E. 1999. <http://w3.Arizona.edu/~provost/issues/issues-5.html>. Southern Illinois University. 2001. "What Is RAMP?" <www.siu.edu/~budget/rampint.html>. University of Maryland-College Park. 1998. "Rationalizing Resource Allocation and Administrative Operations." 
<www.inform.umd.edu/EdRes/provost/StrategicPlanning/SPAC2_IV_Rationalizing.html>.
Wheaton College. 2001. "College Priorities for 2001-2002." <www.wheatonma.edu/admin/finance/RA/Prior.html>.
John R. Carnaghi
Decision-Making in Schools, Applying Economic Analysis to
DECISION-MAKING IN SCHOOLS, APPLYING ECONOMIC ANALYSIS TO
In the 1999 through 2000 school year, spending for all levels of education amounted to $646.8 billion. According to the National Center for Education Statistics, of this total, $389 billion was spent for K–12 education and the remaining $257.8 billion was expended by postsecondary institutions. Despite the substantial financial commitment to education, the impact of economics on the way educational institutions allocate and use their resources has been remarkably limited. Economics is concerned with obtaining the best possible outcome from a limited budget, and thus seems an ideal approach for dealing with how to allocate resources within schools. Although economists are beginning to analyze educational problems in increasing numbers, they have yet to make major inroads in improving educational productivity. This article describes ways in which economic analysis could be used to improve decision-making in educational institutions, and to inform the allocation and use of educational resources. Even though virtually all educators believe that additional resources will lead to higher student performance, it remains unclear how best to spend dollars to achieve that goal. As a result, demands for more money, absent a well-reasoned description of how the money will be used, do not build confidence that money–by itself–will make a difference. Researchers have used production functions–a statistical approach linking outcomes with specific inputs–to understand how money matters. To date, this research has been inconclusive, with some arguing that money matters and others suggesting that a systematic link between higher levels of resources and student performance does not appear to exist. This stems in part from disagreement over the proper outcome of schooling. Traditional allocation tools like cost–benefit analysis are infrequently applied in educational settings, due largely to the difficulty of placing a monetary value on the outcomes or benefits of education. Henry M. Levin and Patrick McEwan suggest that linking costs to some measure of performance, or effectiveness, is a better approach for education. Under this model, the cost per unit gain of achievement is estimated so that programs that are more efficient, or cost effective, can be identified and chosen. Eric Hanushek argues that the proper incentives for better performance and efficient use of educational resources are not in place, and that holding schools accountable for student performance is essential to using existing and new money more effectively. Improvement of student performance, with or without new funds, requires improved decision-making in the following four areas.
- Reallocation of existing resources
- Incentives for improved performance
- Development of the concept of venture capital for schools and school systems
- A more market-based budgeting environment
Reallocation of Existing Resources
Regardless of what impact additional funds might have, it is important that existing resources be used as efficiently as possible. In many districts it may be possible to reduce class size through different assignments of teachers throughout the district.
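The cost-effectiveness criterion just described, cost per unit gain in achievement, can be sketched in a few lines. The interventions, per-pupil costs and effect sizes below are invented assumptions for illustration only; they are not figures from Levin and McEwan or from any study cited in this entry.

```python
def cost_effectiveness(cost_per_pupil: float, achievement_gain: float) -> float:
    """Cost per unit gain in achievement; a lower ratio is more cost-effective."""
    if achievement_gain <= 0:
        raise ValueError("an intervention must show a positive gain to be ranked")
    return cost_per_pupil / achievement_gain


# Hypothetical district options: (cost per pupil, gain in standard deviations).
options = {
    "reassign teachers to cut class size": (120.0, 0.08),
    "hire classroom aides": (450.0, 0.05),
    "extend professional development": (300.0, 0.10),
}

for name, (cost, gain) in sorted(options.items(),
                                 key=lambda kv: cost_effectiveness(*kv[1])):
    ratio = cost_effectiveness(cost, gain)
    print(f"{name}: ${ratio:,.0f} per standard deviation of achievement gain")
```

Under these invented numbers the teacher-reassignment option dominates; whether that holds in practice is exactly the empirical question taken up next.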
To the extent that smaller class size improves student performance, these changes would offer an improvement in student performance at little or no cost. Before seeking additional funds, schools may investigate other ways to restructure what is done with current funds. Allan Odden and Carolyn Busch argue that schools can find additional resources through a combination of creative use of categorical funds, elimination of classroom aides, and reallocation of resources, such as the elimination of one or two teaching positions. Although some of these options may result in larger classes, or fewer teachers, the more intensive use of staff and greater professional development activities available have resulted in improved student performance in many of the schools that have adopted this approach. The use of incentives to encourage schools or school districts to allocate resources in ways that lead to improved student performance is not a new idea. Unfortunately, the incentives that seem to have the most success have been sanctions. Schools faced with threats of intervention often act quickly to improve performance rather than risk the stigma of a sanction. Conversely, many positive incentives have been less successful. For example, high-performing schools are often granted waivers from state regulation in exchange for success. In this case, the regulatory system loosened constraints that may have made the organization successful. Perhaps the more appropriate incentive would be to provide such waivers to under-performing schools with the hope that increased flexibility would lead to improvements. Hanushek argues that the incentives currently in place in schools do not encourage teachers to work towards improving student performance and therefore need to be changed. He suggests that there is not sufficient awareness of positive performance incentives, and that more experimentation and research is needed. Venture Capital (Equity) One problem of education in the early twenty-first century is that once funds are appropriated to a school or program, they become the possession of that entity. In a study of the costs of implementing California's "Caught in the Middle" reforms for middle schools, published in 1992, David Marsh and Jennifer Sevilla found that the annual costs of restructuring schools to meet the requirements of this program were between 3 and 6 percent higher than current average expenditures per pupil in California schools. However, they also concluded that the first year start-up costs amounted to approximately 25 percent of annual costs. The problem schools face is finding those start-up funds. Often such funds are not available for all schools in a district, and schools receiving such funds treat them as a continuous source of revenue. Yet if such funds were rotated among schools, it would be possible to institute new programs in all schools over a few years. Related to the concept of venture capital is the concept of revolving funds. This notion offers a way for school districts to deal with large purchases, like computers, that occur on a regular but nonannual basis. Budget procedures in school districts do not reward schools for saving resources in one year to make large purchases the next year. A school that receives a sum of discretionary money in one year is likely to lose any of the funds it has not expended by the end of the fiscal year. As a result, schools are often unable to make a large coordinated purchase. 
A solution to this would be a revolving fund in the district to pay for such purchases. Schools would receive large appropriations of funds for such purchases once every few years. Finding a way to use the money in a revolving fashion would facilitate continued improvements in educational programs. The major problem is determining who gets the venture capital funds first and who has to wait. In many large districts, the superintendent publishes lists of the best- and worst-performing schools, and such lists could be used to prioritize the allocation of these funds. Another issue is the equity of the distribution. Although some schools will get more funds one year than others, over the established time period, all schools will receive an equal amount–one simply has to accept the idea that equity is measured over some time frame, and not on an annual basis. Many reformers call for market-based changes in the organization of schools. There are many ways to introduce the market into the educational arena, but most of these fall under the heading of school choice. Public school choice can be considered as either an intradistrict or interdistrict choice, and these can be broken down further into the various types of programs in each category. Two other types of choice involve the blurring of the line between public and private education: private school vouchers and privatization of former public schools. Intradistrict choice programs, by definition confined to one school district, grew largely out of an attempt to desegregate schools, rather than to provide competition or parent choice. The first of these programs is called controlled choice, where districts created models for assigning students to schools outside of the traditional neighborhood school model as a way of reducing segregation. A second type of intradistrict choice program is the magnet school. Magnet schools were designed to attract white students to schools with high minority populations, often located in heavily minority communities. The newest model of intradistrict choice is the charter school. With the development of the charter school, the purpose of the choice models shifted away from desegregation to a focus on providing parents with the choice to send their children to schools that may be less regulated than their traditional neighborhood school. These schools operate under a charter between those who organize the school (typically teachers and parents) and a sponsor (typically the local school board or state board of education). Interdistrict choice programs allow the transfer of students between school districts. Although interdistrict choice programs also grew out of attempts to desegregate, they always had the goal of increasing parental choice as well. Many states allow interdistrict choice through open enrollment policies, which vary from state to state; some states mandate that all districts have open enrollment while others allow districts to choose whether they wish to be open or closed. Perhaps the most talked-about form of choice program is the voucher program. Voucher programs can be organized in different ways, but the basic idea is to give some children access to private schools by issuing vouchers to their families, which the families then give to the school in lieu of a tuition payment. Often these programs have the intention of allowing low-income students to go to schools they could not otherwise afford to attend, although vouchers are not necessarily limited to those in poverty. 
A final market-based approach is the privatization of schools that were formerly public. This is also a relatively new approach, and one that arose largely out of a demand for strategies that could save failing schools. The argument is that if public education functions like a monopoly (a firm that has control over its price and product) because it is not subject to competition, it has little incentive to function efficiently. By introducing some competition through privatization, schools would be forced to provide higher-quality education at a lower price. Recent efforts to collect resource data at the school site and even student level may lead to enhanced knowledge of how resources impact student outcomes. To the extent that such knowledge is applied to decisions about how schools are operated, the long-term impact may be improved educational productivity through enhanced and informed decision-making. See also: Public School Budgeting, Accounting, and Auditing. Hanushek, Eric A. 1994. Making Schools Work: Improving Performance and Controlling Costs. Washington, DC: The Brookings Institution. Hanushek, Eric A. 1997. "Assessing the Effects of School Resources on Student Performance: An Update." Educational Evaluation and Policy Analysis 19 (2):141–164. Levin, Henry M., and McEwan, Patrick. 2000. Cost Effectiveness Analysis: Methods and Applications. Thousand Oaks, CA: Sage. Marsh, David, and Sevilla, Jennifer. 1991. Goals and Costs of Middle School Reform. USC Center for Research on Education Finance Policy Brief. Los Angeles: University of Southern California, Center for Research on Education Finance. Odden, Allan, and Busch, Carolyn. 1998. Financing Schools for High Performance. San Francisco: Jossey-Bass. National Center for Education Statistics. 2001. "Digest of Education Statistics: Table 31." <http://nces.ed.gov/pubs2001/digest/dt031.html>. Lawrence O. Picus
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9235098361968994, "language": "en", "url": "https://www.enerparc.de/en/eeg-eng", "token_count": 133, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.04833984375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:01792e0e-c6c5-4fdb-b95d-0a99c0f5c5cf>" }
The Renewable Energy Sources Act (EEG) was introduced in 2000 as a central component of the German energy transition. The goal behind the EEG is to gradually achieve 80% of the electricity supply from renewable energies by 2050 and thus increase the share of renewables in German gross electricity consumption (§1 EEG 2017). The EEG guarantees operators of renewable energy plants a fixed buy-back price per kilowatt-hour (in ct/kWh) for 20 years from the date of commissioning. State funding for the first plants therefore ends in 2021. So-called power purchase agreements (PPAs) can serve as a new instrument for the continued economic operation of these older plants, independent of EEG funding.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.920758068561554, "language": "en", "url": "https://www.gamesd.app/top-blockchain-platforms", "token_count": 1337, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.034423828125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:2b7b8e15-0d92-4179-b7e7-678fd5a23336>" }
Blockchain technology is used across various industries, and everyone has an interest in exploring its potential. The rise of dapp development is another reason behind the increased use of blockchain platforms. Blockchain technology is used to develop a wide range of blockchain-based applications. Providers such as R3, Ripple, Ethereum, Hyperledger and EOS build blockchain frameworks and also allow users to develop and host applications on the blockchain.

Blockchain platforms for your business process

Let’s take a closer look at each platform below.

Ethereum is an open-source, blockchain-based computing platform proposed by Vitalik Buterin. Smart contracts in Ethereum run in a runtime environment called the Ethereum Virtual Machine (EVM), and every node in the Ethereum network runs an EVM implementation. Ether is the native cryptocurrency of Ethereum. To execute a transaction on the Ethereum network, developers pay charges in Ether while building and running dapps. Ethereum has also built a huge online support community to keep everyone up to date with product implementations and updates.

Ripple uses a blockchain network to connect digital asset exchanges, banks and corporations. It allows payments through a digital asset called Ripple or XRP. Ripple is purposely designed as a day-to-day payment system that is faster and more scalable than many other blockchains. Many banks trust Ripple as an investment. Ripple is used for low-commission currency exchange and fast international transactions.

Quorum, proposed by J.P. Morgan, is designed on the Ethereum core, so it can incorporate Ethereum updates quickly. Quorum is an open-source blockchain platform and is free to use. It uses various algorithms to process a large number of transactions per second, and it can manage applications requiring high throughput and fast private transactions. Where other blockchains have failed to tackle the confidentiality of records, Quorum manages messages securely through a system called Constellation.

Hyperledger Fabric is one of the most important blockchain projects in Hyperledger. Using a modular architecture, Hyperledger Fabric is designed for building blockchain-based applications and solutions. The Fabric framework is designed for permissioned networks, enabling participants with known identities to take part in the system. In addition, Hyperledger Fabric allows various enterprises to form part of the blockchain network. Fabric does not have its own native currency, but it enables users to define assets on the client side.

Corda is an open-source platform that enables businesses to transact directly using smart contracts, which reduces transaction costs while streamlining business operations and record keeping. Corda does not have a cryptocurrency, and it allows only authorized users to access the data rather than the entire network. Corda is purposely designed for the financial industry, but it is also applied in healthcare, supply chains, trade finance, dapp games and government authorities.

Hedera is designed for fair, fast and secure applications that take advantage of the adaptability of hashgraph on a decentralized, public network you can trust. Developers can build new kinds of decentralized applications that are scalable. Hedera Hashgraph smart contracts are editable: developers can add new features and fix bugs. Hedera Hashgraph is capable of managing hundreds of thousands of transactions per second with strong security.
Hyperledger Sawtooth is another open-source blockchain platform, introduced by the Linux Foundation. Sawtooth is purposely designed to create, deploy and run distributed ledgers that are maintained without a central authority. Hyperledger Sawtooth can be combined with hardware security solutions known as trusted execution environments. Decentralized applications on Sawtooth can choose transaction rules, define consensus mechanisms and set the required permissions, so that the digital ledger works in a way that meets the requirements of an enterprise.

Hyperledger Iroha is a blockchain platform for building trust and security into decentralized applications. Iroha is a simple, modular distributed ledger system based on a fast and highly secure consensus algorithm. The platform is highly applicable to IoT use cases and to the supply chain. Hyperledger Iroha smart contracts are similar to Ethereum’s, and the project maintains transparency in its development process.

Openchain is designed to manage digital assets in a robust, secure and scalable way. Openchain does not build blocks for storing transactions; instead, transactions are linked directly to each other. Openchain uses partitioned consensus, in which a single authority is responsible for validating transactions.

Stellar is also an open-source, decentralized payment protocol that allows cross-border transactions. Stellar deals with exchanges between fiat-based currencies and cryptocurrencies. It is possible to build smart devices, banking tools and mobile wallets using the Stellar network.

Dragonchain is an open-source blockchain platform that gives enterprises and developers the resources they need to build public/private hybrid blockchain applications and write smart contracts in minutes with Blockchain as a Service. Dragonchain’s Blockchain as a Service offering provides flexibility to businesses by allowing them to utilize Interchain, which connects them to the capabilities of other blockchains.

Designed for building scalable decentralized applications, the NEO blockchain’s base asset is the NEO token. The role of the NEO token is to generate GAS tokens, which can be used to pay transaction fees for running applications on the network. NEO uses a Delegated Byzantine Fault Tolerance consensus algorithm; its creators picked this protocol because it allows better scaling and performance compared with other consensus mechanisms. NEO has three components.

EOS is purposely designed for the development of dApps (decentralized applications). The company distributed one billion ERC-20 tokens to assure widespread distribution of its cryptocurrency and to grant anyone the use of the EOS blockchain after it was released. EOS achieves consensus by using multi-threading as well as a delegated proof-of-stake algorithm.

Tron (TRX) is a widely used blockchain platform launched as the foundation for a decentralized entertainment ecosystem. Tron focuses on expanding the market for decentralized digital content apps by making it easier to create and deploy them. Tronix (TRX) is the proprietary cryptocurrency token of the Tron blockchain network. Tron is mainly designed to ease this transition and therefore quicken the decentralization of existing platforms and the creation of new decentralized apps.

Where to build

Building an application on a blockchain platform is a complicated task. Prefer well-experienced developers to create decentralized applications with innovative ideas. Our developers have worked on many projects with different requirements.
Reach out to us to learn more about blockchain development. We are also experts in building your dapp games on the blockchain network.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9440644383430481, "language": "en", "url": "https://www.myessay.help/2020/05/23/fin-534-homework-chapter-5/", "token_count": 1815, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.06396484375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:59c8618d-4284-4ab8-8451-4ba3d70bb49c>" }
FIN 534 – Homework Chapter 5 6 . Three $1,000 face value bonds that mature in 10 years have the same level of risk, hence their YTMs are equal. Bond A has an 8% annual coupon, Bond B has a 10% annual coupon, and Bond C has a 12% annual coupon. Bond B sells at par. Assuming interest rates remain constant for the next 10 years, which of the following statements is CORRECT? a. Bond A’s current yield will increase each year. b. Since the bonds have the same YTM, they should all have the same price, and since interest rates are not expected to change, their prices should all remain at their current levels until maturity. c. Bond C sells at a premium (its price is greater than par), and its price is expected to increase over the next year. d. Bond A sells at a discount (its price is less than par), and its price is expected to increase over the next year. e. Over the next year, Bond A’s price is expected to decrease, Bond B’s price is expected to stay the same, and Bond C’s price is expected to increase. 7. Which of the following statements is CORRECT? a. Two bonds have the same maturity and the same coupon rate. However, one is callable and the other is not. The difference in prices between the bonds will be greater if the current market interest rate is below the coupon rate than if it is above the coupon rate. b. A callable 10-year, 10% bond should sell at a higher price than an otherwise similar noncallable bond. c. Corporate treasurers dislike issuing callable bonds because these bonds may require the company to raise additional funds earlier than would be true if noncallable bonds with the same maturity were used. d. Two bonds have the same maturity and the same coupon rate. However, one is callable and the other is not. The difference in prices between the bonds will be greater if the current market interest rate is above the coupon rate than if it is below the coupon rate. e. The actual life of a callable bond will always be equal to or less than the actual life of a noncallable bond with the same maturity. Therefore, if the yield curve is upward sloping, the required rate of return will be lower on the callable bond. 8. Which of the following statements is CORRECT? a. Assume that two bonds have equal maturities and are of equal risk, but one bond sells at par while the other sells at a premium above par. The premium bond must have a lower current yield and a higher capital gains yield than the par bond. b. A bond’s current yield must always be either equal to its yield to maturity or between its yield to maturity and its coupon rate. c. If a bond sells at par, then its current yield will be less than its yield to maturity. d. If a bond sells for less than par, then its yield to maturity is less than its coupon rate. e. A discount bond’s price declines each year until it matures, when its value equals its par value. 9. Suppose a new company decides to raise a total of $200 million, with $100 million as common equity and $100 million as long-term debt. The debt can be mortgage bonds or debentures, but by an iron-clad provision in its charter, the company can never raise any additional debt beyond the original $100 million. Given these conditions, which of the following statements is CORRECT? a. The higher the percentage of debt represented by mortgage bonds, the riskier both types of bonds will be and, consequently, the higher the firm’s total dollar interest charges will be. b. 
If the debt were raised by issuing $50 million of debentures and $50 million of first mortgage bonds, we could be certain that the firm’s total interest expense would be lower than if the debt were raised by issuing $100 million of debentures. c. In this situation, we cannot tell for sure how, or whether, the firm’s total interest expense on the $100 million of debt would be affected by the mix of debentures versus first mortgage bonds. The interest rate on each of the two types of bonds would increase as the percentage of mortgage bonds used was increased, but the result might well be such that the firm’s total interest charges would not be affected materially by the mix between the two. d. The higher the percentage of debentures, the greater the risk borne by each debenture, and thus the higher the required rate of return on the debentures. e. If the debt were raised by issuing $50 million of debentures and $50 million of first mortgage bonds, we could be certain that the firm’s total interest expense would be lower than if the debt were raised by issuing $100 million of first mortgage bonds. 10. Cosmic Communications Inc. is planning two new issues of 25-year bonds. Bond Par will be sold at its $1,000 par value, and it will have a 10% semiannual coupon. Bond OID will be an Original Issue Discount bond, and it will also have a 25-year maturity and a $1,000 par value, but its semiannual coupon will be only 6.25%. If both bonds are to provide investors with the same effective yield, how many of the OID bonds must Cosmic issue to raise $3,000,000? Disregard flotation costs, and round your final answer up to a whole number of bonds.
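Question 10 reduces to a discounting exercise. The sketch below assumes the textbook convention that the quoted coupons are annual rates paid semiannually and that, because Bond Par sells at par, the market yield equals its 10% coupon rate; those interpretive assumptions are added here and are not part of the problem statement.

```python
import math


def bond_price(face: float, annual_coupon_rate: float, annual_yield: float,
               years: int, payments_per_year: int = 2) -> float:
    """Present value of the coupon annuity plus the discounted face value."""
    n = years * payments_per_year
    coupon = face * annual_coupon_rate / payments_per_year
    r = annual_yield / payments_per_year
    annuity_factor = (1 - (1 + r) ** -n) / r
    return coupon * annuity_factor + face * (1 + r) ** -n


# Bond Par sells at its $1,000 par value, so its yield equals its 10% coupon
# (nominal annual rate with semiannual compounding) under the assumption above.
market_yield = 0.10

oid_price = bond_price(face=1_000, annual_coupon_rate=0.0625,
                       annual_yield=market_yield, years=25)
bonds_needed = math.ceil(3_000_000 / oid_price)  # round up, as the question instructs

print(f"OID bond price: ${oid_price:,.2f}")
print(f"OID bonds needed to raise $3,000,000: {bonds_needed}")
```

As expected for a deep-discount issue, the OID bond prices well below par under these assumptions, so Cosmic would need to issue considerably more than 3,000 bonds to raise the $3,000,000.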
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9421656131744385, "language": "en", "url": "https://www.savethechildren.org.uk/blogs/2013/propelling-progress-towards-the-right-to-health-for-all", "token_count": 745, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1728515625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:b39aef88-95d4-4d1a-bf64-ba35630e33f5>" }
Propelling progress towards the right to health for all

At the United Nations General Assembly (UNGA) last week there was a lot of buzz around ‘universal health coverage’ (UHC) – what the World Health Organization (WHO) defines as ensuring that "all people obtain the health services they need, of good quality, without suffering financial hardship when paying for them." Equity emerged as a more prominent priority in these discussions than ever before. So why is equity so important in the UHC debate? In our joint report with the World Health Organization, UNICEF and the Rockefeller Foundation, we show how unfair and avoidable inequalities in the coverage of essential, good-quality health services and financial risk protection prevail, both between and within countries. The poorest, most marginalised and most vulnerable people are systematically left behind. That’s neither fair nor necessary. UHC is an issue of social justice and human rights. But it’s also one of economic and sustainable development. Addressing inequities in the coverage of interventions and health financing is not just the right thing to do from a moral and ethical perspective. It is also an economically sound investment, producing better value for money and sustainable gains. The health system has the potential to mitigate some of these inequities and help to realise the right to health. But as political commitments to UHC are made, it’s critical that reforms prioritise the needs of the poorest and most vulnerable people – ensuring progressive pathways towards universality are pursued. Such pathways include:
- greater public financing and the elimination of out-of-pocket payments
- mandatory prepayment and large-scale risk pooling (eg, through taxation)
- a package of interventions that responds to the needs of the most vulnerable people.
Equitable pathways to UHC will also require reforms across the building blocks of the health system – such as ensuring an appropriately trained, supported, equipped and motivated health worker is in reach of every child – while addressing the broader social determinants of health – ie, the conditions in which people are born, live and work. Political will and effective donor support can be catalytic. As Tim Evans, Director of Health, Nutrition, and Population at the World Bank, proposed during our event last week, donor performance should be assessed by the support donors give to help countries establish equitable prepayment and risk pooling mechanisms. This will require a step change in donor behaviour and their interpretation of value for money. Learning from the Millennium Development Goals (MDGs), we must ensure that targets and indicators strengthen the health system and have a distributional dimension, so that we can hold countries and partners accountable for reaching those most in need. And here the debate is just starting. During the UNGA, we heard promising emphasis on public financing and equity in current efforts to develop metrics for UHC in the post-2015 agenda. The World Bank and WHO are proposing two targets:
- to end impoverishment from health expenditures
- to achieve 80% coverage among the poorest 40% of the population on two composite measures for MDGs 4, 5 & 6 (on tackling child mortality, maternal health, and HIV, malaria and other diseases) and non-communicable diseases.
The health report by the Sustainable Development Solutions Network proposes minimum thresholds for public financing and ODA, and a maximum for out-of-pocket payments.
We look forward to consultation and open discussion as this thinking evolves to ensure sufficient equity and ambition in UHC as a tangible component of the post-2015 agenda. And to propel progress towards the right to health for all. This blog has also been published on the RockBlog.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.961676299571991, "language": "en", "url": "https://www.synergies.com.au/case-studies/the-potential-of-microgrids-to-support-economic-development-in-remote-communities/", "token_count": 704, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.06787109375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:01e77be3-c3c8-4945-8bfb-6a17d6b9fd69>" }
Advances in technology are fundamentally transforming the way in which energy services can be provided, and are allowing services to be provided more efficiently. Standalone power systems, for example, use a mix of different technologies to allow individual customers continuous energy supply without the need for grid connection. Microgrids use intelligent technologies to enable consumers to become ‘prosumers’ and actively generate and trade power on a controlled platform that ensures continuous energy supply to a group or community of customers. Microgrids are expected to be particularly cost effective for small, remote communities that are currently rated as ‘high-cost customers’ relative to those located in larger towns. Synergies was engaged by Horizon Power to examine the scope for distributed energy resources and microgrids to create new economic opportunities for small and remote communities in northern Australia. Microgrids are currently being explored as cost effective installations for small, remote communities in Northern Australia. In addition to cost-savings for a utility such as Horizon Power (which receives a subsidy from the WA government to services these communities), having access to lower cost energy could provide opportunities for remote communities to grow their local economy. The emergence of microgrid technologies may also make it feasible for other essential services to be delivered at a local level. Instead of having multiple government agencies involved in supply of services, efficiency gains could be achieved through a single service provider – leveraging off the institutional arrangements developed for the microgrid. Potential services that could be delivered through this model include water and wastewater services, plumbing services, as well as telecommunications, gas and other infrastructure-related services. Horizon Power engaged Synergies to further investigate the economics of these opportunities. Synergies delivered three outputs to Horizon Power to assist it in early stage business development research into the opportunities associated with microgrids. First, Synergies drew on developing country literature to assess the key factors and pre-conditions for economic development in poor and remote communities. We used this framework to examine the extent to which microgrids could present opportunities for communities to participate in power generation and establish new businesses, thus lifting local incomes and employment. Second, a number of different models were identified for deploying microgrids in remote communities, characterised by differing degrees of local engagement in energy supply chain. Third, the suitability of these models for particular community types was assessed and guidance provided on how to match models to communities best able to engage with them. Four different models were identified by Synergies for deploying microgrids to remote communities. The models are differentiated from one another based on the commercial and institutional arrangements by which electricity is supplied and the extent to which a community can participate in the microgrid. There are likely to be many variants around the four models examined in the report. However, our aim was to present a suite of models that are distinctively different from each other and that can be matched to a community of a particular type, defined by set of characteristics. 
The models also provided a means of identifying how microgrids could yield measurable economic benefits for remote communities. The report demonstrated that there is a strong, ‘in principle’ case for deploying microgrids and distributed energy resources to remote communities. However, the extent to which a community will capitalise on the opportunities presented by this new technology will depend to a large degree on a town’s complement of natural assets, proximity to market, and the community’s entrepreneurial skills.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.967156171798706, "language": "en", "url": "http://reaser-law.com/difference-equitable-equal-inheritance/", "token_count": 454, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0712890625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:40569d2c-83e2-45b4-a758-c7470a13ef4a>" }
When planning their estate, many people believe the best course of action is to divide their assets equally among their adult children. However, there are times when this is neither the best nor the most practical solution. It is at these times that you need to know the differences between equal and equitable inheritance. What Is Equal Inheritance? Equal inheritance is when each of your adult children receives an equal share of your estate. Of course, this will only happen when both of their parents have passed on. This option is the best solution for families where the needs of each child are the same, or where you have provided similar support to each child. Each of the children must also be financially responsible and emotionally capable of handling their inheritance. It is important to note that when you have real estate and other physical assets, you will need to determine the value of each asset to ensure that all children receive an equal amount. One of the primary benefits of an equal inheritance is that it will help avoid any disputes. These disputes can be costly to your children and take an emotional toll on them. What Is Equitable Inheritance? There are times when an equal inheritance is not the best solution. These could be cases where one child has taken on the role of caregiver to an aging parent and should be compensated for lost wages and time. Equitable inheritance can also be used when the amount of support given to the children by the parents during their life differs. This support could be for a wedding, a down payment on a house or educational expenses. Equitable inheritance should also be used when you have a child with disabilities or special needs. These children will need more financial help in the future in regards to their living and medical expenses. However, it is important to place these funds in a special needs trust to ensure that they do not have direct access to the funds. Equal and equitable inheritance are two different solutions that you need to consider when planning your estate. Equal inheritance ensures that all of your children receive an equal share of your estate. Equitable inheritance will provide certain children with more than the others based on a number of factors, such as the amount of support provided while the parent was alive and whether they have any special needs.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9420138001441956, "language": "en", "url": "https://popularresistance.org/california-becomes-first-state-requiring-all-new-homes-be-built-with-solar/", "token_count": 883, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.10400390625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:cf476e5e-904d-43ff-9a73-1e6a896c64ba>" }
California Becomes First State Requiring All New Homes Be Built With Solar Above Photo: Juan Ortega, a solar panel installer for California Premier Solar Construction, climbs a ladder to put a solar racking onto a new home in Santee. (Hayne Palmour IV / San Diego Union-Tribune) Environmental groups hailed the decision, pointing to estimates that energy use in buildings account for about one-fourth of greenhouse gas emissions in California. The state’s four investor-owned utilities — including San Diego Gas & Electric — also came out in favor of the measure. A representative for the Utilities Codes and Standards statewide team said the rule is “a cost effective way to help customers reduce energy use, lower greenhouse gas emissions and represent(s) a significant milestone in the continued effort to achieve California’s long-term energy and climate goals.” Under Senate Bill 350, passed in 2015, the state must double statewide energy efficiency savings in electricity and natural gas end uses by 2030. California also calls for at least 50 percent of state’s electricity to come from clean-energy sources by 2030. The updated code, which includes an option to promote solar paired with battery storage systems, figures to give the solar industry a big boost. On average, about 80,000 new homes are built in California each year. The California Solar and Storage Association estimates about 15,000 of them are equipped with solar installations, which would translate into a gain of 65,000 once the rule goes into effect. “I think this is going to be a great freeway to help expand the solar market in California and really keeps California in that leadership position nationwide,” said Kelly Knutsen, the association’s director of technology advancement. California has the largest percentage of electricity generation that comes from solar of any state. Solar capacity in California came to just over 21,000-megawatts in 2017, nearly five times more than the state in second place, North Carolina. The Golden State also accounts for more than one-third of the nation’s 260,000 workers in solar-related fields. Adrian Moore, vice president of policy at the Los Angeles-based Reason Foundation, which advocates free-market economic policies, questioned how many homeowners will actually live in their houses long enough to recoup the upfront costs from the new solar requirement. The rule may also be “removing a choice from people who may want to do some other form of alternative energy,” Moore said. “And no matter how you slice it, this increases the cost of housing and exacerbates the affordable housing problem. It seems like a feel-good measure that doesn’t think through all the consequences.” Among its primary responsibilities, the CEC is charged with promoting energy efficiency in the state through appliance and building standards. Hochschild said the new rule will help homeowners. “One of our top priorities in California is to keep people in their homes,” the commissioner said. “More times than you would expect we find that the homeowner can afford the mortgage but not the mortgage plus the energy bill. That is the difference-maker. What we’re doing today is going to result in the lowest energy bills of any code we’ve ever done.” McAllister said the commission’s vote did not represent “a radical departure.” The updated code requires new homes to have PV solar systems with a minimum of 2 to 3 kilowatts, about two to three times smaller than typical residential systems. 
“We are in a terrific situation in the marketplace right now where we have a lot of great options that are cost effective, including solar,” McAllister said. “The solar industry is a mature industry right now.” “So while it may cost more to install the panels, sellers don’t reap an obvious financial benefit in the form of a higher selling price,” Hale said. “In a market where affordable new construction already lags demand, this mandate could exacerbate this imbalance by raising the price of low-density new construction. However, the existence of the mandate implementation may cause builders to hurry to complete projects before the mandate kicks in.”
{ "dump": "CC-MAIN-2020-29", "language_score": 0.952937126159668, "language": "en", "url": "https://solartechnologies.com/solar-power-today-tomorrow/", "token_count": 1747, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.03564453125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:4be0eb6a-f5a3-49fb-a069-4034a27c6561>" }
Solar power is one of the most efficient yet clean sources of energy we have access to. There are no increased fuel costs or dependencies, no ties to pollutants, and it’s both reliable and affordable. Of course, in order to harness solar power you need access to specific technology. This tech relies on either small-scale solar photovoltaic (PV) systems, large-scale solar photovoltaic systems, or concentrating solar power (CSP) systems to capture solar energy. Once harnessed, the system can use this solar energy to power anything you could imagine such as appliances, vehicles, consumer electronics, lighting, heat and A/C systems, and much more. When used in combination with a modern power connection (hardwired), it can even help cut your bill in half—if not down to a third of the cost. Most people believe that solar power and the related technology to harness it is prohibitively expensive, so it remains out of their reach. However, such beliefs couldn’t be any further from the truth, as each year, all around the world, it becomes more and more affordable to make the switch. How affordable is solar energy? One of the most common systems used to harness solar energy is a small-scale rooftop-based solar photovoltaic system. Solar capturing panels are placed on top of the roof of a residence, building, or business, and then feeds collected energy to a conversion system. Even just a small system used to be ridiculously expensive, but prices have declined considerably over the past few years. From 2010 to 2013, prices for rooftop-based PV systems have dropped more than 29%, and this includes installation costs. When you combine falling installation costs with the promise of tax credits and money saved on energy bills, you have no shortage of reasons to get involved. Most states offer tax credits, rebates, grants, and more that could decrease the total cost of a rooftop-based PV system to below $10,000. In addition, customers are able to finance these costs through leasing agreements and power purchase contracts, the latter of which requires them to continue using the system for an extended period of time at fixed rates. While this is all great news for consumers who are looking to power their homes, it doesn’t offer much for business owners who generally have larger structures with higher demands. The good news is that large-scale PV systems have also dropped in price, more so than household ones. In fact, large-scale systems are an average of 60 percent lower in price than residential solar systems if you take a look at the per-wattage costs. Concentrated solar power systems (a method that uses mirrors to direct thermal energy) are much more expensive and have not seen the same reduction in prices, but they have one particular advantage over the other two types. CV systems can be used to store the sun’s energy as they collect heat, which means they are still capable of producing electricity when there’s no sunlight. Where can solar energy be used, and where is it most efficient? Considering solar energy relies on a good supply of sunlight and UV rays, it’s not exactly efficient everywhere. In the United States, southwestern regions are the most reliable as the sun often shines the strongest there. Even so, in areas where sunlight is not as prominent, the amount available for energy generation only varies by less than 30 percent across the entire country. In laymen’s terms, it can be used pretty much anywhere with a small reduction in total energy generation in areas with less sunlight. 
For example: a solar panel array installed in Portland, Maine would generate only about 85% of the energy that a similar system would produce out in California, 95% of the total energy it would generate in Miami, and 6% more than it would in Houston, Texas. The typical efficiency rating for a single solar panel is about 11-15%, depending on where it’s installed. To break it down, this rating measures the percentage of sunlight that hits the panel, which can be turned into usable energy. While that may seem low at the onset, consider that a system generally uses a multitude of panels working in tandem. In this respect, a rooftop-based panel system can generate enough energy to power an entire home from top to bottom throughout the day. Since most consumer based solar systems are photovoltaic, they do not store or produce energy at night when the sunlight is gone. As for how the system works in tandem with traditional power, it’s set up like this: If your solar energy system produces more power during the day than you consume, the excess energy is sold back to the grid as “store credit.” On days or nights where you use more energy, this store credit is purchased back from the grid. If you produce much more on average and you have lots of extra energy at the end of the month, it carries over to the next, just like roll-over minutes for a cell phone. How fast is solar energy use expanding? Thanks to the ever-lower barriers to entry, increased reliability in newer solar energy systems, and the rising costs of traditional power consumption, the industry is growing exponentially. Back in 2009, Al Gore had the right of it when he said that solving climate change with renewable energy constitutes the “single biggest business opportunity in history.” From 2010 to 2013, the amount of solar photovoltaic systems installed in the US jumped more than 485%. By 2014, the United States had more than 480,000 total solar systems installed, which produced up to 13,400 megawatts (MW). To put that into perspective, it’s enough to power nearly 2.4 million US households. It’s not just consumers looking into solar power, either. Many businesses and companies have installed solar energy systems to improve their efficiency and lower their total operating costs. The installed capacity of photovoltaic systems in the US commercial sector grew from about 2,000 megawatts in 2010 to well over 6,000 megawatts in 2013. The commercial world is beginning to see the light. So to speak. What must be done to continue this growth? All that aside, even with recent growth there’s no guarantee that solar energy will continue this upward trend in usage. There are a handful of things that must be done in order to ensure the industry continues to see this same level of innovation and growth. States that offer solar support should do their best to maintain and better regulate the use of renewable energy. That is, they must ensure that solar powered systems continue to offer the same cost benefits, if not more so. Perhaps more legislation should be put into place to encourage and support the use of these systems in modern homes and businesses. To add to this, more states should consider jumping on the solar support bandwagon. At the end of 2016, the current tax credit offered to solar energy system owners will decline from 30 percent to 10, resulting in less federal investment in the solar sector. This is one of the most important reasons why consumers and commercial owners decided to have a system installed. 
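The net-metering "store credit" arrangement described above is easy to see with a toy month-by-month ledger. The sketch below is only an illustration: the generation and consumption figures are made up, and real utility tariffs handle credits in their own ways (some pay out or expire surpluses annually).

```python
def net_metering(monthly_generation_kwh, monthly_consumption_kwh):
    """Toy net-metering ledger: surplus kWh are banked as credits that roll over;
    deficits draw down the credit balance before any grid purchase."""
    credit = 0.0
    months = zip(monthly_generation_kwh, monthly_consumption_kwh)
    for month, (gen, use) in enumerate(months, start=1):
        net = gen - use
        if net >= 0:
            credit += net                  # surplus exported: banked as credit
            bought = 0.0
        else:
            drawn = min(credit, -net)      # cover the shortfall from the bank first
            credit -= drawn
            bought = -net - drawn          # anything left is bought from the grid
        print(f"Month {month:2d}: credit = {credit:6.1f} kWh, grid purchase = {bought:5.1f} kWh")

# Invented numbers: two sunny months followed by two cloudier ones.
net_metering([900, 850, 600, 400], [700, 700, 700, 700])
```

Running it shows the credit balance building up in the surplus months and then being spent down before any electricity is bought back from the grid, which is the "roll-over minutes" behaviour described above.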
Hopefully, this will be remedied by the necessary parties increasing that tax break once again. If there’s anything we know about human behavior, it’s that much of it is influenced by our wallets. The rise of energy storage technologies will help ensure that solar energy can become even more reliable, and capable of providing electricity when there’s no sunlight, or during periods of increased demand for power. But beyond that, innovation and R&D in every field of renewable energy (geothermal, anyone?), will help reduce total costs of these systems by introducing new technologies into the marketplace. From where we stand, it’s difficult to imagine where our ability to harness the natural world safely will take us in the future. There are so many endless possibilities that nearly anything could come of innovation in the market. Dare we speak of the Dyson sphere? This long-prophesied, but still largely hypothetical power system would encase an entire star and harness most, if not all, of the power it gives off. Who knows where we’ll be by the time something like that is produced. But until then, we’ll have to be content with baby steps. ABOUT THIS ARTICLE Written by Daniel Faris from zmescience.com. This post originally appeared on the SunPower Business Feed.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9617283344268799, "language": "en", "url": "https://www.brookings.edu/research/kids-share-an-analysis-of-federal-expenditures-on-children-through-2008/", "token_count": 1508, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.052490234375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:5ef9c099-89c2-4baa-9c7a-cbbd64b45de5>" }
Less than one-tenth of the federal budget was spent on children in 2008, $295 billion out of a total of $2,983 billion in outlays. Well over a third of the federal budget (38 percent) was allocated to the elderly and disabled for the non-child portions of Social Security, Medicare, and Medicaid. The children’s share of the tax expenditure budget was also less than 10 percent. Kid’s Share: An Analysis of Federal Expenditures on Children through 2008 This third annual Kids’ Share report examines expenditures on children during a time federal budgets are undergoing much change. Our estimate of how much of the federal budget was directed toward children in 2008 is based on detailed budget data released in May 2009 and captures the effects of early responses to the recession. The effects of the American Recovery and Reinvestment Act of 2009 do not appear in the 2008 expenditures but do figure prominently in the expenditure projections included in the final section of the report. After an initial section explaining the methodology involved in estimating children’s expenditures across more than 100 federal programs and tax provisions, the report presents findings in four areas: expenditures in 2008, historic trends across the budget, historic trends within children’s expenditures, and projections through 2019. Expenditures on Children in 2008 Federal budget outlays totaled $2.98 trillion in 2008, of which less than 10 percent ($295 billion) was devoted to children. In addition to outlays from a range of federal programs and refundable tax credits, there was an additional $73 billion in reductions in tax liabilities for families with children. With these tax expenditures, which represent less than 10 percent of the total tax expenditure budget, federal expenditures on children totaled $368 billion in 2008. Six large programs accounted for more than three-fifths (62 percent) of all expenditures on children in 2008. Three of these programs – the child tax credit, Medicaid, and the Supplemental Nutrition Assistance Program (SNAP), formerly the Food Stamp program – had higher expenditures in 2008 than in previous years as a result of early responses to the recession. Expenditures under the child tax credit, for example, included a one-time tax payment of $300 per child as part of the tax rebates in the Economic Stimulus Act of 2008. While our focus is federal expenditures, this year’s report adds an important glimpse into the broader picture, which includes state and local expenditures. In 2004, federal spending represented about one-third of total public investments on children. State and local spending data are not yet available for 2008, but they may represent a smaller share of the total, given fiscal pressures on state and local budgets in times of recession. while domestic spending has increased. Social Security, Medicare, and Medicaid have increased fourfold from 1960, from 2.0 to 8.0 percent of gross domestic product (GDP) (these spending estimates exclude Social Security and Medicaid spending on children to avoid doublecounting). Outlays on children also have grown, more than doubling between 1960 and 1980 (from 0.6 to 1.4 percent of GDP) and increasing more gradually since then, rising to 2.1 percent of GDP in 2008. While outlays on children have increased in dollars and as a percentage of GDP, children are receiving a smaller share of the domestic federal budget, as shown in a comprehensive analysis that includes children’s tax expenditures as well as outlays. 
Under this measure, the children’s share of domestic federal spending – spending that excludes defense and international affairs and adds children’s tax expenditures – has actually shrunk over time, from 20 percent in 1960 to 15 percent in 2008. That is, the children’s share of the budget has shrunk by almost a quarter. In contrast, spending on the non-child portions of Social Security, Medicare, and Medicaid has more than doubled, rising from 22 to 47 percent of domestic spending. Trends in Expenditures on Children, 1960–2008 During the 1960s and early 1970s, federal programs serving children and families expanded considerably. Since 1975, however, spending on programs benefitting children has risen only moderately as a percentage of GDP, and that growth is solely due to growth in Medicaid spending and tax credits. Most of the significant increases in spending on children in the past 30 years have occurred in taxes, including the expansion of the earned income tax credit in 1993 and the enactment of the child tax credit in 1997. Over the past half-century, spending on children has gradually shifted from providing cash payment to parents to providing in-kind benefits and services to children and families. Some of the decline in cash payments to parents has been offset by an increase in refundable tax credits. Another long-term trend is a shift toward spending on programs that are means tested – that is, targeted to low-income families. Finally, there has been a long-term decline in the value of the dependent exemption, particularly between 1960 and 1985, followed by increases in the earned income tax credit and child tax credit. Future Trends in Expenditures on Children, 2009–19 The American Recovery and Reinvestment Act (ARRA) included substantial increases in spending on children, including increases in Medicaid, education, SNAP (food stamps), the child tax credit, and TANF, as well as smaller programs such as Head Start and child care assistance. As a result, spending on children will rise to a record high of 2.2 percent of GDP in 2009. However, there were even larger infusions of government funds for transportation, infrastructure, energy, and the bailout of banks and other institutions, so total government outlays are projected to increase to 27.4 percent of GDP, the highest level since World War II. As a percentage of total federal outlays, spending on children is actually projected to decline, from 9.9 to 8.2 percent of total outlays. As the ARRA provisions expire, we project that spending on children will shrink over the next decade, falling to 1.9 percent of GDP by 2019, if current policies continue unchanged. In contrast to the projected decline in spending on children, spending on the elderly and disabled is projected to rise steadily. Over the next 10 years, the non-child portions of Medicare, Medicaid, and Social Security are expected to increase 2.3 percentage points (from 8.0 to 10.3 percent of GDP). In other words, the increase in spending on these three programs in the absence of reform will exceed total spending on children. There is a growing danger that the escalating costs of these major entitlements, as well as growing interest payments on the national debt, will crowd out spending on children’s programs. These budget projections assume no change in current policies other than the extension of expiring tax provisions. 
In fact, the new administration and Congress are considering several significant policy and budget changes, including major reform of the nation’s health care system, investment of federal resources toward broad-scale education reform, and attention to the nation’s long-term fiscal and environmental challenges, all of which could have direct impacts on spending on children over the next decade. Read the Complete Report » Read the Data Appendix » Editor’s note: This report is a joint project of the Brookings Institution and the Urban Institute. A hard copy of the full report is available from First Focus.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9190658926963806, "language": "en", "url": "https://www.tcs.com/blogs/transforming-skill-development-with-digital", "token_count": 982, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.0732421875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:84de1648-3b1a-4cc4-96d9-9d9d726ba8ca>" }
With the global economy picking up during last couple of years, jobless growth has become a cause of major concern in policy makers and citizens of both developed and developing countries. Traditionally, economic growth has a strong and positive correlation with job creation – higher the economic growth, higher is the number of jobs created. However, the current economic growth is not adding or creating new jobs in the expected proportion, thereby resulting in jobless growth phenomenon. The socio-economic impact of jobless growth is higher among the unorganized workforce as they have little or no access to social benefits such as unemployment insurance, pensions and so on. Rapid automation is a commonly cited reason behind jobless growth. From the days of industrial revolution, automation has always disrupted sectors resulting in traditional jobs being killed and new jobs getting created. Today, the velocity of automation across sectors through aggregation of digital technologies and rapid proliferation of technology across flat borders is creating a perfect storm, which is both disrupting existing industries and creating new industries. This is making traditional jobs obsolescent and creating a huge demand for new skills at an unprecedented scale. Today’s workforce does not possess the skills for the jobs of the future. This is resulting in a huge supply – demand asymmetry in the job market. The traditional approaches to skill development are unlikely to cope with the huge demand of rapid re-skilling of the current workforce and skilling of new entrants into the workforce. The need of the hour is an agile, scalable, and responsive skill development ecosystem where all stakeholders act in unison to provide a seamless “skilling to employment” experience and enable the following: - ‘Anywhere Anyhow Anytime’ accessibility to individuals for skilling programs - Lifelong continuous competency upgrade by individuals - Tight coupling between industry demand and skill development initiatives - Frictionless interaction between employers and job-seekers - Seamless collaboration with all stakeholders of skill development ecosystem Mainstreaming of digital technologies and the multiplier effect of their integration can create a reimagined skill development ecosystem with the following key elements: Digital Skilling Platform: A digital skill development platform can enable a frictionless integration of employers, individuals, skill development institutions, and regulators. Among other things, the skilling platform will: - Manage skill registry where individuals can enrol and continuously update their skills and competencies as they upgrade those. - Maintain standard-based skilling courseware prepared by registered skill providers in multi-media digital formats. - Provide individuals with access to digital skill development courses. - Remove information asymmetries between employers and job-seekers. - Publish employment opportunities at granular level – competency, location, sector etc. - Facilitate ‘virtual’ connect between employers and job-seekers. Mobility: Given the smartphone ubiquity and pervasive connectivity, all services of the digital skilling platform including enrolment, skill upgrade, and access to skill development courses should be provisioned through mobile devices apart from the traditional internet channels. 
Digital Identity Assurance: Given the sensitivity of personal data maintained in the digital skill platform, end-to-end identity assurance can be rendered through biometrics, digital certificates, and digital signatures. Big Data Analytics: The digital skilling platform will collect a huge amount of multi-dimensional data, which can be harnessed to develop insights to improve the effectiveness and efficiency of the skill development ecosystem. Some of the insights, which can be derived, are: - Workforce supply in a given region by age, gender, social profile, education, experience, domain, and competency level - Map regions to enable decisions on skill coverage and outreach - Workforce consumption and requirement by region, industry, skill level - Workforce supply and consumption trends for each of the above - Workforce demands not met by current skill registry - Capacity augmentation requirements in skill development supply The analytics will provide necessary data, analysis, trends, and forecast that help in planning at granular (micro) and holistic (macro) levels. It will also enable sector and geography level skill development interventions based on the supply and demand mismatch. Augmented Reality: The general shortage and often urban concentration of master trainers especially in emerging skills will not go away in a short-time. Augmented Reality-based skill development courses provisioned through cloud-based digital platforms and accessible through mobile devices can be more effective as compared to in-person classroom-based skill development. The re-skilling / skilling challenge facing nations today is huge. A poorly skilled workforce stifles investment and growth and often leads to exploitative job markets and large-scale unemployment. Transforming the skill development ecosystem and making it responsive to both industry and citizens will require holistic solutions powered with digital technologies.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9686669111251831, "language": "en", "url": "https://cryptogeist.com/2017/09/21/incredible-article-ico-scams-youll-ever-read/", "token_count": 1075, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.2197265625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:c2c86b33-e33f-45c5-b1a4-1ac839b48206>" }
ICO, otherwise known as "initial coin offering," is when a new cryptocurrency goes live for trading to the general public. It has become a popular source of startup money for new businesses, and a new initial coin offering is announced nearly every day! An ICO essentially represents ownership of, or rights to, a project in a digital format. So what is the difference between an IPO and an ICO? An IPO, or "Initial Public Offering" (also called a "stock market launch"), is where shares of a company can be sold to the general public. It's a very exciting moment for companies that do reach an IPO, in that the company is ready for the next stage of growth. The purpose of an IPO is for the company to raise additional capital, with the option to monetize its shares and become a publicly traded business. It's an open market, and it clearly embodies what capitalism stands for, with the intention of growing new enterprises every day. The origins of the IPO go back well before the 1600s, to the Roman Republic. Known as the "publicani," these were organizations whose representatives owned shares and were allowed to distribute them amongst the public, in what was called "over the counter trading." There is plenty of evidence of rising asset values, and the public heavily favored this type of business. Unfortunately, the publicani were eliminated with the fall of Rome. Another chapter in the history of the IPO came during the early modern period, when the Dutch innovated their financial principles. The first recorded IPO was in 1602, when the Dutch East India Company initiated an order to sell shares to the general public. The Dutch East India Company was the first in recorded history to successfully issue bonds and shares of stock to the general public. Although it was Rome that first invented the IPO, it was the Dutch who turned the concept into a fully workable business principle. The United States adopted this principle in 1783. Here are the advantages of an IPO: The single most effective feature of an IPO is that it allows virtually anyone to be an investor. Whether a successful business magnate or purely a beginner – the pool is open to anyone who's interested in investing. This provides capital to the business owners, who can use those funds to repay debt or to fund further innovation and future change. Don't forget the amount of exposure the company gets when it goes public. Every news station, media outlet, and newspaper will want to cover this hot company and will provide immense exposure, which could lead to more prestige and wider recognition. Here are the disadvantages of an IPO: There are plenty of risks when it comes to an IPO, which is why many prospective companies choose not to go public. While it's an amazing feeling to be featured in the press and receive a ton of media attention, you'd be surprised how many companies don't want this attention. Here's why: Going public requires a lot of legal work, which is incredibly expensive. If the company isn't well run, the media and public will catch on to these slippages and may therefore shy away from investing in the IPO. Going public is also a huge marketing effort, which requires additional payments to agencies, advertisers, and mainstream media. Some costs can be in the millions! When a company decides to go public, new shareholders become involved.
Some shareholders may even own MORE shares than the original owners and have the power to dictate which direction the company will go. The first-ever ICO was in 2013, by a project named Mastercoin. In 2014, Ethereum held its ICO and introduced the world to blockchain technology and "mining for ETH". Today you see an ICO happening almost every day, to the point where many scams and "pump and dump" schemes have appeared. Pump and dumps, Ponzi schemes, and other fake offerings are littered all over the internet, and it's best to be very wary and to look deeply into your next ICO before pulling the trigger. Right now, scammers have made MILLIONS from false ICOs – and well-known crypto influencers are calling it out on the internet. Here are some well-known ICO scams where investors lost MILLIONS:
- Pump and dump – One of the oldest tricks in the book: scammers will obtain thousands of social media handles promoting an ICO that simply does not exist, yet millions are easily influenced by these accounts and are thus scammed.
- Promises too good to be true – Any ICO that offers investors 1 BTC sounds way too good to be true. If 1 BTC is worth $3,700, imagine how many BTC the founders would have to deliver to the foolish millions.
- Phishing scams – Email is still a very popular channel for selling products, and now the Nigerian prince is looking for someone to give BTC to. Don't fall for these email tricks!
Though it is an exciting time to invest in cryptocurrency, remember to always practice your best judgment before forking over your hard-earned money. We hope this tutorial has better equipped you with knowledge of faulty ICOs.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9309238195419312, "language": "en", "url": "https://kimacommercial.com/blog/understanding-capitalization-rates-and-debt-coverage-ratios/", "token_count": 852, "fin_int_score": 5, "fin_score_model": "en_fin_v0.1", "risk_score": 0.034423828125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:dfa7b45e-e831-43af-b57f-ab5dc179992f>" }
Understanding Capitalization Rates and Debt Coverage Ratios

Capitalization rates ("cap rates") and debt coverage ratios ("DCRs") are highly effective and popular commercial real estate metrics that can be used to analyze property values and market trends and to help make purchasing decisions. Provided below is a basic overview of how cap rates and DCRs are utilized.

First, what is the definition of a cap rate and what is the formula for determining it? A cap rate is the ratio of Net Operating Income to the asset value. To obtain the cap rate, simply use the following formula: Cap Rate = annual net operating income / cost (or value). For example, if a property is on the market for $1,000,000 and the net operating income is $150,000, the cap rate is 15% ($150,000 ÷ $1,000,000). In another example, where the net operating income is $7,000 and the property is listed for $100,000, the cap rate is 7%.

So what can we learn from using cap rates, and what do they tell us about a potential real estate investment? One thing we learn from the cap rate is the return on investment an investor can expect to earn on an all-cash purchase. The examples in the above paragraph would, therefore, yield returns of 15% and 7% respectively. Another thing a cap rate is helpful for is evaluating risk. For example, two equally sized office buildings in the same neighborhood can be evaluated for risk based upon cap rates, all other things being equal. The higher the cap rate between the buildings, the greater the risk premium. Investors often use cap rates to evaluate the risk of certain investments when making decisions about their portfolios.

Cap rates can also help to identify trends in a particular market over time. For instance, if cap rates are trending lower in a market over a period of a few years, the market is growing more competitive. Higher cap rates over the same period, by contrast, indicate less competition for that particular product. This provides some insight into the performance of the particular markets over this time period, and this simple analysis can help the purchaser evaluate risk. It is important to remember that cap rates are a much more accurate indicator of property performance when the source of Net Operating Income is relatively steady. A discounted cash flow analysis may need to be used when a Net Operating Income stream is complex and/or irregular.

Another important real estate investment metric is the debt coverage ratio, or DCR. Examples of how the DCR is utilized are outlined below. The Debt Coverage Ratio (DCR) is used to determine the ability of an income stream from a property to pay its operating expenses and mortgage payments. Banks and investors will set a limit on their tolerance for this ratio and expect a particular project to remain at or above it for the duration of the loan or investment term. The larger the DCR, the better the investment is covering its debt service. A DCR of 1 means that the investment is meeting its obligations to the bank or investor but with no free cash flow left over. Therefore, it is not unusual for a bank or investor to require a DCR of 1.25, which means net operating income must exceed debt service by 25%, providing a cushion for the bank or investor. To calculate the DCR, first subtract all operating expenses from gross revenue; this leaves Net Operating Income, which is then divided by the annual debt service (principal plus interest).
An example of a DCR of 1.25 would be Net Operating Income of $150,000 on Debt Service of $120,000 ($150,000/$120,000 = 1.25). This simple calculation can be critical when pre-qualifying your investment to ensure you are heading down the correct path. Because the interest rate and amortization period determine the debt service, they are important to consider when calculating your DCR. Running cap rates and debt coverage ratios will often be among the first things your commercial real estate broker does for you when assisting you with your project. Therefore, it is important to engage a professional when determining the best plan of action for your commercial sale or purchase.
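To make the two formulas concrete, here is a minimal Python sketch that reproduces the worked examples above. The property figures are the ones used in the article; the helper function names are purely illustrative.

```python
def cap_rate(net_operating_income, price):
    """Capitalization rate: annual NOI divided by purchase price (or value)."""
    return net_operating_income / price

def debt_coverage_ratio(net_operating_income, annual_debt_service):
    """DCR: NOI divided by annual debt service (principal plus interest)."""
    return net_operating_income / annual_debt_service

# Worked examples from the article
print(f"Cap rate, property A: {cap_rate(150_000, 1_000_000):.1%}")   # 15.0%
print(f"Cap rate, property B: {cap_rate(7_000, 100_000):.1%}")       # 7.0%
print(f"DCR: {debt_coverage_ratio(150_000, 120_000):.2f}")           # 1.25
```

A quick check like this is also an easy way to test how sensitive the DCR is to a change in loan terms: raising the assumed debt service to $135,000, for instance, drops the ratio to about 1.11, below the 1.25 threshold a lender might require.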
{ "dump": "CC-MAIN-2020-29", "language_score": 0.8921550512313843, "language": "en", "url": "https://www.epa.gov/renewable-fuel-standard-program/proposed-renewable-fuel-standards-2017-and-biomass-based-diesel", "token_count": 427, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.05615234375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:608abe35-861f-422b-9349-b8a04ec5dd36>" }
Proposed Renewable Fuel Standards for 2017, and the Biomass-Based Diesel Volume for 2018

EPA proposed increases in renewable fuel volume requirements across all types of biofuels under the Renewable Fuel Standard (RFS) program. These increases would boost production and provide for ambitious yet achievable growth. The proposed volume requirements and associated percentage standards are for calendar year 2017 for cellulosic biofuel, biomass-based diesel, advanced biofuel, and total renewable fuel. EPA also proposed the volume requirement for biomass-based diesel for 2018.

| |2014|2015|2016|2017|2018|
|---|---|---|---|---|---|
|Cellulosic biofuel (million gallons)|33|123|230|312*|n/a|
|Biomass-based diesel (billion gallons)|1.63|1.73|1.90|2.00|2.1*|
|Advanced biofuel (billion gallons)|2.67|2.88|3.61|4.0*|n/a|
|Renewable fuel (billion gallons)|16.28|16.93|18.11|18.8*|n/a|

*Proposed volume requirements.

The proposed volumes would represent growth over historic levels:
- Total renewable fuel volumes would grow by nearly 700 million gallons between 2016 and 2017.
- Advanced renewable fuel – which requires fifty percent lifecycle carbon emissions reductions – would grow by nearly 400 million gallons between 2016 and 2017.
- The non-advanced or "conventional" fuels portion of total renewable fuels – which requires a minimum of 20 percent lifecycle carbon emissions reductions – would increase by 300 million gallons between 2016 and 2017 and achieve 99 percent of the Congressional target of 15 billion gallons.
- Biomass-based biodiesel – which must achieve at least 50 percent lifecycle emissions reductions – would grow by 100 million gallons between 2017 and 2018.
- Cellulosic biofuel – which requires 60 percent lifecycle carbon emissions reductions – would grow by 82 million gallons, or 35 percent, between 2016 and 2017.
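The growth figures in the bullets follow directly from the table. The short sketch below reproduces that arithmetic, including the "conventional" volume, which is simply total renewable fuel minus advanced biofuel; the year keys follow the table above, with the 2017 and 2018 entries being the proposed values.

```python
# Volumes keyed by year; billion gallons unless noted (2017/2018 are proposed values).
total_renewable = {2014: 16.28, 2015: 16.93, 2016: 18.11, 2017: 18.8}
advanced        = {2014: 2.67, 2015: 2.88, 2016: 3.61, 2017: 4.0}
biodiesel       = {2014: 1.63, 2015: 1.73, 2016: 1.90, 2017: 2.00, 2018: 2.1}
cellulosic      = {2014: 33, 2015: 123, 2016: 230, 2017: 312}   # million gallons

growth_total    = total_renewable[2017] - total_renewable[2016]   # ~0.69 -> "nearly 700 million gallons"
growth_advanced = advanced[2017] - advanced[2016]                 # ~0.39 -> "nearly 400 million gallons"

conventional_2016   = total_renewable[2016] - advanced[2016]      # 14.50
conventional_2017   = total_renewable[2017] - advanced[2017]      # 14.80
growth_conventional = conventional_2017 - conventional_2016       # 0.30 -> "300 million gallons"
share_of_target     = conventional_2017 / 15.0                    # ~0.99 -> 99% of the 15-billion-gallon target

growth_biodiesel  = biodiesel[2018] - biodiesel[2017]              # 0.10 -> "100 million gallons"
growth_cellulosic = cellulosic[2017] - cellulosic[2016]            # 82 million gallons (~35% growth)
```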
{ "dump": "CC-MAIN-2020-29", "language_score": 0.939329206943512, "language": "en", "url": "https://www.esri.com/about/newsroom/arcuser/bidders-can-see-opportunities-in-spectrum-auction-with-arcgis/", "token_count": 1400, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.030517578125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:7977bf7a-6e86-4a09-801c-94ff989bea9a>" }
On March 29, 2016, the Federal Communications Commission (FCC) is expected to begin its first-ever Broadcast Incentive Auction designed to reallocate licenses on the 600 MHz spectrum to provide more bandwidth for wireless devices. Spectrum allocation has traditionally been based on geographic areas, but for this auction the FCC created 416 new geographic divisions for the auction called Partial Economic Areas (PEAs), which will not correspond to the boundaries of any geographic designation previously used. For wireless companies planning to participate in the auction, understanding these new areas and which spectrum will potentially be available is crucial to developing a winning strategy. Wireless carriers and others participating in the auction need geospatial tools to optimally plan network expansion; prepare initial bids; and continually modify their bidding strategy to factor in the status of licenses, which constantly changes throughout the auction, and the relation of potential holdings to existing holdings, target markets, and competitors. In contrast to previous auctions, information on the availability of spectrum by geography will not be obtainable much in advance. So wireless carriers will also need to be adept in processing real-time opportunities as they emerge during the auction. Esri has addressed these needs with Solutions for Smart Spectrum Analysis. The auction is a high-stakes event not only because it will determine which wireless carriers will hold licenses for specific locations and provide wireless connectivity for the ever-increasing number of smartphones and other mobile devices in the United States but also because billions of dollars will change hands in the process. The FCC manages the public spectrum in the United States, including the District of Columbia and US territories, by granting licenses to use portions of the spectrum based on use (television, radio, and wireless) and location to avoid problems caused by signal interference. The exponential growth in the number of smartphones and other mobile devices that are used for accessing email and consuming digital content has created a tremendous demand for additional wireless spectrum. In addition to making more spectrum available, the FCC would like to ease spectrum congestion by establishing a wireless spectrum that is generally uniform across markets. Realizing this goal will require that broadcast television providers relinquish spectrum that can then be reassigned to meet wireless broadband needs. To accomplish this transfer, the FCC is holding a spectrum auction. Although the FCC has been conducting auctions to license spectrum rights for more than 20 years, the Broadcast Incentive Auction is a new tool designed to help with the reallocation of spectrum. It is far more complex than previous auctions because it includes two interlinked auctions: a reverse auction and a forward auction. The entire process begins with the reverse auction, which consists of discrete, successive rounds of bidding to acquire spectrum licenses from television broadcasters. Bids to acquire spectrum start high and move successively lower until the FCC acquires the desired amount of spectrum within a market area. After each round of bidding, the remaining television band in each market area is repackaged by the FCC to determine how much space is left and ensure that it has a channel for each of the remaining stations. 
Repackaging is a distinctive feature of this auction and one that introduces a much greater level of complexity to the bidding process. It not only frees up spectrum for wireless use and reorganizes television station usage so that it takes up a smaller portion of the band, but it also resets the priorities of bidders in a market area after each round. The forward auction allows wireless carriers to buy spectrum that has been freed up by the reverse auction. But again, it is not a simple process. There are several added twists. Not only does the available spectrum in any location change after each bidding round—shaped by repackaging—but the format of the auction enforces specific bidding behavior. Unlike a traditional real-time auction in which bidders can wait until the last minute to place bids, FCC rules for this auction require bidders to actively participate throughout the auction process. Each bidder makes a payment before the auction starts, which determines its eligibility in the auction. That bidder is required to bid on a specified portion of its maximum eligibility to continue to participate in the auction. The results of a bidding round are released approximately 15 minutes after that round closes. Only then do bidders learn about bids that have been placed by other bidders. This information influences how bidders value licenses, particularly in relation to any bidding strategies. These factors are likely to require a thorough reassessment of the current bidding strategy and quick modification of that strategy for future rounds. The number of bidding rounds is not predetermined. This iterative process continues, with bidders dropping out as the bidding level for licenses they wish to acquire exceeds the amount they are willing to pay. Bidding continues until all bidding activity ceases in a round, and that becomes the final round. The FCC estimates that the Incentive Auction could take two to three months. Auction participants need to quickly visualize the location of available licenses and competitor holdings to uncover opportunities and threats and respond to them. Wireless carriers will be trying to expand and consolidate market areas to increase revenue and improve the customer's experience. Using the ArcGIS tools and data available in Esri's Solutions for Smart Spectrum Analysis, particularly GeoEnrichment, bidders can analyze market penetration and evaluate potential market areas. GeoEnrichment provides demographic and economic information at a local level. At the same time, ArcGIS can identify threats from competitors. The three bundles available with Solutions for Smart Spectrum Analysis scale offerings to need and include access to Mosaik CoverageRight, the most comprehensive wireless coverage and licensing database in the United States. These tools are vital for preauction planning and even more critical for reformulating bidding strategies quickly during the course of the auction, as licenses are taken off the table in successive rounds and the remaining licenses repackaged, changing the competitive landscape. Rather than stare at pages of tabular data, ArcGIS maps these changes and makes them immediately comprehensible. ArcGIS can quickly map round-by-round results that are supplied by the FCC in comma-separated values (CSV) format to show the current status of the auction or results from past rounds. The ArcGIS platform gives auction participants access to real-time auction information that is easily shared with decision makers but is secure.
Dynamic maps and visualizations via dashboard or web- or mobile-based apps support fast and accurate decision making during the auction and help ensure bids align with an organization’s auction goals. One of the great strengths of GIS has always been its ability to make patterns in data apparent. With powerful tools and authoritative market data that can be easily combined with auction results and instantly shared, Esri’s Solutions for Smart Spectrum Analysis can provide a competitive advantage during the auction and a comprehensive view of the capacity and opportunities available to move market strategies forward.
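To give a flavour of the round-by-round mapping workflow described above, here is a small Python sketch using the open-source geopandas library rather than the ArcGIS platform itself. It is only an illustration of the general idea of joining FCC CSV results to PEA boundaries: the file names and column names (PEA_ID, posted_price) are assumptions, not the actual FCC or Esri formats.

```python
import pandas as pd
import geopandas as gpd
import matplotlib.pyplot as plt

# Hypothetical inputs: a PEA boundary file and an FCC round-results CSV.
peas = gpd.read_file("pea_boundaries.shp")        # assumed columns: PEA_ID, geometry
rounds = pd.read_csv("round_42_results.csv")      # assumed columns: PEA_ID, posted_price

# Join the latest round results to the geography and map the posted prices.
status = peas.merge(rounds, on="PEA_ID", how="left")
ax = status.plot(column="posted_price", legend=True, figsize=(12, 7))
ax.set_title("Forward auction: posted prices by Partial Economic Area (round 42)")
ax.set_axis_off()
plt.show()
```

Rerunning a join like this after each 15-minute results release is the kind of repeatable step that lets a bidding team see, at a glance, where prices are moving relative to its target markets.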
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9350528120994568, "language": "en", "url": "https://www.htl.london/blog/serverless-computing-primer-for-decision-makers", "token_count": 1221, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.053955078125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:db18c817-f561-436f-bca8-7da96bc3bd1b>" }
Serverless Computing - A Primer for Decision Makers

It seems like only yesterday that cloud computing was deemed the next big thing in the business and IT landscape. Service providers scrambled to offer the best cloud services available, while organisations carefully planned how they could best make a smooth transition into the cloud environment. Now fast forward about a decade. Cloud computing remains a game-changing technology which initiated a paradigm shift in many companies, not only in how they set up their network infrastructure, but also in how they run their operations. Over time though, provisioning resources in the cloud may become a tedious and complex task for IT administrators, especially if the primary aim of the business is getting its product to market faster. This is where serverless computing comes in.

Serverless Computing: What is it and how does it work?

Serverless computing is a form of cloud computing that provides backend services on a pay-per-use basis. This means that a user (in most cases, a developer) is able to write snippets of code and run them right away, without having to think about provisioning and managing the underlying infrastructure. This simplifies code deployment and greatly reduces administrative responsibility and costs for managing physical or virtual servers. It should be understood, however, that this doesn't completely remove servers. The term serverless is something of a misnomer in this regard, because servers are still very much in use. The user just doesn't need to worry about them at all. In serverless computing, also referred to as "functions-as-a-service" (FaaS) or "backend-as-a-service" (BaaS), developers simply write and deploy code, period. The service provider automatically allocates the exact amount of resources needed to run the code, then bills the user for these, usually down to the nearest 100 milliseconds of task running time. The resources provided and the subsequent costs charged are very precise—no more and no less than what is actually used.

Why go serverless?

Many service providers would say that serverless computing best exemplifies two of the primary attributes of cloud computing in general: computing resources that can scale at a moment's notice, and paying only for what you use. While these same features are also definitive of IaaS, serverless takes it a step further. First, computing resources auto-scale with demand, unlike "regular" cloud computing where the company has to be mindful of the peak periods in order to allocate more virtual resources, and then scale these down when they are no longer needed. You really just pay for what is actually used, as there are no server, storage, or networking resources on standby that need to be paid for. Instead, what is billed is simply the time during which a function or task is running. As beneficial as this setup may sound, however, businesses should know that a serverless architecture is not suitable for all types of software application.

Who would benefit most from using serverless?

A serverless architecture is essentially built on small, usable chunks of code or functions, which are self-contained and can be deployed when and where needed. This makes it a suitable option for developers who are designing lightweight and flexible applications, and for existing applications that can be broken down into separate, independent blocks which are easy to update and expand. On the other hand, sizeable applications with workloads that are relatively predictable, as well as legacy systems that have an entirely different structure, would receive no benefit from being migrated to a serverless environment. In these cases, a traditional setup with dedicated servers, whether physical or virtual, would be more fitting from a cost-efficiency and system architecture standpoint.
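To show how small a serverless deployment unit can be, here is a minimal AWS Lambda–style handler written in Python. The shape (a single function that receives an event and a context and returns a response) is the standard pattern; the event field `name` and the function's purpose are invented purely for illustration.

```python
import json

def handler(event, context):
    """Entry point invoked by the platform; there is no server code anywhere in sight.

    The provider spins up, scales, and bills compute only while this function runs.
    """
    name = event.get("name", "world")   # hypothetical input field
    body = {"message": f"Hello, {name}!"}
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }
```

Everything outside this function (servers, operating system, runtime patching, scaling) is the provider's problem, which is exactly the trade-off weighed in the pros and cons below.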
On the other hand, sizeable applications with workloads that are relatively predictable, as well as legacy systems that have an entirely different structure, would receive no benefit from being migrated to a serverless environment. In these cases, a traditional setup with dedicated servers, whether physical or virtual, would be more fitting from a cost-efficiency and system architecture standpoint. What are the pros and cons of serverless computing? The biggest benefit of serverless computing is pretty much apparent: it adds a lot of efficiency and speed to the development lifecycle. Automatic scalability, significantly lowered server costs (no idle resources) and elimination of the need for server maintenance (thus freeing up more time for developers and admins) are the main reasons for considering and adopting a serverless architecture, if the move makes sense. But it’s not all good news. The drawbacks are real too: Heavy reliance on vendor ecosystems. You don’t have to manage the servers yourself, but you are also completely dependent on how your vendor manages theirs. This means you have no control over server hardware, runtimes and runtime updates. Plus, you can’t easily switch providers if you need or want to. Performance issues. Serverless computing makes you prone to dealing with ‘cold starts’, primarily because when a function is not running regularly the startup time is affected. You do have the option of keeping functions ‘warm’ by letting them run at regular intervals. Also, it’s best to keep serverless codes small and focussed to minimise this problem. Security concerns. The issue of security is inherent in the cloud, and it’s no different in the serverless world. With your servers in the hands of the provider, you have no guarantee or full knowledge of their security policies and practices. This can be a huge concern, particularly if you have to deal with personal information or confidential data. IT talent is scarce. Your company may be ready to go serverless, but are your developers ready? The fact is, only a small percentage of developers are capable of writing serverless code at this point. However, this may change sooner rather than later, considering the appeal of serverless computing to IT organisations today. Which providers offer serverless computing? Most major cloud providers also offer serverless computing, namely AWS, Google Cloud Platform, Microsoft Azure, and IBM Cloud Functions. Each provider has its own features, so consider these first to see how you can maximise the benefits and mitigate the risks outlined above. So, is serverless computing the best option for you? While this question remains, having a better understanding of it should help you make the right call.
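A back-of-the-envelope comparison shows why the pay-per-use billing described earlier can undercut an always-on server for spiky workloads. All of the rates and volumes below are hypothetical placeholders, not any provider's actual price list.

```python
# Hypothetical pricing assumptions
PRICE_PER_GB_SECOND = 0.0000167   # compute price per GB-second
PRICE_PER_REQUEST = 0.0000002     # per-invocation fee
FLAT_SERVER_COST = 60.00          # monthly cost of an always-on virtual server

def monthly_serverless_cost(invocations, avg_ms, memory_gb):
    """Cost of running a function `invocations` times in a month."""
    # Billing is rounded up to the nearest 100 ms per invocation.
    billed_ms = -(-avg_ms // 100) * 100          # ceiling to 100 ms
    gb_seconds = invocations * (billed_ms / 1000) * memory_gb
    return gb_seconds * PRICE_PER_GB_SECOND + invocations * PRICE_PER_REQUEST

# A workload with 1 million short requests per month
cost = monthly_serverless_cost(invocations=1_000_000, avg_ms=120, memory_gb=0.5)
print(f"Serverless: ${cost:.2f} vs always-on server: ${FLAT_SERVER_COST:.2f}")
```

The same arithmetic also shows the flip side: a function that runs hot around the clock can end up costing more than a flat-rate server, which is why the caveat that serverless is not applicable for all application types matters.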
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9332783818244934, "language": "en", "url": "https://www.knowledgiate.com/different-types-of-budgets-in-finance-and-accounting/", "token_count": 936, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.034912109375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:ffb5af28-1185-4aeb-8951-b03fc5ce21fa>" }
Budgets can be classified on the basis of functions involved, according to time, according to the nature of transactions and according to activity levels.

(A) Functional Classification of Budgets

Budgets can be prepared on a functional basis also. When individual budgets are associated with a particular function and are integrated with the Master Budget of the business, they are called functional budgets. Such main budgets are as follows:

(1) Sales Budget: The sales budget is a forecast of total sales classified according to groups of products that are expected to be sold in what quantity and at what prices. A sales budget is generally regarded as the keystone of budgeting. It reflects the expected revenue from sales and cash receipts from customers.

(2) Production Budget: It is a forecast of budgeted production based on sales, productive capacity and requirements of inventories etc.

(3) Production Cost Budget: It is a forecast of the cost of production including direct material cost, direct labor cost and other overheads: fixed, variable and semi-variable. This production cost budget can be sub-divided into the Materials Requirement Budget, Direct Labor Budget, Factory Overheads Budget and Office and Administrative Overheads Budget etc. The Material Budget is also known as the purchase budget.

(4) Personnel Budget: It is a budget of personnel inventory, hence it would automatically include labor employed in productive capacity.

(5) Research and Development Budget: This budget relates to the improvement in the quality of the products or research for new products.

(6) Cash Budget: It is one of the important budgets. It is a sum-total of the requirements of cash in respect of various functional budgets as well as of anticipated cash receipts.

(7) Capital Budget: It is a forecast of outlay on fixed assets as also the sources of capital required.

(8) Master Budget: It is an integrated budget prepared from the separate functional budgets.

(9) Other Functional Budgets: There may be other functional budgets also, for example, the selling and distribution cost budget, administrative overhead budget etc.

(B) Classification of Budgets according to Time

According to time or period, the budgets may be broadly of the following three types:

(1) Long-term Budgets: Such budgets are prepared with a long-term view. They are concerned with planning the operations of a firm over five to ten years. They are prepared generally in the form of physical quantities.

(2) Short-term Budgets: Such a budget is prepared for a period of one or two years. It is generally in the shape of a short-term production plan.

(3) Current Budgets: Such budgets are prepared for a very short period, for example one month, one quarter or a season and so on.

(C) Budgets according to Nature of Transactions

On the basis of the nature of transactions, budgets may be divided into two categories:

(1) Operating Budget: The operating budget is frequently known as the revenue budget or income and expense budget also. It relates to the entire operations of the firm. The budget lays down the estimated net profit, operating profits, and profit appropriations for the budget period. The operating budget is the lifeblood of the budgeting system which depicts the over-all policies and plans of the firm, covering a definite period. There are two main components of the operating budget: the sales budget and expense budgets.

(2) Capital Budgets: These are related to the capital structure and liquidity of the enterprise. These include the working capital budget,
annual cash budget, budgeted equity capital and loan capital, budgeted investments in fixed assets, etc.

(D) Budgets according to Activity Levels

Budgets according to activity levels may be of two types:

(1) Fixed Budget: It is a budget in which targets are rigidly fixed. This is a forecast of the targets for the coming year prepared well in advance, sometimes even two or three months before the year. These targets are used as a standard yardstick to measure actual performance. Though a fixed budget can also be revised whenever the necessity arises, it is generally static in character.

(2) Flexible Budget: It is also known as a Variable Budget. If the costs in a responsibility center are expected to vary with the volume of production, as is the case with most production departments, a flexible budget must be prepared. Such a budget shows the expected behavior of costs at various volume levels. Thus a variable budget possesses a distinct advantage over the fixed budget, particularly where it is difficult to forecast sales, costs and expenses with exact or greater accuracy.
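The difference between a fixed and a flexible budget is easy to see with a small calculation. The cost figures below are hypothetical and serve only to show how a flexible budget restates the allowed costs at each activity level.

```python
FIXED_COSTS = 50_000          # rent, salaries, etc. (per period)
VARIABLE_COST_PER_UNIT = 12   # materials, power, etc.

def flexible_budget(units_produced):
    """Budgeted total cost allowed at a given volume of production."""
    return FIXED_COSTS + VARIABLE_COST_PER_UNIT * units_produced

# A fixed budget is set once, at the planned volume...
planned_units = 10_000
fixed_budget = flexible_budget(planned_units)      # 170,000

# ...while a flexible budget is re-computed at whatever volume actually occurs.
for actual_units in (8_000, 10_000, 12_000):
    print(actual_units, "units ->", flexible_budget(actual_units))
```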
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9546921253204346, "language": "en", "url": "https://www.majkic.net/eng/news/sci-tech/642-what-is-software-testing", "token_count": 2595, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0205078125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:07d33707-78ee-47d2-be6d-8d202e04984b>" }
Recently, software glitches have increasingly affected the average consumer – at airports, or with online banking. Often, we hear that the software wasn't properly tested. But what does this mean exactly? Every now and then, really spectacular software breakdowns occur. The opening of Heathrow Terminal 5 became a public embarrassment because the baggage system failed to function. More than 17 million customer accounts at RBS and its subsidiaries NatWest and Ulster Bank could not be accessed for some or all of the day because the installation of customer management software corrupted the entire system. One of the biggest Austrian banks paid out €21 million to appease its customers with vouchers because the new online banking software didn't work for days on end. Errors like these are not only damaging to a company's brand, but can also be very costly. The goal of software testing is to avoid such incidents and their consequences. On the following pages, we explore the topic of software testing and address these main questions:

- What's the difference between today and yesterday?
- What must be tested?
- Who should do the testing?

We can assume that in the cases previously mentioned, the software in question was definitely tested: Banks and insurance companies know the risks of using software that has not been tested. So how can such malfunctions continue to occur? Some, but certainly not all, software glitches can be caused by storms and natural disasters. Still, this provides no explanation for the increase in software errors of late. Testing has always been done and it used to work well. And natural disasters are a known, if unpredictable, factor. So why should the tried-and-true formulas suddenly fail? The reason is simple: Programs have become more complex. And to address this complexity, more testing is required. How much more? Take the years 2000 and 2010. In this time, the volume of data being moved around increased by a factor of 50,000. If a program was tested for two weeks in 2000, it would have to be tested for 100,000 weeks in 2010 – in other words, around two thousand years.

More interactions, not more data, increases complexity

Working and calculating this way is clearly not an option. After all, software is now more efficient, development tools allow many errors to be detected before the program is first created, and modern object-oriented software design enables developers to code neatly and in a less error-prone way. But even if testing is only increased by a factor of 50, it would still have to be tested for 100 weeks – or two years. That simply isn't feasible. Comparing the difference in size and quantity alone doesn't necessarily mean that the software has become more complex. In fact, one of the main arguments for using a computer is that it doesn't matter whether it has to perform a calculation five times or 5,000 times. It should simply be reliable. It is not the increase in the quantity of data that causes complexity, but rather the increase in possible connections and systems. Look at the development of mobile telephony: In Germany, Radio Telephone Network C came along first in cumbersome cases, followed by the much more manageable digital cellular network D-Netz. In comparison, today's smartphones have the processing power of mainframe computers from 20 years ago. Apart from the pure advancement of technical data, think about all the things that can now be done with a smartphone.
Above all, think about the number of other systems that can be tapped into – at the same time, even. It is the number of possible connections that causes the corresponding increase in complexity. The main difference between today and yesterday is not the advancement in programming languages – even though developers may no longer code in Assembler or COBOL, these languages can still be used to write good programs today – but rather the number of possible solutions there are for a certain problem. Take this analogy of trying to cross a river that is 30 feet wide without using a boat and without getting wet. In the past, there was one solution: system analysts would look for places where big rocks could be used jump across the river to the other side. Today, there are 10 different bridges crossing the river, that is, 10 different ways to solve the problem. The software architect, then, has to choose a particular solution based on whether it meets various quality requirements. Let’s say there is a highway bridge crossing the river as well as a wooden walkway. To use the highway, you need to build feeder roads. Even if the simple wooden walkway is sufficient and building feeder roads requires more effort, the software architect may still choose to use the highway with the reasoning that other people want to cross the river, too. It’s impossible to test every combination Here is another example: Forty years ago, when passengers would buy a train ticket from a ticket machine, they would have to answer a series of questions, one after the other. From where do you wish to depart? To where do you wish to travel? How old are you? Are you entitled to a reduced fare? In which class do you wish to travel? And so on. If they discovered while answering the questions that they didn’t have enough money, they would have to cancel the transaction and start again from the beginning. At today’s ticket machines, passengers will find the questions slightly more hidden in different fields. Instead of entering their age, they select standard fare, half price, or other offers. Rather than typing the destination in full, they type the first few letters, and only the possible destinations are then displayed. While the layout of the input fields suggests that the information can be entered in any order, that is still not possible. For example, if users have entered a discount ticket, they cannot subsequently upgrade to first class. However, instead of getting an error message that says, “First class must be entered before you select a discount,” users will see a message like, “You must purchase your first class ticket on the train.” In this case, it is clear that developers made some small mistakes in the process of transferring an originally linear, simple input sequence to a graphical input system. Let’s say the machine needs to process five different inputs and they can be in any order. This means there are 120 different combinations of how entries can be made. So, it is understandable that not all input options were tested before the software was implemented. In the past, it was possible to test each individual function and then test the complete process. Now it is necessary to test the interactions between individual functions. The number of these interactions depends directly on the number of possible sequence combinations, which can easily be a seven-digit sum. If you take a smartphone, for example, the number of possible combinations surpasses the example of the ticket machines by several orders of magnitude. 
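The 120 figure above is just 5 factorial, and a few lines of code show both how quickly the orderings explode and what testers do about it: rather than running every sequence, they pick a small subset chosen by some coverage rule. The field names and the sampling rule here are illustrative assumptions, not a prescribed method.

```python
from itertools import permutations
from math import factorial

# Hypothetical input fields of the ticket machine
fields = ["origin", "destination", "age", "discount", "class"]

all_orderings = list(permutations(fields))
print(len(all_orderings), factorial(len(fields)))   # 120 120

# With ten interacting inputs the count is already in the millions.
print(factorial(10))                                # 3628800

# Testers therefore sample instead of enumerating: here, one ordering
# that starts with each field, i.e. 5 sequences instead of 120.
sample = [next(o for o in all_orderings if o[0] == f) for f in fields]
for order in sample:
    print(order)
```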
In 2000, it was stated at a conference in Germany that the number of possible states in a program the size of Excel 4 is approximately 10^80. That's an unimaginable figure. It becomes even more unbelievable when you think that the number of distinguishable particles in the universe was estimated by Stephen Hawking to be 10^160 in 2000. Both figures seem doubtful. But even if this figure is reduced to one percent of one percent, it still ends up as 10^66. It is clearly impossible to test every possible case that could arise. In fact, that is exactly what the Austrian bank – mentioned at the start of the article – said to its customers. So, if it is impossible to test everything, what parts of a program must be tested? This is one of the first tasks involved in software testing: to ascertain which test cases should be used. Many companies use developers to test other developers' work. Alternatively, they ask the business department to do testing, since these employees are the only ones who really know how the program should work.

Who tests what?

Developers usually test whether the requirements have been met. If a certain requirement can be executed and the right result is delivered, the test case is in order. If the test cases are selected so that each requirement is assigned at least one test case, the program is considered to work as soon as all test cases provide good results. The business department, on the other hand, does not concentrate on the general requirements, but rather on the requirements that are important for its own activities. And since these testers are familiar with certain customers, customer transactions, accounts, policies, and product combinations that frequently caused problems in the past, they can also refine the testing process. This is referred to as experience-based testing, and it is an improvement on the method that uses developers alone. Still, there is a weak point in this kind of testing: Who checks the requirements? Often, no comparison is made between the final specifications and the design specifications. This problem originates with the people placing the request. Often, they cannot visualize the behavior of the software that will ultimately be programmed. In the end, the final product may contain an "error," which was in fact called for in the requirements. They just thought it would look different. In addition, it is often the case that requirements are simply missing. These missing requirements may be overlooked in the testing phase both by developers and the business department because they are seen as self-evident. Using the example of the ticket machine again, it is clear that testers assumed users would know to start by touching the button at the top and working their way down. In fact, the program doesn't tell users that they have to press the top button first, and they are technically able to press the buttons in a different order. If they do so, they won't be able to complete the process correctly, but they also won't get an error message because the requirement was missing from the design specifications. Professional testers accept neither the developers' assumption nor the business department's confirmation. They try to put themselves in the role of the user. This is where testers' creative thought processes start. They try to anticipate the users' wrong entries. Here's an example: Passwords are generally case-sensitive.
Today, if users enter a wrong password, they are usually reminded of this with the message: Ensure that your CAPS LOCK key is off. But in the past, when this reminder was not so ubiquitous, users often assumed that they really had typed the wrong password. Perhaps they tried typing it again and again until they received a new message: Your password has been blocked. Please contact your system administrator. This happens in every software program today in a different form – users make entries that infringe on a business rule that they know nothing about. Good programs will give users a corresponding error and help message. However, if developers assumed that users would know this business rule, they won't have provided any such messages. The program will refuse to cooperate, and the user won't know why.

65,000 errors in Windows NT

What can software testing accomplish today, in concrete terms? For the Windows NT operating system, which is small compared with today's systems, Microsoft registered around 65,000 errors. The system is considered professional (C2 certification), and Microsoft made every effort to track down as many of the errors as possible. Due to economic considerations, however, it simply is not feasible to find all the errors. Despite professional testing, approximately three to four percent of all errors make it to production, that is, to the user. In this case, that comes to around 2,000 errors. Operating systems are big, but even in a commercial application the number of errors can be between 1,000 and 4,000. Keeping in mind that it's not possible to find all errors, it is important to look for those upon which users will inevitably stumble. For this, testers need to investigate the user's typical use cases. In software projects that add more functions to existing software, the use cases are often not known or not explicitly described. In this case, professional testers would put together a list of use cases themselves and ensure that they record the related business rules in as much detail as possible. For each use case, there is one test case per associated business rule. These test cases check whether a business rule is being infringed upon. By Hans Hartmann, test director at Objentis since 2007.
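The closing point about one test case per associated business rule often takes the shape of a table of wrong entries paired with the message the user should see. The `purchase_ticket` function and its rules below are hypothetical, invented only to show the shape of such a suite; the pytest API used is the real `pytest.mark.parametrize`.

```python
import pytest

from ticketing import purchase_ticket, RuleViolation  # hypothetical module under test

# One test case per business rule: (entry that breaks the rule, expected help message)
RULE_CASES = [
    ({"klass": "first", "discount": "senior"},
     "Discounts cannot be combined with first class"),
    ({"origin": "Berlin", "destination": "Berlin"},
     "Origin and destination must differ"),
    ({"age": -1},
     "Age must be a positive number"),
]

@pytest.mark.parametrize("wrong_entry,expected_message", RULE_CASES)
def test_wrong_entries_get_helpful_messages(wrong_entry, expected_message):
    """The program should explain the broken rule, not silently refuse to cooperate."""
    with pytest.raises(RuleViolation) as excinfo:
        purchase_ticket(**wrong_entry)
    assert expected_message in str(excinfo.value)
```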
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9490248560905457, "language": "en", "url": "https://www.nawra.org.uk/2018/03/impact-of-welfare-reform/", "token_count": 290, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.412109375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:c986c5dd-d51a-49bd-a40e-5d9472687b6d>" }
The Equality and Human Rights Commission have published their final cumulative report on the impact of government welfare reforms. The report suggests that children will be hit hardest, with an extra 1.5 million being pushed into poverty. In addition, the report finds that the child poverty rate for those in lone parent households will increase from 37% to over 62%, and households with three or more children will see losses of around £5,600. They also identify significant and disproportionate impacts on disabled families, on women and on Bangladeshi households. The report concludes that these negative impacts are largely driven by the freeze in working-age benefit rates, changes to disability benefits, and reductions in Universal Credit rates. David Isaac, the Chair of the Equality and Human Rights Commission, which is responsible for making recommendations to Government on the compatibility of policy and legislation with equality and human rights standards, said: "It's disappointing to discover that the reforms we have examined negatively affect the most disadvantaged in our society. It's even more shocking that children – the future generation – will be the hardest hit and that so many will be condemned to start life in poverty. We cannot let this continue if we want a fairer Britain." The Commission calls on government to reconsider existing welfare policies and to review the level of welfare benefits to ensure that they provide an adequate standard of living. The full report can be downloaded here.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.937802791595459, "language": "en", "url": "https://www.profinanceguide.com/financial-management-in-healthcare-role-functions/", "token_count": 1244, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": -0.01080322265625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:da7784a4-816e-4240-856f-eec597c45598>" }
Have you ever wondered how hospital bills get processed or how hospitals pay for their equipment? The financial system of healthcare organizations operates the same as any other business and relies on strong financial management. In this lesson, we have discussed the role and functions of financial management in healthcare organizations. Let's begin!

What is Healthcare Financial Management?

Healthcare finance is defined as finances that include both the financial management specialty and the accounting specialty within the healthcare industry. There are many parties involved in healthcare financial management, such as healthcare providers, pharmaceutical companies, health insurance companies, research-based organizations, and medical equipment companies.

What is the Financial Management Size and Structure of Healthcare Organizations?

The size and structure of the healthcare finance department vary by the nature and size of the organization. However, the general model is that the chief financial officer is responsible for managing treasury and other accounting tasks under his direction.

The Objectives of Financial Management in the Healthcare Sector

- Profit maximization.
- Wealth maximization.
- Proper mobilization.
- Proper estimation of financial requirements.
- Proper utilization of finance.
- Survival of the company.
- Maintaining proper cash flow.
- Creating reserves.

What Is the Role of Financial Management in Healthcare?

When we talk about the finance department of any organization, we might think that paying bills and collecting payments is all that the department does. But there is a long list of responsibilities for this department. Since the Affordable Care Act came into existence, finance departments in the healthcare sector have experienced major changes. Not only does the finance team fulfill general bookkeeping duties, fulfilling purchase orders, finalizing sales, maintaining receipts, managing payments and running payroll, but it does a lot more. They negotiate contracts with contractors and service providers and maintain cash reserves for future expenses. They retain these records electronically as well as manually. In short, the primary role is to manage money and financial risk in a way that coincides with the financial goals of the organization. After all, only if a healthcare organization has strong financial management plans is it able to provide the best healthcare treatments to all its patients.

What are Financial Management Functions?

Evaluation and planning, treasury, financing decisions, long-term investment decisions, contract management, working capital management and financial risk management are all included in the financial management functions of healthcare organizations. Read about these in greater detail below.

1. Evaluation and Planning

Evaluation of financial effectiveness and planning overall operations accordingly allows the healthcare organization to work better in the future. For example, a hospital evaluates the budget of an emergency room and discovers that they're losing patients because of smaller space. In response to this evaluation, the management then decides to plan for an expansion of the ER.

2. Treasury

Treasurers work alongside other teams. They keep the check and balance of an organization's cash. Moreover, they are responsible for ensuring that there are always enough funds available to meet the organization's immediate needs.
The treasury team can forecast the future needs of the organization and also advise on making long-term investment decisions to ensure a constant stream of revenue.

3. Long-Term Investment Decisions

Investment decisions involve analyzing current strategies and determining how investments will affect the financial future of the organization. The financial team comprises both top and middle-level managers who share their ideas when it comes to big investments. Taking the example mentioned above, the financial team at the hospital must consider the cost of an emergency room expansion to decide if it is a good investment or not.

4. Financing

Raising funds for expenditures is not an easy task. It involves things like using internal funds, fundraising, grants, or loans. The financial team will look at the costs versus the benefits of the investment and the amount of debt that they will incur. In the case of the ER expansion example, the senior manager will make the final decision and would ask the financial management team to initiate actions. The team will bring someone in to estimate the renovation cost and how long it would take to complete. At first, they may decide to use the organization's internal funds and then would apply for a small loan to cover the expenses.

5. Working Capital Management

Working capital = current assets minus current liabilities. Current assets might include cash, receivables, inventories, and marketable securities. The financial management team is responsible for managing the working capital of the healthcare organization and for lowering its costs. For example, in the renovation of the hospital's emergency room, the team will determine which assets are reusable and in which there's a need to invest. Here the financial team will use the working capital of the hospital to make these purchases.

6. Investor Relations

Financial management in healthcare also takes care of investor relations. They deal with shareholders and other stakeholders of the organization who have an interest in its finances and stability. Moreover, they provide investors with financial reports on current business performance or expected future changes.

Communication among the finance team

Oftentimes, lower and middle-level managers in the finance department meet with the CEO of a healthcare organization to discuss the books. In this meeting, they talk about the current financial statement, earnings to date, and balance sheets. The CEO will use this information for future strategic planning that involves budgeting, evaluating various sectors' performances, making long-term investment decisions, and determining if working capital is enough for the upcoming year.

Challenges the Financial Department in Healthcare May Face

- Managing finances in a capital-constrained environment: handling the pressure to cut costs.
- Accessing technological transformation: replacing an outdated IT structure with new medical technology.
- Adapting to market forces: acquisitions and mergers are a significant part of the healthcare sector.
- Meeting rules and regulations: healthcare organizations and hospitals have to comply with several regulations and compliance requirements.

Every day, it becomes more challenging for the financial department to survive in the current healthcare climate. They work hard to find alternative sources of funding while staying afloat.
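The working-capital definition quoted above is simple enough to express directly; the balance-sheet figures here are hypothetical, meant only to show the kind of check the finance team performs before committing internal funds to something like the ER renovation.

```python
def working_capital(current_assets, current_liabilities):
    """Working capital = current assets minus current liabilities."""
    return current_assets - current_liabilities

# Hypothetical balance-sheet figures for the hospital
wc = working_capital(
    current_assets=4_200_000,       # cash, receivables, inventories
    current_liabilities=2_900_000,  # payables, short-term debt
)

renovation_estimate = 1_800_000
shortfall = max(0, renovation_estimate - wc)
print(f"Working capital: {wc:,}; loan needed to cover renovation: {shortfall:,}")
```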
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9544382691383362, "language": "en", "url": "http://jaredbernsteinblog.com/mobility-and-inequality/", "token_count": 574, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.2001953125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:ae221770-9fea-458e-a56e-cc832a7302e9>" }
For a very useful review of what’s known about the extent of economic mobility across time, countries, and generations, read this Jason DeParle piece from this AM’s NYT. I’ll have more to say about this later, but for now, I wanted to amplify one point about something that’s too often misunderstood in these discussions: the negative relationship between inequality and mobility, i.e., how higher inequality—greater distance between income classes at a point in time–can itself reduce the rate of mobility—the ability of families to move across those distances over time. In terms of the causal link between these two dynamics, I’ve stressed issues like access to quality education, better neighborhoods with better resources (libraries, parks, healthy environments)…to the extent that higher levels of inequality separate families along those dimensions, common sense would dictate that such differences map onto mobility differences. But there’s also a related technical point that DeParle makes in the piece: The income compression in rival countries may also make them seem more mobile. Reihan Salam, a writer for The Daily and National Review Online, has calculated that a Danish family can move from the 10th percentile to the 90th percentile with $45,000 of additional earnings, while an American family would need an additional $93,000. The graph below shows what’s going on here (hat tip: JC). The figure shows the income distribution of two countries, say Denmark (“low inequality”) and the US (“high inequality”). Suppose we’re measuring the percent of families that move across income fifths over time, say from the bottom fifth of the income scale to the middle fifth. Well, there’s simply a lot less economic ground to cover in Denmark relative to the US. In other words, part of the higher mobility in low-inequality countries is a function of lower inequality itself. It’s easier to move up and down the income scale when “up” and “down” are shorter trips. This simple insight is important, because we hear a lot of conservatives–Rep Paul Ryan, for example–arguing that we shouldn’t worry so much about inequality, because mobility will offset it. First, that’s wrong in ways I note in the link above (we need increased mobility to offset increased inequality, and we don’t see that). Second, as the evidence in the NYT piece shows, we actually have less mobility than other advanced economies. And third, most importantly in my view, increasing inequality itself makes it harder to achieve greater mobility, due both to diminished access to mobility-enhancing opportunities and to the distance problem shown in the figure above.
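The distance effect in the figure can be illustrated with a toy simulation: two synthetic income distributions with the same median but different spread, and the dollar gap a family must cover to go from the 10th to the 90th percentile in each. The parameters are made up for illustration and are not estimates of actual Danish or US incomes.

```python
import numpy as np

rng = np.random.default_rng(0)
median = 40_000  # same median income in both hypothetical countries

# Lognormal incomes: a larger sigma means a more unequal distribution
low_inequality = median * rng.lognormal(mean=0.0, sigma=0.4, size=1_000_000)
high_inequality = median * rng.lognormal(mean=0.0, sigma=0.9, size=1_000_000)

for label, incomes in [("low inequality", low_inequality),
                       ("high inequality", high_inequality)]:
    p10, p90 = np.percentile(incomes, [10, 90])
    print(f"{label}: p90 - p10 = ${p90 - p10:,.0f}")
```

With these made-up parameters the 10th-to-90th jump costs roughly $43,000 in the compressed distribution and well over $100,000 in the dispersed one, which is the same point Salam's $45,000-versus-$93,000 comparison makes with real data.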
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9102112054824829, "language": "en", "url": "https://ndcpartnership.org/case-study/india-negative-pricing-manage-power-system-oversupply", "token_count": 210, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": -0.019287109375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:7035bfe2-9763-4c6f-9890-eafa69944959>" }
India Negative Pricing to Manage Power System Oversupply India uses negative energy pricing, or the practice of allowing power prices in an electricity market to fall below zero, to discourage generation during periods of oversupply on the electric grid. This strategy can minimize the curtailment of generation and expand opportunities for use of renewable energy sources. India’s work on negative pricing aligns with its target of 175 GW of renewable energy capacity by 2022, which is also articulated in its INDC. Key actions and good practices supporting negative pricing in India, and detailed in the case study, are highlighted below. - The India Central Electricity Regulatory Commission developed a regulation that implemented negative pricing for energy supply deviations greater than 12%. - Together with negative pricing, increasing the number and frequency of allowed revisions to the schedule for renewable energy generation is enabling efficient grid outcomes. - Accurate forecasting systems are also supporting efficient outcomes within the context of negative pricing and grid integration more broadly. Negative pricing has provided a clear economic incentive to generators to improve their forecasts.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9312645196914673, "language": "en", "url": "https://tapipedia.org/es/search/tap?f%5B0%5D=field_topics%3A5782&f%5B1%5D=field_authors%3A36673&amp%3Bf%5B1%5D=field_authors%3A8030", "token_count": 864, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.041748046875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:6fa93161-cc6d-4ac6-8296-c8937c97ad74>" }
La búsqueda encontró 10 resultados en 0.008 segundos. Agricultural water management is a vital practice in ensuring reduction, and environmental protection. After decades of successfully expanding irrigation and improving productivity, farmers and managers face an emerging crisis in the form of poorly performing irrigation schemes, slow modernization, declining investment, constrained water availability, and environmental degradation. More and better investments in agricultural water are needed. This study builds a profile of the status of poverty and vulnerability in Malawi. Malawi is a small land-locked country, with one of the highest population densities in Sub-Saharan Africa, and one of the lowest per capita income levels in the world. Almost 90 percent of the population lives in rural areas, and is mostly engaged in smallholder, rain-fed agriculture. Most people are therefore highly vulnerable to annual rainfall volatility. The majority of households cultivate very small landholdings, largely for subsistence. The sector review includes seven chapters and one annex. This first chapter is an overview of agriculture, irrigation and the purpose and content of this report. The second chapter provides a review of the Bank s own strategy and priorities for irrigation and drainage within its portfolio of investments, from the time of its 2004 Strategy until the present. It also includes a short summary of key lessons learned in this sector. The aim of this discussion paper is to ascertain the government of Lao's (GoL) current practices in negotiating, awarding, and managing land concessions; enhance GoL understanding and commitments to develop national capacities targeting improved land management, that will generate revenues for GoL, and ensure sustainable development as an urgent priority; and provide a basis for dialogue within the government to enable its determination of priorities to better address land development issues in Laos, to enable the achievement of sustainable, responsible economic development. The World Bank Group has a unique opportunity to match the increases in financing for agriculture with a sharper focus on improving agricultural growth and productivity in agriculture-based economies, notably in Sub-Saharan Africa. This Policy Memorandum provides policy advice to the government of Liberia (GOL) in an effort to mainstream gender issues in policies, programs, and projects supporting agricultural production and value-chain development. It is organized as follows. Section I reviews women's roles in Liberian agriculture and agricultural value chains, drawing on a variety of data sources, including the 2007 Core Welfare Indicator Questionnaire Survey (CWIQ) and the two rounds of the Comprehensive Food Security and Nutrition Survey (CFSNS, 2006 and 2008). Agricultural investments made by developing countries and multilateral development banks (MDBs) have declined in recent decades. This decline is associated with a slowdown in the growth of agriculture productivity. Most development institutions have recognized the damage caused by this past neglect, in part evident in rising food prices, and renewed attention to agriculture and agribusiness is emerging. But this renewed interest will need to deliver results, especially in Sub-Saharan Africa, where the MDBs have had the least success but where the needs and opportunities are enormous. The rural space is home to 53 percent of Nigeria's population and more than 70 percent of its poor. 
While it is well understood in Nigeria that financial exclusion of the rural population stunts development, still fewer than 2 percent of rural households have access to any sort of institutional finance. This report summarizes and consolidates the findings of three Bank studies on poverty issues in Mexico, written as part of the second phase of this work: Urban Poverty, Rural Poverty, and Social Protection. It also expands on how Mexico will seek to use social protection policy as a vehicle for redistribution. Discussed in Chapter 1, the state has a clear role in providing risk-pooling mechanisms where private insurance markets fail (e.g., old age and health insurance), but the role of social protection policy in promoting redistribution is more an issue of national choice. The Government of Mozambique is seeking to achieve its strategic objectives and targets for socio-economic and political development by intensifying the implementation of its five-year government plan (PQG). It is also taking preparatory steps for the next phase of its PQG, which coincide with the new government period following the national elections taking place in 2019.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9290041327476501, "language": "en", "url": "https://tem.fi/en/collective-agreements-and-mediation-in-labour-disputes", "token_count": 593, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0260009765625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:c096e2b5-35a2-4f50-af22-b2ea7fb81fe1>" }
Collective agreements and mediation in labour disputes A collective agreement is an agreement made by one or more employers or an employers’ association with one or more employees’ unions concerning conditions which must be complied with in contracts of employment or employment relationships. Collective agreements serve two important purposes: - They lay down the minimum terms of employment and - they oblige the parties to observe industrial peace. If an employment contract is in any respects in contradiction of the collective agreement for the relevant sector of industry the contract is null and void for the sections concerned and the equivalent provisions in the collective agreement must be observed instead. The general applicability of a national collective agreement is determined by the committee confirming the general applicability of collective agreements, which operates under the auspices of the Ministry of Social Affairs and Health. The parties that have concluded a collective agreement must submit the collective agreement and the associated documents to the Ministry of Social Affairs and Health within one month of the signing of the agreement. The employer party must submit a copy of the agreement in writing and by e-mail so that the generally applicable agreement can be published on the Internet. The parties to a collective agreement are responsible for ensuring that its provisions are observed. Mediation in labour disputes A system of mediation in labour disputes has been established for dealing with conflicts between parties in working life. The mediation system has been set up for the labour market organisations so that disputes can be resolved through negotiation. In cases of mediation in labour disputes the negotiating partners are assisted by a national conciliator and conciliators. The labour market central organisations may also use a national conciliator to assist them in concluding a collective agreement. Litigations over the content or breaches of collective agreements can be referred to the Labour Court. The Labour Court’s jurisdiction relates to a collective agreement’s legitimacy, validity, content, scope and the correct interpretation of any of its clauses. The Labour Court can also decide on how much in compensatory damages is to be paid out following unlawful industrial action. Its decision is final. Legal disputes over an employment relationship not linked to a collective agreement binding on the employer by virtue of the Collective Agreements Act are dealt with by general courts. Publications (abstracts in english) - Current status and conditions for promoting local collective bargaining, 2020 - Coverage of collective agreements in 2017/2018, 2019 - Wage earners’ unionization in Finland in 2017, 2019 - The systems and their developments of local agreements in some European countries, 2016 - Coverage of collective agreements in 2014, 2016 - Wage earners’ unionization in Finland in 2013, 2015 - Compliance with the regulations in collective agreements, (pdf), 2013 - Report on European industrial peace system, (pdf), 2011 Law-drafting: Tarja Kröger, tarja.kroger(at)tem.fi
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9540889263153076, "language": "en", "url": "https://www.banque-france.fr/en/banknotes/issuing-and-maintaining-quality-euro-banknotes-and-coins/circulation-euro-banknotes", "token_count": 711, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1044921875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:2e5b5028-2cc7-4381-8c11-e367e0d8c924>" }
From the moment they are manufactured from bales of cotton right up to their destruction by the central bank, banknotes follow a specific path through the economy: they are transported, distributed via ATMs, used to pay merchants, collected, sorted and – if not too worn - recirculated. Banknotes that are no longer fit for circulation are replaced with new ones. The average lifespan of a banknote is 3 years. Euro banknotes are made of cotton and are manufactured at printing works located in the euro area. They are transported to the Banque de France for storage before being put into circulation. Commercial banks order the quantity of banknotes they need and collect them from Banque de France branches using cash-in-transit firms. They then distribute the notes to the general public either via their own branches or via automated teller machines (ATMs). Once in the customer’s wallet, banknotes are used day-to-day to purchase goods and services from retailers. The retailers in turn deposit the banknotes at their local bank branch, which sends them back to the Banque de France (via a cash-in-transit firm) to be sorted and put back into circulation. In 2006, a strict legal framework was introduced specifying the conditions under which private operators such as commercial banks and cash-in-transit companies can replenish ATMs with banknotes that have not been received from a central bank branch. On average, a banknote will find its way back to a euro area central bank counter every six months. With the switch to euro banknotes and coins in January 2002, calculating the length of the banknote cycle and lifespan of notes at national level became irrelevant due to the migration of banknotes between different euro area countries. Euro area banknotes last for an average of three years, but individual lifespans vary widely depending on the denomination, from around 1.5 years for the €10 note, to over 30 years for the €500. The €100, €200 and €500 last longer and are more resistant to wear and tear as they tend to be used more for hoarding purposes. The new Europa series of banknotes is designed to be longer-lasting than the previous series, as well as having enhanced security features to prevent counterfeiting. The new notes are being rolled out gradually over a number of years, and four denominations have already made their way into consumer’s wallets: the €5 (2 May 2013), the €10 (23 September 2014), the €20 (25 November 2015) and, most recently the €50 (4 April 2017). Banknotes that have become too worn and dirty to be recirculated are destroyed by the Banque de France – the Bank is the only body authorised to do this in France. Unfit banknotes are replaced with new ones, which are distributed to commercial banks as part of their regular orders. Lien: The Banque de France’s role as a wholesale supplier The European Central Bank (ECB) works closely with the euro area national central banks to ensure the banknotes in circulation are of the highest possible quality. Your feedback is important in helping us meet this target. Please take a few minutes to complete our online survey on the condition of the euro banknotes you use day-to-day. The results will help us to determine your needs and level of satisfaction when it comes to banknote quality. Updated on: 06/07/2018 15:06
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9523715972900391, "language": "en", "url": "https://www.deconcinimcdonald.com/estate-planning-law-sept-2019/", "token_count": 1176, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1904296875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:78885db7-91f8-4ba1-8a2e-985e84ab2fc1>" }
All but a few states in the United States have now adopted the Revised Uniform Fiduciary Access to Digital Assets Act (the RUFADAA). As adopted in Arizona, the RUFADAA defines a “digital asset” as “an electronic record in which an individual has a right or interest.” What does that mean, and how is it going to be applied to actual estates? I don’t find that definition of “digital asset” to be particularly useful by itself. How do you know if you have a right or interest in something that you have posted on Facebook, or Instagram, or similar sites? I am told that some such sites claim in their terms of service that anything you post becomes the property of the proprietors of the site. Digital assets in the form of your book, or original music recording, or similar creative works should be somewhat easier to characterize. If you have things of that nature stored in cloud storage, it seems obvious to me that they would qualify as electronic records in which you have an interest. By the way, what is a fiduciary? The RUFADAA defines a fiduciary as “an original, additional or successor personal representative, conservator, agent, [or] trustee….” That’s consistent with how the term is generally defined. In this context, the fiduciary is going to be either the personal representative of your estate or the successor trustee of your trust. Another whole aspect of handling digital assets involves platforms such as Github that are designed for individuals and companies to develop and store computer code. That’s an example of digital assets in the hands of a service provider that may have substantial monetary value (as opposed to things like Facebook accounts which, frankly, I don’t see as having any monetary value). The Uniform Law Commission (that’s the organization that produces uniform laws like the RUFADAA), in its description of the legislation, distinguishes between “digital assets” that are the digital equivalent of tangible personal property (e.g., files, like your book or recording) and electronic communications (such as email, but also including instant messages and text messages). The description also says that the legislation “restricts a fiduciary’s access to electronic communications such as email, text messages, and social media accounts unless the user consented in a will, trust, power of attorney, or other record.” That makes it sound like even with this legislation, it may be difficult to predict how service providers such as Google and Facebook are going to react to fiduciaries’ attempts to retrieve digital assets, and particularly things that can be considered communications, in the service providers’ control. Is an Instagram page electronic communications, or is it personal property? There’s also a privacy aspect to digital communications. You may not want your personal representative or successor trustee to see your texts or instant messages, or your emails for that matter unless they contain important information about your assets. It seems unlikely that texts or instant messages would be useful in administering your estate or trust, anyway. Take it from me, conducting important business via texts or instant messages is asking for trouble. You likely do want the person administering your estate to be able to gain access to your digital assets that are the equivalent of tangible personal property, however. As for those assets, I suggest that you take a close look at the terms of service established by your digital service providers, such as cloud storage providers. 
The terms of service, in case you don’t pay attention to such things, is the junk that you checked the box saying you read it and agreed to it but probably didn’t actually read. This is a topic that will likely continue to expand in importance in estate planning. As that happens, it will become more important to think about which service providers have your digital assets and what you can do to minimize the hassle for whoever is going to have the task of collecting those assets. If a person dies without a will, it’s called intestacy. Another formulation is that someone who died without a will died intestate. Intestacy laws are laws that govern what happens to the assets of a deceased person who dies intestate. Those rules are necessarily one-size-fits-all to some extent. A recent article by law professor Shelly Kreiczer-Levy in the Wisconsin Law Review made the argument that intestacy laws “cannot truly reflect diversity of lifestyles and associations.” I can’t disagree with that. The professor suggested “using big data to create personalized rules, tailored to the personal characteristics of each decedent.” I don’t know what data the professor is talking about, but it sounds complex. It’s also completely unnecessary. It’s easy to avoid having the intestacy laws decide what happens to your assets. How easy? As easy as making a will. That’s personalized, and it avoids intestacy. Nathan B. Hannah is a Shareholder in the Tucson office, and practices in the areas of estate planning and administration, real estate, and commercial transactions. He is also a noted blogger, and you can find more of his articles on his private blog, Contact Attorney Hannah: [email protected] or 520/ 322-5000 This communication is designed to bring legal developments of interest to the attention of our clients and others. It should not be relied upon as a substitute for specific legal advice in a particular matter. For further information on any of the subjects discussed, or for legal advice in connection with any particular matter, please contact us.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9572309255599976, "language": "en", "url": "https://www.newamericanfunding.com/blog/uncovering-the-mystery-behind-property-taxes/", "token_count": 784, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": -0.08251953125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:395bfe6f-9ae2-4465-9f2e-73c206e2ca54>" }
Uncovering the Mystery Behind Property Taxes - Oct. 11, 2012 - Rosemarie Pirio - Real Estate Tips

Property taxes can be a mystery to many homeowners, even though they're something everyone pays. How property taxes are calculated, what they help pay for, who sets the rates, and how the "assessed" value of the home is determined are some of the questions surrounding this unsolved case, so let's get down to it and crack the code. The first clue to uncovering how property taxes are calculated is realizing that property taxes are local taxes that provide a major source of income to local governments to help pay for schools, libraries, roads, police, fire protection and many other services. Obviously, it costs to provide these services, so once the different governing authorities, such as the city, county and local school districts that provide these services, set their budget for the coming year, they will determine a mill levy, or tax rate, needed to cover these annual expenses.

Calculating Property Taxes

Property taxes are calculated using a mill rate, or the tax per dollar of assessed value of property, where one mill is one-tenth of a cent ($.001). So for $2,000, one mill would be equal to $2. For example, say the county mill rate is 1%, the city's is 0.03% and the local school district's is 0.02%; the total mill rate, or property tax rate, for that jurisdiction is 1.05%. Property taxes are calculated by taking the mill levy, or property tax rate, and multiplying it by the assessed value of the property. This leads to the next question: how is a property "assessed" to determine its value?

Assessing the Value of the Property

A property's assessed value is based on its market value, or how much the property would sell for under normal conditions, as determined by the assessor, i.e. the local official who estimates the value of all the properties in the community. Once the market value is determined, the assessed value is calculated by multiplying the market value by the assessment rate. There are a few different methods an assessor may use to determine a property's market value:

- Market Approach - With this approach the assessor will compare the property to similar properties that have recently sold. This is the most common approach for residential properties.
- Income Approach - This approach determines the value based on how much income it would make if it were rented out, while also considering how much it would cost to manage and maintain the property.
- Cost Approach - This approach looks at how much it would cost to replace the property, including the structure and land, minus any depreciation.

The level of assessment (LOA) or assessment rate is then multiplied by the fair market value to determine the assessed value of the property. The LOA is the percentage of full value at which properties are being assessed within a community, and is established by the assessor. To calculate your property tax, multiply the assessed value of the property by the total mill levy or tax rate. For example, for a home that has a market value of $300,000 and exists within a community where the LOA is 85%, the assessed value of the property would be $255,000. By multiplying the previously established property tax rate of 1.05% by the assessed value of $255,000, the end result is property taxes of $2,677.50. If you are ever unsure about the assessed value of your property, you can contact your local county or municipality and discuss it with the assessor.
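The whole calculation chains together in a few lines; this sketch simply reproduces the article's own example numbers.

```python
def property_tax(market_value, level_of_assessment, mill_rate):
    """Assessed value = market value x LOA; tax = assessed value x total mill rate."""
    assessed_value = market_value * level_of_assessment
    return assessed_value * mill_rate

# The example above: $300,000 home, 85% LOA, 1.05% combined rate
tax = property_tax(market_value=300_000,
                   level_of_assessment=0.85,
                   mill_rate=0.0105)
print(f"${tax:,.2f}")   # $2,677.50
```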
Obviously it's important that the property is assessed fairly so that you pay fair property taxes. Mystery solved! Hopefully by now you fully understand the purpose of property taxes and how they are determined.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9522215127944946, "language": "en", "url": "http://blog.aaii.com/inside-etfs/?print=print", "token_count": 1915, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": -0.027587890625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:0d8013f5-fe71-40ed-bccd-8982fa99bd13>" }
This article originally appeared in the September 2014 issue of the AAII Journal. Like mutual funds, exchange-traded funds (ETFs) are a way for investors to participate in the stock, bond and commodity markets; achieve a diversified portfolio; and gain access to a broad array of investment strategies. What are ETFs and how do they work? The Investment Company Institute, a trade group for all types of regulated funds including exchange-traded funds, has put out a paper explaining how ETFs are created, what determines their price and how taxes on ETFs are treated. In addition, it discusses the similarities and differences between mutual funds and ETFs. The following is a condensed version of the report "Understanding Exchange-Traded Funds: How ETFs Work."

An ETF is a pooled investment vehicle with shares that can be bought or sold throughout the day on a stock exchange at a market-determined price. In most cases, an ETF is index-based—designed to track the performance of a specified index or, in some cases, a multiple, an inverse, or a multiple inverse of its index (commonly referred to as leveraged or inverse ETFs). Actively managed ETFs, which do not seek to track the return of a particular index, have been available to investors only since 2008. Several factors have contributed to the growing popularity of ETFs. Specific features of ETFs that investors find attractive include:

- Intraday tradability. An ETF is essentially a mutual fund that has a secondary market. This means that investors buy or sell existing ETF shares at market-determined prices during trading hours on stock exchanges, in dark pools, or on other trading venues. This feature gives investors liquidity and quick access to different types of asset classes.
- Transparency. Generally, the price that an ETF trades at in the secondary market is a close approximation to the market value of the underlying securities that it holds in its portfolio. This fairly tight relationship makes ETFs a convenient and easy option for investors who want to minimize the possibility that the share price could trade at a substantial premium or discount to the net asset value (NAV) of the fund (as can happen in a closed-end fund).
- Tax efficiency. As discussed below, investors have been attracted to ETFs because they typically do not distribute capital gains.

General trends that have contributed to the popularity of ETFs include:

- Access to specific markets or asset classes. Investors can gain exposure to specific markets or asset classes that would otherwise be difficult or impossible for them to attain. For example, some foreign markets require investors to have foreign-investor status, a local bank account and a local custodian to access their markets. Investors seeking to participate in these markets can simply buy an appropriate ETF as they would any other stock on an exchange: The ETF has either met all the requirements or achieved the exposure through other types of financial instruments that are not readily available to individual investors.
- The rising popularity of passive investments. Investor demand for index-oriented products, particularly in the domestic equity space, has been strong for the past decade.
- Increasing use of asset allocation models. More financial advisers are moving toward using third-party asset allocation models to manage their clients' assets, and they find ETFs to be an efficient and cost-effective way to rebalance their clients' portfolios to implement a change in an investment strategy.
How are ETFs created? An ETF originates with a sponsor that chooses the investment objective of the fund. In the case of an index-based ETF, the sponsor chooses both an index and a method of tracking it. Index-based ETFs track their target index in various ways. Many early ETFs tracked traditional, mostly capitalization-weighted, indexes. More recently launched index-based ETFs follow benchmarks that use a variety of index-construction methodologies, with weightings based on market capitalization or other fundamental factors, such as sales or book value. Others follow factor-based metrics: These indexes first screen potential securities for a variety of attributes—including value, growth, and dividend payouts—and then either equal-weight or market-cap-weight the selected securities. Other customized index approaches include screening, selecting and weighting securities to minimize volatility, maximize diversification or achieve a high or low degree of correlation with market movements. An index-based ETF may replicate its index (that is, it may invest 100% of its assets proportionately in all the securities in the target index), or it may sample its index by investing in a representative sample of securities in the target index. Representative sampling is a practical solution for ETFs that track indexes containing securities that are too numerous (such as broad-based or total market stock indexes), that have restrictions on ownership or transferability (certain foreign securities) or that are difficult to obtain (some fixed-income securities). The sponsor of an actively managed ETF determines the investment objectives of the fund and may trade securities at its discretion, much like an actively managed mutual fund. For instance, the sponsor may try to achieve an investment objective such as outperforming a segment of the market or investing in a particular sector through a portfolio of stocks, bonds or other assets. The creation/redemption mechanism in the ETF structure allows the number of shares outstanding in an ETF to expand or contract based on demand. Figure 1 illustrates the creation process. The redemption process is simply the reverse. Though ETFs share some basic characteristics with mutual funds, there are key operational and structural differences between the two types of investment products. Key similarities to mutual funds: Both mutual funds and ETFs can provide the basic building blocks of investors’ portfolios. An ETF is similar to a mutual fund in that it offers investors a proportionate share in a pool of stocks, bonds and other assets. Also, like mutual funds, new shares of ETFs can be created or redeemed at any time, and ETFs are required to post the marked-to-market net asset value of their portfolio at the end of each trading day. Like mutual funds, ETFs are most commonly structured as open-end investment companies, and they are governed by the same regulations. The vast majority of ETFs are regulated by the Securities and Exchange Commission (SEC) in essentially the same way as mutual funds. Key differences from mutual funds: One major difference between ETFs and mutual funds is that individual investors buy and sell ETF shares on a stock exchange through a broker-dealer, much as they would any other type of stock. In contrast, mutual fund shares are not listed on stock exchanges. 
Rather, investors buy and sell mutual fund shares through a variety of distribution channels, including through investment professionals (full-service brokers, independent financial planners, bank or savings institution representatives, or insurance agents) or directly from a fund company. Mutual funds and ETFs are also priced differently. Mutual funds are "forward priced," meaning that, though investors can place orders to buy or sell shares throughout the day, all orders placed during the day will receive the same price—the fund's net asset value is usually computed as of 4:00 p.m. EST when the U.S. stock exchanges close. In contrast, the price of an ETF share is continuously determined through trading on a stock exchange. Consequently, the price at which investors buy and sell ETF shares on an exchange may not necessarily equal the net asset value of the portfolio of securities in the ETF. Two investors selling the same ETF shares at different times on the same day may receive different prices for their shares, both of which may differ from the ETF's net asset value.

How are ETFs taxed?

SEC-registered ETFs are subject to the same tax rules as mutual funds. To improve their tax efficiency, ETFs commonly employ two mechanisms that also are available to mutual funds: low portfolio turnover and in-kind redemptions. The relative tax efficiency of ETFs and mutual funds depends on the extent to which they use these mechanisms.

- Low portfolio turnover strategies: Like index-based mutual funds, index-based ETFs are less likely than actively managed funds to trade securities, thus reducing taxable gains that must be distributed.
- In-kind redemptions: ETFs that distribute securities to authorized participants that are redeeming ETF shares can reduce their unrealized gains (also known as tax overhang) by distributing securities that were purchased for less than their current value (so-called low-basis securities). Because these transactions are in-kind, the ETF does not incur any tax when the low-basis securities are distributed.

It is important to note that though these strategies can reduce capital gains distributions to investors while they are holding ETF shares, investors ultimately pay taxes on any capital gains when they sell their ETF shares. Thus, these strategies enable tax efficiency through tax deferral, but not tax avoidance. To read the full report from the Investment Company Institute, "Understanding Exchange-Traded Funds: How ETFs Work," go to www.ici.org/etf. If you are not an AAII member and want to gain access to all the benefits of membership, simply take a risk-free 30-day Trial AAII Membership and start becoming an effective manager of your own assets.
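As a rough illustration of the pricing point above (that an ETF's exchange price can deviate from its NAV), here is a small Python sketch that computes a fund's premium or discount to NAV. The numbers are invented for illustration and are not drawn from the report.

def premium_discount_to_nav(market_price, nav):
    # Positive result = trading at a premium to NAV; negative = trading at a discount
    return (market_price - nav) / nav

# Hypothetical example: ETF shares trade at $50.25 while the NAV per share is $50.00
print(f"{premium_discount_to_nav(50.25, 50.00):.2%}")  # 0.50% premium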
{ "dump": "CC-MAIN-2020-29", "language_score": 0.937244713306427, "language": "en", "url": "http://www.beefresearch.ca/blog/value-of-industry-investments-in-research/", "token_count": 2048, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1259765625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:a9a13109-18b7-471d-ada2-cd9e6bbacae8>" }
This article written by Andrea Brocklebank, BCRC Executive Director, originally appeared in the April 2015 issue of Saskatchewan Cattlemen's Connection magazine and is reprinted on the BCRC Blog with permission of the publisher.

"Telling the future by looking at the past… is like driving a car by looking in the rear-view mirror" – Herb Bordy. But history helps illuminate the present.

Shortly after Confederation, agriculture became a nation-building tool to settle the West and prevent U.S. expansion. Agriculture provided freight for Canada's railroads, fed the urban population, and supplied millers, processors and exporters. Canada's Experimental Farms Stations Act of 1886 supported productivity-boosting research and provided even more freight, food, and economic spin-offs. Canada's farm population declined as technology and mechanization reduced the need for farm labour, and more people moved into other jobs in Canada's expanding economy. When Canada's first agricultural census was completed in 1931, 31.7% of Canada's population lived on farms. By 2005, 2.2% of Canada's population lived on farms.

What does this mean for applied cattle, forage and beef research?

As Canada's population grows, governments are challenged to support healthcare, education and other programs demanded by Canadians while reducing deficits. Very few Canadian voters are beef producers, and the beef sector is a relatively small part of the Canadian economy (less than 2% of the GDP). Consequently, public funding for applied agricultural research has declined over the last several decades. In a competitive global beef industry, standing still is falling behind. If beef cannot compete for limited land, labour and water resources with other agricultural commodities, production will decline. As an export-dependent industry, Canada's beef industry must also be able to produce a high-quality product that is competitively priced with other major beef-producing countries.

Putting your money where your mouth is

Although both federal and provincial governments continue to be important and significant contributors to Canadian beef and forage research, lack of industry support has been used to justify cutting or redirecting public research and extension programs. This is especially apparent as researchers retire. A lack of industry support means that retired researchers have not been replaced and research programs have been cut in an effort to reduce budgets. "Industry support" means "industry investment". Government research funding is increasingly being triggered and guided by industry investments. The ratio varies, but often government contributes $3 for every $1 industry invests.

What does this mean to Saskatchewan beef producers?

Many Canadian universities have a narrow focus on animal welfare, genomics, or environmental research programs. This attracts public funding for today's hot topics, and may be less costly than production research, but a broad farm-to-fork approach is needed to train new expertise that can conduct applied research of direct benefit to beef producers. The University of Saskatchewan currently has Canada's strongest and most comprehensive forage, cattle and beef research program.
The expertise, infrastructure, research and education within and between the Departments of Soil, Crop, Plant Sciences, and Animal Sciences, the Western College of Veterinary Medicine (WCVM), the Vaccine and Infectious Diseases Organization (VIDO), the Crop Development Centre, the Western Beef Development Centre (WBDC), and others allow for research that provides meaningful outcomes that directly benefit Saskatchewan's and Canada's beef industry. Industry investment and leadership are critical to the success of the research produced by these programs. For example, the Saskatchewan Beef Industry Chair position currently held by Dr. John McKinnon was funded by industry. Dr. McKinnon works closely with cow-calf producers, feedlot operators, veterinarians and the feed industry on numerous aspects of beef cattle management, including nutritional and environmental factors influencing the growth and carcass quality of feeder cattle and the nutrition of wintering beef cows. The relevance of Dr. McKinnon's research and technology transfer program to Canada's beef industry can be directly attributed to his keen interest in working closely with Canada's cow-calf and feedlot sectors. Without industry funding, this position may not have existed. Dr. McKinnon has played a key role in developing the proposed new Beef Cattle Research and Teaching Unit (BCRTU). The half-century-old University feedlot facility is in dire need of replacement. Its location in the centre of Saskatoon is unsuitable, its small pens and facility design no longer reflect industry standards, and it no longer meets Canadian Council for Animal Care standards. The Livestock Research Building on campus can't support the caliber of the nutritional and physiological research that the university's research team is capable of conducting. The proposed BCRTU will overcome these challenges and allow the University to conduct meaningful research into the future. The Saskatchewan Cattlemen's Association's commitment of one million producer check-off dollars to the BCRTU initiative clearly tells government that this initiative is an industry priority. Meaningful progress is being made with government to ensure the construction of this important facility proceeds. The Termuende family ranch bequeathed to the University of Saskatchewan is another significant private investment. This initial partnership evolved into the applied beef cattle research program at the Western Beef Development Centre (WBDC), emphasizing technology transfer that brings research results to cattle operations. The partnership between industry and the University at the WBDC led to significant investments in infrastructure and ongoing support of research expertise and programs by the Saskatchewan Ministry of Agriculture, Agriculture & Agri-Food Canada, and other funding agencies. The WBDC is evolving to strengthen its ties with the industry-oriented beef and forage research and technology transfer programming at the University of Saskatchewan. This initiative has been spearheaded by a Livestock and Forage Steering Committee convened by the Saskatchewan Ministry of Agriculture. The intent is to relocate the WBDC researchers, program and herd to the Goodale research farm managed by the WCVM. Locating the WBDC near the BCRTU and closer to the Saskatoon campus will provide greater access and opportunities for researchers and students. This integration will come at a cost, and industry will need to consider what role it needs to play as this integration proceeds.
Research expertise is another significant concern. A number of critical beef and forage researchers are set to retire in the next five to ten years. The BCRC and other industry groups have invested check-off dollars to train new researchers in these areas. Ensuring that new researchers are available to replace anticipated retirees will help ensure that research positions are not lost through attrition, that research programs are transitioned, and that research momentum is not lost. As with the Beef Chair positions in the Department of Animal Science and in the WCVM, industry may need to provide seed funding that can leverage government funds to get these new researchers hired into permanent positions. Check-off funds can only be spread so far, so this is also an opportunity for private donors from the beef and forage industry to make a very meaningful investment with a lasting impact.

What is the beef industry's role in funding research?

There are two main ways that individuals and industry organizations can support research. Producer organizations support research through provincial and national check-off investments. Each provincial cattle organization decides how to divide their national check-off dollar between research and marketing initiatives, so check-off investments in research vary among provinces. Saskatchewan producers allocate $0.30 of the national check-off to research programming, with the remainder being allocated to marketing of beef ($0.68) and administration ($0.02). Producer check-offs help provide consistent levels of funding to support research projects and programs that maintain or improve beef quality, food safety, feed and forage productivity, environmental sustainability and animal health and welfare. Check-off revenues to support research and marketing initiatives are under significant pressure. When annual inflation is considered, the purchasing power of the national check-off has fallen from $1.00 in 1999 to $0.80 in 2013. Cattle inventories and sales have declined to levels last seen in the early 1990s, leading to still fewer national check-off funds available for research. This will greatly limit industry's ability to fund high-priority research and support badly-needed initiatives like new Beef Industry Chairs. The new National Beef Strategy (http://beefstrategy.com/) clearly explains what industry could achieve if the national check-off was increased from $1.00 to $2.50/head. If the proposed national check-off increase is implemented, Saskatchewan will have a $3.00 provincial check-off and a $2.50 national check-off. To put this into perspective relative to current prices, with the proposed increase, Saskatchewan producers would be investing a total of 0.43% of a weaned calf's value, or 0.27% of a fed animal's value, into provincial and national policy, research, and marketing initiatives. This is less than half of what other agricultural commodities invest in their industries. Producer check-off funds help support ongoing research projects, while private contributions (e.g. Termuende Research Ranch) and endowments (e.g. Beef Industry Chairs and the Beef Cattle Research and Teaching Unit) allow larger investments in research facilities and expertise. This provides an opportunity for Saskatchewan producers who value the contributions that Saskatchewan's applied cattle, forage and beef research and technology transfer have made to their industry in the province to help ensure it continues while leaving a lasting legacy.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9422885179519653, "language": "en", "url": "https://basicaccountinghelp.com/what-is-the-expanded-accounting-equation/", "token_count": 920, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.115234375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:81112548-55e7-4ca5-a6af-95275bac0a39>" }
The basic accounting equation is a simple formula where assets are equal to liabilities plus shareholder equity, but the equation can be made more granular to provide greater insight into equity transactions. The type of business impacts the format of the expanded accounting equation, but the concept is still the same – a detailed accounting of owners' equity transactions. The expanded accounting equation is a useful tool because of the additional detail on the owner's equity section of the accounting equation. Unlike the basic accounting equation, which only focuses on the balance sheet, the expanded equation uses the income statement to provide additional detail of the company's transactions.

Expanded Accounting Equation

The expanded accounting equation builds on the standard accounting equation, adding granularity to the owners' equity portion of the formula. For a sole proprietorship, the accounting equation becomes:

Assets = Liabilities + Owner's Capital + Revenues - Expenses - Owner Draws

This helps to illustrate cash inflows and outflows of the business attributable to normal operations and contributions or withdrawals by the owner. If the business is a corporation or another legal entity with multiple owners, the equation becomes:

Assets = Liabilities + Paid-in Capital + Revenues - Expenses - Dividends - Treasury Stock

For corporations, this equation sheds light on important capital structure and common stock data points. Without insight into equity, business owners would be unable to effectively manage the finances of a business.

How is this Equation Used?

Regardless of the form of business, the expanded accounting equation provides insight into two important aspects of operations – revenue and owner transactions. The formula is useful as it shows the relationship between your income statement and balance sheet. Net revenue or loss can impact owners' equity, and it's important to understand what percentage change in equity is attributable to net income. If a business has had a bad year or quarter, the expanded accounting equation can illustrate the impact of negative performance on equity. Conversely, if retained earnings are high, that change is also illustrated. Owners' contributions and withdrawals are also important to understand, because they impact the cash position of a business, and they help illustrate the capital structure of a business. For example, if there are significant treasury stock transactions, it can give an indication of what management is trying to accomplish regarding stock price. It also gives an indication of what management's views about the future are.

Example of the Formula

In a sole proprietorship, the balance sheet may be simple, but the expanded accounting equation is still relevant. On the asset side of the equation, items such as cash, accounts receivable and inventory are listed. Obligations would include items such as accounts payable and notes payable. Owner's capital would include all owner contributions to the business. Revenues would include items such as retail sales and similar gross income line items. Expenses could be items such as cost of goods sold, administrative expenses and payroll. Owner draws could be quarterly distributions that an owner would take from their business. Corporations would be similar except for the stockholders' equity portion of the equation. For example, treasury stock consists of shares in a corporation that have been purchased back from investors.
Paid-in capital is a reflection of the sale of stock to investors in a corporation. All of these transactions have a direct impact on the viability of a business over the long term.

Relevance for Accountants

From a practical standpoint, the accounting equation helps accountants produce complete and accurate financial statements, because it keeps all accounts in balance. If accountants want to ensure the balance sheet accounts are accurate, they can use the accounting equation and perform a high-level analysis. This is very helpful when preparing financial statements outside of an accounting software system. If financials are being prepared in Excel, mistakes can be made and the basic accounting equation may become out of balance. The expanded accounting equation can help accountants perform a more granular check on the accuracy of the financial reports. In situations where owners' equity transactions are under review, using the expanded equation can help catch errors in financial statements and help add credibility to Excel schedules that illustrate financial activity. Tools such as this equation are essential for internal control and the accuracy of financial reporting. The expanded accounting equation has the power to provide useful insights into the owners' equity transactions that a business engages in. This granularity can give business owners and leaders alike an understanding of capital structure for strategic planning. If equity transactions are impactful, then the expanded accounting equation is particularly relevant.
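As a rough illustration of the sole proprietorship form of the equation described above, here is a minimal Python sketch that checks whether a set of balances stays in balance. The figures are hypothetical and only meant to show the structure of the check.

def is_balanced(assets, liabilities, owner_capital, revenues, expenses, owner_draws):
    # Expanded accounting equation for a sole proprietorship:
    # Assets = Liabilities + Owner's Capital + Revenues - Expenses - Owner Draws
    equity_side = liabilities + owner_capital + revenues - expenses - owner_draws
    return assets == equity_side

# Hypothetical balances
print(is_balanced(
    assets=120_000,        # cash, accounts receivable, inventory
    liabilities=40_000,    # accounts payable, notes payable
    owner_capital=60_000,  # owner contributions
    revenues=50_000,       # retail sales
    expenses=25_000,       # cost of goods sold, payroll, admin
    owner_draws=5_000,     # quarterly distributions
))  # True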
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9377113580703735, "language": "en", "url": "https://dr2consultants.eu/is-this-the-long-awaited-advent-of-sustainable-living/", "token_count": 412, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.2578125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:b44bb8b4-af62-4e39-8472-ea002ac6afea>" }
According to reports in German magazine "Der Spiegel" last month, Germany plans to ban the sale of internal combustion engines by 2030. In addition, the country's Bundesrat, in a resolution, calls on the European Commission to pass directives assuring that only emission-free vehicles will be approved across the EU from 2030 and to "review the current practices of taxation and dues" aimed at stimulating the purchase of zero-emission vehicles. The German proposal comes close on the heels of similar proposals in the Netherlands, Norway and Switzerland. The move to carbon neutrality comes amid a global push by governments, businesses and society towards sustainable mobility and the reduction of greenhouse gas emissions to tackle growing climate and health concerns. In the quest to fight global CO2 emissions, 2016 may have marked a significant step towards sustainability goals, with the ratification of the Paris Climate Deal by the European Union in October and continued investment in renewable energy. With the public becoming increasingly aware of climate-friendly alternatives, combined with falling prices and better value-for-money deals, a shift can also be observed in the behavior of European industry. For instance, according to studies, in the year following the publication of Volkswagen's diesel scandal, diesel cars' share of sales in Europe dropped below 50% due to 5–12 percent slumps in key European markets such as Germany, France and the Netherlands. The industry's aim to shed its carbon-emitting stigma could best be observed at the esteemed Paris Auto Show, where every major car brand presented electric vehicles. In addition, thanks to rapidly falling production costs, wider availability and substantial state subsidies, the solar industry is enjoying at least equally strong success. After years of doubt about whether the average consumer could be persuaded to invest in carbon-neutral goods, affordable alternative energy and transport options are finally becoming increasingly available to the end consumer. This, paired with legislation encouraging the reduction of carbon emissions by industry and civil society, may finally bring the long-awaited advent of sustainable living.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.8932852149009705, "language": "en", "url": "https://knowledge4food.net/knowledge-portal-item/measuring-nutritional-quality-of-agricultural-production-systems-application-to-fish-production/", "token_count": 382, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.0196533203125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:b4627e59-61c9-42dc-90f4-95f16c0a061b>" }
Measuring nutritional quality of agricultural production systems: Application to fish production

This article (PDF) in the Global Food Security journal reviews indicators that capture an element of nutritional quality applicable to different stages of the food and nutrition system, applying them to aquaculture systems. Reorienting food systems towards improving nutrition outcomes is vital if the global goal of ending all forms of malnutrition is to be achieved. Crucial to transitioning to nutrition-sensitive agriculture is valuing and measuring the nutritional quality of the outputs of agricultural production. The article reveals that a large number of indicators are relevant to the later stages of the food system, while fewer are relevant at the agricultural production stage, so a combination of different indicators is needed for comprehensive evaluation. Indicators that reflect the nutrient composition of the species produced, the diversity of nutrients produced, and the abundance or quantity of those nutrients are therefore desirable, especially when they are also simple to calculate and interpret. 'Nutritional yields', 'potential nutrient adequacy' and 'Rao's quadratic entropy' show particular promise in capturing the ability of a production system to nourish most people and could be useful tools for prioritising investments and decision-making in the public, non-government and private sectors driving agriculture. There are multiple factors that must be considered when prioritising among alternative production sub-systems. These include costs of inputs, labour requirements, environmental impacts, and the yield and market value of foods produced. It is impractical to assume that farmers can or will simply shift to production systems of higher nutritional quality without economic or other benefits. From a policy perspective, the public sector can play a role through the provision of financial incentives. Shifting thinking away from 'feeding people' to 'nourishing people' requires a simple measure of nutrition quality relevant at the production sub-system level.
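For readers unfamiliar with the last of those indicators, here is a small Python sketch of Rao's quadratic entropy in its usual general form (the sum over pairs of items of their dissimilarity weighted by their proportions). This is a generic illustration with made-up numbers, not the exact formulation used in the article.

def rao_quadratic_entropy(proportions, dissimilarity):
    # Q = sum over all pairs (i, j) of d_ij * p_i * p_j
    q = 0.0
    for i, p_i in enumerate(proportions):
        for j, p_j in enumerate(proportions):
            q += dissimilarity[i][j] * p_i * p_j
    return q

# Hypothetical example: three foods with nutrient-based pairwise dissimilarities
p = [0.5, 0.3, 0.2]    # share of each food in production
d = [[0.0, 0.6, 0.8],  # d[i][j] = nutritional dissimilarity between foods i and j
     [0.6, 0.0, 0.4],
     [0.8, 0.4, 0.0]]
print(rao_quadratic_entropy(p, d))  # about 0.388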
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9587024450302124, "language": "en", "url": "https://propertyupdate.com.au/glossary/buyers-market/", "token_count": 185, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.06494140625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:75c16f3a-d626-4db1-bf17-51dc7f80893b>" }
A buyer’s market is when supply exceeds demand, giving purchasers an advantage over sellers in price negotiations. This is the basic premise of the Law of Supply and Demand. Put simply, if there are plenty of sellers looking to sell, but few buyers looking to buy, then prices are likely to fall as supply exceeds demand. Compare this to a Seller’s Market where prices increase as supply can not keep up with demand. What happens during the period of a buyer’s market? Generally, in a buyer’s market, properties will sit on the market for longer before receiving an offer and sell for less than their asking price. Often, there is competition between property sellers so they must drop the price of their property to attract an offer from a purchaser. Buyers markets occur during the slump and stabilization phase of the property cycle. « Back to Glossary Index
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9627310633659363, "language": "en", "url": "https://scienmag.com/rebirth-of-the-japanese-black-tea-market-challenges-for-entrepreneurial-green-tea-farmers/", "token_count": 1583, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0732421875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:9ea47f09-687d-4fe1-aded-7938d6e1665d>" }
Rebirth of the Japanese black tea market: challenges for entrepreneurial green tea farmers

Credit: Kanazawa University

In Japan, tea farms are found in warm areas, whose northern limit is Ibaraki prefecture, where green tea has been produced. However, black tea was also manufactured from the mid-19th century and, at one time, Japan exported more than 5,000 tons of black tea of Japanese origin. With Japan's economic growth in the second half of the 20th century, Japanese black tea lost its economic competitiveness and finally disappeared. Nonetheless, from the dawn of the 21st century, black tea manufacturing has been revitalized and production has grown. In 2007, the black tea manufactured by Satsuma Eikokukan in Kagoshima prefecture won a gold medal in the Great Taste Awards in the UK. In general, a new industry and its market are not created by gradual changes in existing markets. Signs of opportunities for new products are very small and must be sought carefully. Here, the manufacture of black tea in Japan is seen as the creation of a new market. We studied the history of black tea in Japan, tea species, technological innovation and other factors. In addition, as a case study, black tea production in Sashima, Ibaraki prefecture, is examined in detail. In the mid-19th century, the Japanese black tea industry was born from the government's encouragement to tea producers to engage in black tea manufacturing. Black tea manufacturing suitable for the Japanese climate was sought and production increased. Improvements were made by using different varieties of tea plant, and in 1951 the quality of Japanese black tea was highly appreciated in the London tea market. Exports reached their peak in 1954, contributing to the acquisition of foreign currency. However, due to the subsequent economic growth of Japan, labor shortages and higher wages made the cost of Japanese black tea manufacturing much higher, which led to a loss of international competitiveness. On the other hand, black tea imports expanded rapidly from 1964 to 1974, of which 65% was in the form of tea bags. In 1986, black tea was first sold in bottles. Consumption and imports increased sharply and in 1997 black tea imports reached nearly 20,000 tons (Figure 1). The manufacture of Japanese black tea decreased to almost zero around the end of the 20th century. However, the 1996 tea boom stimulated Japanese black tea manufacturing again; in 2010, 84 tons were manufactured, which further grew to more than 200 tons in 2016. Japanese black tea is consumed domestically. In 2007, the black tea manufactured by Satsuma Eikokukan in Kagoshima prefecture won a gold medal in the Great Taste Awards in the UK. Japanese green tea is mainly produced in Shizuoka, Kagoshima and Mie prefectures, and black tea manufacturing overlaps with green tea production (Figure 2). Tea is made from the leaves of the tea plant, Camellia sinensis, of which there are two main varieties, Assam or Indian (C. sinensis var. assamica) and Chinese (C. sinensis var. sinensis). It is considered that Assam tea is suitable for black tea and Chinese tea for green tea. While Assam tea is cultivated in hot and humid areas like Indonesia, Chinese tea is grown in China, Japan and Taiwan for the production of green tea. It is rather difficult to cultivate Assam tea in Japan, but for the past half century, breeding has been improved by mixing Assam and Chinese tea varieties. In general, tea contains tannin (catechin), caffeine, and theanine (a free amino acid).
Tannin is astringent and caffeine has a bitter flavor. Assam tea contains higher levels of both tannin and caffeine, so black tea produced in southern countries has a stronger flavor than Japanese black tea. On the other hand, theanine tastes of umami and sweetness but is chemically fragile under sunlight. The reason why tea fields are located in foggy mountainous areas is that the leaves are not overexposed to sunlight, which enables theanine to remain in tea leaves. In addition, the metabolism of theanine is influenced by temperature, being accelerated as the temperature rises. Therefore, theanine and tannin differ in content depending on when the tea leaves are harvested. Black tea made from leaves picked during the summer contains more tannin; thus, it is possible to make tea with a strongly astringent flavor. The manufacturing of black tea includes four processes: withering, rolling, fermenting, and drying. Fermentation is a particularly highly skilled process, requiring technical knowledge of humidity, temperature and the process of fermentation. While it only takes 3-4 hrs to produce green tea by machine, black tea manufacturing takes about 20 hrs. The revitalization and dissemination of Japanese black tea occurred thanks to innovations in product manufacturing. Entrepreneurial farmers accomplished improvements in manufacturing technology by innovations in fermentation and by learning about the climate, varieties and fermentation methods in different countries and regions. A growing market needs a new supply chain. In the Japanese black tea market, tea farmers sell their product directly to retailers and blenders or sell it via the Internet. One of the important characteristics of direct selling, as described below concerning the case study of Sashima, is that the communication channel is open between tea farmers and consumers. Tea farmers can take advantage of feedback from consumers as a driving force for subsequent innovation. Advanced consumers play important roles in quality evaluation. In the Sashima region of Ibaraki prefecture, about 30 farmers produce green tea and 8 out of 30 farmers also manufacture black tea. They used to produce green tea with a brand name, Sashima tea, but due to shrinkage of the green tea market, a new market had to be explored. Farmers who had been cultivating Assam tea started manufacturing black tea. As described above, humidity and temperature management of the fermentation process are of particular importance and high-level technology is required for black tea manufacturing. Researchers and leading farmers visited tea farms in Sri Lanka and made contacts with universities, which brought about improvements in the manufacturing technology of black tea. They gave important advice to tea farmers including those in Sashima. Tea farmers in Sashima put in place a feedback system from consumers for improvement of product quality. This was a new challenge for tea farmers and had an impact on the creation of a market for Japanese black tea. Furthermore, the technology was disseminated by technology transfer. This way, Sashima black tea was born. In the process of market development for Japanese black tea, entrepreneurial tea farmers explored black tea manufacturing along with green tea production. The business opportunities were established via the connections of green tea farmers, retailers and consumers. 
Innovations in the fermentation process were accomplished by leading farmers and researchers through those connections, which was one of the keys to the success of market exploration for Japanese black tea. Moreover, the fermentation technologies were transferred to other black tea farmers. This transfer of core technologies increased the number of reliable manufacturers. As a result, they were successful in rapidly expanding the Japanese black tea market. On the other hand, the creation of a new market may cause cannibalization of existing businesses, and the Japanese black tea market is no exception. If black tea were sold in the existing green tea market, cannibalization of green tea and black tea might have happened. For that reason, a new, unique online channel for Japanese black tea was created. Especially in the Sashima case, it was important that a feedback system was introduced between tea farmers and consumers for improving the product quality (Figure 3). This was a big challenge for tea farmers and had an impact on the creation of a market for Japanese black tea.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.908905029296875, "language": "en", "url": "https://study.com/academy/topic/gaming-industry-overview.html", "token_count": 653, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.11572265625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:e1d896a2-4767-48a5-997e-f0a4aedc3863>" }
About This Chapter

Gaming Industry Overview - Chapter Summary

For a high-level overview of the gaming industry, take a look at the lessons in this self-paced study guide. Our expert instructors help you review the history of the gambling industry in the U.S., casino operations and the lottery system. We've included plenty of self-assessment quizzes to help you check your understanding of the lesson material. Study at any time that fits your schedule and feel free to reach out to our instructors if you have any questions. After completing the chapter, you should be able to:

- Summarize the history of the gaming industry and understand how it generates revenue
- Trace the history of gambling in the U.S.
- Understand how modern casinos operate
- Know how the lottery works in terms of luck and probability

1. Revenue Generation in the Gaming Industry
If you've ever wondered how the gaming industry generates profit, then look no further than this lesson. You'll learn about the growth of the gaming industry and the strategies used to generate nearly $100 billion in revenue in 2016.

2. History of Gambling & Casinos in the U.S.
Gambling is a major pastime in the United States, and it's not without its own history. In this lesson, we'll explore the history of gambling and see how it has impacted America throughout the years.

3. Modern Casinos & Casino Operations
Modern day casinos are an amusement park of adult fun. Containing gambling, restaurants, and luxurious rooms, casinos have grown into a place for everyone to gamble.

4. Lotteries: Finding Expected Values of Games of Chance
Most of us won't have a problem with winning the lottery. But is it a realistic goal? Do you really have a chance? In this lesson, learn how luck and probability collide when finding expected values in games of chance.

Other chapters within the Hospitality 101: Introduction to Hospitality course
- Hospitality Industry Overview
- Leadership & Management in the Hospitality Industry
- Key Management Functions in the Hospitality Industry
- Communication & Decision-Making in the Hospitality Industry
- Improving Service in the Hospitality Industry
- Planning & Organizing in the Hospitality Industry
- Control Systems & Issues in the Hospitality Industry
- Hotel Classifications & Operations
- Food, Beverage & Alcohol Operations
- Restaurant Organization & Operations Overview
- Tourism & Recreation
- Meetings, Conventions, Expositions & Special Events
- Studying for Hospitality 101
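As a taste of what the expected-value lesson above covers, here is a small Python sketch of the standard calculation (the sum of each payoff times its probability, minus the ticket price). The lottery numbers are invented for illustration and are not taken from the course.

def expected_value(outcomes, ticket_price):
    # outcomes: list of (probability, payout) pairs for a game of chance
    return sum(p * payout for p, payout in outcomes) - ticket_price

# Hypothetical raffle: 1-in-1,000 chance of a $500 prize, ticket costs $2
print(expected_value([(0.001, 500)], ticket_price=2))  # -1.5 (you lose $1.50 per ticket on average)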
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9530316591262817, "language": "en", "url": "https://www.alexchandiwana.co.uk/2017/12/what-is-gdpr.html", "token_count": 595, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.365234375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:dbd62c4e-ccf7-4020-996f-09b57bb1a2c6>" }
What Is GDPR?

Following my earlier blogs with considerations on GDPR, the question I have been asked by many is what this GDPR is all about. GDPR is the General Data Protection Regulation. It is a new piece of European legislation that was finally adopted on 27th April 2016 after several false starts. It will come into force on 25th May 2018 across Europe, and it will apply not only to any organisation situated in the EU, but also to any organisation/business that processes the personal data of EU citizens regardless of where they are situated. Where existing laws only apply to data controllers (the owners of the data), GDPR also applies to organisations that process data on behalf of data controllers.

What about Brexit?

GDPR will apply in the UK regardless of Brexit. In the recent Queen's Speech, Her Majesty said: "A new law will ensure that the United Kingdom retains its world-class regime protecting personal data…"

So, what is GDPR?

GDPR takes many of the concepts under existing privacy laws and enhances and extends them. Existing data subject rights, such as the right to receive a copy of the data and the right to rectification, are extended, for example with shorter time limits for compliance. There is also a set of new data subject rights, such as the right to erasure (not quite as broad as the much-discussed right to be forgotten) and data portability. Other big changes include a requirement to self-report any breaches, special rules for processing children's data, new categories of sensitive data and the requirement to give specific information to individual data subjects about what will happen to their data.

The cost of non-compliance

The supervisory authorities have powers under GDPR to order organisations to pay compensation to data subjects. They also have the power to administer substantial fines against both data controllers and data processors. The numbers are high (the maximum being the higher of 4% of global turnover or €20m) and so have grabbed attention. However, whilst the size of fines is intended to be "dissuasive", the authorities are also required to take into account the behaviour of the organisation and to fine accordingly. I would therefore recommend that you and your organisation/business take reasonable steps to have the correct processes in place. It is right and proper that our reaction to the legislation should be to take a broad risk-management approach and to invest in our security.

The cost of compliance

As you start looking into GDPR you will find that it will impact more of your organisation than you originally thought. It will also take you longer to get compliant than you can imagine. This article will undoubtedly raise more questions than it has answered, but what is clear is that you will have to make investments in your security systems and processes, and it is key to ensure that these investments are made in the right areas.
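As a rough illustration of the fine ceiling mentioned above (the higher of 4% of global annual turnover or €20m), here is a tiny Python sketch. The turnover figure is invented purely for illustration, and any real fine depends on the supervisory authority's assessment of the organisation's behaviour.

def max_gdpr_fine(annual_global_turnover_eur):
    # Upper tier of GDPR fines: the higher of 4% of global turnover or EUR 20 million
    return max(0.04 * annual_global_turnover_eur, 20_000_000)

# Hypothetical company with EUR 900 million in global annual turnover
print(round(max_gdpr_fine(900_000_000)))  # 36000000, so the 4% figure applies here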
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9655771255493164, "language": "en", "url": "https://www.cleanenergyauthority.com/solar-energy-news/xcel-power-plan-colorado-090418", "token_count": 674, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1328125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:adf7bfec-e289-4ef7-ac50-2f37a8f7efa9>" }
The Colorado Public Utilities Commission has cast its vote in favor of Xcel Energy's ambitious "Colorado Energy Plan" that will cut the company's CO2 emissions by 60 percent and increase the share of renewable sources in its energy mix to 55 percent by 2026. In addition to reducing its environmental footprint, the plan will also save Xcel Energy's customers about $213 million over this period.

What does the Plan Include?

As part of the plan, Colorado's largest electric utility, Xcel, will advance the phase-out date of two of its coal-fired plants in Pueblo from 2035 to 2025. According to the company, the plan entails an investment of $2.5 billion across eight counties. By retiring the two coal-fired units, Xcel will be able to phase out 660 MW of coal power. In its place, the utility plans to add about 700 MW of solar and 1,100 MW of wind. In addition, it will generate 380 MW from existing natural gas reserves and 275 MW from battery storage. Xcel Energy Colorado issued a statement about this "transformative plan," which said that it would stimulate economic development in the rural regions of Colorado, while significantly reducing the company's carbon footprint. The leaders and staff at the Colorado Public Utilities Commission said that the low costs of renewable energy and the improving energy storage technology now provide a unique opportunity to promote a clean energy economy in the state.

Historically Low Prices of Wind and Solar

While Xcel was building its ambitious electric resource plan two years ago, it received over 400 bids for wind and solar power from energy companies. Many of these bids were at historically low prices. According to analysts, the prices as well as the number of bids the company received were extraordinary. It makes stellar sense to maximize the advantage for a green economy through low prices and significant tax credits. A Boulder-based environmental conservation group, Western Resource Advocates, said that the new plan from Xcel will dramatically reduce carbon emissions and produce hundreds of jobs as well as investment for Colorado's rural economy. The president of Xcel Energy Colorado, David Eves, announced to the media that the company's customers expect it to provide low-cost power and boost the use of cleaner energy. He said that much of the proposed $2.5 billion of investment by the company in new energy sources would be directed toward the state's rural areas. This is awesome, as long as Colorado does not do what Germany did – their energy policy has been terrible, but that is another topic.

Continued Focus on Moving to Renewables

Earlier this year, Xcel announced plans to add 1,550 MW of wind power in the Midwest. The company has shut down nearly 1,100 MW of coal capacity at its Denver-area plants to meet the state's 2010 clean air standards. That capacity has been replaced with wind, solar, and natural gas generation. The company also upgraded the carbon emission controls at its Pawnee Station in Morgan County and Hayden Station in Routt County, apart from shutting down or converting three generation stations in the metro area. These actions were implemented at a cost of about $1 billion, according to an Xcel Energy spokesman.