Datasets:

meta: dict
text: string (lengths 224 to 571k)
{ "dump": "CC-MAIN-2020-29", "language_score": 0.970532238483429, "language": "en", "url": "https://www.cobdencentre.org/2017/08/lord-liverpool-and-the-return-to-gold/", "token_count": 3222, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.23046875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:14f0b171-227f-4745-961f-37e066c7df13>" }
Martin Hutchinson and Kevin Dowd*

* Hutchinson (karlmagnus.aol.com) is the author of the Bear’s Lair column, http://www.tbwns.com/category/the-bears-lair/. Dowd ([email protected]) is professor of finance and economics at Durham University and a senior fellow of the Cobden Centre.

The following is an excerpt from Martin Hutchinson’s forthcoming book, “Britain’s Greatest Prime Minister”, a biography of Robert Banks Jenkinson, 2nd Earl of Liverpool (1770-1828). Lord Liverpool was Prime Minister from 1812 to 1827 and had led Britain through the later part of the Napoleonic Wars. He was the decisive player in Britain’s resumption of the gold standard in 1821.

Definitive reports on cash payments resumption from the Commons and Lords Select Committees were presented on May 6 and 7, 1819. By this time, the economy had definitively turned down, with the temporary euphoria of 1817-18 having ended and a deflation in anticipation of the return to gold having set in. The Commons report showed that, while the Bank of England had in 1817 enjoyed gold and cash reserves larger than at any previous time in its history, redemption of old notes had since drained £6.76 million of bullion from it, which had mostly been sold by speculators at a profit, of which around £5 million had been carried to France, according to Alexander Baring. The Commons Committee had accordingly recommended that notes redemption should cease temporarily, since only by a sharp contraction in its notes issue could the Bank reduce the bullion price to a level at which arbitrage was unprofitable.

Bank advances to the government totalled £19.4 million in Exchequer Bills at April 29, 1819, down from a maximum of £34.9 million in August 1814. Conversely, the public balances held by the Bank had declined from around £11 million in 1807 to £7 million currently (in consideration of which the Bank had lent the government £3 million interest-free in 1808). For the Bank to resume cash payments fully, around £10 million of the Bank’s Exchequer Bills outstanding would have to be funded through longer-term government debt, or the Bank would have to reduce its accommodations to private traders, which would cause economic damage. An immediate resumption of cash payments would require the Bank to eliminate suddenly most of its £25 million of notes outstanding, which would be highly deflationary and damaging to trade. Alexander Baring estimated that accumulating the necessary £20 million of bullion in the country would take an additional 4-5 years.

Accordingly, the Committee recommended that the Bank should be forced to deliver not less than 60 ounces of gold against their notes at £4/1/- in February 1820, and the same amount at par in May 1821, with full cash payments being resumed in May 1823. Finally, the Committee proposed the repeal of the laws preventing the melting down of currency, since they were wholly ineffectual, with almost the entire 1817 issue of gold sovereigns having disappeared.

The Lords Committee report largely reflected that of the Commons Committee. The unfavourable movement of the gold exchange rate in 1817-18 had been largely caused by the large volume of foreign loans incurred in those years, especially those to France. However, the rapid expansion of the money supply in 1817 had caused over-trading, which had subsequently led to distress.
The total note circulation from the Bank of England and country banks had between 1810 and 1818 varied between £42 million and £48 million, which demonstrates a massive increase in monetary velocity since the 1790s, given the roughly 60% increase in the volume of trade over the period. The Committee thus recommended that the Bank should be compelled to pay its notes in bullion at gradually declining premiums over the period 1819-21, with full resumption of cash payments in coin in May 1823 or later, on Parliament giving one year’s notice.

On presenting the Lords Committee report, Lord Harrowby, Lord President of the Council and leader of the Lords committee, after some discussion proposed to produce and debate resolutions based on it on May 21, 1819. After a petition had been presented by Lord Lauderdale from 500 merchants of the City of London protesting against a resumption of cash payments, the debate proper on May 21 began with Liverpool producing a letter written by the Directors of the Bank of England. They proposed that the Bank should repay its Notes at the market bullion price, that the government should repay its Exchequer Bills, and that both parties should then observe what effect these two payments had on the markets and the economy. The attempt by the Bank to redeem its pre-1817 notes had itself indicated the dangers of return to a fixed parity, so the Bank was not prepared to commit to its ability to maintain such a parity once it was established.

Harrowby then proposed six Resolutions. The first proposed to continue the payment restrictions for a limited time. The second provided for the Bank to exchange its notes for bullion at £4/1/- for some period before full resumption. The third provided for a subsequent period, during which the Bank would exchange its notes for bullion at the mint price of £3/17/10. The fourth provided for an intermediate period, with exchange for gold at an intermediate price, with no ability to reverse the price decline. The fifth provided for the Bank to resume cash payments after being given notice by Parliament after bullion was exchangeable at the mint price. The final Resolution provided for repeal of all laws prohibiting the melting down or export of gold. Lauderdale then proposed an alternative set of resolutions providing for a bimetallic system with no fixed parity, while Grey, the Whig leader, proposed a further delay to allow further consideration of the topic.

Lord Liverpool’s May 21 1819 speech

After deprecating further delay, Liverpool proceeded to give his views on the question of Gold Standard resumption, avoiding extraneous subjects and personalities, as the topic itself was highly complex. There were three questions to be considered: whether to return to a fixed standard of value, whether to return to the pre-1797 standard and how it was to be done. On the first issue, while there was no doubt that the bank restriction had enabled Britain to survive the war, it could not be a permanent part of the country’s economic system, even in future wars, which were unlikely to be so total as that against Napoleon.
As for the question of whether there should be a fixed standard: “No body of men, I believe, was ever entrusted with so much power as the Bank of England, or has less abused the power entrusted to them: but will Parliament consent to commit to their hands what they would certainly refuse to the sovereign on the throne, controlled by parliament itself – the power of making money, without any other check or influence to direct them, but their own notions of profit and interest?” It would make more sense for the government to issue bank notes directly, but no country in the world had ever established a currency without a fixed standard of value.

As for returning to the former standard, “Policy, good faith and common honesty call on the state to return to this ancient standard, if possible. … The engagement was to pay to a certain standard; and those who engaged to do so were bound by that engagement, if they meant to act honestly. … I am prepared to show that it is not only practicable, but that no permanent inconvenience can arise from the adoption of the principle I recommend.” Gold had come down from a price 30% above the standard between 1813 and 1816, and its price was now only 3% above the standard. If the Bank had to contract the existing money supply, there might be some inconvenience, but far less than had been incurred in bringing the price down 30%.

As for the Parliamentary Committees, they had adopted a plan of returning to the old standard as gradually as possible, thus minimizing any inconvenience. It thus made no sense for Lauderdale to denounce their plan as a “forced, precipitous and highly injurious contraction of the circulating medium” when the contraction was not to begin until next February, and at a price 3% above the current gold price.

As for the question of whether the Bank could bring gold to par simply by contracting its note issue: “I never could entertain a doubt, that if the circulating medium were gold, a reduction of the amount from £50 to £30 millions must increase its value, on the principle that the value of all property increases in proportion to the diminution of its amount: the same must also take place with reference to a circulating medium of paper.”

As for the Committees’ plan itself, its advantage was “that the Bank might open with a much smaller amount of treasure than if they were obliged to commence their operations by the resumption of cash payments. The next and the most striking advantage of the proposed measure is, that the Bank will begin to put it in operation upon a perfectly fair principle. Without recognizing any permanent depreciation of the standard, the report recommends to arrest the evil where it is.” By starting with bullion rather than coin, the Bank could begin at the present market price, and gradually work to the desired consummation.

Liverpool agreed that some reduction in the Bank’s advances to government was necessary; these were currently below £20 million, and he believed that a reduction of £6 million rather than £10 million should give the Bank enough liquidity to undertake a gradual return to cash payments. There was much difference of opinion on the money supply needed for Britain’s commercial transactions.
“It will be found to be the opinion of some of the witnesses examined by the committee, that the commercial world will always be against the resumption of cash payments, as it would diminish the facility with which they at present obtain accommodation.” Indeed, Alexander Baring had said so, “than whose statements and sentiments on the whole of this important subject I have never heard anything more intelligent and comprehensive.” The present system “must frequently give ease and facility to commercial transactions, and enable individuals engaged in those transactions to surmount obstacles, which in the ordinary state of the circulation, would be impossible.” However, “the consequence of it is too often an encouragement to speculation, to unsound dealings, to the accumulation of fictitious capital; from all of which, in the course of a given number of years, a greater quantity of evil would probably accrue than of real advantage. Even, therefore, on that narrow ground, although nobody could deny that the existing system gives occasional and valuable facilities to trade, yet it is manifest that in the long run it tends to destroy that solid and secure foundation on which the commerce of a great nation ought to rest.”

Turning to the circulating medium itself, it was no greater than in 1792, before the war, in spite of the tripling or quadrupling of British trade, which caused many to argue that a return to gold might be unduly restrictive. However, the fallacy of this came from not recognizing the difference between a gold and a paper circulation. Before the war, the circulation consisted of £30 million of gold and £20 million of paper; now it consisted of £50 million of paper. Before the close of the American war there were few country banks, so people kept their wealth in small hoards of specie, but by the extension of the banking system, this habit had been almost done away with. “There is now scarcely such a thing as dead capital, except the small proportion which is kept in the respective banks.” Besides this, the system of bank clearing enabled £1,457 million of merchants’ payments annually to be cleared by exchanging only £220,000 per day, or £68 million a year. Liverpool then presented statistics on the circulation of Bank of England notes, showing that the £1,000 notes were in circulation for only 13 days on average compared with 22 days in 1792. Thus, the increased efficiency of payments systems, and of paper over gold, would enable an unchanged circulation to satisfy a greatly increased volume of trade.

“In the county of Lancashire, where enterprise of every kind is carried to a greater extent than in any other district in the island, the greatest part of the circulating medium is carried on by bills of exchange; and when a respectable and intelligent individual, connected with that county, was asked whether any inconvenience resulted from that system, he replied ‘None whatever.’”

Liverpool ended by discussing the Mint regulations, and pointing out that since silver was not legal tender beyond 40 shillings, and consisted of only around £5 million in total value, fluctuations in price between gold and silver were most unlikely to have a significant effect on the overall circulating medium. He ended by advocating that the House follow the recommendations of the Committee.
“My own persuasion is … that most, if not all the inconveniences that might be incurred from the experiment, have been incurred already, and that if parliament will steadily adhere to the course recommended, it will see the ancient standard of the country restored without material distress to any class of His Majesty’s subjects.”

Liverpool’s judgement that gold payments could be resumed at the old rate “without material distress” seems overstated; the deflation necessary to accomplish this caused a 28% further decline in prices between 1818 and 1821, and a further sharp recession. Nevertheless, the 1819 recession, while nearly as deep as that of 1816-17 and quite unexpected, was also very short; it had still not begun at the time of the Prince Regent’s speech in late January, and it was already lifting rapidly by the time of the autumn Parliamentary session of November-December in which the Six Acts were introduced. With no government providing Keynesian remedies and prolonging the suffering, and with a sound monetary system, even deep recessions were blessedly brief.

The Bank of England had a temporary surplus of gold in 1817, and consequently began redemption of notes that had been issued before January 1, 1817; in conjunction with the large loans to France in 1817-18, this had caused a drain of gold.

The House of Commons Committee report is contained in “The Parliamentary Debates from the year 1803 to the present time,” T.C. Hansard, 1819, Vol XL, cols 152-78; the House of Lords Committee Report is contained in ibid., cols 199-224.

James Maitland (1759-1839), 8th Earl of Lauderdale (Scottish) from 1789, 1st Baron Lauderdale (GB) from 1806. MP for Newport and Malmesbury, 1780-89. Radical and proto-Keynesian economic theorist.

It is not clear how aware British historians of Liverpool’s time were of the Chinese Song Dynasty’s paper money system (1120s-1274), though that was mostly regional in its application.

“The Parliamentary Debates from the year 1803 to the present time,” T.C. Hansard, 1819, Vol XL, cols 610-28, May 21, 1819.

Rousseaux Overall Price Index, 1818=160, 1821=116. British Historical Statistics, ed. B.R. Mitchell, Cambridge University Press, 2011, p. 722.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9308919310569763, "language": "en", "url": "https://www.edf.org/blog/2018/05/18/2018-farm-bill-spotlight-heres-what-you-need-know", "token_count": 815, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1923828125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:ede24599-739f-410f-9d38-6ae541532d65>" }
As the farm bill moves through the House and Senate, it’s clear just how much is at stake. The massive, $867 billion piece of legislation includes conservation programs, nutrition assistance, global trade and crop insurance on which millions depend. The bipartisan, urban and rural coalition that historically made it possible to pass legislation of this scope and scale has splintered. Lawmakers now only have until September to get a new bill across the finish line, or pass an extension of the current bill.

The farm bill is the largest source of conservation dollars for privately owned land in the United States, 40 percent of which is farmland. Growers and ranchers – and the natural resources they steward – need the legislation to pass in a timely manner. For that to happen, the House and Senate need to draft bipartisan legislation that can pass both chambers. The bill that comes out of the conference committee should incentivize innovation and stewardship, and provide strong funding for conservation programs at a time when we need it more than ever.

Where the House farm bill succeeded and where it fell short

The House bill, H.R. 2 (115), provided much-needed improvements to specific conservation programs, a welcome development. Positive programmatic updates included:

- Permanent funding for the Regional Conservation Partnership Program, which lets local stakeholders propose conservation projects to the U.S. Department of Agriculture and scale results through public-private partnerships.
- Irrigation districts becoming eligible for USDA Environmental Quality Incentives Program contracts, or EQIP, which provide funding for drought resilience projects.
- Funding for source water protection to help farmers reduce runoff and protect water supplies for downstream communities.

At the same time, however, the bill siphoned money from conservation programs to pay for unrelated initiatives. Rather than keeping cost savings from changes to the Conservation Stewardship Program and EQIP, for example, the bill rolled that money into non-environmental uses. In all, the House bill would have cut funding from conservation programs by an estimated $800 million over the next decade.

What the farm bill must deliver in 2018

Sustainable agriculture – and a strong, bipartisan farm bill – have never been more important. Low commodity prices and extreme weather events diminish profitability, and the specter of trade wars and water quality lawsuits has created ever-more uncertainty and mental stress for our growers. Voluntary conservation programs such as EQIP are an increasingly popular way for farmers to build operational resilience, maintain revenue and reduce the environmental footprint of farming. But today, interest in such USDA programs vastly exceeds available funding.

We look for the revised farm bill to maintain current funding and protect long-term investments. These programs help farms and food supplies rebound quickly from climate and economic disruptions, such as the California wildfires and hurricanes Harvey, Irma and Maria, which caused upwards of $5.5 billion of agricultural damage in 2017.

All eyes on lawmakers as mid-terms approach

Leaders of both parties have the opportunity to work together and invest in American farms. First up, they can pave the way for big data to revolutionize on-farm conservation. USDA collects and manages data on soil health, conservation practices, yield, profitability, climate and weather – but this wealth of information sits in separate silos.
The farm bill can require USDA to aggregate and anonymize this data, and grant trusted researchers access to quantify the links between conservation and risk management. The results would help farmers, landowners, lenders and insurers make the economic case for good stewardship practices.

Will lawmakers step up to the plate and deliver? People back in their districts will want to know. As the House returns to the negotiating table and the Senate Agriculture Committee continues to work on its draft, this is a pivotal moment to invest in agricultural and conservation innovations for the 21st century.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.8546706438064575, "language": "en", "url": "http://ajest.info/index.php/ajest/article/view/347", "token_count": 1325, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.06494140625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:2ab96052-f31b-4922-9941-ff9cbe9ffa7a>" }
Influence of Land Use Regulatory Instruments on Household Disaster Risk Management in Eldoret Urban Area

Land use planning seeks to regulate land use in an efficient manner, thus preventing the escalation of hazards into disaster risks that would threaten the lives of households in both rural and urban areas. It enables households in urban areas to access serviced land at affordable prices, and to access socio-economic services, infrastructure, transportation facilities and a good environment. The demand for urban serviced land is often on the increase, and this has been enhanced by natural population growth and rural-urban migration. This study examined the influence of urban land use planning regulatory instruments on household disaster risk management (HDRM) in Eldoret Urban Area (EUA). A descriptive survey research design was used. The classical spatial economic theory (making room model), stakeholders’ theory and disaster reduction theory (community-based model) were applied in this study. The study targeted households in Eldoret Urban Area (Langas, Kapsoya, Kamukunji and Kapsaos). Proportional stratified random sampling was applied for the purpose of quantitative data collection, while purposive sampling was used for qualitative data. A total sample size of 550 respondents was selected. Questionnaires were the main instrument used to collect primary data, alongside key informant interviews (KIIs) and focus group discussions (FGDs). Finally, descriptive, inferential, regression and correlation statistics were applied in data analysis and interpretation.

Results indicated that land use planning regulatory instruments have a combined influence of 69.0% over disaster risk management. Test results on H01 showed that there was a significantly positive relationship between urban land use planning and disaster risk management. The effect of the land use planning regulatory instrument on HDRM was significantly positive (R = 0.878), and the study revealed that the land use planning regulatory instrument accounted for 77.1% (R² = 0.771) of the variance in HDRM. The findings are a pointer to the fact that land use planning and its three dimensions had significantly positive effects on household disaster risk management. From these results, it can be concluded that urban land use planning is a critical tool or technique in designing and developing urban areas where hazardous zones are mapped, demarcated and kept clear of households’ socio-economic activities. It was recommended that urban authorities must focus on urban land use planning to achieve sustainable development and growth.

References:

ActionAid International. 2006. Disaster Risk Reduction: Implementing the Hyogo Framework for Action (HFA). Available at http://www.preventionweb.net/files/8847-AAimplementinghyogo.pdf.

African Population and Health Research Center (APHRC) (2014). Population and Health Dynamics in Nairobi’s Informal Settlements: Report of the Nairobi Cross-sectional Slums Survey (NCSS) 2012. Nairobi: APHRC.

Albala-Bertrand JM (2013) Disasters and the networked economy. Routledge, New York.

Anas A, Liu Y (2007) A regional economy, land use, and transportation model (RELU-TRAN). J Reg Sci 47(3):415–455.

Barro RJ (2013) Environmental protection, rare disasters, and discount rates. NBER Working Paper 19258, National Bureau of Economic Research, Cambridge.

Bin O, Landry CE (2013) Changes in implicit flood risk premiums: empirical evidence from the housing market. J Environ Econ Manag 65(3):361–376.

Chan EY, Yue J, Lee P, Wang SS.
Socio-demographic Predictors for Urban Community Disaster Health Risk Perception and Household Based Preparedness in a Chinese Urban City. PLOS Currents Disasters. 2016 Jun 27.

Christian Aid (2014). ‘Working toward health convergence: a case study’. London: Christian Aid.

CPCS (Centre for Peace & Conflict Studies) (2014) Listening to communities of Kayin state. Siem Reap: CPCS.

Corbyn Z (2010) Mexican ’climate migrants’ predicted to flood US. Nature News, published online 26 July 2010.

Coulombel, N., 2010. Residential choice and household behaviour: state of the art. Sustaincity working paper 2.2a, Cachan.

Gaube V. and Remesch A. (2013). Impact of urban planning on household’s residential decisions: An agent-based simulation model for Vienna. Environmental Modelling & Software, Vol. 45, 92-103.

GoK: Physical Planning Act 1996; Public Health Act.

Gunjal K. (2016). Agricultural Risk Management Tools. Resource for the e-learning curriculum course on “Agricultural Risk Assessment and Management for Food Security in Developing Countries”. Platform for Agricultural Risk Management.

Hunte, M. (2010), "An international perspective on traffic policing from an Antiguan perspective", paper presented at the 7th International Police Executive Symposium, Evanston, IL.

Karanja, Muchiri (3 September 2010). "Myth shattered: Kibera numbers fail to add up". Daily Nation. Retrieved 4 September 2010.

Khayesi, M. (2007). The Struggle for Regulatory and Economic Sphere of Influence in the Matatu Means of Transport in Kenya: A Stakeholder Analysis. Kenyatta University, Nairobi, Kenya.

Kimathi Mutegi (2013). Kibera: How slum lords cash in on misery, The Nation, Kenya (19 September 2013). Archived copy on the Wayback Machine from 12 October 2013.

King, D., Harwood, S., Cottrell, A., Gurtner, Y., and Firdaus A. (2013). Land Use Planning for Disaster Risk Reduction and Climate Change Adaptation: Operationalizing Policy and Legislation at Local Levels. Centre for Disaster Studies, James Cook University, Australia.

Putman, S.H., 2010. DRAM residential location and land use model: 40 years of development and application. In: Pagliara, F., Preston, J., Simmonds, D. (Eds.), Residential Location Choice. Springer, Berlin Heidelberg, pp. 61-76.

Saunders W.S.A. and Becker J.S. (2015). A discussion of resilience and sustainability: Land use planning recovery from the Canterbury earthquake sequence, New Zealand. International Journal of Disaster Risk Reduction, Vol. 14, 73–81.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9604741334915161, "language": "en", "url": "https://bank.caknowledge.com/credit-card/", "token_count": 1000, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.024169921875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:7990e951-031f-4a96-9e3b-03abeb91d05a>" }
Credit Card: Meaning | Features | Advantages and Disadvantages

A credit card is a payment card issued to users as a system of payment. It allows the cardholder to pay for goods and services based on the holder’s promise to pay for them. The issuer of the card creates a revolving account and grants a line of credit to the consumer (or the user) from which the user can borrow money for payment to a merchant or as a cash advance to the user.

A credit card is different from a charge card: a charge card requires the balance to be paid in full each month. In contrast, credit cards allow the consumer a continuing balance of debt, subject to interest being charged. A credit card also differs from a cash card, which can be used like currency by the owner of the card. A credit card differs from a charge card also in that a credit card typically involves a third-party entity that pays the seller and is reimbursed by the buyer, whereas a charge card simply defers payment by the buyer until a later date.

Meaning of Credit Card:

Credit cards were first introduced by travel agencies and the idea was later picked up by banks. They are made of plastic material and therefore called ‘plastic money’. A credit card is a card issued by a financial company giving the holder an option to borrow funds, usually at the point of sale. Credit cards charge interest and are primarily used for short-term financing. Interest usually begins one month after a purchase is made and borrowing limits are pre-set according to the individual’s credit rating.

Features of Credit Card:

- a) Parties: The credit card system has three parties – the bank issuing the credit card; the account holder using the card; and the establishments accepting the cards for payment of goods and services sold.
- b) Specific Person: A customer with an assured and substantial income who maintains a good account is issued with a credit card.
- c) Size of the Card: The cards are of standard size and thickness.
- d) Details: Details such as the name of the cardholder, account number and validity date are embossed on the card so that they can be checked with an imprinter machine.
- e) Specimen Signature: The card also bears the specimen signature of the cardholder.

Advantages of Credit Card:

- a) Purchasing: These cards can be used for the purchase of goods, and for services from hotels, railway stations and airlines, up to a specified limit.
- b) Easy Transaction: The cardholder signs the invoice, which is then sent to the bank, which in turn makes payment to the seller or provider of services. Later, the bank recovers the money from the account holder. This saves customers the trouble and danger of carrying cash with them while travelling.
- c) Other Uses: Some banks even allow withdrawal of cash from their branches. Credit cards can be used for payment of telephone bills or for buying jewellery.
- d) Increase in Business: The business of the establishment increases and the banks earn a fee or a higher rate of interest. The establishments accepting credit cards enter into agreements with the banks. The supplier verifies the card with the help of an imprinter machine. In this way the credit card is useful to all. It has become a status symbol in India, though in foreign countries it has become quite common.

Disadvantages of Credit Card:

a) The high interest rates: Compared to regular bank loans, credit cards have extremely high interest rates. Sometimes this interest rate can be as high as 20% for any purchases that are not paid in full at the end of the month.
b) The illusion of “Free Money”: Credit cards create the illusion of free money, and this leads to the temptation to overspend. This makes credit card owners want to purchase things they don’t need. Apparently signing a piece of paper isn’t the same as paying in cash. People who are bad at budgeting are the ideal customers for credit card companies, and they know it.

c) The Danger of an Unpaid Balance: Because you are only billed once a month, it is easy to forget how much you spent that same month. This way many credit card users spend more than they can cover at the end of the month. In just a couple of months of unpaid balances the interest can be enough to become the start of a long-term debt problem.

d) Credit Card Theft and Fraud: The last but probably most important disadvantage and risk of using credit cards is the possibility of fraud or theft. There is no need for a modern thief to take your credit card physically; all he needs is some numbers and your money can disappear from your bank account. It is important that you check each monthly statement for any signs of fraud.
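To make the unpaid-balance risk concrete, here is a minimal, illustrative Python sketch (not from the original article); the 20% APR, the starting balance and the function name are assumptions chosen purely for illustration.

```python
# Illustrative only: compound a carried balance at an assumed 20% APR,
# billed monthly, to show how quickly an unpaid balance grows.
def balance_after(principal, apr=0.20, months=6, monthly_payment=0.0):
    balance = principal
    monthly_rate = apr / 12
    for _ in range(months):
        balance += balance * monthly_rate   # interest added at each statement
        balance -= monthly_payment          # any payment made that month
    return balance

# A 1,000 balance carried for six months with no payments grows to roughly 1,104.
print(round(balance_after(1000), 2))
```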
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9626833200454712, "language": "en", "url": "https://ceowatermandate.org/resources/climate-change-opportunities-to-reduce-federal-fiscal-exposure-2019/", "token_count": 383, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.2373046875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:b3490115-f436-48e8-a823-fa937c93c4b9>" }
Since 2005, federal funding for disaster assistance has totaled at least $450 billion, including approximately $19.1 billion in supplemental appropriations signed into law on June 6, 2019. In 2018 alone, there were 14 separate billion-dollar weather and climate disaster events across the United States, with a total cost of at least $91 billion, according to the National Oceanic and Atmospheric Administration. The U.S. Global Change Research Program projects that disaster costs will likely increase as certain extreme weather events become more frequent and intense due to climate change.

The costs of recent weather disasters have illustrated the need for planning for climate change risks and investing in resilience. Resilience is the ability to prepare and plan for, absorb, recover from, and more successfully adapt to adverse events, according to the National Academies of Sciences, Engineering, and Medicine. Investing in resilience can reduce the need for far more costly steps in the decades to come.

Since February 2013, GAO has included Limiting the Federal Government’s Fiscal Exposure by Better Managing Climate Change Risks on its list of federal program areas at high risk of vulnerabilities to fraud, waste, abuse, and mismanagement or most in need of transformation. GAO updates this list every 2 years. In March 2019, GAO reported that the federal government had not made measurable progress since 2017 to reduce its fiscal exposure to climate change.

This testimony—based on reports GAO issued from October 2009 to March 2019—discusses (1) what is known about the potential economic effects of climate change in the United States and the extent to which this information could help federal decision makers manage climate risks across the federal government, (2) the potential impacts of climate change on the federal budget, (3) the extent to which the federal government has invested in resilience, and (4) how the federal government could reduce fiscal exposure to the effects of climate change.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.956141471862793, "language": "en", "url": "https://office-office-office.com/blog/use-microsoft-office-excel-to-keep-control-over-your-finance/", "token_count": 524, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.03271484375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:8d9e30d8-56cb-4ae8-97c8-cb4eca2a39e2>" }
I used to be a financial analyst and used Microsoft Excel in a variety of ways. From building financial models to using it as a crude database, Excel can be used for many different purposes, and you can use it very easily to keep an eye on your finances.

The first thing to do is to create an Excel file. Name it “Finance” or something like that and save it on your computer and also on a flash drive for backup. If you have never used Excel before, then let me first tell you that the time you invest in learning some very basic Excel skills will help you later, because once you have a basic model set up, maintaining it takes very little time. I will not go over the rudimentary basics of Excel in this article but will assume that you know how to work your way around an Excel file and some basic commands.

The first step is to set up your output sheet. The first sheet on the bottom left of the screen should be renamed “Output”. In this sheet, you will have a top-level view of your finances. For example, the first line should track your bank account. You can have separate lines for your checking and savings accounts. The next line should be about your investments. The goal is to separate out your inflows from your outflows. So the first few lines should be your inflows. Then the next few lines should be all your outflows, such as recurring monthly bills, credit card payments and your estimate of your monthly food bills, electric bills, etc. Feel free to color code your inflows and outflows. I like to use green and red. The last line should subtract the outflows from the inflows and should be labeled “Savings.” Your goal is to keep this line positive every month.

Now you can use all the other sheets for each specific line item. One sheet should track your checking account; another should oversee your savings account; and a third, your credit card; and so on. Link up each sheet with the output sheet. Now you have your skeleton file set up and all you have to do is key in the input numbers into the background sheets every month. Then calculate the totals in each of those sheets and have the totals feed into your output sheet. Keep tabs on your finances this way and you will never lose track of them.

Robert Morris, a Microsoft Office expert, has been working in the technology industry for the last 5 years. As a technical expert, he has written technical blogs, manuals, white papers, and reviews for many websites such as office.com/setup.
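For readers who would rather script the setup than build it by hand, below is a minimal sketch (not part of the original article) that creates the skeleton workbook described above with Python’s openpyxl library; the sheet names, line items and cell references are illustrative assumptions, not the author’s actual layout.

```python
# Sketch: build the "Output plus background sheets" skeleton described above.
# Sheet names, line items and cell references are assumptions for illustration.
from openpyxl import Workbook

wb = Workbook()

out = wb.active
out.title = "Output"
out["A1"] = "Checking account"            # inflow lines
out["A2"] = "Savings account"
out["A3"] = "Investments"
out["A5"] = "Recurring bills"             # outflow lines
out["A6"] = "Credit card payments"
out["A7"] = "Food / electric (estimate)"
out["A9"] = "Savings"                     # inflows minus outflows
out["B9"] = "=SUM(B1:B3)-SUM(B5:B7)"

# One background sheet per account; each feeds a single total into Output.
for name, target in [("Checking", "B1"), ("SavingsAcct", "B2"), ("CreditCard", "B6")]:
    sheet = wb.create_sheet(name)
    sheet["A1"] = "Monthly entries go below; keep a running total in B100"
    out[target] = f"='{name}'!B100"       # link each sheet's total to Output

wb.save("Finance.xlsx")
```

Opening Finance.xlsx afterwards gives the same output-sheet-plus-background-sheets layout the article describes, ready for monthly entries.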
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9336338043212891, "language": "en", "url": "https://www.adaptalift.com.au/blog/2012-04-02-what-is-the-bullwhip-effect-understanding-the-concept-definition", "token_count": 729, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:302f9dd1-0a4d-4e53-907c-d6abaade1118>" }
What is the Bullwhip Effect? Understanding the concept & definition

Through the numerous stages of a supply chain, key factors such as the timing and supply of order decisions, demand for the supply, lack of communication and disorganisation can result in one of the most common problems in supply chain management. This common problem is known as the bullwhip effect, also sometimes called the whiplash effect. In this blog post we will explain this concept and outline some of the contributing factors to this issue.

What is the bullwhip effect?

The bullwhip effect can be explained as an occurrence detected by the supply chain where orders sent to the manufacturer and supplier create larger variance than the sales to the end customer. These irregular orders in the lower part of the supply chain become more distinct higher up in the supply chain. This variance can interrupt the smoothness of the supply chain process, as each link in the supply chain will over- or underestimate the product demand, resulting in exaggerated fluctuations.

What contributes to the bullwhip effect?

There are many factors said to cause or contribute to the bullwhip effect in supply chains; the following list names a few:

- Disorganisation between each supply chain link; ordering larger or smaller amounts of a product than is needed due to an over- or under-reaction to the supply chain beforehand.
- Lack of communication between each link in the supply chain makes it difficult for processes to run smoothly. Managers can perceive a product’s demand quite differently within different links of the supply chain and therefore order different quantities.
- Free return policies; customers may intentionally overstate demands due to shortages and then cancel when the supply becomes adequate again. Without a return forfeit, retailers will continue to exaggerate their needs and cancel orders, resulting in excess material.
- Order batching; companies may not immediately place an order with their supplier, often accumulating the demand first. Companies may order weekly or even monthly. This creates variability in the demand, as there may for instance be a surge in demand at some stage followed by no demand after.
- Price variations – special discounts and other cost changes can upset regular buying patterns; buyers want to take advantage of discounts offered during a short time period. This can cause uneven production and distorted demand information.
- Demand information – relying on past demand information to estimate the current demand for a product does not take into account any fluctuations that may occur in demand over a period of time.

Example of the bullwhip effect

Let’s look at an example; the actual demand for a product and its materials starts at the customer, however often the actual demand for a product gets distorted going down the supply chain. Let’s say that the actual demand from a customer is 8 units; the retailer may then order 10 units from the distributor, the extra 2 units being there to ensure they don’t run out of floor stock. The distributor then orders 20 units from the manufacturer, allowing them to buy in bulk so they have enough stock to guarantee timely shipment of goods to the retailer. The manufacturer then receives the order and orders from their supplier in bulk, ordering 40 units to ensure economy of scale in production to meet demand.
Now 40 units have been produced for a demand of only 8 units, meaning the retailer will have to increase demand by dropping prices or finding more customers through marketing and advertising. Although the bullwhip effect is a common problem for supply chain management, understanding the causes of the bullwhip effect can help managers find strategies to alleviate it. Hopefully this blog post has given you a simple understanding of the term.
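As a toy illustration of that amplification (not from the original post), the short Python sketch below applies a fixed “order a little extra” rule at every link; the 25% safety margin is an arbitrary assumption, and the article’s own example uses larger jumps, but the direction of the distortion is the same.

```python
import math

def propagate(customer_demand, tiers, safety_margin=1.25):
    """Order size seen at each link when every tier pads its upstream order."""
    orders = [customer_demand]
    demand = customer_demand
    for _ in tiers:
        demand = math.ceil(demand * safety_margin)  # each tier orders a bit extra
        orders.append(demand)
    return orders

print(propagate(8, ["retailer", "distributor", "manufacturer"]))
# [8, 10, 13, 17] -- a small end-customer demand grows at every step up the chain
```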
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9732061624526978, "language": "en", "url": "https://www.edwardsragatz.com/study-higher-alcohol-taxes-reduce-fatal-crashes/", "token_count": 629, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.0205078125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:0cfa78b2-b955-4f0d-9f52-32039eae5e09>" }
Channel 4 recently reported on a study issued by University of Florida Health. According to the study, increasing state alcohol taxes could prevent thousands of deaths a year from car crashes.

How did they come to this conclusion? They looked to Illinois, which increased taxes on beer, wine and spirits. After the increase in taxes in 2009, Illinois saw fatal alcohol-related car crashes decline 26 percent. The decrease was even more marked for young people, at 37 percent. The reduction was similar for crashes involving alcohol-impaired drivers and extremely drunken drivers, at 22 and 25 percent, respectively.

The tax wasn’t a drastic increase in prices for an alcoholic drink. The law raised the tax on beer by 4.6 cents per gallon, on wine by 66 cents per gallon and on distilled spirits by $4.05 per gallon. How would this cost be passed to the consumer, assuming the entirety of the taxation cost was passed to them? It would result in a .4 cent increase per glass of beer, a .5 cent increase per glass of wine and a 4.8 cent increase per single serving of spirits.

The research team used records of fatal crashes from the National Highway Traffic Safety Administration from January 2001 to December 2011. They looked at the 104 months before the tax was enacted and the 28 months after it was enacted to see whether the effects of the tax change differed according to a driver’s age, gender, race and blood alcohol concentration at the time of a fatal motor vehicle crash. The research team defined an impaired driver as having a blood alcohol level of less than .15 percent and an extremely drunk driver as having a blood alcohol level of more than .15 percent, which translates to roughly six drinks within an hour for an average adult.

To control for several other factors that can affect motor vehicle crash rates, the researchers compared the number of alcohol-related fatal crashes in Illinois with those unrelated to alcohol during the same time period, as well as alcohol-related fatal crashes in Wisconsin, which did not change its alcohol taxes. Results confirmed that the decrease in crashes was due to the tax change, not other factors. The larger-than-expected size of the effects of this modest tax increase may be because the tax change occurred at the same time as the Great Recession — a time when unemployment was high and personal incomes lower, according to the study.

Alcohol-related motor vehicle crashes account for almost 10,000 deaths and half a million injuries every year in the United States. Alcohol is more affordable than ever, a factor researchers say has contributed to Americans’ widespread drinking and driving. Drinking more than 10 drinks per day would have cost the average person about half of his or her disposable income in 1950, compared with only 3 percent in 2011.

What does this study bring out that is unique? As one of the researchers told U.S. News & World Report, “We identified that alcohol taxes do in fact impact the whole range of drinking drivers, including extremely drunk drivers.”
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9510177969932556, "language": "en", "url": "https://www.reddie.co.uk/2020/05/28/flying-taxis-are-they-just-around-the-corner-or-a-pipe-dream/", "token_count": 1807, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.2158203125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:f3ae5f03-b3d0-439b-8657-2907b3839ee0>" }
For most people, the term “flying taxi” may conjure up images of Milla Jovovich crashing into Bruce Willis’s taxi in “The Fifth Element” in the year 2263 rather than feats of present-day engineering. However, a slew of long-established industrial giants, like Toyota, Boeing, and Airbus, newer tech giants, like Uber, Google, or Amazon, and disruptive start-ups, of which there are too many to list, are doing their best to bring flying taxis from 23rd-century science fiction to present-day reality.

So-called electric vertical take-off and landing (eVTOL) aircraft, like helicopters, are capable of taking off and landing vertically – essential for flight in dense urban environments. However, unlike helicopters, eVTOLs rely exclusively on electric propulsion, and are almost exclusively pilotless, making them the most promising contenders for the development of “flying taxis”.

So, why can’t you ride-hail flying taxis yet?

Amongst the established companies trying to make flying taxis work is Toyota, who as far back as 2014 filed patents reminiscent more of Back To The Future: Part II than present-day taxis. In 2017, the Japanese carmaker expressed a desire to have a “flying car” light the Olympic flame at the opening ceremony of the 2020 Olympics in Tokyo. Although that target was put off by at least one year by the coronavirus crisis, and the 2020 Olympics became the 2021 Olympics, the ambitions of established automotive companies, tech giants, and start-ups remain sky-high.

The main selling point for flying taxis seems obvious: flying taxis do not get stuck in traffic and, as a side effect, fewer cars on the road mean fewer traffic jams for everyone. However, the regulatory and technological barriers that will have to be overcome are plentiful and challenging. Three of the main challenges for developers of eVTOLs are range, safety, and noise.

As the range of eVTOLs is mainly dictated by the energy of Li-ion batteries, which are preferred by most manufacturers, any major development in this area is most likely to come from novel batteries rather than eVTOL developers. Regulators like the European Union Aviation Safety Agency (EASA) have set stringent safety standards, including failure rates of less than one per billion flying hours, which will only be achievable by providing redundant components to take over in case of emergency. As almost all eVTOLs are designed to be pilotless, the challenges for developers are comparable to those encountered in the development of self-driving cars. In some ways, autonomous flight of an eVTOL may be easier to achieve than autonomous driving of a car, as, once airborne, there are fewer obstacles with which to contend.

Noise is another major concern, as eVTOLs are targeted at dense urban areas and thus will have to be significantly quieter than helicopters to allow for widespread use. This can be achieved by lower weight, lower rotor speeds, and a large ratio of rotor surface area to weight compared to helicopters.

Flying taxis aren’t the only application for eVTOLs

While flying taxis are one possible use for eVTOLs, delivery of goods is another. Firms such as Wing Aviation, a subsidiary of Google’s parent company Alphabet, have already started making deliveries using eVTOLs, albeit on a relatively small scale. However, given the increased need for contactless delivery in the present crisis, drone deliveries may become more common in the coming months and years.

What can the patent landscape tell us about who is working on eVTOLs?
Use of terms related to eVTOLs in patent applications has increased significantly over the past ten years. The term “eVTOL” was first mentioned in a patent application in 2009 (AU2009202662A1), and the term “vertiport” (meaning a landing infrastructure, usually for eVTOL aircraft) was first mentioned in 2016 – but use of both has increased significantly since 2016, from 0 and 1 for “eVTOL” and “vertiport”, respectively, to 32 and 14 in 2019.

In the past ten years, the number of patent applications related to aircraft capable of landing or taking off vertically (CPC classification codes B64C29/0025 – propellers being fixed relative to the fuselage, and B64C29/0033 – propellers being tiltable relative to the fuselage) and aircraft characterised by the type or position of power plant using steam, electricity, or spring force (CPC classification code B64D27/24) has risen steadily (Fig. 1). For CPC classes ‘25 and ‘33, the numbers have risen from 40 and 97 in 2011 to 270 and 547 in 2019, respectively. For CPC class ’24, the rise was even more pronounced, from only 99 in 2011 to 1256 in 2019. Although some of the applications using these classification codes are related to conventional helicopters rather than eVTOLs, at least some of the significant increase in recent years is driven by the increased research into and IP protection of eVTOLs for flying taxis and drones.

Fig. 1 – Patent applications published per year in three CPC classes related to eVTOLs (numbers for 2020 are extrapolated from patent applications published in the first quarter of 2020).

Perhaps unsurprisingly, patent applications using these classification codes are mostly filed by US companies. While traditionally General Electric has been the largest filer for VTOLs with propellers being fixed relative to the fuselage in CPC class ‘25 (2010 to 2020 – Fig. 2a), in 2019 and 2020 (as of 31 March 2020), Kitty Hawk, a US start-up backed by Google co-founder Larry Page, led the way with over 30 published applications (see Fig. 2b).

Fig. 2 – a) Total number of patent applications published for various Applicants between 2010 and 2020 in CPC class B64C29/0025; b) Number of patent applications published for various Applicants in 2019 and 2020 (until 31 March) in CPC class B64C29/0025.

Other industrial giants such as Boeing and Porsche (who partnered to build an eVTOL in 2019), Airbus, and Embraer are further well-known (and perhaps expected) heavy filers in the area. X Development, a secretive R&D subsidiary of Google, had 8 applications published last year, while Amazon (presumably mostly for delivery drones) and Uber (Uber Air – a development arm for flying taxis) had 6 applications each published between January 2019 and March 2020 (see Fig. 2b).

Looking at applications published for VTOLs with propellers being tiltable relative to the fuselage in CPC class ‘33, Bell Helicopter filed by far the most patent applications: over 1000 between 2011 and 2020 (Fig. 3a), and over 300 in 2019 and 2020 (as of 31 March 2020 – Fig. 3b) alone.

Fig. 3 – a) Total number of patent applications published for various Applicants between 2010 and 2020 in CPC class B64C29/0033; b) Number of patent applications published for various Applicants in 2019 and 2020 (until 31 March) in CPC class B64C29/0033.
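As an aside, counts like those behind Figs. 1-3 can be reproduced from a bulk export of published applications. The rough Python sketch below assumes a hypothetical CSV with a publication date, applicant and semicolon-separated CPC codes per record; the file name and column names are assumptions, and real bulk data would need its own cleaning.

```python
import pandas as pd

# Hypothetical export: one row per published application with columns
# pub_date (YYYY-MM-DD), applicant, cpc_codes (";"-separated). Assumed format.
apps = pd.read_csv("evtol_publications.csv", parse_dates=["pub_date"])
target_classes = ["B64C29/0025", "B64C29/0033", "B64D27/24"]

# One row per (publication, CPC class) pair, restricted to the three classes.
apps["cpc_codes"] = apps["cpc_codes"].str.split(";")
exploded = apps.explode("cpc_codes")
exploded = exploded[exploded["cpc_codes"].isin(target_classes)]

# Publications per year and class (Fig. 1-style counts).
per_year = (exploded.assign(year=exploded["pub_date"].dt.year)
                    .groupby(["year", "cpc_codes"]).size().unstack(fill_value=0))

# Naive full-year estimate for 2020 from first-quarter publications.
q1_2020 = exploded[(exploded["pub_date"] >= "2020-01-01") &
                   (exploded["pub_date"] <= "2020-03-31")]
estimate_2020 = q1_2020.groupby("cpc_codes").size() * 4

# Top filers in one class (Fig. 2-style counts).
top_filers = (exploded[exploded["cpc_codes"] == "B64C29/0025"]
              .groupby("applicant").size().sort_values(ascending=False).head(10))

print(per_year, estimate_2020, top_filers, sep="\n\n")
```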
Bell Helicopter has been working on military aircraft with tiltrotors for many years, and is aggressively protecting IP for their Nexus eVTOL (a six-fan, 150-mile-range “flying taxi”), which puts Bell into an excellent position to use their experience developing military aircraft to turn flying taxis from concept into commercial product. Nevertheless, start-ups such as Kitty Hawk and Joby Aero (backed, amongst others, by Toyota) are filing more patents in this CPC class, having had 11 and 7 patent applications published, respectively, in 2019/2020 (see Fig. 3b).

It remains to be seen when flying taxis will become a product for the mass market, and if they can revolutionise transportation. It also remains to be seen who will succeed, and who will fall by the wayside. In a similar fashion to the current automotive market, the winners and losers will come from the large OEM incumbents and the disruptive start-ups. Nevertheless, for now, at least when looking at patent filings in the area, the US OEMs and start-ups appear to be leading the way.

This article is for general information only. Its content is not a statement of the law on any subject and does not constitute advice. Please contact Reddie & Grose LLP for advice before taking any action in reliance on it.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9633907079696655, "language": "en", "url": "https://www.tnp.no/norway/panorama/5165-paris-climate-agreement-makes-everything-complicated-for-norway", "token_count": 491, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.423828125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:82103c25-7ff1-4e24-be98-ed092996a5ba>" }
The 2015 United Nations Climate Change Conference ended in Paris yesterday with a historic agreement. For the first time in history, the conference achieved its objective: a universal agreement on methods to reduce climate change, the Paris Agreement, adopted by all the nations of the world. The agreement will become legally binding if it is ratified, accepted, approved or acceded to by at least 55 countries that represent at least 55 percent of global greenhouse emissions.

The ambitious goals adopted at the conference will put Norway, as a leading oil and gas producer, in a challenging situation. The dispute over the future of the oil industry in the country has already started. Liberal Party (Venstre) climate policy spokesperson Ola Elvestuen said to VG that it will have major consequences for the Norwegian oil and gas business and that fossil fuels will be replaced by renewable energy.

– When 195 countries gather to set such ambitious goals for how to bring down greenhouse gas emissions, Norway cannot turn its back and continue as before, said Elvestuen.

She also noted that the Norwegian oil market will adapt to the new reality. On the other hand, she noted the new situation may lead to more demand for Norwegian gas. She notes that there is now a need and desire to replace coal with gas, and Norway can help here, says Sundtoft to VG.

Norway will continue to pump oil and gas as before!

Petroleum and Energy Minister of Norway Tord Lien agrees that Norway can have a more active role with its natural gas. Moreover, he criticizes those who think Norway should stop new exploration and production of oil in the North Sea. He states that the government will pump oil and gas, just like before.

But the new agreement will bring more complications for Norway in terms of oil production. During the conference, the EU introduced its binding offshore safety directive, which will dramatically increase the cost of oil production activities in the North Sea. The directive was first adopted as a result of the Deepwater Horizon accident in the Gulf in 2010. It requires oil companies to take into account so-called worst-case scenarios when they prepare their contingency plans for the development and operation of new oil fields. This means much higher costs than today’s current security scheme, which Norway follows. According to VG, the costs could be higher in northern areas, where petroleum extraction is already expensive.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9414533972740173, "language": "en", "url": "http://www.excellingcommunity.org/youth-excelling/", "token_count": 199, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.197265625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:990dca5b-0a81-4966-abf8-15239eb95d4b>" }
According to the most recent Youth and Money survey, the knowledge of money management is not reflected in the behaviours and attitudes of today’s youth. The study also indicated that misconceptions about money and its management could hurt youth financially in the future, if not already in the present.

Our aim is:

- To teach youth and young adults God’s design and purpose for money in their lives.
- To provide information that will allow youth and young adults to become better managers of the money entrusted to them.
- To teach principles of money management and wealth creation, evidenced by cutting spending, avoiding and/or reducing debt, and establishing an investment and savings plan.
- To help youth and young adults establish a “sound mind” in regards to money and have a proper relationship with money.
- It is never too early to begin taking responsibility for your financial future.

Education and training
Workshops and seminars
Programmes and events
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9621069431304932, "language": "en", "url": "https://channels.theinnovationenterprise.com/articles/the-obsolescence-of-driving-and-the-advent-of-the-self-driving-vehicle", "token_count": 1044, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1328125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:c979621e-a338-44a2-8be6-298438e3bd95>" }
We, as a society, are on the cusp of a transportation revolution, the likes of which haven't been experienced in decades. The automobile has been one of the most important inventions in our long history, and with the advent of the latest auto technology, we'll soon have cars that can drive themselves. The auto industry as it stands today generates vast amounts of revenue; in total, it is estimated to be worth $1.6 trillion. Our current cities are populated with the vehicles we've come to rely on, so it will be quite a massive change when vehicle automation becomes the norm rather than the exception.
Already, companies like Elon Musk's Tesla are incorporating automation features into their newest vehicles' designs, features that electric-car owners are currently using with aplomb. The 21st-century futurist even has extensive plans for how self-driving vehicles will affect the civic landscape. Google, one of the largest tech companies in the world, has also been heavily investing in vehicle automation for years. Recently, it indicated that it could soon start experimenting with ride-hailing services on college campuses that would allow students to travel easily from place to place using automated vehicles. This level of commitment is a prime indicator that companies are beginning to seriously consider the viability of an industry expected to see ten million cars on the road by the year 2020.
Cars to Get Around With, but Not Necessarily Drive
Today, cars grant freedom, and it's much easier to travel the way you want when you have a vehicle to take you where life leads you. With this in mind, what happens when you no longer need to own a car to enjoy that traveler's freedom? It's now much easier to get around without owning a car than it has been for decades. Companies like Uber have been commoditizing mobility for several years now, and millennials have felt less need to buy a vehicle than any prior generation of adults. Now, imagine a world where, instead of your friendly Uber driver, you get from point A to point B in a driverless Uber vehicle. This type of world will soon be here, and the change will be major, considering how much it costs to own a vehicle today. Gone will be the days of paying for upkeep, auto coverage, or even vehicle purchases. The convenience of using a vehicle service will only increase as the cost of employing drivers is virtually eliminated, resulting in plummeting service costs.
Hitting Municipalities and Businesses in the Wallet
It's estimated that large municipalities like New York City can earn up to $1.9 billion per year from vehicle-related fines. With this in mind, it's not far-fetched to imagine that with less overall vehicle ownership, major cities will lose a large chunk of their revenue stream. Speeding tickets alone were responsible for $6.2 billion in U.S. revenue in 2014, so if we bring driverless cars into the equation, there will be less speeding, and thus local governments will have to find other sources of income. On top of the revenue that governments generate from vehicle ownership and operation, businesses reap similar rewards through parking lots and spaces. Vehicle storage is a $100 billion business that requires very little overhead for a big profit.
On top of this, the construction of parking structures also brings sizeable income to local businesses, and easy parking is one of the best ways to get consumers into those same businesses. With the advent of the self-driving car, adopters of the technology will be able to be dropped off where they need to be while the car automatically moves on to the next user, almost completely eliminating the need for the extra space that parking structures require and freeing up locations for additional commercial and residential projects. When not in use, self-driving vehicles can wait in lots in less in-demand areas, where they can use dedicated mobile broadband to receive new ferrying assignments.
A Changing Vehicular Landscape
Just as no one could fully predict the rise of highways spurred by the creation of the automobile, no one will be able to fully predict where the self-driving revolution will lead us. Automobiles have become a major part of lifestyles around the world, so a sudden removal of the need to own a car or truck will certainly have unforeseen effects. There are a few changes this revolution is sure to usher in. Firstly, private car services as we know them will surely go into decline, causing the loss of millions of dollars in revenue. Secondly, businesses that specialize in the upkeep of our vehicles will face a raft of unpleasant changes. Thirdly, there won't need to be as many vehicles on the road as there are today, ensuring that the landscape of our world will change greatly as this vehicular revolution takes hold.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9649907350540161, "language": "en", "url": "https://factsyoudidntknow.com/the-worlds-largest-chocolate-maker-announced-1-billion-investment-for-fighting-climate-change/", "token_count": 529, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.412109375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:5c8e1e77-0f08-4bec-8ced-6e0e01c091fd>" }
The largest chocolate maker in the world is taking on climate change. To help in the fight, Mars, the producer of the popular chocolate brands Snickers, Twix, and M&Ms, has promised an investment of $1 billion over the next few years. The path towards environmental sustainability includes investment in food sourcing, renewable energy, cross-industry action groups, and farmers.
The company has already taken its first steps towards sustainability by building wind farms in Texas and Scotland that generate enough power for its operations in the U.S. and the U.K. The construction of wind and solar farms in nine other countries has been announced for the end of 2018, along with measures to cut greenhouse gas emissions by 27% by the end of 2025 and 67% by the end of 2050.
Mars' CEO, Grant Reid, explained the reasons for this huge investment: "most scientists are saying there's less than a 5% chance we will hit Paris agreement goals…which is catastrophic for the planet." He pointed out that the global supply chain is disrupted and because of that it needs "transformational, cross-industry collaboration" in order to be revived. With a workforce of over 80,000 employees all over the world, Mars is dependent on farmers' production, as it uses their raw materials in its production processes. Because of this, Reid explains that for Mars this investment is much more than just the right thing to do. According to him, it will also be good for the business.
Mars' Chief Sustainability Officer, Barry Parkin, explained that if they don't take proactive measures, "more extreme weather events… causing significant challenges and hardships in specific places around the world, whether that's oceans rising or crops not growing successfully" will follow. He also pointed out that they "believe in the scientific view of climate science and the need for collective action." Consequently, Mars was among the companies that signed a letter in June urging President Trump not to withdraw from the Paris Climate Agreement.
This "Sustainability in a Generation" plan was announced just before the UN General Assembly and Climate Week that took place in New York. Mars, a company with an estimated worth of $35 billion, plans to send the message through an M&Ms advertising campaign featuring mascots with windmills.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9175505638122559, "language": "en", "url": "https://ohiolink.oercommons.org/courseware/module/1941", "token_count": 150, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.2080078125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:c544b326-aa4a-47d3-97aa-aa682554c107>" }
In this topic, students will be introduced to monopoly. They'll learn what a monopoly is, how it differs from perfect competition and what conditions give rise to it. They'll also learn how monopolists decide on the profit-maximizing level of output and price. The social costs and benefits of monopoly will also be covered. In addition to monopoly, the topic will cover price discrimination.
Ohio TAG Social and Behavioral Sciences (OSS) Standards
Core: OSS004
Outcome: Core skill demonstrated by students who successfully complete a Principles of Microeconomics Course
Standard: Understand basic microeconomics terms and concepts, including scarcity and choice, equilibrium, efficiency and equity, positive and normative economics, comparative advantage, and specialization.
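As an illustration of the profit-maximizing rule this topic covers (produce where marginal revenue equals marginal cost), the short sketch below works through a hypothetical monopolist facing a linear demand curve. The demand and cost numbers are invented for demonstration and are not taken from the course materials.

```python
# Illustrative monopoly pricing example (hypothetical numbers, not from the course).
# Inverse demand: P = a - b*Q, constant marginal cost c.
a, b, c = 100.0, 2.0, 20.0

# Monopolist: marginal revenue MR = a - 2*b*Q; profit is maximized where MR = MC.
q_monopoly = (a - c) / (2 * b)
p_monopoly = a - b * q_monopoly
profit = (p_monopoly - c) * q_monopoly

# Perfect competition benchmark: price is driven down to marginal cost.
q_competitive = (a - c) / b
p_competitive = c

# Deadweight loss: the triangle between demand and MC over the lost output.
dwl = 0.5 * (p_monopoly - c) * (q_competitive - q_monopoly)

print(f"Monopoly:    Q = {q_monopoly:.1f}, P = {p_monopoly:.1f}, profit = {profit:.1f}")
print(f"Competition: Q = {q_competitive:.1f}, P = {p_competitive:.1f}")
print(f"Deadweight loss (social cost of monopoly): {dwl:.1f}")
```

With these made-up numbers the monopolist restricts output to 20 units and charges 60, while the competitive benchmark would supply 40 units at 20, which is the kind of output restriction and social cost the topic discusses.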
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9444996118545532, "language": "en", "url": "https://valuewalkpremium.com/2018/10/short-terms-solutions-that-create-long-term-problems/", "token_count": 353, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.400390625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:5e84a016-d049-43c7-8b12-77e92a7638aa>" }
Short-Term Solutions That Create Long-Term Problems
This article, concerning the 1985 Farm Bill and its outcome today, reveals much of what we do not do, but should do, when learning and teaching others about developing investment perspective. The article bemoans the negative impact this government policy is having today, and the accompanying US$/soybean price chart provides the long-term versus short-term economic context.
Much new government policy is a response to previous, poorly developed policy. The lack of historical perspective is driven by the political need to "do something" in order to get politicians re-elected. The current over-planting of Southern Yellow Pine, which recently proved such a loss to CalPERS, is the product of a long chain of events: government spending beyond its means in the 1960s-1970s led to high inflation; Paul Volcker raised rates sharply in response; global capital rushed to invest in 18% Treasury obligations; the US dollar was driven roughly 50% higher; and agricultural prices plunged in 1985.
Plunging farm prices moved politicians to create the Food Security Act of 1985 (February 1985), which included the Southern Pine planting program, taking some land out of agricultural production. This series of events came from government policies attempting to engineer societal outcomes, and now, more than 30 years later, we have another poor outcome. Only by connecting society's perceptions and responses to market activity on a global basis and over the long term can we truly develop investment common sense.
Man Who Steered Timber Subsidy Program Calls It His Biggest Regret
It 'turned into a boondoggle,' says Mike Gunn, who led efforts to add the Conservation Reserve Program to the 1985 Farm Bill
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9776173830032349, "language": "en", "url": "https://www.cappsonline.org/the-downside-of-reduced-student-borrowing/", "token_count": 902, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.01202392578125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:27bbad2a-4532-4f2f-aff4-0fa87283bbbc>" }
New research shows more student borrowing is connected to greater academic success — at least at community colleges — and indicates reduced borrowing could lead to higher loan defaults. The student debt crisis has become ubiquitous in headlines and even in the mouths of some lawmakers. New research, though, suggests that if many students are taking out unnecessary loan debt, others aren’t borrowing enough to support their pursuit of a degree. The studies found that community college students who borrow more have stronger academic outcomes than those who took out fewer loans or reduced their borrowing. And one experiment involving Maryland community college students found that positive effects of increased borrowing carry over to students’ financial well-being after college — whether or not they actually completed the degrees. As both federal officials and college administrators raise concerns about overborrowing, the new research points to the possible downsides of messaging that could make low-income students averse to loan debt. Andrew Barr, an assistant professor of economics at Texas A&M University, who co-wrote the studyinvolving Community College of Baltimore County, said the findings show more nuance is necessary in discussions of student loan debt. “There clearly are downsides to borrowing for certain people. But there is a reason we have student loans,” he said. “It allows students to finance their education. And for certain students, if you reduce the amount they perceive they can borrow, they seem to do worse.” Barr, along with Kelli Bird, an assistant professor of education at the University of Virginia, and Ben Castleman, an associate professor of education at UVA, tracked the effects of a monthlong outreach campaign that used text messages to inform students at the Baltimore community college about their student loan debt. Students who received the texts reduced their borrowing through unsubsidized federal loans by about $200, or 7 percent, on average. That reduced borrowing resulted in students performing worse in their courses. Those who received the texts and subsequently took out lower loan amounts were less likely to earn any credits and more likely to fail a class in the semester studied. Barr said that could be because students cut back on costs like food or spent more time working outside class to cover additional costs after reducing their borrowing amount. The study also notably found that students who borrowed less were 2.5 percentage points more likely to default on their loans within three years. But those who borrowed more were less likely to default whether or not they completed a degree, Barr said. “Even for people very unlikely to get a degree, academic performance matters for their likelihood of eventual default,” he said. Higher ed researchers have found that students who leave college without a degree or credential are at the highest risk of default. But the study suggests that those with worse academic performance are at even greater risk of default. Barr said it’s not clear why that’s the case, but credit accumulation, a higher grade point average or some other factor involving academic achievement appeared to make a difference for students who borrowed more in the experiment. The study builds on previous findings from a study by Benjamin Marx, an assistant professor of economics at the University of Illinois at Urbana-Champaign, and Lesley Turner, an assistant professor of economics at the University of Maryland at College Park. 
In a separate study of community college students released earlier this year, Marx and Turner found that messaging from a college could lead students to make substantial reductions in their borrowing. The study looked at the results when an unnamed college didn't include student loans in financial aid packages. Colleges that participate in the federal student loan program can't dictate the amount of loans available to students. But they can choose the loan amount displayed in financial aid letters. Students who randomly received financial aid offers including student loans were 40 percent more likely to borrow than were those who got an offer with no student loan funds. And students who received award letters with student loan aid borrowed an additional $4,000 and completed 30 percent more course credits. "It's important to avoid a knee-jerk reaction that we need to get rid of student loans," Marx said. "Lots of community colleges are dropping out of the federal loan program entirely. And there's evidence that that's harming students." Some community colleges have incentives not to participate in the federal loan program.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9579411149024963, "language": "en", "url": "https://www.cbpp.org/research/poverty-and-inequality/pulling-apart-a-state-by-state-analysis-of-income-trends", "token_count": 3328, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1298828125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:5fffc724-7e19-4645-b080-bfb3276bae11>" }
Pulling Apart: A State-by-State Analysis of Income Trends November 15, 2012 A state-by-state examination finds that income inequality has grown in most parts of the country since the late 1970s. Over the past three business cycles prior to 2007, the incomes of the country’s highest-income households climbed substantially, while middle- and lower-income households saw only modest increases. During the recession of 2007 through 2009, households at all income levels, including the wealthiest, saw declines in real income due to widespread job losses and the loss of realized capital gains. But the incomes of the richest households have begun to grow again while the incomes of those at the bottom and middle continue to stagnate and wide gaps remain between high-income households and poor and middle-income households. As of the late 2000s (2008-2010, the most recent data available at the time of this analysis): - In the United States as a whole, the poorest fifth of households had an average income of $20,510, while the top fifth had an average income of $164,490 — eight times as much. In 15 states, this top-to-bottom ratio exceeded 8.0. In the late 1970s, in contrast, no state had a top-to-bottom ratio exceeding 8.0. - The average income of the top 5 percent of households was 13.3 times the average income of the bottom fifth. The states with the largest such gaps were Arizona, New Mexico, California, Georgia, and New York, where the ratio exceeded 15.0. This analysis uses the latest Census Bureau data to measure post-federal-tax changes in real incomes among high-, middle- and low-income households in each of the 50 states and the District of Columbia at four points: the late 1970s, the late 1990s, and the mid-2000s — similar points (“peaks”) in the business cycle — and the late 2000s. In order to generate large enough sample sizes for state-level analysis, the study uses combined data from 1977-1979, 1998-2000, 2005-2007, and 2008-2010. The study is based on Census income data that have been adjusted to account for inflation, the impact of federal taxes, and the cash value of food stamps, housing vouchers, and other government transfers, such as Social Security and welfare benefits. Realized capital gains and losses are not included, due to data limitations. As a result, our results show somewhat less inequality than would be the case were we to include realized capital gains. In this analysis, changes in income inequality are determined by calculating the income gap — i.e., the ratio between the average household income in the top fifth of the income spectrum and the average household income in the bottom fifth (or the middle fifth) — and examining changes in this ratio over time. These changes are then tested to see if they are statistically significant. States fall into one of two categories: (1) those where inequality increased (that is, the ratio increased by a statistically significant amount), or (2) those where there was no change in inequality (the change in the ratio was not statistically significant). In no state did inequality fall by a statistically significant amount. Similarly, income gaps between high- and middle-income households remain large. - Nationally, the average income of the richest fifth of households was 2.7 times that of the middle fifth. The five states with the largest such gaps are New Mexico, California, Georgia, Mississippi, and Arizona. 
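The top-to-bottom and top-to-middle ratios used throughout the analysis are simple quotients of average incomes by quintile. A minimal sketch of the calculation, using the national late-2000s figures quoted above (the middle-fifth average is back-calculated from the reported 2.7 ratio, so it is illustrative only):

```python
# Income-gap ratios as used in the analysis: average income of the top fifth
# divided by the average income of the bottom (or middle) fifth.
# National late-2000s averages quoted above; the middle-fifth figure here is
# back-calculated from the reported 2.7 top-to-middle ratio, so it is illustrative.
top_fifth = 164_490
bottom_fifth = 20_510
middle_fifth = top_fifth / 2.7  # assumed for illustration

top_to_bottom = top_fifth / bottom_fifth
top_to_middle = top_fifth / middle_fifth

print(f"Top-to-bottom ratio: {top_to_bottom:.1f}")   # roughly 8.0, as reported
print(f"Top-to-middle ratio: {top_to_middle:.1f}")   # 2.7 by construction
```

In the study itself these ratios are computed state by state and then compared across periods, with statistical tests applied to the changes.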
Gaps Separating High-Income Households from Others Grew Prior to Recession The long-standing trend of growing income inequality continued between the late 1990s and the mid-2000s. - On average, incomes fell by close to 6 percent among the bottom fifth of households between the late 1990s and the mid-2000s, while rising by 8.6 percent among the top fifth. Incomes grew even faster — 14 percent — among the top 5 percent of households. - In 45 states and the District of Columbia, average incomes grew more quickly among the top fifth of households than among the bottom fifth between the late 1990s and the mid-2000s. In no state did the bottom fifth grow significantly faster than the top fifth. Similarly, households in the middle of the income distribution fell further behind upper-income households in most states between the late 1990s and the mid-2000s. - On average, incomes grew by just 1.2 percent among the middle fifth of households between the late 1990s and the mid-2000s, well below the 8.6 percent gain among the top fifth. Income disparities between the top and middle fifths increased significantly in 36 states and declined significantly in only one state (New Hampshire). An examination of income trends over a longer period — from the late 1970s to the mid-2000s — shows that inequality increased across the country. - In every state plus the District of Columbia, incomes grew faster among the top fifth of households than the bottom fifth. Nationally, the richest fifth of households enjoyed larger average income gains in dollar terms each year ($2,550, after adjusting for inflation) than the poorest fifth experienced during the entire three decades ($1,330). - Middle-income households also lost ground compared to those at the top. In all 50 states plus the District of Columbia, the income gap between the average middle-income household and the average household in the richest fifth widened significantly over this period. Top 5 Percent of Households Pulling Away Even Faster The widening income gap is even more pronounced when one compares households in the top 5 percent of the income distribution to the bottom 20 percent over the last three decades. We conducted this part of our analysis for the 11 large states that have sufficient observations in the Current Population Survey to allow the comparison of the average income of the top 5 percent of households between different time periods. - In these 11 large states, the average income of the top 5 percent rose between the late 1970s and mid-2000s by more than $100,000, after adjusting for inflation . (In New Jersey and Massachusetts, the increase exceeded $200,000.) By contrast, the largest increase in average income for the bottom fifth of households in these states was only $5,620. In New York, for example, average incomes grew by $194,000 among the top 5 percent of households but by less than $250 among the bottom fifth of households. - In the 11 states, the incomes of the top 5 percent of households increased by 85 percent to 162 percent between the late 1970s and mid-2000s. By contrast, incomes of the bottom fifth of households didn’t grow by more than 27 percent in any of these states, and in one state —Michigan – they actually fell. - The average income of the top 5 percent pulled away from those in the middle as well. In the late 1970s, the incomes of the top 5 percent were 2.5 to 3 times those of the middle fifth in these 11 states. By the 2000s they were more than 4 times as much in all 11 states. 
Causes of Rising Inequality Several factors have contributed to the large and growing income gaps in most states. - Growth in wage inequality. This has been the biggest factor. Wages at the bottom and middle of the wage scale have been stagnant or have grown only modestly for much of the last three decades. The wages of the very highest-paid employees, in contrast, have grown significantly. The erosion weakness of wage growth for workers at the bottom and middle of the income scale reflects a variety of factors. Over the last 30 years, the nation has seen increasingly long periods of high unemployment, more intense competition from foreign firms, a shift in the mix of jobs from manufacturing to services, and advances in technology that have changed jobs. The share of workers in unions also fell significantly. At the same time, the share of the workforce made up of households headed by women — which tend to have lower incomes — has increased. Government policies such as the failure to maintain the real value of the minimum wage and to adequately fund supports for low-wage workers as well as changes to the tax code that favored the wealthy have also contributed to growing wage inequality. Only in the later part of the 1990s did this picture improve modestly, as persistent low unemployment, an increase in the minimum wage, and rapid productivity growth fueled real wage gains at the bottom and middle of the income scale. Yet those few years of more broadly shared growth were insufficient to counteract the decades-long pattern of growing inequality. Today, inequality between low- and high-income households — and between middle- and high-income households — is greater than it was in the late 1970s or the late 1990s. - Government policies. Government actions — and, in some cases, inaction — have contributed to the increase in wage and income inequality in most states. Examples include deregulation and trade liberalization, the weakening of the safety net, the lack of effective laws concerning the right to collective bargaining, and the declining real value of the minimum wage. In addition, changes in federal, state, and local tax structures and benefit programs have, in many cases, accelerated the trend toward growing inequality emerging from the labor market. - Expansion of investment income. Forms of income such as dividends, rent, interest, and capital gains, which primarily accrue to those at the top of the income structure, rose substantially as a share of total income during the 1990s. (Our analysis captures only a part of this growth, as we are not able to include capital gains income due to data limitations.) The large increase in corporate profits during the economic recovery after the 2001 recession also widened inequality by boosting investors’ incomes. States Can Mitigate the Growth in Inequality Growing income inequality not only raises basic issues of fairness, but also adversely affects the nation’s economy and political system. While it results to a significant degree from economic forces that are largely outside state policymakers’ control, state policies can mitigate the effects of these outside forces. State options include: - Raise, and index, the minimum wage. The purchasing power of the federal minimum wage is 13 percent lower than at the end of the 1970s. Its value falls well short of the amount necessary to meet a family’s needs, especially in states with a high cost of living. 
States can help raise wages for workers at the bottom of the pay scale by enacting a higher state minimum wage and indexing it to ensure continued growth in the future. - Improve the unemployment insurance system. Unemployment insurance helps prevents workers who lose their jobs from falling into poverty and keeps them connected to the labor market. Yet some states have cut benefits deeply. These states should restore those cuts and others should build on recent efforts to fix outmoded rules that bar many workers from accessing benefits. - Make state tax systems more progressive. The federal income tax system is progressive — that is, it narrows income inequalities — but has become less so over the past two decades as a result of changes such as the 2001 and 2003 tax cuts. Nearly all state tax systems, in contrast, are regressive. This is because states rely more on sales taxes and user fees, which hit low-income households especially hard, than on progressive income taxes. (The income inequality data in this report reflect the effects of federal taxes but not state taxes.) Many states made their tax systems more regressive during the 1990s. Early in the decade, when a recession created budget problems, states were more likely to raise sales and excise taxes than income taxes. Later in the decade, when many states cut taxes in response to the strong economy, nearly all made the majority of the cuts in their income taxes rather than sales and excise taxes. There are many ways a state can make its tax system more progressive. For example, it can reduce its reliance on sales taxes. States can offset the impact of state taxes on those least able to pay by enacting or expanding tax credits targeted to low-income taxpayers. For example, more states could follow the lead of the 24 states that have adopted earned income tax credits. As state revenues slowly recover from the recent recession, some states are cutting taxes. The bulk of the tax cuts enacted this year, however, disproportionately benefited higher-income families. If these trends continue, states will make their tax systems even more regressive and diminish their ability to restore the large spending cuts of the last few years. - Strengthen the safety net. States play a major role in delivering social safety net assistance, which pushes back against growing inequality by helping low-wage workers move up the income ladder and shielding the nation’s most vulnerable citizens from the long-term effects of poverty. Top Ten States for Selected Income Inequality Measures Greatest Income Inequality Between the Top and the Bottom, Late-2000s Greatest Income Inequality Between the Top and the Middle, Late-2000s 1. New Mexico 1. New Mexico 2. Arizona 2. California 3. California 3. Georgia 4. Georgia 4. Mississippi 5. New York 5. Arizona 6. Louisiana 6. New York 7. Texas 7. Texas 8. Massachusetts 8. Oklahoma 9. Illinois 9. Tennessee 10. Mississippi 10. Louisiana Greatest Increases in Income Inequality Between the Top and the Bottom, Late 1990s to Mid-2000s Greatest Increases in Income Inequality Between the Top and the Middle, Late 1990s to Mid-2000s 1. Mississippi 1. Mississippi 2. South Dakota 2. New Mexico 3. Connecticut 3. Illinois 4. Illinois 4. South Dakota 5. Alabama 5. Alabama 6. Indiana 6. Connecticut 7. Massachusetts 7. Missouri 8. Colorado 8. Colorado 9. Kentucky 9. Florida 10. New Mexico 10. 
Oregon Greatest Increases in Income Inequality Between the Top and the Bottom, Late 1970s to Mid-2000s Greatest Increases in Income Inequality Between the Top and the Middle, Late 1970s to Mid-2000s 1. Connecticut 1. Connecticut 2. Massachusetts 2. California 3. New York 3. Oklahoma 4. Kentucky 4. New York 5. Illinois 5. New Mexico 6. California 6. Illinois 7. West Virginia 7. Oregon 8. Colorado 8. Texas 9. Rhode Island 9. Massachusetts 10. Mississippi 10.Rhode Island There are a host of options states can consider to strengthen their safety nets. States can create a more streamlined process for enrolling in work supports such as food stamps and child care as they retool their health insurance systems under the Affordable Care Act. States also can boost the prospects of poor children by increasing temporary cash assistance to the neediest families in state Temporary Assistance for Needy Families (TANF) programs. Improving access to SNAP (food stamps) and providing assistance with rent can help low-income families afford food and housing. In addition, states can improve the child care system by providing child care subsidies with affordable co-payments and by investing in quality early care and education programs as well as after-school programs. - Protect workers’ rights. States can raise wages by protecting workers right to bargain collectively and by strengthening and enforcing laws and regulations to prevent abusive employer practices that deprive workers of wages that they are legally owed. While these are all useful steps, federal as well as state policies will have to play an important role if low- and middle-income households are to stop receiving steadily smaller shares of the income pie. The late 1990s are compared to the mid-2000s (rather than to more recent years) because these periods reflect comparable points in the economic cycle — namely, when the economy was at or near a peak. These peak periods are compared to show how inequality has changed. Currently, the nation is in the middle of an economic cycle that started when the economy began to expand in 2009. It is too soon to track the changes in inequality during the current economic cycle at the state level. These states are California, Florida, Illinois, Massachusetts, Michigan, New Jersey, New York, North Carolina, Ohio, Pennsylvania, and Texas.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.933810830116272, "language": "en", "url": "https://www.enotes.com/homework-help/how-long-will-take-an-investment-double-value-322895?en_action=hh-question_click&en_label=hh-sidebar&en_category=internal_campaign", "token_count": 255, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.030517578125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:f0deddcd-e016-47d0-8a3c-4d2a03ab05ac>" }
How long will it take an investment to double in value if the interest rate is 11% compounded continuously, and what is the equivalent annual interest rate?
The best way to approach this is to use the traditional continuous compound interest formula: `A(t) = A_0 e^(rt)`
Here, `A(t)` is the amount after a given amount of time, `A_0` is the initial amount you invest, `r` is the interest rate (11%), and `t` is the time in years.
Let's answer the first part, where we calculate the amount of time needed for the investment to double. In other words, we want to find when `A(t)` equals `2A_0`. Substituting our given value of `r` and our value for `A(t)` into the continuous compound interest formula gives `2A_0 = A_0 e^(0.11t)`.
The problem asks us to solve for the time, so start by dividing both sides by `A_0`, which leaves `2 = e^(0.11t)`. Taking the natural logarithm of both sides gives `ln 2 = 0.11t`, so `t = ln 2 / 0.11 ≈ 6.3` years.
For the equivalent annual interest rate, note that one year of continuous compounding at 11% grows the investment by a factor of `e^0.11 ≈ 1.1163`, so the equivalent annual (effective) rate is `e^0.11 - 1 ≈ 11.63%`.
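For readers who want to verify the arithmetic numerically, a short script (not part of the original answer) reproduces both results:

```python
import math

r = 0.11  # 11% nominal annual rate, compounded continuously

# Doubling time: solve 2*A0 = A0 * e^(r*t)  =>  t = ln(2) / r
t_double = math.log(2) / r

# Equivalent annual (effective) rate: A0 * e^(r*1) = A0 * (1 + i)  =>  i = e^r - 1
i_effective = math.exp(r) - 1

print(f"Doubling time: {t_double:.2f} years")        # about 6.30 years
print(f"Equivalent annual rate: {i_effective:.2%}")  # about 11.63%
```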
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9222975969314575, "language": "en", "url": "https://www.exxonmobil.ru/en-RU/Energy-and-technology/Looking-forward/Energy-and-Carbon-Summary", "token_count": 572, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.0142822265625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:b1f0df3d-2e33-4146-9037-b76aebcab8da>" }
ENERGY AND ENVIRONMENT
Energy and Carbon Summary
ExxonMobil has a proven record of successfully meeting society's evolving demand for energy. With longstanding investments in technology coupled with the ingenuity of our people, we are well positioned to continue to responsibly meet the demands of a more prosperous world while managing environmental impacts.
Energy is essential. Accessible and affordable supplies of energy support our ability to meet the basic requirements of life, and power society's progress around the world. As the world's population grows to more than 9 billion in the next two decades, rising prosperity will increase energy demand, particularly in developing countries. Stable and affordable energy supplies will make it possible for more people to access the health care, transportation and education that contribute to quality of life and improved living standards.
With this increased energy demand comes the potential for greater environmental impacts, including greenhouse gas emissions and the risks of climate change. As a global community, we need to manage environmental impacts as we meet this growth in demand. This is society's dual challenge.
This Energy & Carbon Summary describes how we at ExxonMobil are doing our part in addressing the dual challenge. It describes the steps we are taking to responsibly develop new resources to ensure the world has the energy it needs while also minimizing environmental impacts. It also provides detailed information on how we view and manage the risks associated with greenhouse gas emissions and climate change.
Our efforts
ExxonMobil aspires to position itself as a leader in providing energy while evolving the energy system. Through these four pillars, we ensure that processes and programs are implemented to mitigate risks, reduce emissions and improve our energy efficiency.
A rigorous risk management approach is integral to ExxonMobil's governance framework and ensures risks are appropriately identified and addressed. ExxonMobil's Board of Directors oversees risks associated with our business, including the risks related to climate change.
Metrics and targets
ExxonMobil has established programs to drive improvements in energy efficiency and mitigate greenhouse emissions. These programs are supported by key performance metrics, which are utilized to identify and prioritize opportunities to drive progress.
ExxonMobil uses a risk management framework based on decades of experience to identify, manage and address risks associated with our business. Our business strategies are underpinned by a deep understanding of global energy system fundamentals. These fundamentals include the scale and variety of energy needs worldwide; capability, practicality and affordability of energy alternatives; greenhouse gas emissions; and government policies. We consider these fundamentals in conjunction with our Outlook for Energy to help inform our long-term business strategies and investment plans.
The ExxonMobil Energy & Carbon Summary is aligned with the core elements of the framework developed by the Financial Stability Board's Task Force on Climate-related Financial Disclosures (TCFD), designed to encourage informed conversations.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9450921416282654, "language": "en", "url": "https://www.groasis.com/en/green-musketeer-blog/blog-26-the-treesolution-is-finally-happening-right-now", "token_count": 2345, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.057861328125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:591ffcac-a2e1-40dc-be2b-1c9f48eb607f>" }
26th blog, July 18, 2019
In 2003 we started developing the Groasis Ecological Water Saving Technology, which allows trees to be planted on degraded land at 90% lower cost, with 90% less water, and with a 90% survival rate (*). After proving that the Groasis Technology worked, in 2008 we published our book The Treesolution and opened the URLs www.onetrilliontrees.com and www.onetrilliontrees.org. In "The Treesolution" I explained that we can solve climate change with a money-making business model based on planting one trillion productive and ecologically interesting trees on degraded land. One of the slogans on the cover of the book is: 'Learn how to create wealth from CO₂'.
A sudden shift
In 2015, the Paris Agreement was adopted by 196 state parties at COP21. However, on 8 October 2018, the UN Intergovernmental Panel on Climate Change (IPCC) made an emergency call to reduce CO₂ emissions by at least 50% by 2030, since fossil-fuel emissions in 2018 increased at the fastest rate for seven years rather than decreasing. It was obvious that existing policies were not sufficient. This realisation shocked the world, and people are starting to open up to other solutions.
Two policies: CO₂ reduction or CO₂ offsetting?
There are countries, e.g. the Netherlands, that think they can only avoid climate change through a 'reduction of emissions' policy. However, this is the most expensive way to address climate change, since the cost per ton of avoided CO₂ emissions reaches incredibly high amounts, and the effectiveness and feasibility are questionable given population growth and an increased desire for higher living standards. For example: in the Netherlands, electric cars are subsidized and each avoided ton of CO₂ costs the government (and the population!) two thousand Euros. To compare: the Treesolution with the Groasis Ecological Water Saving Technology, based on breaking down CO₂ molecules through photosynthesis, costs less than 5 Euros per ton of CO₂. So, for the same amount of money that the Netherlands spends to avoid 1 ton of CO₂ emissions, other countries or companies can offset 400 times more CO₂ by planting productive trees(!!). Neutralizing CO₂ emissions with trees will create a competitive advantage, so I expect tree planting will become the preferred climate change prevention solution. Countries that choose a 'reduction policy' will soon lose their competitive advantage and price themselves out of the global market. Energy-intensive industries will have to move from countries that choose a 'CO₂ reduction policy' to countries that choose a 'CO₂ neutralizing policy', a development that is already happening in Europe.
The Treesolution is adopted by countries
Countries are starting to see agroforestry as an efficient opportunity to neutralize their CO₂ emissions and meet their Paris COP21 pledges. Several countries have chosen agroforestry as a permitted solution for obtaining carbon credits. A few months ago, the national science academies across the EU (EASAC, Scientific Advice for the Well-being of Europe) published a press release stating that technologies for removing CO₂ from the atmosphere must be integrated into climate policy by 2019 (note they use the word 'removing', no longer the word 'reduction').
Climate models suggest that an early application of NETs (Negative Emission Technologies; note: such as the Treesolution) in parallel with mitigation offers a greater chance of achieving the goals of the Paris Agreement (Paris COP21) and of preventing catastrophic consequences for the environment and society than applying NETs on a larger scale later in this century. The EU and national governments should identify a European research, development and demonstration program for NETs that is in line with their own skills and industrial base. Reducing deforestation, reforestation, increasing the carbon content of the soil and improving wetlands remain the most cost-effective and viable approaches to CDR, and should now be implemented as low-cost solutions that are relevant to both developed and developing countries.
Australia is also embracing the Treesolution. The Australian government launched a 2-billion-dollar fund in which vegetation management is accepted as a method to remove CO₂ from the atmosphere. Norway has signed an agreement with Indonesia to plant trees in Indonesia at Norway's cost, to reduce the CO₂ concentration in the atmosphere and reduce Norway's net carbon footprint. Many other countries have announced tree planting programs. India announced a plan to plant 2 billion trees. Ethiopia recently launched a forest recovery plan by planting 4 billion trees. China has announced that it will increase its tree cover to 23%, and planted an area the size of Ireland in 2018. Ireland itself is also planting a few thousand hectares of trees each year. The Treesolution is now happening everywhere, finally!
The Treesolution is adopted by companies
Shell was the first oil company to announce that it will neutralize its CO₂ emissions with trees. ENI soon followed, announcing that it will plant more than eight million hectares of trees to neutralize its CO₂ emissions. Then Total announced that it will spend USD 100M per year on reforestation to offset its emissions. This makes it the third oil company using the Treesolution. Since CCS (Carbon Capture and Storage, a technology to remove CO₂ from combustion gases) costs approximately 90 to 150 Euros per ton of abated CO₂, it makes sense that as soon as the first companies start to use trees (which cost 5 Euros per ton of offset CO₂), other companies must follow to remain competitive. I expect that within 5 years the whole energy industry will use trees to offset its CO₂ emissions, and that governments will allow carbon credits from tree planting.
Science confirms that the Treesolution is the right way forward
Scientists are now starting to publish that a trillion trees should be planted, the number we recommended in 2008 in our book "The Treesolution". Dr. Thomas Crowther says that scientists 'underestimated the potential of trees to combat climate change on a huge scale'. He also says that 'if we plant a trillion of extra trees, this would surpass any other method for tackling climate change - from building wind turbines to vegetarian diets'. The Karlsruhe Institute of Technology and the University of Edinburgh say that the 'global climate targets will be missed if deforestation continues on this scale'. Back in 2016, two Dutch scientists reviewed the claims made in our book and published their own paper supporting the claims. Here are some other interesting publications from scientists: publication 1 // publication 2 // publication 3.
Organisations and the Treesolution
The New York Declaration on Forests (NYDF) is a political declaration by governments, companies, indigenous peoples and civil society to halve the loss of natural forests by 2020, and strive to end it by 2030. Its ten goals include restoring 350 million hectares of degraded landscapes and forestlands, and reducing emissions from deforestation and forest degradation. The World Business Council for Sustainable Development (WBCSD), with over 200 corporate members, has pledged its support for reaching the targets agreed in the Bonn Challenge. The Bonn Challenge is a global effort to bring 150 million hectares of deforested and degraded land into restoration by 2020 and 350 million hectares by 2030, meaning 20 million hectares of degraded land will need to be restored each year between 2020 and 2030.
Climate change is not an isolated problem
Most climate experts treat climate change as a stand-alone problem. For this reason, countries like the Netherlands believe in a 'CO₂ reduction' policy. However, I believe that climate change is an integrated problem, one of what I call 'The 7 Challenges for Humanity':
- Erosion
- Poverty
- Food shortage
- Climate change
- Unemployment
- Rural-urban migration
- Falling groundwater levels
These '7 Challenges for Humanity' are all inextricably linked. The way most climate experts want to combat climate change, except for those who consider using trees as a solution, focuses on solving only one of the 7 Challenges: climate change. However, their proposals do not solve the other 6. A windmill, CCS, or a CO₂ levy does not help solve erosion, poverty, food shortage, unemployment, rural-urban migration or declining groundwater levels. The Treesolution addresses all 7 Challenges that humanity is facing. Productive trees produce food and create employment, so they combat poverty and create wealth. Productive and ecologically interesting trees restore soil fertility and enhance biodiversity and groundwater levels. Once people find that they can create wealth in their own region or country, the need to migrate falls away. Aside from these advantages, like a cherry on the cake, trees remove CO₂ from the air. Trees are a money-making opportunity to remove CO₂ from the atmosphere. Instead of increasing taxes to combat climate change through a 'CO₂ reduction policy', we can reduce taxes while combatting climate change by planting trees on degraded land, and create wealth.
Low cost solution
The main part of the cost price of CO₂ offsetting with trees is the cost of land. Fertile land, where trees can be planted without irrigation, is expensive. There is also another problem with fertile land: land-grabbing. Countries or corporations need huge areas to neutralize their emissions; scientists estimate the required surface at approximately 2 billion hectares. If these countries or corporations use fertile land to plant trees, local populations will protest, as they rely on this land to grow their food and generate an income. They cannot, and will not, accept a new form of imperialism that colonizes their land in order to solve the climate problem of the wealthy few. With the Groasis Ecological Water Saving Technology we can use huge areas of currently infertile, dry, degraded and eroded land that is very cheap or even available for free, and often uninhabited. This ensures that the cost price of CO₂ offsets will drop to less than 5 Euros per ton, and that no land-grabbing takes place.
During the last 50 years, the world has used outsourcing as a main driver to create wealth. Production has moved to the places where costs were lowest. Why should we not solve climate change, and the other 6 aforementioned Challenges, through outsourcing? Why would we spend two thousand Euros per avoided ton of CO₂ to subsidize a car, if we can create wealth by outsourcing the planting of productive trees to areas where people are in urgent need of fertile soil, water, employment and food, and remove 400 tons of CO₂ instead of 1 ton by spending the same amount of money? If climate change really worries us, then we should remove as much CO₂ from the atmosphere as possible within the shortest period of time, at an optimised cost. The best way to do that is the Treesolution. I am so happy that it is finally starting to happen now!
* As demonstrated in more than 40 countries around the world; reports can be downloaded here
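As a back-of-the-envelope check on the cost comparison made in this post, the sketch below simply divides a fixed budget by the two per-tonne figures the author quotes (2,000 Euros for the Dutch electric-car subsidy route versus 5 Euros claimed for tree planting). The budget amount is arbitrary; the point is only to illustrate the claimed 400-to-1 ratio.

```python
# Back-of-the-envelope comparison of the two abatement costs quoted in the post.
# Figures are the author's claims, used here only to illustrate the arithmetic.
cost_ev_subsidy_per_tonne = 2000.0   # EUR per tonne of avoided CO2 (Dutch EV subsidy example)
cost_trees_per_tonne = 5.0           # EUR per tonne of offset CO2 (claimed for tree planting)

budget = 1_000_000.0  # an arbitrary illustrative budget in EUR

tonnes_via_subsidy = budget / cost_ev_subsidy_per_tonne
tonnes_via_trees = budget / cost_trees_per_tonne

print(f"Tonnes of CO2 addressed via subsidy route: {tonnes_via_subsidy:,.0f}")
print(f"Tonnes of CO2 addressed via tree planting: {tonnes_via_trees:,.0f}")
print(f"Ratio: {tonnes_via_trees / tonnes_via_subsidy:.0f}x")  # 400x, as claimed in the post
```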
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9480240941047668, "language": "en", "url": "https://zivahub.uct.ac.za/articles/dataset/Personal_Inflation_Calculator/6882941/1", "token_count": 162, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": -0.004486083984375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:70c52b3a-72e2-4cab-bdc4-5ae7a967ba51>" }
Personal Inflation Calculator
2018-08-09T08:58:28Z (GMT)
Inflation rates experienced by different groups of consumers within a country vary. This is because the prices of goods and services and the expenditure patterns of consumers differ. The published inflation rate is used for important decisions regarding the preservation of consumer purchasing power. These include the adjustment of social grants and minimum wages by government and the benchmarking of returns by investors when making investment decisions. It is thus vital that inflation is measured accurately to ensure the purchasing power of consumers is preserved. Current measures of inflation published by Stats SA are applicable to typical consumers and are not relevant to each individual. This resource supplements a study that seeks to provide a publicly available model that can be used by consumers to calculate their personal rate of inflation.
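The abstract does not spell out the model, but a personal inflation rate is typically computed as a weighted average of category price changes, with the individual's own expenditure shares as weights. The sketch below illustrates that general idea only; the categories, weights and price changes are invented and are not taken from Stats SA data or from the published calculator.

```python
# Minimal sketch of a personal inflation calculation: weight each category's
# price change by the individual's own expenditure share.
# Categories, weights and price changes below are invented for illustration.
personal_weights = {"food": 0.30, "transport": 0.20, "housing": 0.35, "other": 0.15}
price_change = {"food": 0.08, "transport": 0.05, "housing": 0.04, "other": 0.06}

assert abs(sum(personal_weights.values()) - 1.0) < 1e-9  # shares must sum to 1

personal_inflation = sum(personal_weights[c] * price_change[c] for c in personal_weights)
print(f"Personal inflation rate: {personal_inflation:.2%}")
```

Someone whose spending is concentrated in fast-rising categories would obtain a higher figure than the published headline rate, which is exactly the gap the calculator is meant to expose.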
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9481158256530762, "language": "en", "url": "http://www.innovativeindustry.net/the-100-billion-dollar-business-of-green-chemistry", "token_count": 273, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.004119873046875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:d8670d50-98b4-464f-868b-c4ee68987f64>" }
A recently published study about Green Chemistry anticipates dramatic growth rates for green chemicals during the coming decade. According to this report from Pike Research, Green Chemistry represents a market opportunity that will grow from 2.8 billion US$ in 2011 to approximately 100 billion US$ by 2020. “Green chemistry markets are currently nascent, with many technologies still at laboratory or pilot scale,” says Pike Research president Clint Wheelock, “and many production-scale green chemical plants are not expected to be running at capacity for several more years. However, most green chemical companies are targeting large, existing chemical markets, so adoption of these products is limited less by market development issues than by the ability to feed extant markets at required levels of cost and performance.” Despite these dramatic growth rates for green chemicals during the coming decade, these emerging markets represent just a drop in the bucket compared to the 4 trillion US$ global chemical industry. The total chemical industry is expected to expand to 5.3 trillion US$ in annual revenues. This Pike Research report examines the three major segments of the Green Chemical market: waste minimization in conventional synthetic chemical processes, green replacements for conventional chemical products, and the use of renewable feedstocks to produce chemicals and materials with smaller environmental footprints than those produced by current processes. Get more details about the Green Chemistry report from Pike Research
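To make the "dramatic growth rates" concrete, the implied compound annual growth rate can be computed directly from the figures quoted in the report (about 2.8 billion US$ in 2011 to roughly 100 billion US$ by 2020):

```python
# Implied compound annual growth rate (CAGR) from the Pike Research figures quoted above.
start_value = 2.8e9     # USD, 2011
end_value = 100e9       # USD, 2020 (approximate forecast)
years = 2020 - 2011     # 9 years

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR 2011-2020: {cagr:.1%}")  # roughly 49% per year
```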
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9652166366577148, "language": "en", "url": "http://www.pm-consultinggroup.com/investing-101-what-are-capital-markets/", "token_count": 617, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1552734375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:e87224ff-66ef-4af5-8cdb-ff5bce4b8962>" }
Let's assume you're the CEO of The Best Widgets, Inc. Your Chief Technology Officer (CTO) slaps a proposal on your desk that promises to create an innovative new type of widget that will be a huge market success. The downside is that the plan requires a substantial investment in new technology and employee training. The CTO's plan is certainly feasible, but it is going to take more money to implement than the company has to invest. So, how do you make this technological revolution possible? That is where capital markets come into play.
Capital Markets Defined
According to Shailesh Kumar at Value Stock Guide, the simplest definition of capital markets is that they are markets in which companies, or governments, sell securities to investors in order to raise money for things like building, business expansion, infrastructure upgrades, etc. The stock market is a capital market because companies sell shares in order to finance their business ventures. Governments typically sell bonds, which makes the bond market another type of capital market. It is important to note that although governments or municipalities are the first bond issuers most people think of, corporations can sell bonds as well.
Other Capital Markets
Stocks and bonds are the two biggest types of capital markets, but derivatives, such as options and futures, are capital markets as well. For example, if a farmer wants to lock in a good return on his crop yield, the farmer may sell futures contracts to investors. If the crop does better than expected, the investors make money, and the farmer still gets his guaranteed return, which ensures that he'll have money to plant more crops.
The Importance of Capital Markets
Without capital markets, the farmer would have no guarantee of what he'll get for his crops, and that would make his situation very precarious. Even more important, if capital markets didn't exist, then governments would have to find other ways to fund projects, which would likely result in outrageous tax hikes, or they just wouldn't be able to do as much for their citizens. By the same token, corporations rely on capital markets to raise money to keep their businesses growing and to continue offering new products to customers.
Why Companies Need Capital Markets
To use a fairly recent example, Facebook started publicly selling shares of its company because the service's popularity had grown beyond the company's assets. In order to keep introducing new features to users, improve its business processes and increase revenue, the company started selling stock to investors. Most major corporations go public (offer stock) at some point in order to keep growing.
On a final note, remember that capital markets are different from money markets. Money markets are typically debt markets in which a corporation borrows money from investors. In a capital market, such as the stock market, the company sells shares that give investors partial ownership of the company in exchange for money upfront. The investors may or may not see a good return on their investment, which makes some capital markets a bit riskier.
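As a hypothetical illustration of the farmer's futures example above, the sketch below compares the farmer's guaranteed revenue with what an unhedged sale would bring at different harvest prices. All the numbers are invented, and real futures contracts have additional mechanics (margining, standard contract sizes) that are not shown here.

```python
# Hypothetical illustration of the farmer's futures example above.
# The farmer locks in a price today; at harvest, the market (spot) price may differ.
locked_in_price = 5.00   # USD per bushel agreed in the contract (assumed)
bushels = 10_000         # size of the crop (assumed)

for spot_price in (4.00, 5.00, 6.50):  # possible prices at harvest
    revenue_hedged = locked_in_price * bushels            # guaranteed by the contract
    revenue_unhedged = spot_price * bushels                # what selling at market would bring
    investor_gain = (spot_price - locked_in_price) * bushels  # investor's side of the deal
    print(f"Spot {spot_price:.2f}: farmer gets {revenue_hedged:,.0f} either way, "
          f"unhedged would be {revenue_unhedged:,.0f}, investor gain/loss {investor_gain:+,.0f}")
```

The farmer trades away potential upside for certainty, which is the guaranteed return described in the example.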
{ "dump": "CC-MAIN-2020-29", "language_score": 0.959818422794342, "language": "en", "url": "https://brandtrends.com/the-kid-consumer/pocket-money-in-france/", "token_count": 1318, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.01129150390625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:5c20c9fc-c6d7-4d4c-9a05-126b2ee34204>" }
Pocket money is a widely observed sociocultural activity that occurs to some extent in many countries worldwide. The piggy bank has become an icon, a vessel in which children have stored coinage retrieved from anywhere they can find it for centuries! However, in our digital age, there are major signs that piggy banks – or other physical means of storing pocket money – are on the way out. Instead, pocket money is now going digital, and so are its related markets and purchases. In France, we can see major signs of this trend, as we can in other countries. We have also observed that children who are the heaviest digital users tend to receive the most pocket money of all those measured.

What pocket money do children get across the globe?
Pocket money varies hugely worldwide, both in prevalence and in value. In countries where more children receive pocket money, the value is also often higher, but not always. For example, Saudi Arabia takes the top spot, with 93% of children receiving pocket money at an average monthly value of €41.50. Turkey and South Africa follow, though interestingly, despite the widespread nature of pocket money in South Africa, the value is very low. By contrast, some countries, like Denmark, have a relatively low prevalence of pocket money giving, but the kids that do receive it tend to get high monthly values. The USA sits towards the top in both the percentage of kids receiving pocket money and the monthly value, and it's perhaps no surprise that the pocket money market in the USA is easily the biggest in the world, exceeding €10 billion.

How much pocket money do children in France receive?
32% of children aged 4–6 receive pocket money in France, but this rockets to 70% amongst 7–11 year-olds, meaning that on average 56% of kids overall receive pocket money, with an average monthly value of €15.30. European countries occupy the middle of the table, with the UK and Germany giving the highest values of pocket money to the most kids – 79% to 80% of kids receive €22 to €26 a month. Further down the scale we find France, whose parents give a moderate amount of pocket money compared to the other European countries measured. Fewer kids receive pocket money than in Italy, the UK, Spain and Germany, and the average value is also the lowest of all the European countries measured except Poland. The pocket money market in France is roughly €795 million each year, roughly half of that in the UK – around €1.5 billion. This is still considerable and not too far below the European average (a quick back-of-envelope check of these figures appears at the end of this article).

The Characteristics of Pocket Money in France
So what types of kids get what levels of pocket money? Interestingly, there is a very strong trend indicating that digital users get more pocket money than non-digital users. In fact, 70% of heavy digital users receive pocket money vs 40% of non-digital users. It is quite clear that the more digitally engaged a child is in France, the more pocket money they receive. There are two explanations for this: digital use may correlate with higher household wealth, where parents are more likely to give money to their kids, or digital users may receive more money in the form of in-app or in-game purchases and pay-to-play content. The other category that receives high pocket money in France is collectors, kids who enjoy collecting products from a small number of brands. This may result in more regular purchases to keep collections going, keep up with new releases, etc.
Parents may be keener to facilitate this when they are helping their child collect their favorite toys or products, and thus tend to give more pocket money. Statistics worldwide show that pocket money is digitizing: parents are expected to spend more online than ever before, paying for their children to access premium content, and France seems to follow this trend. A British study showed that app purchases and gaming content had risen sharply to occupy more of the pocket money market than ever before. Parents are even forgoing normal cash payments and topping up their children's bank accounts and other digital accounts instead. One poll suggested some 34% of parents had decided to give digital pocket money instead of cash.

What does it mean?
There is perhaps greater pressure on parents than ever to ensure their children have a quality experience online. Kids are engaging with streaming content more than ever before; for example, 30–50% of 7–11-year-olds watch Netflix at least once a day. With Disney+ and Amazon Prime Video, it's likely that many kids have more than one active subscription. Smartphone ownership also increases the likelihood of purchases. 50% of kids own smartphones by age 11, and it's likely that amongst these kids you can find digital users who require regular financial assistance from parents for in-app purchases and other premium content.

How Does Pocket Money Affect the Market?
As child experiences migrate online, so does the pocket money market. For marketing, advertising and licensing, this data is indicative of where the market is going – digital. Digital platforms are filtering down to children and their parents, who now, after centuries of cash gifts, are turning to digital pocket money. Digital pocket money is naturally spent online, and therefore the market for children's pocket money is focussed on digital content, brands born on the internet, and brands with a digital presence. There are many, ranging from streaming brands such as Netflix and Disney+ with the many franchises they host, to games like Fortnite. The French pocket money market is certainly an opportunity for retailers, especially in the digital space.

What are the key take-aways from our findings?
- French children are less likely to receive pocket money, and receive less of it, than children in most other European countries
- The highest receivers of pocket money in France are digital users
- Kids who spend more time online or engaged with digital platforms spend more pocket money. This may be associated with affluence, or it may simply be that parents lose track of money when they have to pay for subscriptions

For more information take a look at our Pocket Money France Report. In the report, you'll find the full statistical overview of pocket money in France. This will help you shape retail, marketing and advertising strategies that operate around the pocket money market. Pocket money creates a large market on its own, and it doesn't even include gift buying, Christmas, etc. Knowing what kids spend their regular money on is absolutely crucial to your business – don't miss out!
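As a rough sanity check on the figures quoted above (this is only an illustrative sketch, not data from the report), the roughly €795 million annual market, the €15.30 average monthly value, and the 56% receiving rate can be tied together with some back-of-envelope arithmetic. The implied child population below is derived from those numbers, not a published figure.

```python
# Back-of-envelope check of the French pocket money figures quoted above.
# ASSUMPTION: the implied 4-11 year-old population is inferred, not a number
# taken from the article or the report.

avg_monthly_value_eur = 15.30      # average monthly pocket money in France (from the article)
share_receiving = 0.56             # share of 4-11 year-olds receiving pocket money (from the article)
annual_market_eur = 795_000_000    # quoted size of the French pocket money market (from the article)

# Implied number of children actually receiving pocket money
implied_receivers = annual_market_eur / (avg_monthly_value_eur * 12)

# Implied total 4-11 year-old population consistent with the 56% receiving rate
implied_population = implied_receivers / share_receiving

print(f"Implied receivers:  {implied_receivers:,.0f} children")   # roughly 4.3 million
print(f"Implied population: {implied_population:,.0f} children")  # roughly 7.7 million
```

The implied figures land in a plausible range for France's child population, which suggests the three quoted numbers are broadly consistent with one another.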
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9465427994728088, "language": "en", "url": "https://businessjargons.com/economic-order-quantity-eoq.html", "token_count": 625, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1259765625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:2dfc9382-0f35-432c-a8a0-4130b148a33f>" }
Definition: Economic Order Quantity, popularly known as EOQ, is the standard order quantity of materials which a firm should order at a given point in time with the aim of minimizing annual inventory costs such as holding/carrying cost and ordering cost. It is a production scheduling model which was devised by Ford W. Harris in the year 1913 and has been updated with the passage of time.

Ordering cost refers to the fixed cost involved in the preparation and processing of the supplier's order irrespective of the lot size, such as the cost of inviting quotations, the cost of placing an order, inspection cost, documentation, transportation cost, etc. The total cost of holding, i.e. storing and maintaining a specific lot of inventory, is called holding cost or carrying cost. It embraces warehouse expenses like rent, utilities, salaries and property taxes, opportunity cost, and inventory costs associated with leakage, obsolescence and insurance. Along with that, the cost of funds invested in inventories is also covered by it.

Formula of EOQ
EOQ = √(2AO / C)
where:
- A = Annual requirement (demand) for raw material for the year
- O = Cost of placing one purchase order
- C = Cost of carrying one unit of average inventory, annually

The EOQ formula is used to decide the optimal order size, i.e. the number of units of product to be added to the inventory with each order at one time. It is a well-known fact that the ordering cost of inventory decreases as order volume increases, because of economies of scale, but as the size of the inventory increases, the carrying cost increases. At the EOQ, the combined ordering and carrying cost is at its minimum. It is also called the optimum lot size.

It is mainly used in the fields of production, operations, logistics and supply chain management to ascertain the volume (how much) and frequency (how often) of the orders needed to fulfil a specific level of demand. EOQ is helpful in determining the ideal order size so as to maintain a cost-effective supply chain. In this model, a fixed quantity is ordered whenever the inventory level falls to a certain reorder point. It helps in the calculation of the reorder point and reorder quantity, to facilitate immediate refilling of the inventory and avoid shortages.

Assumptions of EOQ
There are certain assumptions behind EOQ, which are discussed as under:
- Material cost per unit, ordering cost per order and holding cost per unit (on an annual basis) are known and fixed.
- Annual usage of raw materials or inputs, in units, is known.
- The quantity of material ordered is received instantly, meaning that there is no lead time.
- Each new batch of raw materials is delivered in full.
- The inventory decreases at a fixed rate until it becomes nil.

The EOQ calculation determines exactly when an order has to be placed and the quantity to be ordered, for uninterrupted production and minimum total cost of inventory.
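To make the formula concrete, here is a minimal sketch in Python of the EOQ calculation and the associated total annual cost. The demand, ordering cost and carrying cost figures are made-up illustrative inputs, not values from the article.

```python
import math

def economic_order_quantity(annual_demand: float, order_cost: float, carry_cost: float) -> float:
    """EOQ = sqrt(2AO / C), using the variable names defined above."""
    return math.sqrt(2 * annual_demand * order_cost / carry_cost)

def total_annual_inventory_cost(q: float, annual_demand: float, order_cost: float, carry_cost: float) -> float:
    """Ordering cost (A/Q orders per year) plus carrying cost on average inventory (Q/2)."""
    return (annual_demand / q) * order_cost + (q / 2) * carry_cost

# Illustrative inputs (assumed, not from the article)
A, O, C = 12_000, 50.0, 1.5   # units/year, cost per order, carrying cost per unit per year

q_star = economic_order_quantity(A, O, C)
print(f"EOQ ≈ {q_star:.0f} units per order")                                        # ≈ 894 units
print(f"Orders per year ≈ {A / q_star:.1f}")                                        # ≈ 13.4 orders
print(f"Minimum total cost ≈ {total_annual_inventory_cost(q_star, A, O, C):.0f}")   # ≈ 1342
```

At the EOQ the two cost components come out equal (roughly 671 each with these inputs), which is exactly why the combined cost is minimised at that order size.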
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9395579695701599, "language": "en", "url": "https://opzones.ca.gov/faqs/", "token_count": 2451, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": -0.1279296875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:042c4021-ea6a-4259-9961-a996816bb3e3>" }
Opportunity Zone 101: I want to learn…

What are Opportunity Zones? Opportunity Zones are census tracts that are defined by the Internal Revenue Service (IRS) as an "economically-distressed community where new investments, under certain conditions, may be eligible for preferential tax treatment." They were added to the tax code by the Tax Cuts and Jobs Act on December 22, 2017. You can learn more about the program on the federal website for the program located here.

How were the Opportunity Zones selected? Opportunity Zones were nominated by state governors and certified by the Secretary of the U.S. Treasury in April 2018. A nominated tract had to meet one of the criteria under the definition of "low-income community" in Internal Revenue Code Section 45D(e): (1) a poverty rate of at least 20%; (2) a median family income below 80% of the greater of the statewide or metropolitan area median family income if the community is located in a metropolitan area; or (3) a median family income below 80% of the median statewide family income if the community is located outside a metropolitan area.

Where are Opportunity Zones? Twelve percent of US Census tracts are Opportunity Zones (8,762 tracts). They are found in every state. You can explore a map of Opportunity Zones in the United States with the Community Development Financial Institutions Fund's map. You can explore a map of Opportunity Zones in California with the State Integrated OZ Map.

Where can I find a list of all the census tracts for Opportunity Zones? The Community Development Financial Institutions Fund's Opportunity Zones Resources page hosts a spreadsheet link to a list of designated Qualified Opportunity Zones (QOZs). This spreadsheet allows you to filter Opportunity Zones by state, county, census tract number, and census tract type.

What about properties adjacent to Opportunity Zones? There currently is no federal policy pertaining to properties that are adjacent to Opportunity Zones. However, communities throughout the United States are exploring ways to support projects in census tracts adjacent to Opportunity Zones.

Can new Opportunity Zones be created? No. As the federal program currently stands, new Opportunity Zones cannot be created.

Can the boundaries of an Opportunity Zone be adjusted? No. As the federal program currently stands, the boundaries of an Opportunity Zone cannot be adjusted.

Opportunity Funds: I want to Invest…

How do I invest in an Opportunity Zone? Investments in Opportunity Zones are made through Qualified Opportunity Funds. You must make your investment through a Qualified Opportunity Fund in order to qualify for any benefit.

What is a Qualified Opportunity Fund? A Qualified Opportunity Fund is any investment vehicle that files either a corporate or partnership federal income tax return and is organized for the specific purpose of investing in Opportunity Zone assets. To become a Qualified Opportunity Fund, an eligible investment vehicle must self-certify by filing IRS Form 8996 with its federal income tax return.

Where can I find a list of all the Qualified Opportunity Funds? There currently is no complete list of all Qualified Opportunity Funds. There is also no ability to provide confirmation that an investment is in a Qualified Opportunity Zone.

Can I invest in a Qualified Opportunity Fund if I am not within an Opportunity Zone? Yes. You can invest in a Qualified Opportunity Fund even if you do not work, live or own property within an Opportunity Zone.
What benefits can I receive from investing in a Qualified Opportunity Fund? There are primarily three benefits available:
- Capital Gains Tax Deferral: An investor that re-invests capital gains into a Qualified Opportunity Fund can defer the payment of federal taxes on the realized gains of the investment as late as December 31, 2026.
- Capital Gains Tax Reduction: An investor that holds their investment in a Qualified Opportunity Fund for at least five years can reduce their tax bill on the deferred capital gains by 10%. If the investor holds their investment for at least seven years, the reduction increases to 15%.
- Elimination of Taxes on Future Gains: An investor that holds their investment in a Qualified Opportunity Fund for at least ten years will not be required to pay federal capital gains taxes on any realized gains from the investment.
(A rough numerical illustration of these three benefits appears at the end of this FAQ.)

Can a Qualified Opportunity Fund make investments in multiple Opportunity Zones? Yes. If the Qualified Opportunity Fund holds at least 90% of its assets in Opportunity Zone property, the fund can invest in as many Qualified Opportunity Zones as it desires.

Is there a timeframe within which investments must be made in a Qualified Opportunity Fund? The IRS rulemaking process dictates the timeframe for investment. The investor must deploy their capital into an Opportunity Fund within six months of realizing the capital gains that are invested.

Communities: I want to Engage…

Is my community an Opportunity Zone? The State Integrated OZ Map displays current Opportunity Zone tracts in California. The upper right corner of the map includes an address search function. You can search directly for your address. You can also search by ZIP Code.

How do Opportunity Zones benefit my community? Opportunity Zones are a tool for economic development. They are a means to attract new capital to be deployed into a community. They allow investors to defer, reduce, or eliminate taxes on their unrealized capital gains. Opportunity Zones can be utilized to fund a wide array of community-supported projects, from renewable energy to affordable housing.

As a community member, how do I engage with Opportunity Funds in my area? You can utilize the State Integrated OZ Map to first identify eligible census tracts in your area for Opportunity Funds to invest in. We encourage you to then reach out to your city, county, or local elected representative to understand how your local government is engaging with Opportunity Zones in your community. Communities are also holding presentations, meetings, and informational sessions to share with investors what types of investments they are seeking.

Will Qualified Opportunity Funds be used to fund affordable housing projects in my community? Qualified Opportunity Funds can be used to support and fund affordable housing projects. The State of California also has many resources to support the development of affordable housing. The Department of Housing and Community Development makes loans and grants through more than 20 programs, many of which include affordable housing.

How do small businesses in Opportunity Zones in my community benefit? Opportunity Zones can be used to support small businesses by providing access to loans and venture capital that are needed to start or expand a small business. Opportunity Zones can also be used to develop innovation and small business hubs that support local businesses and entrepreneurs.

Are there safeguards built in to this program to prevent abuse?
Investors are required to substantially improve their investment in order to receive benefits from investing in an Opportunity Zone. The IRS will conduct tests to ensure that the investments maintain at least 90% of their assets in the Qualified Opportunity Zone(s). Local governments are also creating and exploring ways to ensure that investments in their jurisdictions are aligned with what their communities desire.

Is my local government engaging with Opportunity Zones? Local governments are approaching Opportunity Zones in different ways. Some local governments are using social media, presentations, and information sheets to help market their regions and share what types of investments their communities are seeking. We encourage you to reach out to your city, county, or local elected representative to understand how your local government is engaging with Opportunity Zones in your community.

State of California: I want to know its role…

Where are Opportunity Zones in California? The U.S. Department of the Treasury has certified 879 census tracts in California as Qualified Opportunity Zones. Opportunity Zones can be found in 57 counties throughout California. You can find Opportunity Zones in California with the State Integrated OZ Map.

Will the State of California be conforming its treatment of capital gains to align with the federal Opportunity Zone Program? Currently there is no state tax conformity with the federal Opportunity Zone program. The issue of conformity is currently under consideration by the Governor and the Legislature.

What is the current role of the State in Opportunity Zones? California is committed to ensuring that the Opportunity Zone program aligns with California ideals. State agencies are actively providing technical assistance to communities that are working to attract impactful projects to Opportunity Zones.

Is the state holding any meetings on Opportunity Zones? The State intends to hold convenings on Opportunity Zones. You can sign up to be notified about these meetings here. In addition, the Governor's Office of Business and Economic Development (GO-Biz) held a statewide webinar on November 12, 2019 on both Opportunity Zones and Promise Zones. The webinar recording highlighting these programs can be found here, under "Annual Meeting."

Can the State's list of Opportunity Zones be re-designated? No. The federal program does not currently allow for the re-designation of Opportunity Zones. Opportunity Zones were nominated and certified by the U.S. Secretary of the Treasury in 2018.

What other programs does the State offer that overlap with Opportunity Zones? There are many State and local grant programs, economic development tools, and tax credits that promote State ideals and can couple with the federal Opportunity Zone Program. Some of these programs are listed and detailed below:
- California Competes Tax Credit – an income tax credit available to businesses that want to come to California or stay and grow in California.
- Enhanced Infrastructure Financing Districts (EIFDs) – a local tax increment financing tool that can finance traditional public works, such as transportation, transit, and parks and libraries. EIFDs can also fund other activities such as affordable housing development, brownfield restoration, and land acquisition.
- Community Revitalization and Investment Authorities (CRIAs) – a local tax increment financing tool limited to areas that are former military bases, disadvantaged communities, and defined low-income census tracts.
CRIAs can finance a wide variety of projects and activities, including infrastructure, affordable housing, business assistance, and local grant programs.
- Industrial Development Bonds – tax-exempt financing available to manufacturers for the acquisition of manufacturing facilities and equipment.
- Electric Program Investment Charge (EPIC) – the state's premier energy research, development, and deployment program for the advancement of science and technology in the fields of energy efficiency, renewable energy, advanced electricity technologies, energy-related environmental protection, transmission and distribution as well as transportation technologies.
- California Sustainable Energy Entrepreneur Development Initiative (CalSEED) – a funding and professional development program for innovators and entrepreneurs working to bring early-stage clean energy concepts to market.
- Transformative Climate Communities (TCC) – a state cap-and-trade funded grant program that funds community-led development and infrastructure projects that achieve environmental, health, and economic benefits in California's most disadvantaged communities.
- Affordable Housing and Sustainable Communities (AHSC) – a state cap-and-trade funded grant program that funds land-use, housing, transportation, and land preservation projects to support infill and compact development that reduce greenhouse gas ("GHG") emissions.

On November 12, 2019 the Governor's Office of Business and Economic Development (GO-Biz) featured several of these programs in a statewide webinar. The webinar recording highlighting these programs can be found here, under "Annual Meeting."

Are the State's climate goals being considered? Yes. The State is actively promoting impactful projects that advance the State's plan for addressing climate change. This includes projects that advance the use of renewable sources of energy, reduce harmful GHG emissions, and incorporate climate adaptation and resiliency strategies.
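As a rough numerical illustration of the three investor benefits listed earlier in this FAQ (deferral, the 10%/15% basis step-up, and elimination of tax on later appreciation), here is a hedged sketch. The gain amount, tax rate and growth assumptions are hypothetical; the sketch ignores the exact timing rules of the final IRS regulations (including the December 31, 2026 recognition date) and is not tax advice.

```python
# Illustrative-only sketch of the Opportunity Zone benefits described above.
# ASSUMPTIONS: a flat 20% federal capital gains rate, a $1,000,000 reinvested gain,
# and 7% annual appreciation of the fund investment. None of these come from the FAQ,
# and real-world timing rules (e.g. the 2026 recognition date) are ignored.

original_gain = 1_000_000
cap_gains_rate = 0.20
annual_growth = 0.07

def deferred_tax_on_original_gain(years_held: int) -> float:
    """Tax eventually due on the reinvested gain, after any basis step-up."""
    if years_held >= 7:
        taxable = original_gain * (1 - 0.15)   # 15% basis step-up after 7 years
    elif years_held >= 5:
        taxable = original_gain * (1 - 0.10)   # 10% basis step-up after 5 years
    else:
        taxable = original_gain
    return taxable * cap_gains_rate

def tax_on_fund_appreciation(years_held: int) -> float:
    """Tax on the growth of the QOF investment itself; zero once held 10+ years."""
    appreciation = original_gain * ((1 + annual_growth) ** years_held - 1)
    return 0.0 if years_held >= 10 else appreciation * cap_gains_rate

for years in (4, 5, 7, 10):
    total = deferred_tax_on_original_gain(years) + tax_on_fund_appreciation(years)
    print(f"{years:>2} years held -> total tax ≈ ${total:,.0f}")
```

Note that even in the ten-year case, only the tax on the fund's own appreciation disappears; the deferred tax on the original gain (reduced by the basis step-up) is still owed.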
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9433526992797852, "language": "en", "url": "https://www.buildings.com/article-details/articleid/20952/title/tech-companies-embrace-renewables-for-data-centers", "token_count": 399, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1865234375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:caf10a03-3e28-4c9d-af6c-8a56e9c095b0>" }
As the size and complexity of data centers continue to grow, energy consumption grows right along with it. In response, tech companies like Google and Microsoft are implementing wide-scale renewable energy installations to ensure a reliable energy supply that doesn’t overtax the grid. Google is on track to reach 100% renewable energy for all of its global operations this year, including offices and data centers. The company is purchasing 2.6 GW of wind and solar energy from 20 renewable energy projects, enough to account for all of the electricity its operations consume. Google also plans to branch out with additional renewable energy sources this year. “Electricity costs are one of the largest components of operating expenses at our data centers,” says Urs Holzle, Senior Vice President of Technical Infrastructure for Google. “Having a long-term stable cost of renewable power provides protection against price swings in energy.” Microsoft derives roughly 44% of its electricity consumption from a mix of wind, solar and hydropower and aims to achieve 50% by 2018 and 60% by the early 2020s. It recently announced a 237 MW data center in Cheyenne, WY, that will be powered entirely by wind energy. The wind installation also allowed the local utility, Black Hills Energy, to avoid increasing rates for taxpayers to account for the additional load. “Traditionally, when presented with a constraint on the system relating to reliability, load growth or intermittent generation, a utility had one option – building new infrastructure,” says Brad Smith, President and Chief Legal Officer of Microsoft. “Microsoft approached Black Hills Energy with a new solution to deliver reliability without additional costs for ratepayers. A new tariff available to all eligible customers lets the utility use the data center’s backup generators as a secondary resource for the entire grid. The natural gas turbines offer a more efficient solution than traditional diesel backup generators and ensure that the utility avoids building a new power plant.”
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9387945532798767, "language": "en", "url": "https://www.fashionweekly.com.au/lifestyle/why-having-good-health-insurance-can-be-beneficial.html", "token_count": 962, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.07763671875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:0d5f8f13-75cf-48b3-8fcb-82a06fa7dd90>" }
The world is full of risks and uncertainties in everyday life, involving assets, businesses, families, and individuals. Sometimes the inevitable happens, leaving you exposed to various types of risks, including loss of property, life, health, and personal assets. For many decades, people have purchased insurance policies to protect against such losses. Today, you can find insurance policies for medical care, hospitalisation, vehicles, homes, critical illnesses, and disability. While most insurance types are essential to provide overall protection, the focus in this article is on critical illness.

Most people have medical coverage for wellness checkups and visits to a general physician. What they often do not have is a critical illness insurance policy that protects them against financial distress in the event of a significant illness, such as cancer or heart disease. This insurance differs from a hospitalisation insurance plan and a medical benefits plan. Because cancer continues to be on the rise internationally, the discussion here is about these plans and the benefits of having a good insurance policy.

First, let's look at important information about people with a cancer diagnosis and the cancer types. A cancer society reports that 35 people receive diagnoses every day. It affects babies, women, men, adolescents, and children. The most common type of cancer in women is breast cancer, and in men it is lung cancer. About 29.1 per cent of the women diagnosed had breast cancer, and approximately 17.2 per cent of the men had lung cancer. Of all the cancers, breast cancer is the leading type. Men have a higher rate of lung cancer compared to women, for whom the figure is around 13.4 per cent. The five most common types of cancer found in women, ranked one to five, are breast, lung, colorectal, liver & intrahepatic bile ducts, and pancreas. In men, with the same ranking, the five most common are lung, colorectal, liver & intrahepatic bile ducts, prostate, and stomach. A reputed cancer society has plenty of information and resources to learn about cancer and how to reduce your risk.

List of Diseases under a Critical Illness Health Insurance Plan
- Severe Heart Attack
- Sudden Hearing Loss
- Loss of Speech
- Liver Disease
- Kidney Disease
- Lung Disease

The list above covers a few of the diseases included under a critical illness insurance plan. Cancer insurance in Singapore differs from a hospitalisation plan. Cancer insurance is a type of significant illness insurance plan that protects family members and primary income earners against financial burden. Hospitalisation and surgical plans only cover hospital bills; you don't receive any benefits for time away from a job or funds to support your family.

About Cancer Insurance
A cancer insurance policy is a type of critical illness coverage that provides you with financial protection if you happen to receive a cancer diagnosis. Some insurance companies will pay you a lump sum of money to cover your household expenses and basic needs while you recover.

Types of Cancer Plans - Early Stage
The total sum insured for a major illness, as defined by the insurer, determines the early-stage plan, while the LIA (Life Insurance Association) defines the severe stage of illness for a standard plan.

Before purchasing a policy, consider the following factors:
- Coverage you need during the treatment and recovery stage.
- Most coverages have a minimum term of five years or longer. This estimate reflects the time an average person needs to recuperate from cancer, return to work and make the necessary adjustments to a normal lifestyle.
- Consider the premium payment term of the cancer plan. You will need to make a projection of the cost of the plans and select one that meets your needs financially.
- Insurers want to know the medical history of cancer in your family to assess the likelihood of the disease recurring.
- Review cancer plans for waiting and survival periods. The benefits only take effect after a specified period has passed since the diagnosis. A plan with a survival period pays out only if you survive seven, 14, or 30 days after the diagnosis. Read the terms carefully before making a final decision.

Seven Benefits of Good Critical Illness Insurance Plans
- Prevents financial stress if you lose your income.
- Reasonable payout to cover financial needs not covered under a hospitalisation plan.
- Premium refunds.
- Protection against income losses.
- Support for immediate family members.
- Maturity benefits.
- Free health checkups.

Insurance will always have significance in everyday life for protecting against the loss of property and health. Healthcare insurance is a coverage type comprising a variety of plans, including hospitalisation, critical illness, and medical plans. Do some research before selecting a plan that meets the needs of you and your family.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.956805944442749, "language": "en", "url": "https://www.investopedia.com/terms/l/lawofdiminishingutility.asp", "token_count": 560, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.16796875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:b4c51f09-8e6c-4798-8699-125a83d46b59>" }
What Is Diminishing Marginal Utility?
The Law of Diminishing Marginal Utility states that, all else equal, as consumption increases, the marginal utility derived from each additional unit declines. Marginal utility is the change in utility as an additional unit is consumed. Utility is an economic term used to represent satisfaction or happiness, and marginal utility is the incremental increase in utility that results from the consumption of one additional unit.

Understanding the Law
Marginal utility may decline into negative utility, as it may become entirely unfavorable to consume another unit of any product. Therefore, the first unit of consumption for any product typically yields the highest utility, with every unit of consumption that follows holding less and less utility. Consumers handle the law of diminishing marginal utility by consuming numerous quantities of numerous goods.

The Law of Diminishing Marginal Utility directly relates to the concept of diminishing prices. Because the utility of a product decreases as its consumption increases, consumers are willing to pay smaller dollar amounts for more of the product. For example, assume an individual pays $100 for a vacuum cleaner. Because he places little value on a second vacuum cleaner, the same individual is willing to pay only $20 for a second one. The law of diminishing marginal utility directly impacts a company's pricing because the price charged for an item must correspond to the consumer's marginal utility and willingness to consume or utilize the good.

Example of Diminishing Utility
An individual can purchase a slice of pizza for $2; she is quite hungry and decides to buy five slices. After doing so, the individual consumes the first slice of pizza and gains a certain positive utility from eating the food. Because the individual was hungry and this is the first food she consumed, the first slice of pizza has a high benefit. Upon consuming the second slice of pizza, the individual's appetite is becoming satisfied. She wasn't as hungry as before, so the second slice of pizza had a smaller benefit and enjoyment than the first. The third slice, as before, holds even less utility, as the individual is no longer hungry. The fourth slice of pizza has a much-diminished marginal utility as well, as it is difficult to consume: the individual experiences discomfort from being full. Finally, the fifth slice of pizza cannot even be consumed. The individual is so full from the first four slices that consuming the last slice results in negative utility. The five slices of pizza demonstrate the decreasing utility that is experienced upon the consumption of any good.

In a business application, a company may benefit from having three accountants on its staff. However, if there is no need for another accountant, hiring a fourth results in diminished utility, as little benefit is gained from the new hire.
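To put numbers on the pizza example, here is a small illustrative sketch; the utility values assigned to each slice are invented for demonstration and are not part of the original article.

```python
# Hypothetical cumulative utility (in arbitrary "satisfaction units") after eating 0..5 slices.
# The figures are invented to illustrate the law, not taken from the article.
total_utility = [0, 50, 85, 105, 110, 105]

for n in range(1, len(total_utility)):
    marginal = total_utility[n] - total_utility[n - 1]
    print(f"Slice {n}: marginal utility = {marginal:+d}")

# Expected output:
# Slice 1: +50   (very hungry, highest benefit)
# Slice 2: +35
# Slice 3: +20
# Slice 4: +5
# Slice 5: -5    (negative utility: eating it makes the consumer worse off)
```

The declining (and finally negative) marginal values trace exactly the pattern described in the pizza example above.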
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9568362832069397, "language": "en", "url": "https://www.investopedia.com/terms/t/treasurybond.asp", "token_count": 935, "fin_int_score": 5, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0194091796875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:e7403277-60a6-47bc-9115-ea5085f5b8fc>" }
What Is a Treasury Bond (T-Bond)? Treasury bonds (T-bonds) are government debt securities issued by the federal government that have maturities greater than 10 years. T-bonds earn periodic interest until maturity, at which point the owner is also paid a par amount equal to the principal. Treasury bonds are part of the larger category of U.S. sovereign debt known collectively as treasuries, which are typically regarded as virtually risk-free since they are backed by the U.S. government's ability to tax. - Treasury bonds (T-bonds) are fixed-rate U.S. government debt securities with a maturity range between 10 and 30 years. - T-bonds pay semiannual interest payments until maturity, at which point the face value of the bond is paid to the owner. - Along with Treasury bills, Treasury notes, and Treasury Inflation-Protected Securities (TIPS), Treasury bonds are one of four virtually risk-free government-issued securities. Treasury bonds (T-bonds) are one of four types of debt issued by the U.S. Department of the Treasury to finance the government’s spending activities. The four types of debt are Treasury bills, Treasury notes, Treasury bonds and Treasury Inflation-Protected Securities (TIPS). The securities vary by maturity and coupon payments. All of them are considered benchmarks to their comparable fixed-income categories since they are virtually risk-free, backed by the U.S. government, which can raise taxes and increase revenue to ensure full payments. These investments are also considered benchmarks in their respective fixed-income categories because they offer a base risk-free rate of investment with the categories' lowest return. T-bonds have long durations, issued with maturities of between 10 and 30 years. As is true for other government bonds, Treasury bonds make interest payments semiannually, and the income received is only taxed at the federal level. T-bonds are known in the market as primarily risk-free; they are issued by the U.S. government with very little risk of default. Treasury bonds are issued at monthly online auctions held directly by the U.S. Treasury. A bond's price and its yield are determined during the auction. After that, T-bonds are traded actively in the secondary market and can be purchased through a bank or broker. Individual investors often use T-bonds to keep a portion of their retirement savings risk-free, to provide a steady income in retirement, or to set aside savings for a child's education or other major expenses. Investors must hold their T-bonds for a minimum of 45 days before they can be sold on the secondary market. Treasury Bond Maturity Ranges Treasury bonds are issued with maturities that can range from 10 to 30 years. They are issued with a minimum denomination of $1,000, and coupon payments on the bonds are paid semiannually. The bonds are initially sold through auction in which the maximum purchase amount is $5 million if the bid is noncompetitive or 35% of the offering if the bid is competitive. A competitive bid states the rate the bidder is willing to accept; it is accepted depending on how it compares with the set rate of the bond. A noncompetitive bid ensures the bidder gets the bond, but he has to accept the set rate. After the auction, the bonds can be sold in the secondary market. The Treasury Bond Secondary Market There is an active secondary market for Treasury bonds, making the investments highly liquid. The secondary market also makes the price of Treasury bonds fluctuate considerably on the trading market. 
As such, current auction and yield rates of Treasury bonds dictate their pricing levels on the secondary market. Similar to other types of bonds, Treasury bonds on the secondary market see prices go down when auction rates increase, as the value of the bond’s future cash flows is discounted at the higher rate. Inversely, when prices increase, auction rate yields decrease. Treasury Bond Yields In the fixed-income market, Treasury bond yields help to form the yield curve, which includes the full range of investments offered by the U.S. government. The yield curve diagrams yields by maturity and is most often upward sloping, with lower maturities offering lower rates than longer-dated maturities. However, when longer maturities are in high demand, the yield curve can be inverted, which shows longer maturities with rates lower than shorter-term maturities.
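The inverse relationship between yields and secondary-market prices described above can be illustrated with a simple present-value calculation. The 30-year maturity, 3% coupon and the yields used below are assumed for illustration only; they are not market quotes.

```python
def bond_price(face: float, annual_coupon_rate: float, years: int, annual_yield: float) -> float:
    """Price a bond with semiannual coupons by discounting each cash flow."""
    periods = years * 2
    coupon = face * annual_coupon_rate / 2
    y = annual_yield / 2
    pv_coupons = sum(coupon / (1 + y) ** t for t in range(1, periods + 1))
    pv_face = face / (1 + y) ** periods
    return pv_coupons + pv_face

# A hypothetical 30-year T-bond: $1,000 face value, 3% coupon paid semiannually.
for market_yield in (0.02, 0.03, 0.04):
    print(f"yield {market_yield:.0%}: price ≈ ${bond_price(1000, 0.03, 30, market_yield):,.2f}")

# yield 2%: price ≈ $1,225  -> yields fall, price rises above par
# yield 3%: price ≈ $1,000  -> price equals par when the yield matches the coupon
# yield 4%: price ≈ $827    -> yields rise, price falls below par
```

Because every future cash flow is discounted at the prevailing yield, a higher yield mechanically produces a lower price, which is the relationship the text describes.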
{ "dump": "CC-MAIN-2020-29", "language_score": 0.972207248210907, "language": "en", "url": "http://www.butleranalytics.com/art-making-decisions-uncertainty/", "token_count": 732, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.060791015625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:4120a7f3-2f69-4c70-a1d1-5f9e72e8d39d>" }
On a daily basis we all find ourselves making decisions under uncertainty, and this is particularly true in businesses, where complexity can be overwhelming. As human beings we need to simplify, and because decisions have to be made, we make them after a process of simplification. At one extreme we might use complex machine learning methods such as Bayesian networks to try to establish causal links. At the other, we might just look at a graph and reach some form of conclusion. Either way, randomness – the meaningless noise in our environment – is easily interpreted as signal, as something meaningful.

A simple example will help. Suppose you work for a firm whose monthly revenues have largely been flat. They do of course vary from month to month, but not by more than 10%, say. It is quite feasible that six months of increasing revenue will show up, and believing that the business has got a second wind, managers decide to employ more people and invest in additional production capacity. The problem is that six consecutive months of increased revenue will happen on a random basis every five years or so. It's easy to work out. Looking back over the trading history, managers see that revenue for one month is higher than the previous month 50% of the time – on average. Meaning that it is also lower 50% of the time. It's a bit like flipping coins. Getting six tails in six flips will happen about once in every 64 attempts. Or for our business, it will see six consecutive months of higher revenue every 64 months – on average. The opposite is also true – six consecutive months of decreasing sales every 64 months. Obviously this is very simplified, but it illustrates an important point: seemingly unusual things can happen, purely by accident, with no inherent meaning at all. (A small simulation at the end of this piece makes the arithmetic explicit.)

Anyone who has run a business will know that random variations can imply all sorts of things – most of which are meaningless. Statisticians use something called a p-value to try to weed out these random variations, but it isn't all that helpful. A p-value of 5% is often used as a standard – meaning that if an event is less than 5% likely to occur by accident then we should interpret the data as meaningful. This wouldn't have helped in our example. And so business managers have to apply judgement. All the charts and graphs in the world cannot eliminate uncertainty. If a firm has just released a new product and hired a new Sales Director, then maybe a steady rise in revenue is more justifiably interpreted as meaningful. The hard fact of the matter, though, is that you will never know with certainty.

While gut feeling is currently very unpopular as a means of honing decisions, there is increasing evidence that it is often more powerful than we might imagine. Gerd Gigerenzer and others have conducted meaningful studies into the power of gut instinct, and find that it often outperforms rigorous analysis – although "rigorous" is never quite the right word, since all analysis is subject to uncertainty. There is a growing realization that "evidence based decisioning" is often flawed, simply because people are blind to the uncertainties and ignore the human factor in decision making. It seems the most powerful solution is a combination of formal analysis and human judgement. This does not imply that these two approaches will necessarily agree, but that a middle position can be found that is stronger than each approach on its own. There is much literature dealing with these issues, and some very readable and entertaining books.
These include The Flaw of Averages by Savage, The Signal and the Noise by Silver, and Fooled by Randomness by Taleb.
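The "every 64 months" figure above can be checked both analytically (0.5^6 = 1/64 for any particular six-month window) and by simulation. The sketch below is illustrative only; it assumes each month's move up or down is an independent fair coin flip, exactly as the article's simplification does.

```python
import random

random.seed(1)

def longest_up_streak(months: int) -> int:
    """Simulate `months` of up/down moves (fair coin) and return the longest run of 'up' months."""
    longest = current = 0
    for _ in range(months):
        if random.random() < 0.5:          # this month's revenue higher than the previous one
            current += 1
            longest = max(longest, current)
        else:
            current = 0
    return longest

# Probability that any *particular* six-month window is all "up":
print(f"P(six straight up months in a given window) = {0.5 ** 6:.4f}  (= 1/64)")

# How often does a streak of >= 6 up months appear somewhere in a 5-year (60-month) history?
trials = 100_000
hits = sum(longest_up_streak(60) >= 6 for _ in range(trials))
print(f"Share of simulated 5-year histories containing such a streak: {hits / trials:.1%}")
```

When run, the simulated share typically lands in the rough neighbourhood of one in three, frequent enough that such streaks are routinely mistaken for meaningful trends, which is precisely the article's point.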
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9576163291931152, "language": "en", "url": "https://debitoor.com/dictionary/administration", "token_count": 702, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0262451171875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:ec74b11f-7969-4b8b-8266-9aebb89ede5b>" }
Administration – What is administration?
Administration is the process of selling off assets to recover a company's debt. Avoid going into administration by managing your cash flow more effectively. Find out how to manage cash flow with Debitoor or try it for free for seven days.

Why does a business go into administration?
Businesses may go into administration if they have serious problems with their cash flow and cannot pay their creditors the money they owe them. Company directors might start the administration process themselves, but creditors might also force a company into administration via a court order.

The process of a company going into administration
When a company goes into administration, there is a specific process that needs to be followed:

1. Appointing an administrator
The first step in the process of going into administration is to appoint an administrator. The administrator that you appoint must be a professional 'insolvency practitioner', and your company must cover their fees. While you're in administration, the administrator has control over your business and your assets. This means that not only can they sell any of your company's assets to pay off your debts, they can also renegotiate or cancel contracts, and they can make employees redundant.

2. Informing relevant parties
Once you've appointed an administrator, they'll write to your creditors and Companies House. They'll also publicise their appointment in The Gazette, which is an official journal of statutory notices.

3. Creating a plan
The goal of the administrator is to repay the company's creditors as quickly and fully as possible by leveraging (or selling) the company's assets. They have eight weeks to come up with a statement that explains how they plan to achieve this. Once they have a full plan, a copy needs to be sent to creditors, employees, and Companies House, inviting them all to support or amend the plans at a meeting. The most common plans involve:
- Negotiating a Company Voluntary Arrangement (CVA)
- Selling your business to another company as a 'going concern'
- Selling your assets, paying your creditors, and then closing your company
- Closing your company if you have no assets to sell

4. Ending administration
The process of going into administration ends when either:
- The administrator decides that the goals of the administration have been achieved.
- The administrator's contract comes to an end. This automatically happens after a year unless the contract is renewed.
Once your administration period has ended, you won't have protection from any legal action your creditors might take.

Differences between going into administration and liquidation
Administration and liquidation are both processes that happen when businesses are struggling to repay their debts; however, there are some important differences between them. Firstly, businesses go through administration when they have issues with cash flow but might still be viable in the long run. On the other hand, liquidation occurs when a business is no longer viable. In other words, while administration might result in the closure of a company, liquidation almost always ends with a business closing. Secondly, the purpose of administration is to avoid the closure of a business, whereas liquidation is the process of selling assets so that a company can close. Administration therefore deals with avoiding closure, whereas liquidation involves preparing for closure.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9391786456108093, "language": "en", "url": "https://enrichbroking.in/international-trade-exchange-rate-trade-deficit", "token_count": 650, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.06103515625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:9321ffe7-edb5-448b-81c4-552e30a20eef>" }
Flows from Foreign Direct Investment (FDI) and Foreign Portfolio Investment (FPI)
Foreign capital flows into a country can be either in active form, called Foreign Direct Investment (FDI), or in passive form, called Foreign Portfolio Investment (FPI). In the case of FDI, investing entities participate in decision making and drive the businesses. Portfolio investment, on the other hand, as the name points out, is investment in markets – equity or bonds – by Foreign Portfolio Investors (FPIs) without any management participation. There are upper limits on the individual and combined holdings by FPIs in the paid-up capital of Indian companies.

FDI is welcomed by all developing economies and has numerous benefits in addition to bringing capital into the country:
- New products and services
- New managerial skills

While FDI is long term in nature and steady money, FPI money is regarded as hot money, as these investors can pull their money out at any time, which could create considerable risk for the economy.

International Trade, Exchange Rate and Trade Deficit
International trade refers to the total business that a country does with all other countries in the world. A country's balance of payments is the report revealing the transactions of a country with the rest of the world. The balance of payments report is generally divided into two accounts, namely the current account and the capital account. The current account records all transactions on revenue account – imports and exports of goods and services – while the capital account captures all capital flows like FDI, FII, loans, and grants, etc.

When imports are more than exports, the country will have a current account deficit, and if exports are more than imports then it will have a current account surplus. Likewise, the capital account will be in surplus if inflows are more than outflows, and in deficit if outflows are more than inflows. The surplus and/or deficit on the current and capital accounts put together makes up the balance of payments number for a country (a small worked example appears at the end of this section). If a country is running a persistent deficit on the current account, it would need a surplus on the capital account to fund it, or it would have to run down its foreign currency reserves. In either circumstance, the country runs the risk of losing the confidence of market participants, as the currency of the country would lose value very fast.

Currencies are traded in the world markets like commodities. The exchange rate refers to the value of one unit of a currency with respect to another currency or currencies. For instance, if the Indian Rupee is quoted against the dollar as $/Rs. 65, it means one dollar is priced at Rs. 65. Currencies can gain or lose value vis-à-vis other currencies based on the relative strength of the countries' economies.

The unemployment rate indicates the share of the country's population that is able and willing to work but unemployed, in percentage terms. During a slowdown in the economy the unemployment rate rises, and during an expansion phase the unemployment rate drops as more jobs are created. Higher employment means more income, which improves the ability of people to spend, which in turn implies potential growth in the economy. The reverse would be true for an economy going through tough times and high unemployment rates.
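Returning to the balance of payments accounting described above, a tiny worked example may help; all figures below are invented for illustration and refer to no particular country.

```python
# Illustrative-only balance of payments arithmetic (all figures invented, in $ billions).
exports, imports = 310.0, 390.0                 # goods and services
fdi_in, fpi_in, loans_in = 45.0, 20.0, 10.0     # capital inflows
capital_outflows = 25.0

current_account = exports - imports                                  # -80.0 -> deficit
capital_account = (fdi_in + fpi_in + loans_in) - capital_outflows    # +50.0 -> surplus
overall_balance = current_account + capital_account                  # -30.0

print(f"Current account: {current_account:+.1f} bn")
print(f"Capital account: {capital_account:+.1f} bn")
print(f"Overall balance: {overall_balance:+.1f} bn "
      "(a shortfall that must be met by drawing down foreign currency reserves)")
```

In this made-up case the capital account surplus only partly funds the current account deficit, so reserves fall, which is exactly the situation the text flags as a risk to market confidence.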
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9629462361335754, "language": "en", "url": "https://longrunplan.com/school/how-to-value-a-stock-like-warren-buffet-the-discount-cash-flow-valuation/", "token_count": 873, "fin_int_score": 5, "fin_score_model": "en_fin_v0.1", "risk_score": 0.010009765625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:e5d16e87-3a47-47bf-9df7-0e535f1da7e8>" }
The valuation method that is used by most value investors, analysts and fund managers to value assets is the Discounted Cash Flow (DCF) method. Unlike the multiples method, which is based on comparison with other companies, DCF valuation is based on the expected financial results of the company itself; thus it is the most reliable valuation method and gives the best estimate of the real value of the company.

Despite the fuzzy name, the main idea behind the DCF method is quite simple: each company is worth the present value of all the cash flows it is expected to generate throughout its years of operation ('present value' means how much these cash flows are worth today; see further explanations below). To be complete, the cash and equivalents on hand less the company's total debt need to be added to the calculation. If the current stock price is lower than this fair value then the stock is considered cheap (undervalued), and if it trades above this price it is expensive (overvalued).

For example, if company ABC has $10 million of cash and equivalents, $25 million of debt, the present value of its future cash flows is $100 million, and the company has 5 million shares outstanding, then its fair price is $17 ( [10M – 25M + 100M] / 5M = $17 ). If ABC's stock currently trades at $10 then it's cheap, and if it trades at $25 then it's too expensive to buy.

Here's a schematic representation of this model:

What are Free Cash Flows and why use them and not earnings to value companies?
Many investors, including analysts, attribute too much importance to the company's profits, especially net profit (or Earnings Per Share). On one hand, net profit is indeed a measure of how much money the company earns after deducting all expenses from its sales. The problem is that these profits are not always translated into real cash flow. Remember that a company sells products, deducts its recurring expenses and declares a net profit. But customers usually pay on credit for the items they buy, so the company has not yet collected all the money from its sales. This means that profits are recorded in the income statement but no cash flow was actually generated from these sales. And what if the customer changes their mind and returns the item for a refund?

In addition, not all expenses involve an actual outflow of cash, and some cash outflows never appear in the income statement: a company also needs to buy additional inventory and sometimes new property, plant or equipment. All of these things cost money but are not included in the income statement, and thus are not reflected in net earnings. Therefore, a different measurement has to be used to evaluate how much cash the business actually generates. Free Cash Flow (FCF) measures it very accurately.

The FCF calculation starts with net income, adds back all the expenses that don't involve cash flows, such as depreciation or deferred taxes that haven't been paid yet, and deducts all the cash expenses that weren't included in the income statement, like buying additional inventory or fixed assets (expenses which are called Capital Expenditure, or CapEx). The result is the actual net cash flow that was generated during the year. This is the amount of cash the company can use to pay its liabilities, invest in the business or pay dividends to its shareholders, and thus this is the parameter that determines the fair worth of the business. It may sound simple, but estimating future FCFs and deducing the fair value from them is a more complex task that requires deeper knowledge of company analysis.
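A minimal sketch of the mechanics follows, using the hypothetical ABC balance-sheet figures above plus an assumed set of FCF projections, discount rate and terminal growth rate; those assumptions do not come from the article, and a real analysis would have to justify each one.

```python
# Hedged, illustrative DCF sketch. Projections, discount rate and terminal growth
# are assumptions for demonstration; only the cash ($10M), debt ($25M) and share
# count (5M) echo the ABC example in the text.

def dcf_equity_value_per_share(fcfs, discount_rate, terminal_growth, cash, debt, shares):
    # Present value of each explicitly projected free cash flow
    pv_fcfs = sum(fcf / (1 + discount_rate) ** t for t, fcf in enumerate(fcfs, start=1))
    # Gordon-growth terminal value for the years beyond the projection window
    terminal = fcfs[-1] * (1 + terminal_growth) / (discount_rate - terminal_growth)
    pv_terminal = terminal / (1 + discount_rate) ** len(fcfs)
    enterprise_value = pv_fcfs + pv_terminal
    # Add cash, subtract debt, divide by shares outstanding, as described above
    return (enterprise_value + cash - debt) / shares

projected_fcfs = [8.0, 8.6, 9.2, 9.8, 10.4]     # $ millions, assumed 5-year projection
fair_value = dcf_equity_value_per_share(
    projected_fcfs, discount_rate=0.10, terminal_growth=0.02,
    cash=10.0, debt=25.0, shares=5.0,
)
print(f"Estimated fair value ≈ ${fair_value:.2f} per share")
```

If the market price sits well below the resulting figure, the stock would look undervalued in the sense described above; note that the output is entirely a function of the assumed inputs, which is why the estimation step is the hard part.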
However, after you have learned the process and gained enough experience in doing it, you have the power to judge how a stock is currently priced. Warren Buffett and many other value investors use this method to choose their stocks, and this is how they beat the market in the long run. I do the same in the LongRunPlan portfolio: I search for stocks that trade deep below their fair price, buy them and wait until the valuation anomaly is closed. Then I sell and replace them with new attractive stocks. By using this process I have managed to beat the market big-time since 2001. Read on to the next chapter:
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9646444916725159, "language": "en", "url": "https://www.lewrockwell.com/2008/08/john-m-peters/lakefront-fixer-upper/", "token_count": 1517, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.396484375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:c5494a56-4e7b-4969-99cb-33d0547acfe4>" }
That is how a real estate listing for the nation’s 26th state might read today. Michigan, once an industrial and economic giant, is among the fastest failing states in the country. A look at how this happened is revealing and instructive. Michigan’s economic destiny has traditionally been tied to the City of Detroit. Although it is not the state capital, Detroit and automobiles are what most people associate with Michigan. Certainly that was true of the thousands who migrated to Michigan during the first 50 years of the 20th century to work in the automobile plants and related industries from which fortunes and empires were built. From 1920 to 1940 that migration made Detroit the nation’s 4th largest city. By 1950, Detroit’s population had peaked at nearly 2 million. However, since the 1970’s Detroit has seen its population plummet by almost one million. Studies predict that the trend will continue with Detroit’s population falling to 700,000 by 2035. Some of that population drop can be explained by migration into Detroit’s suburbs which are home to a population in excess of four million people. Yet, the general trend for the state has been net population loss since 1965. Similar trends are present for Michigan businesses and jobs. Michigan has experienced six straight years of job loss, losing an estimated 370,000 jobs through 2008. Michigan ranks first among all states in the percentage of unemployed and one of every eight state residents is currently receiving food stamps. Detroit’s foreclosure rate is almost five times the national average with one in every eighty homes in foreclosure, making the city first in foreclosure rates. As of 2005 Michigan ranked 47th in personal income growth. Numbers like this have not been seen in Michigan since the Great Depression. Why are people and businesses deserting Michigan in droves? The answers are not complex. Neither are the solutions. FORECAST: COLD AND CLOUDY? Some argue that Michigan is a victim of a younger generation shift to Sun Belt locations. Clearly, Michigan is not a state for those who despise winter. Heck, even our state bird — the robin — flies south to avoid it. Yet, Chicago, with much tougher winters, continues to thrive and attract large numbers of college graduates just across Lake Michigan. STATE OF THE UNIONS It is economic climate more than physical climate which is driving people away from Michigan. Michigan is a state mired in a union past. UAW, AFSCME, AFL-CIO and the Teamsters continue to dominate state politics and plague the private sector despite their growing irrelevance and impediment to business development. Those who benefited from union sway in decades past are now in denial about the changing environment in which they and their children must survive. LIVE BY THE BUMPER, DIE BY THE BUMPER Primary among Michigan’s problems is the role of the auto industry. For decades it was Michigan’s buffet where everyone could feed and get fat. Fat salaries and wages, fat prices on fat cars with fat appetites for fossil fuel. The Big Three viewed themselves as immune from good business practice. Management never seriously challenged union demands, opting instead to simply give in and pass the increased costs on to consumers. They demanded customer loyalty by labeling any state resident who would dare purchase a foreign car as unpatriotic. This rule was not applied to those who bought expensive German imports, only Asian manufacturers were considered the enemy. 
This despite the fact that while the Big Three were laying off auto workers in droves and opening newer, more competitive plants abroad, Honda and Toyota were building manufacturing facilities in the U.S. and hiring American workers. The Big Three have long ignored the wave of foreign competition and the effect of rising fuel costs. The result has been a precipitous plunge in their sales and stock prices to the point that they are flirting with bankruptcy. ADRIFT IN LANSING Then there is the crux of Michigan’s problems, state government. Seated in Lansing is perhaps one of the most expensive and inept state governments in the nation. The state budget is a disaster. Its credit rating continues to drop. The governor and legislature continue to entertain the delusion that they can tax their way out of the problem while businesses and residents continue to leave the state precisely to avoid the current taxes. Lansing finally eliminated the despised Single Business Tax, but instead of making a commensurate reduction in state spending they chose to selectively impose a new sales tax on such economic stalwarts as carpet cleaners, tanning salons and manicurists. There was immediate opposition and the measure died a quiet death in the state legislature. It was not unlike the outrageous grab of 2004 when the state tried to force residents to pay county real property taxes six months before they were due. With many owing their jobs to union support, Lansing politicians continue to thwart legislation which could make Michigan a competitive right-to-work state. Exasperated state residents are now circulating petitions to take the long overdue step of reducing the size of Michigan government, including the state supreme court and legislature. So why would anyone want to live in Michigan? There are several reasons why Michigan’s future is bright if the political and business climates are reformed. Known as "the mitten" because of its unique geographic profile, Michigan is surrounded by the largest collection of fresh water in the United States — the Great Lakes — and has almost one hundred inland lakes of 1,000 acres or more dotting the state. Unlike the increasingly popular southwest, Michigan rarely sees water rationing. Whether it is for personal or commercial use there is always enough fresh water to go around in Michigan. The need for fresh water sources will continue to be a growing concern for businesses and individuals well into the future. The flat expanses of Michigan may be boring to pass on the interstate, but they could be a key in the rising global demand for food and livestock. Michigan has an immense agricultural capacity of 10 million acres producing over 200 commercial commodities, second only to California. As the global demand for food continues to outstrip production this capacity bodes well for Michigan. Michigan continues to have one of the largest pools of skilled labor in the nation. This is a clear advantage for companies in need of a ready work force. Skilled labor is one area where foreign competition has not made significant inroads. However, that pool will remain idle or disperse to other states if Lansing fails to adopt policies which will attract employers such as transforming Michigan into a right-to-work state. Location and transportation also recommend Michigan. Michigan is readily accessible by plane, rail, interstate and ship. It shares borders with Canada, Wisconsin, Ohio and Indiana. It boasts some of the most renowned universities and research facilities. 
Michigan is a state of incredible natural beauty, with strong tourism and recreation traditions. Michigan has more private boat registrations than California or Florida. Many even consider Michigan’s climate to be a plus with four seasons, including a warm summer and a beautiful fall. Michigan has winter, but it is devoid of the hurricanes, floods, runaway forest fires, mud slides or earthquakes suffered by perpetually warmer destinations. The only real disasters in Michigan have been man-made. The issue is not whether Michigan has anything to offer, but rather, how much more it could offer if state government would get out of the way and let the market work. August 5, 2008
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9649480581283569, "language": "en", "url": "https://www.minds.com/persuethisfinanceguide/blog/types-of-financial-advisors-933866399056396288", "token_count": 520, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": -0.0498046875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:bdfb96f9-381b-4cfa-80f0-d5457b893db3>" }
A financial adviser is someone who guides clients on the best ways to save, invest, and grow their money. Financial advisors manage money and help their clients reach their goals. Financial advisors, or planners, come in many types. A financial planner who helps individuals focus on tax preparation is an enrolled agent. A chartered financial analyst, another type of financial advisor, builds investment portfolios. A broker, or stock broker, buys and sells financial products on behalf of clients in exchange for a fee. Stock brokers must not only pass an exam but also register with the Securities and Exchange Commission. A certified financial planner provides advice on financial planning. To earn the certification from the Certified Financial Planner Board of Standards, financial advisers need to complete a lengthy education requirement, pass a rigorous exam, and demonstrate work experience. Wealth managers focus on clients with a high net worth and provide holistic financial management. Registered investment advisors provide advice and make recommendations in exchange for a fee, and depending on the size of the firm, they register with the Securities and Exchange Commission or a state regulator. A financial advisor can also hold several of these titles at once. Figuring out the types of service you want will help you choose a financial advisor, and knowing the various types of financial planners makes it easier to match them to those services. After that, individuals need to consider what cost level works for them. Robo advisors typically charge a certain percentage of assets annually as their fee. Human advisors also usually charge a percentage of the amount managed as their fee. Before committing to any financial planner, individuals need to understand the costs and fees involved. Beyond costs and fees, individuals need to check the qualifications and standards of financial advisors, which include their credentials, ethics, experience, and fit. As part of the qualifications, financial planners need to have experience. To be sure about a financial advisor, individuals should look at whether they have ethical or legal marks against them, such as criminal charges, investigations, bankruptcies, or unpaid liens. In addition, financial advisors need to disclose any disciplinary action and conflicts of interest. As for fit, individuals need to be able to trust their financial advisor, since the advisor will come to know a great deal about their personal details. For more information about finance, click on this link: https://en.wikipedia.org/wiki/Certified_Financial_Planner.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9319077134132385, "language": "en", "url": "https://dailyiconews.com/crypto-news/the-difference-between-private-and-public-blockchain/sammitchell/2019/03/19/", "token_count": 623, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.3515625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:bba833a9-0272-40dd-8a8c-0812fb8730d6>" }
The concept of blockchain has been a rather difficult subject for most people, and you have probably heard an array of characteristics attached to it, ranging from immutability to transparency. While you may have been told that the purest form of blockchain advocates transparency, there are private blockchains that do not exactly subscribe to this decree. This piece explores public and private blockchains and the differences between the two. Public blockchains are permissionless; for private blockchains, the opposite is true. Private blockchains depend on read and access permissions for the network. There are a number of blockchain types, each with unique underlying characteristics that affect its functionality and performance.
Public Blockchain Essentials
Public blockchains function on an open ledger principle, where anybody can read or write on the platform. It is normal for open ledger platforms to apply incentives and gamification mechanisms to boost participation. Bitcoin is one example of a public blockchain. People interested in the distributed nature of blockchain favor public blockchains over private ones. Most blockchain enthusiasts believe that when a public blockchain is implemented, data is shared in a fairer, peer-to-peer manner. In addition to their ideological benefits, public blockchain ledgers are extremely secure. The fact that they are transparent and open for anyone to inspect at any given time makes it easier to identify fraud than in any other form of blockchain. But as much as they are a favorite, public blockchains have their shortcomings. The amount of computing power required for each node to validate a transaction is tremendous, and that computation translates into time, environmental, and financial costs. As more participants join the network, the problem persists.
What Is a Private Blockchain?
In private blockchains, only specific entities can actively contribute. The developer additionally has the ability to set the extent to which participants can contribute to the network. A private blockchain is, in essence, a closed network that offers participants the benefits of a shared ledger without public distribution. Hyperledger is one example of a private blockchain framework. Corporations and enterprises prefer private blockchains. Private blockchain participants need to be vetted by the network members or at least be known to them. The fact that private blockchains have a limited number of nodes means less computation power is required, so they can handle greater throughput. Furthermore, private blockchains also allow certain elements within each transaction, such as the transaction value and sensitive personal data, to remain hidden. Like their public counterparts, private blockchains are not perfect, and they are not necessarily an answer to public blockchain problems either. A user on a private blockchain is at the mercy of the administrator, a problem that Satoshi aimed to eradicate from the very beginning. Unprofessional admins or colluding entities may manipulate the ledger.
The Consortium Blockchain
Consortium blockchains are a hybrid between public and private blockchains. They can work in a number of ways and can be custom designed to fit specific circumstances. Their benefits and drawbacks depend on which elements of private or public blockchains the participants choose to adopt.
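As a rough illustration of the block-linking idea mentioned above, the following Python sketch shows how each block can store the hash of the block before it, so that tampering with an earlier block breaks the chain. It is only a toy example with made-up transaction data; consensus, digital signatures, permissions, and network distribution are all omitted.

```python
# Minimal hash-chain sketch (not a real blockchain client): each block stores the
# hash of the previous block, so altering any earlier block invalidates the chain.
import hashlib
import json
import time

def block_hash(block):
    # Hash the block's contents deterministically.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, data):
    prev = chain[-1]
    block = {
        "index": prev["index"] + 1,
        "timestamp": time.time(),
        "data": data,
        "prev_hash": block_hash(prev),   # link to the previous block
    }
    chain.append(block)

def is_valid(chain):
    # Recompute each link; any tampering breaks at least one link.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

genesis = {"index": 0, "timestamp": time.time(), "data": "genesis", "prev_hash": ""}
chain = [genesis]
add_block(chain, {"from": "alice", "to": "bob", "amount": 5})
add_block(chain, {"from": "bob", "to": "carol", "amount": 2})
print(is_valid(chain))              # True
chain[1]["data"]["amount"] = 500    # tamper with an earlier block
print(is_valid(chain))              # False
```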
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9688568115234375, "language": "en", "url": "https://healthforward.org/uneven-opportunity-safe-and-available-employment-during-a-pandemic/", "token_count": 980, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.4140625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:5632c4ce-1056-4508-aa1d-dc250d07167a>" }
Even as businesses and workplaces cautiously re-open, Health Forward is acutely aware that among low-income individuals and people of color, employment opportunities, working conditions, and benefits vary dramatically. Addressing these disparities is essential, as is advocating for policies that advance economic inclusion and health equity. We all know that too many people are out of work. But lower-wage workers are affected more severely by pandemic-era unemployment. According to a recently published Federal Reserve report, 20 percent of people nationwide who had been employed in February 2020 lost a job or were furloughed in March or early April. In comparison, looking just at those with household income below $40,000, the rate of reported job loss in the same period was 39 percent — almost double the national figure. Unemployment is impacting people of color at higher rates, too. In April, Hispanic unemployment surged to a record high 18.9 percent and black unemployment was recorded at 16.7 percent. During the same period, the unemployment rate among whites was 14.2 percent. And while many were cheered by the May 2020 numbers — a 1.4 percent decline in the national unemployment figures, a 1.8 percent decline in white unemployment, and a 1.3 percent reduction in Hispanic joblessness — black unemployment showed a different trend, rising slightly to 16.8 percent. The impact of lost income during the pandemic also varies by race and ethnicity. The Census Bureau’s weekly Household Pulse Survey tells us that, between mid-March and the end of May, 61 percent of Hispanic households and 55 percent of black households reported a reduction in employment-related income. For white households, the rate was 43 percent. Working conditions vary Among those who are working, a different challenge emerges: reducing work-related exposure to COVID-19 is not evenly attainable. The most startling examples include the over-representation of people of color in some essential but high-risk industries. While Hispanics are 18.3 percent of the U.S. population, 35 percent of meat-processing employees are Hispanic. Blacks make up 13.4 percent of the U.S. population, but 22 percent of meat-processing employees. These workers and others in jobs that require close contact with fellow employees or customers may be at greater risk for COVID exposure. What about working from home? Although emerging guidelines recommend continuing remote work when possible, not everyone has an equal opportunity to protect themselves (and their households) using this strategy. The Federal Reserve report tells us who has — and who has not — been working from home during the pandemic. During the last week of March, 41 percent of workers did all their work from home (in October 2019 this figure was 7 percent). Looking at these figures in greater detail reveals differences based on educational level. Only 20 percent of workers with a high school degree and 27 percent of workers who have completed an associate’s degree or some college worked entirely from home. Two-to-three times those rates (63 percent) of workers with at least a bachelor’s degree worked entirely from home. Benefits are not the same Low-wage workers, service industry workers, and workers with lower educational attainment are all less likely to have paid leave, putting too many people at risk of financial hardship if they experience coronavirus symptoms. The figures document clear disparities: - 57 percent of workers in the lowest quartile of earners have paid time off. 
Among the top quartile of earners, 86 percent have paid leave. - 43 percent of service workers have paid time off, while 82 percent of those in management, business, and financial operations have paid leave. - 64 percent of those with a high school diploma but no college education have paid time off, versus 79 percent of those with a bachelor’s degree or higher have paid time off. Missouri’s August election will include a ballot initiative to expand Medicaid, ensuring that more low-income individuals can access the health coverage that is needed to get through these unprecedented times. The estimated 230,000 Missourians who would benefit the most from this expansion include those on the front lines of the coronavirus outbreak: essential, low-wage workers in grocery stores, delivery drivers, and home health aides caring for our elderly neighbors. Health Forward encourages everyone to Vote Yes on 2 on the August ballot. We also recognize that business and community leaders are working tirelessly to identify additional strategies for advancing our economy while protecting public health. As that occurs, we must all advocate to ensure that one’s race, ethnicity, gender, or economic status has no influence on quality employment opportunities and safe working conditions. Moving forward together to achieve that goal can help us emerge from this pandemic healthier than ever before.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9602331519126892, "language": "en", "url": "https://www.9sblog.com/credit-score-can-check-credit-score/", "token_count": 612, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.04931640625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:6a63d12b-a72d-487a-870a-b26f5ad97847>" }
A credit report says a lot about your financial responsibility. When you sign up with an agency, you need to sign a form of agreement. Agencies collect and maintain detailed data about your borrowing and repayment habits. The file containing this detailed information is called a credit report, and a credit score is one part of that report. These reports offer information about your credit history and personal identity, and they also include public records. The report characterizes your credit and assigns a credit level to you. You can also look this up online by searching "check my credit score."
A credit rating: The credit report is a measurement of how you repay your debts. Different credit reporting agencies will give you different types of ratings. Some provide a rating on a scale of 1 to 9, while others provide a description of your credit. For example, a rating of "1" means you pay your bill by the due date each month, and "9" means you do not pay your bill at all. They also provide an "R" rating, which lenders use for revolving credit such as credit cards; it uses the same 1 through 9 range. A rating of R1 means you pay regularly, and R9 means you never pay. Neither the government nor any financial institution sets your credit record; it is determined by your own behavior. A credit bureau can record a complaint against you when you do not repay your loan.
How is a credit score defined? A numeric figure represents your credit score. Credit reporting agencies such as Equifax and TransUnion use a range of 300 to 900. If your score is high, the risk for the lender is low, and you can more easily proceed with a new loan. Some factors that influence your credit score:
- A good payment history improves your credit score.
- The amount of credit you use affects your score; too much or too little can both hurt it.
- The length of your credit history also affects your credit score.
- Your credit score changes over time with your activities; for example, applying for new credit can change your overall score.
How do I check my credit score? Clients can review their credit score by using their agency's online service. Agencies offer web, mobile, or desktop apps for their customers. You can also contact the credit bureaus for your credit report by mail or e-mail, in which case you will not need to pay any fee, while viewing the full report online may require a fee. Or simply ask them, "How do I check my credit score?"
How do you build a good credit score? You can build a good credit score by paying your bills on time. You can do this with a credit card; if you do not have one, you can apply for a credit card. By making at least the minimum payment every month, you can grow a good credit history. This will bring a positive review to your next application for credit. You can talk to an advisor at your agency to get a good result.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9685305953025818, "language": "en", "url": "https://www.artofmanliness.com/articles/confused-by-the-stock-market-dont-be-a-primer-on-stocks/", "token_count": 3550, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.2060546875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:f9adde47-53c2-4f88-9723-bd882d3f849e>" }
Editor’s note: This is a guest post from Matt Alden S., publisher of Dividend Monk. The stock market can be a confusing and hectic place. Especially after witnessing recessions, bank-bailouts, and huge volatility, it can seem like it operates with no rhyme or reason. A crucial skill for any man is the ability to master one’s own finances. It’s an important part of life to be able to understand exactly what he owns and why he owns it. But many men will admit they really don’t even know the basics of how the stock market works. Fortunately, behind all of the noise of computer trading and non-stop financial media, the basics of buying and selling stock are pretty simple. Advances in technology and increases in the speed of news distribution may have complicated it on the surface, but the same general ideas hold as true today as they did 100 years ago. This article provides an overview of what stock is, and how shares of stock are bought and sold on the market. A Share of Stock Is a Fraction of a Business The ownership of a public company is collectively called “stock.” The shares of stock that are bought and sold between investors represent tiny fractions of that business. For an example of a private company, if ten people hold equal portions of a small carpentry business, then the company can be thought of as consisting of ten shares, with each owner/investor holding one share. Since it’s divided ten ways between these shareholders, if the company earns a profit of $500,000 per year, the earnings per share will be $50,000. The owners of the carpentry business could collectively decide to reinvest those profits into growing the company, or they could use some of the money to pay themselves part of the profits. If they want to leave the business, they could sell their share to someone else. One of the owners may even decide to buy a share from another owner, so that he now holds two shares, or 20% of the company. A modern corporation works the same way, except that instead of ten shares, it consists of millions or even billions of shares. Corporations are “publicly traded,” meaning that there are “x” number of shares available and anyone can buy or sell them on an exchange. To organize that many owners into a leadership team, the shareholders vote for directors to lead the company. On an annual basis, the shareholders can vote to elect people to the Board of Directors, which is the highest operating authority in the organization. The Board then appoints the leadership team, including the Chief Executive Officer and other top management to run the day-to-day business operations. The Board of Directors makes high level decisions on the direction of the business, and they also make the decision on whether to reinvest all profits back into the company for growth, or to pay out a portion of the total profit as cash payments, called dividends, to the shareholders. Why Do Businesses Go Public? Companies that go public generally do so because they want to collect more money for growth opportunities. If the owners of a privately held company want to have an easier time selling their private shares, or if they want to sell part of their company to the public to raise money for the business to grow faster, they hold an Initial Public Offering, referred to as an “IPO.” During an IPO, they will sell some of the shares of the company to the public, and from that point on, people can buy and sell those shares amongst themselves. 
A smaller example can illustrate the basic process: A man named John owns a business that sells premium mustache combs — the best in the world. He has ten employees, and he owns 100% of the company. For a while this works well, with him earning $100,000 per year in profit and his employees doing pretty well also. But it seems like every time Movember rolls around, it gets harder and harder to keep up with all the sales. After careful consideration, he decides to invite five fairly wealthy acquaintances to help grow his business. They each chip in $50,000, and in exchange they each receive a 10% portion of the company. So John has raised $250,000 for his business, but now he only owns 50% of his company. The company can be considered to consist of ten shares worth $50,000 each, with John holding five of them and each of the five investors holding one of them. John can use this $250,000 to hire additional employees and buy more tools to grow the business at a quicker pace. After five years, the profit of the business is up to $500,000 per year. Since John owns 50% of it, his portion of income is $250,000, and each of the investor’s portion of income is $50,000. They collectively decide, however, to keep reinvesting part of these profits back into the company to hire more employees and to buy more equipment. After ten years, the company has grown even larger. It now has total annual profit of $2 million, and dozens of employees. But John is a particularly ambitious fellow, and he wants to spread his premium mustache combs throughout the world, and maybe even expand into other manly hygiene products. So John and the five investors decide to “go public,” which means they will have an IPO and sell a portion of their company to public investors. John still owns 50% of the company and each of the five investors still owns 10%, but they decide that they’ll sell half of the company to the public in this IPO. This will bring in a lot of new money to fund their expansion. John breaks the company into one million shares (meaning each one of the original ten shares was broken into 100,000 shares), and sells 500,000 of them to the public for $20 each. The rest of the shares are owned by John, who owns 250,000 shares (25% of the company), and the original five investors, who each own 50,000 shares (5% of the company each). John and the original investors diluted their ownership of the company (from 50% down to 25% for John, and from 10% down to 5% for each investor), but now, there is a lot more cash to work with. By selling 500,000 shares to the public at $20 each, John raises $10 million for his men’s hygiene business to hire employees and purchase more tools in order to make all the mustaches of the world that much better. The initial five wealthy investors are happy too, because now they can easily sell their shares to anyone. They could sell their shares and retire, or they could buy more if they want to. Now that John’s company is public, they have a lot more specific requirements to fulfill. They have to produce audited financial statements four times per year, they have to follow standards of sharing internal company information, and shareholders get to elect a Board of Directors to run the company. Shares of Stock are Bought and Sold on a Stock Exchange Since publicly traded companies consist of so many shares and are owned by so many people, it would be difficult to just buy and sell shares informally. 
To solve this complexity, the buying and selling of stock takes place on a dedicated stock exchange. The largest exchange is the New York Stock Exchange on Wall Street, and there are other major ones such as the NASDAQ, the London Stock Exchange, and the Tokyo Stock Exchange. These exchanges are marketplaces (either physical or electronic) where the products that are bought and sold are shares of stock. People can buy or sell shares of stock for a price they can agree on, and this price goes up and down over time based on the supply and demand of buyers and sellers. Over the short term, stock prices can be volatile because buyers and sellers have a whole multitude of reasons for trading stock at any given time. Over the long term, the business performance of the underlying company determines the value of the stock. When John’s business was only making $100,000 in profit per year, a share worth 10% of the business only had a moderate amount of value. Later, when he grew the business to $500,000 in profit and eventually to $2 million in profit, a 10% portion of the company would be worth far more than it was back when the company was only making $100,000. When the company was later split into a million shares, each share was less expensive because it only accounted for a tiny fraction of the company, but John and the original five investors each then owned thousands of shares due to the splitting of their original, larger shares. If John continues to skillfully manage what is now a publicly traded men’s hygiene products company, he could grow total profit to $5 million, $10 million, or even more. The price of the shares at any given time compared to the IPO price of $20 will fluctuate, but over the long term, if the business grows, the value of each share will grow. Each share in this case represents one millionth of a growing company. For example, when the company has $2 million in profit and one million shares, John and the Board of Directors may decide to use 50% of that profit to pay $1.00 in cash dividends to each shareholder for each share they own. The shareholders may keep holding shares, and receive these dividends every time the company pays them. Years later, if the company is bringing in $5 million in profit per year, and they’re still paying out 50% of their profit as dividends to shareholders, then each shareholder will receive $2.50 in dividends per share that they own. There are some companies out there that have raised their dividends every year for over 25 or even 50 consecutive years. On the other hand, if John’s business performs poorly over time, and the annual profit declines, then the shares of stock will eventually decline in value. If his company were to ever go bankrupt, then the price and value of shares would drop to $0. How to Buy Shares of Stock To buy a stake in publicly traded businesses, you have a few primary options. There is no best option; it depends on whether you want to pick individual stocks or not, how much money you have to invest, and what your goals are. 1) Use a Broker to Buy Shares of Stock Rather than going to a stock exchange yourself, you can use a middleman to do the buying and selling for you. A broker is a person who is registered with the exchange and able to buy and sell shares of stock on it. In older times, you’d have to call up your broker or see him in person, but nowadays most interaction is done online. You can create an account online with a brokerage firm, and buy or sell stock within your account. 
It’s similar to online banking, and the shares that you hold will be held in this brokerage account. For those that prefer seeing a broker face to face, there are full service brokerage firms that provide this option. They can provide personal investment advice and assist you with achieving your financial goals. Either way, you’ll typically have to pay brokerage fees in order for them to buy and sell stock for you. Investors that often buy and sell stock can accumulate a lot of fees, but by investing for longer periods and buying less frequently, you can keep the cost low. 2) Participate in a Direct Stock Purchase Plan Another low-fee or no-fee option is that you can buy stock directly from the company in a system called a Direct Stock Purchase Plan (DSPP). You can become a registered shareholder of the business directly through their transfer agent (the organization that manages and keeps track of their shares), and occasionally pay cash to buy more shares. Certain types of Direct Stock Purchase Plans are called Dividend Reinvestment Plans (DRiPs). Under these plans, you own stock directly, and when the company pays cash dividends to shareholders, the company will automatically reinvest the cash dividends you would have received into buying more shares for you instead, including fractions of shares. Over long periods of time, you can grow a small number of inexpensive shares into a larger and larger number of more valuable shares, and exponentially increase your wealth and dividend income. These plans are only for investors that intend to hold onto that company’s stock for quite a while. Unlike with a broker where you can easily buy and sell shares, these plans are meant for patient, long-term investors. 3) Invest in a Mutual Funds If you don’t want to buy shares of individual companies, such as The Coca-Cola Company or General Electric, then mutual funds are another viable option. A mutual fund is a collective investment vehicle where a pool of investors gather their money together and buy shares of many companies in one big fund. A fund manager is responsible for choosing which stocks to buy or sell within that investment vehicle. The assets that can be held within a mutual fund include stocks, bonds, cash, and other investments. Basically, rather than owning a specific stock, a mutual fund investor owns a piece of a bigger collection of a variety of stocks and/or other assets. There are a vast number of different types of mutual funds, but they can be thought of as two general categories: Actively Managed Mutual Funds. With an actively managed mutual fund, the fund manager is purposely selecting certain stocks to buy, hold, and eventually sell. Sometimes the fund manager’s goal is to try to provide a better rate of return for the investors than the average of other stocks, which means he’s attempting to “beat the market.” Other times, the fund manager may be trying to minimize volatility and preserve the wealth of the investors while still growing their money at a reasonable rate. In order to pay for the buying and selling of shares within the fund, and in order to pay the fund manager, mutual funds typically have fairly high fees. The fees are a small portion of the fund value each year, but they can add up substantially over time. Index Funds. An index fund is a passively managed mutual fund. The fund manager is not purposely choosing specific stocks to buy or sell in order to meet any goals. Instead, an index fund follows a specific list of many companies. 
The most widely followed index list is the Standard and Poor’s 500, which is usually referred to as the S&P 500. This is a list of 500 of some of the largest and most profitable companies in the United States, and serves as a primary benchmark for long-term stock market performance. The only goal of the fund manager of an S&P 500 index fund, is to try to replicate the performance of the list. He’ll buy shares of stock of those 500 companies in roughly the same proportions that the index recognizes. Because this process is rather automated, the fees for index fund investors are very low. Buying into an index fund allows an investor to quickly become diversified, because holding a simple S&P 500 index fund spreads your money over approximately 500 companies. The Relationship Between a 401(k), an IRA, and Shares of Stock One potentially confusing aspect about the stock market is the overlap between 401(k) plans, IRAs, and shares of stock. Some people tend to mistake 401(k) plans and IRAs for investments, but they are simply retirement vehicles for holding investments. Typically within a 401(k) plan, you can choose to invest in a variety of mutual funds, including index funds. Within an IRA, you can invest in mutual funds, individual stocks, and other assets. By working to build up some assets, either in the form of individual stock ownership, or through index funds and other investments, you can increase the financial flexibility that you have with regards to what you work on and how you live your life. The stock market can be a damaging thing to people that are unfamiliar with the mechanics behind it, and money that you need to have available in a few years should not be used to buy stock now due to the volatile nature of the market over shorter periods. Instead, investing in the stock market is a long-term approach that requires discipline and balance. Talking with a financial professional to get good advice, or seeking out information from a variety of sources, can provide a very rewarding outcome. The long term average rate of return of the S&P 500 over the last century or so has been around 9% per year. This means that, despite being volatile, an investor would have increased their wealth by 9% per year on average over a very long time frame. A rate of return of 9% per year translates into a doubling of your money every eight years. Matt Alden S. is the publisher of Dividend Monk, an investing and personal finance site that helps readers move closer towards financial freedom. The site includes comprehensive articles on dividend stocks, long-term investing, indexes, stock valuation techniques, and building wealth.
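A quick numeric sketch of the arithmetic described in the article above; the share counts, payout ratio, and 9 percent figure are the article's illustrative numbers, not real company data or investment advice.

```python
# Dividend per share and long-run compounding, using the article's example figures.

profit = 2_000_000             # company profit for the year
shares_outstanding = 1_000_000
payout_ratio = 0.50            # half of profit paid out as dividends

dividend_per_share = profit * payout_ratio / shares_outstanding
print(dividend_per_share)      # 1.0 -> $1.00 per share

# At roughly 9% average annual return, money approximately doubles every 8 years.
value = 10_000
for year in range(1, 25):
    value *= 1.09
    if year % 8 == 0:
        print(year, round(value))   # ~19,926 / ~39,703 / ~79,110
```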
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9105749130249023, "language": "en", "url": "https://www.openthenews.com/navdeep-arora-explains-about-blockchain/", "token_count": 920, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": -0.06689453125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:eab20350-f951-4cda-a587-3679f5411677>" }
"It is an open, distributed ledger that can record transactions between two parties efficiently and in a verifiable and permanent way."
Blockchain is an append-only database in which all participants hold an identical copy of the information. The information stored is timestamped and the data is cryptographically secured. A blockchain network functions as
- a backend database that maintains an open distributed ledger
- an exchange network for moving value between peers
- a transaction validation mechanism that does not require intervention from an intermediary
How & Why does a Blockchain work in Insurance?
Like many other business applications, blockchain is used in insurance in multiple ways because of its decentralized, distributed ledger and its security.
Decentralised Validation – The data is packed into blocks that can only be added to the blockchain after consensus is reached on the validity of the action. This allows participants to place their trust in transactions even in the absence of a central authority, thus enabling disintermediation.
Redundancy – The blockchain is continuously replicated on all, or at least a group of, nodes in the network. Thus, no single point of failure exists.
Immutable Storage – Each stored block is linked to its previous block in the chain, making it almost impossible for hackers to change earlier blocks, as they would have to manipulate every succeeding block plus most replications.
Strong Encryption – Digital signatures based on pairs of cryptographic private and public keys allow network participants to authenticate which participant initiated a transaction, owns an asset, signed a smart contract, or registered data in the blockchain.
Smart Contracts – A blockchain-enabled platform can be used for smart contracts, which are small programs running on a blockchain and initiating certain actions when predefined conditions are met.
Five core blockchain capabilities hold tremendous promise for enabling the Insurance ecosystem
- Distributed Ledger
- Security & Cryptography
- Validation & Consensus
- Transparency & Auditability
- Cryptocurrency & Bitcoin
Most of the credible and scalable use-case adoption in Insurance so far has come from three areas
- Overhead & support functions, Procurement, and Finance/ Re(In)surance accounting
- Reinsurance contracting across Life and P&C
- Distribution, underwriting, servicing and claims management of microinsurance products (fixed benefit life & health, motor); Parametric insurance
What are the Lessons from the banking sector for Insurance?
The banking sector's experience with Blockchain offers a useful lesson for Insurance
- The R3 consortium has pivoted and successfully focused on more achievable, short-term problems such as KYC, AML, credit card authorisation and fraud, and cross-border payment reconciliation, after initially struggling with the complex global payment infrastructure and governance
- B3i, RiskBlock Alliance, ChainThat, and iXLedger are some of the platforms that are working with the Insurance ecosystem to build 'blockchain interoperability' rather than 'winner takes all'
What are the possibilities from blockchain in insurance value chains?
The possibilities from blockchain technology for the life and non-life insurance value chains are indeed compelling.
Capturing this opportunity will require insurance incumbents to proactively work across four dimensions
- Diagnosing and prioritising use cases, POCs, and full-scale production across the value chain
- Working with the digital ecosystem (incubators, start-ups, regulators) to leverage partnerships/alliances (rather than waiting to build in-house)
- Standardising internal organisation, processes and technology to enable blockchain interoperability
- External connectivity with blockchain platforms and consortia
Technology and digital trends are reshaping the global demand for and supply of insurance. We can certainly say that blockchain disruption is one of these trends, one that has just begun and is here to stay.
About Navdeep Arora
Navdeep Arora is a seasoned strategist, advisor, and investor in Insurance and Financial Services. Navdeep's particular focus and areas of expertise are Insurance Strategy and Innovation. He works across the Insurance ecosystem including Insurance companies, data and technology providers, strategic investors, and start-ups in the InsurTech space. Formerly, Navdeep Arora was a Senior Partner with McKinsey & Company (16 years), and a Partner/Global Head of Insurance Strategy with KPMG (3 years). Navdeep's educational qualifications include Bachelor's and Master's degrees in Engineering, and an MBA from Harvard University.
Connect with Navdeep Arora
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9465211033821106, "language": "en", "url": "http://digjamaica.com/m/blog/dig-number-250-million/", "token_count": 203, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1064453125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:f7b85b44-60b7-4a9a-83e3-2e9af334d861>" }
Jamaicans living abroad are not only essential to the well-being of their families remaining on the island; they are also important to the Jamaican economy. Many Jamaicans living and working in the Diaspora transfer a portion of their earnings back to the island each month, in what are called remittances. Remittances make up an estimated 15% of Jamaica's Gross Domestic Product (GDP). The latest available information on Jamaica's remittances is as at May 2014. Net remittances for May were US$171.6 million, an increase of US$9.7 million or 6 percent versus May 2013. Net remittances for the period January to May amounted to US$795.3 million, an increase of US$42.1 million or 5.6 percent compared to the same period in 2013. Click the link to view remittances for the period 2009 to 2014 – diGJamaica Business Dashboard
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9726758003234863, "language": "en", "url": "https://247wallst.com/special-report/2016/05/24/poorest-town-in-every-state/", "token_count": 717, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.169921875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:2a4f09fc-d47a-48f8-8b3e-f0d2be545d33>" }
Poorest Town in Every State Incomes in the United States are far from uniform. A typical household in Scarsdale, New York earns nearly a quarter of a million dollars a year, or more than 13 times the income of a typical household in Macon, Mississippi of $18,232. Incomes also vary within each state. No matter how rich a state is on average, it has some very poor towns — towns where incomes are much lower than incomes of not only the state’s richest towns, but also than the median income statewide. In every state, there is at least one town with a median annual household income thousands of dollars lower than the state’s median. 24/7 Wall St. reviewed the poorest town in each state. In general, towns in wealthier states tend to be wealthier, and towns in poorer states tend to be poorer. Mississippi, the poorest state in the country, is home to the poorest town. Incomes in the poorest towns in Alaska and Hawaii, each among the five wealthiest states, are only slightly less than the national median of $53,657 a year. Some states have much more economic diversity. Many of the wealthiest states are also home to the poorest places not just in the state, but also in the country. Median incomes in Cumberland, Maryland and Camden, New Jersey are more than $40,000 lower than the typical household annual income across the state. The vast majority of the poorest towns in the country have a disproportionately high share of eligible workers who are jobless. In 40 states, the unemployment rate in the poorest town is greater than statewide jobless rate. A high share of households that are not currently earning a salary likely drives down the median income for an area. A weaker economy in general, where more people are unemployed, can also drive down salaries in a number of ways — even for those residents with jobs. Socioeconomic indicators such as low educational attainment rates can also help explain the low incomes in many of these towns. One important measure is the college attainment rate. A more educated population is more likely to be employed and to have access to higher-paid jobs. The poorest towns in only three states have a college attainment rate that exceeds the national rate of 30.1%. To identify the poorest town in each state, 24/7 Wall St. reviewed median household incomes in every town with a population of 25,000 or less in each state from the U.S. Census Bureau’s American Community Survey (ACS). Due to relatively small sample sizes for town-level data, all social and economic figures are based on five-year estimates for the period of 2010-2014. Still, data can be subject to sampling issues. We did not consider towns where the margin of error at 90% confidence is greater than 10% of the point estimate of both median household income and population. Towns were compared to both the state and national figures. We considered the percentage of residents who have at least a bachelor’s degree, the towns’ poverty rates, and the workforce composition — all from the ACS. The percentage of housing units that owned by their occupants — referred to as the homeownership rate — also comes from the ACS. Because poverty rates can be skewed in areas with high shares of college students who frequently have very low incomes, college towns were also excluded. College towns are defined as towns where more than 40% of the population is enrolled in undergraduate or graduate school. These are the poorest towns in each state.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.945065438747406, "language": "en", "url": "https://facethefuture.com/nl/nieuws/the-contribution-of-forests-to-climate-change-mitigation", "token_count": 369, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.054443359375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:8183b512-9479-42d0-9a56-6d3e47cf1691>" }
Forest and climate change mitigation Did you know that forests already remove around 25% of the anthropogenic carbon emissions added to the atmosphere each year? And did you know that forests, especially tropical forests, are also a very cost-effective means to mitigate climate change? Greenchoice and the REDD+ Business Initiative commissioned Face the Future to write a technical report about the contribution and cost-effectiveness of forests in tackling climate change, based on the knowledge and research that is currently available. This report demonstrates that forests can play an essential role in combating climate change. Forest conservation, afforestation/reforestation, restoration and improved management of existing forests have a large potential for the reduction of carbon emissions as well as the removal of carbon dioxide from the atmosphere in a cost-effective way. Although compliance markets have yet to accept REDD+ offsets, there is a large potential for industrialized countries, without (tropical) forest, to significantly contribute to climate change mitigation through investments in REDD+ abroad. Moreover, REDD+ has a significant positive impact on biodiversity conservation and restoration, livelihoods and the preservation and recovery of a broad range of ecosystem services provided by forests. These benefits are very much interlinked and can have an impact well beyond the boundaries of the forest itself. On the one hand this underlines the high potential impact and significance of REDD+, but also the massive damage that deforestation and forest degradation can cause on multiple levels and scales. By attracting revenues from carbon sequestration, REDD+ contributes to the conservation and enhancement of forest ecosystem services for which no market or other funding of this scale yet exists. In turn, these forest ecosystem services contribute to achieving multiple Sustainable Development Goals and targets across the 2030 Agenda. Curious to read more about the role of forests in climate change mitigation? The report can be downloaded here
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9604748487472534, "language": "en", "url": "https://wholesalesuiteplugin.com/cost-plus-pricing-worth/", "token_count": 1071, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0947265625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:ec3a0f04-e401-4171-838d-caf2147d1ec4>" }
Setting the price is one of the most important jobs of marketing, no matter what business you are in, because the price of a product or service often plays a great role in that product's or service's success, not to mention in the profitability of a business. A pricing policy refers to how a business sets the prices of its products and services based on costs, value, demand, and competition. Pricing strategy, on the other hand, refers to how a business uses pricing to reach its strategic goals. An example is offering lower prices to increase the number of sales or offering higher prices to reduce the backlog. Though there is a certain level of difference between the two terms, pricing policy and strategy tend to overlap, and the different policies and strategies are not necessarily mutually exclusive. Cost-plus pricing is a cost-based pricing strategy that is used for setting the prices of products and services. The cost-plus price is calculated by adding a markup to the unit cost. The unit cost comprises the sum of the fixed costs and the variable costs, divided by the number of units, or products. The markup is a percentage expected to be the profit margin for the manufacturer or seller. A worked numeric example follows this section. Advantages Of Cost-Plus Pricing - It is simple to calculate the prices of products or services. However, you need to define the overhead allocation method and apply it consistently when calculating prices for several products and services. - It is a great pricing method for contract pricing, as the contractor is certain of having their costs reimbursed by the customer and of making a profit. - The pricing can be justified by the supplier or the manufacturer simply by pointing to an increase in the costs of production. Disadvantages Of Cost-Plus Pricing - Cost-plus pricing ignores competitive prices. Businesses may set the prices of their products without checking the market situation or the prices at which their competitors are selling similar products. They may then end up setting the prices for their products or services either too high or too low. Either way, this kind of pricing can greatly impact market share and profits, as well as customer patronage. - The engineering departments of companies will not feel the need to carefully design a product or service with the appropriate set of features and characteristics for the target market. They can simply design what they want and launch it on the market. - If a government body enters into a contractual agreement with a supplier under the cost-plus pricing method, the supplier or manufacturer can include as many costs as they want, and they will be fully reimbursed. This happens when there are no cost-reduction incentives included in the contract; the customer and the supplier should therefore build cost-reduction incentives into the contract. - The cost-plus pricing strategy does not allow for replacement costs, because it works from historical costs and ignores the fact that those costs may have changed. Ignoring replacement costs means that products and services may be priced far too low or far too high. Ultimately, such prices will work to the detriment of the business. This pricing strategy is better used in a contractual situation: the supplier and the customer can settle on the terms of their contract and the prices, and everyone walks away satisfied. Cost-plus pricing is not well suited to pricing products and services that are sold in a competitive market, because the resulting prices are usually too high.
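To make the arithmetic described above concrete, here is a minimal sketch of how a cost-plus price is derived from fixed costs, variable costs, unit volume, and a markup percentage. The figures are invented for illustration and do not come from any real business.

```python
# Cost-plus pricing sketch: unit cost = (fixed + variable costs) / units,
# price = unit cost * (1 + markup). Numbers below are illustrative only.

def cost_plus_price(fixed_costs, variable_costs, units, markup):
    unit_cost = (fixed_costs + variable_costs) / units
    return unit_cost * (1 + markup)

price = cost_plus_price(
    fixed_costs=120_000,    # rent, salaries, equipment
    variable_costs=80_000,  # materials, shipping
    units=50_000,           # products made in the period
    markup=0.25,            # 25% expected profit margin
)
print(f"{price:.2f}")  # (120000 + 80000) / 50000 = 4.00 unit cost -> 5.00 price
```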
Product and service prices should be set at what customers are willing to pay, not simply at the sum total of your costs plus a markup percentage that guarantees you a profit. More Potential Problems of Cost-Plus Pricing Critics argue that the cost-plus pricing strategy fails to provide a business with an effective pricing strategy. A major problem with cost-plus pricing is that determining a unit's cost before its price is difficult in many industries, because unit costs may vary depending on volume. As such, many business analysts have criticised this method, arguing that it is no longer appropriate for modern market conditions (as mentioned in the disadvantages, it calculates prices based on historical costs). Cost-plus pricing typically leads to high prices in weak markets and low prices in strong markets, thereby limiting profitability, because these prices are the exact opposite of what strategic prices would be if market conditions were being considered. While businesses must factor in costs when creating a pricing strategy, costs alone should not determine prices. Many businesses involved in producing or selling industrial products and services sell their products and services at incremental cost and make their substantial profits from their best customers and from short-notice deliveries. When considering costs, managers should ask what costs they can afford to pay, taking into account the prices the market allows, while still allowing for a profit on the sale. In addition, managers must consider production costs in order to determine what goods to produce and in what amounts. Nevertheless, pricing generally involves determining what prices customers can afford before determining what amount of products to produce. By bearing in mind the prices they can charge and the costs they can afford to pay, managers can determine whether their costs enable them to compete in the low-cost market, where customers are concerned primarily with price, or whether they must compete in the premium-priced market, in which customers are primarily concerned with quality and features.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.8106693625450134, "language": "en", "url": "https://www.efinancialmodels.com/downloads/budget-vs-actual-general-business-115828/", "token_count": 257, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.027099609375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:2ed36ee8-a08d-47e5-8bee-45fc415975ed>" }
Categories: All Industries, Financial Model, General Excel Financial Models
Tags: Budget, Budgeting, Excel, Financial Model, Financial Projections, Financial Reporting, Financial Statements, Forecast, Forecasting, Macros, Variance Analysis
The Budget vs. Actual financial model is used to measure actual results against the budget projected for the financial period. The one-year financial period can be broken down by Month, Quarter, or All by enabling Macros on the worksheet. Simply click on the buttons labeled Months, Quarters, and Reset to adjust the columns and view of the template. Information can be entered in the columns labeled Budget and Actual for each month, and all other columns will calculate automatically. This is a great template to use for annual or monthly budget reports. The model shows the variance between the budget and actual figures so you can analyze which areas need attention. This Budget vs. Actual model is a general template for simple business models but has the flexibility to be customized. Important Note: This model template file contains Macros. Enable the Macros for ease of use when scrolling between Months, Quarters, and All. All cells in black font are input cells where custom information can be entered. All cells in blue font are formulas set to streamline the model.
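The sketch below is not taken from the template's own spreadsheet formulas; it is only an illustration, with made-up line items and amounts, of the variance arithmetic that a budget vs. actual report typically performs.

```python
# Budget vs. actual variance sketch (illustrative numbers only).
# Variance = actual - budget; percent variance is expressed relative to budget.

line_items = {
    "Revenue":       {"budget": 100_000, "actual": 94_500},
    "Cost of sales": {"budget": 40_000,  "actual": 43_200},
    "Marketing":     {"budget": 12_000,  "actual": 9_800},
}

for name, v in line_items.items():
    variance = v["actual"] - v["budget"]
    pct = variance / v["budget"] * 100
    print(f"{name:<14} budget={v['budget']:>8,} actual={v['actual']:>8,} "
          f"variance={variance:>7,} ({pct:+.1f}%)")
```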
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9670401811599731, "language": "en", "url": "https://www.forex.in.rs/formula-for-periodic-payment/", "token_count": 659, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.12109375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:f64041db-f529-4acb-866c-bf60616c2d0a>" }
Annuity Payment Formula for Calculation

The annuity payment formula is the equation used to calculate the periodic payment on an annuity, and it is typically applied to ordinary annuities. Finding the correct annuity payment is very important in order to ensure loans are paid off within the time frame specified. For example, when an individual receives a loan, they must pay a certain amount for a certain period of time. The annuity payment formula is used to determine that amount. An annuity is defined by a series of periodic payments that are fully received at a later date. The initial payout of the loan is known as the present value, so the original amount of an amortized loan can be treated as the PV.

Annuity Payment Formula

When using this formula, it is understood that the rate does not change, that the payments are all of the same value, and that the first payment is one period away. An annuity that grows at a proportional and consistent rate can also use the formula. If the annuity changes the overall payment or rate, the calculation must be adjusted each time; this is important to note because it can change the periodic payments and other terms of the loan. There are two important equations here.

Periodic payment formula when the Present Value is known:

P = (r × PV) / (1 − (1 + r)^(−n))

Periodic payment formula when the Future Value is known:

P = (r × FV) / ((1 + r)^n − 1)

where P is the payment per period, PV is the present value, FV is the future value, r is the rate per period and n is the number of periods.

The annuity payment formula can be used for a range of different types of loans. Some of the most common scenarios are amortized loans, income annuities, lottery payouts and structured settlements. Any arrangement with constant periodic payments can use this formula reliably. It is also important to note that the rate is calculated per period: the rate per period and the number of periods should always reflect exactly how often payments are made. When they do, the loan calculation is accurate. For example, if payments are made more frequently, the rate per period falls while the number of periods rises; if payments are made less frequently, the rate per period rises and the number of periods falls.

The payment can be derived by rearranging the present value of an annuity formula and solving for P. The equation can be simplified further by multiplying the numerator by the reciprocal of the denominator, which makes it easier to solve for the annuity payment. As stated previously, some of the most common uses for annuity payments are lottery payouts, income annuities and different types of amortized loans, and any type of structured settlement can take advantage of periodic payments. If the frequency of payments is changed, it will affect the other payments scheduled to take place in the future, so it is important to take all of these factors into consideration.
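A small sketch of both formulas follows, with hypothetical loan and savings figures chosen purely for illustration:

```python
# Periodic payment from the present value or future value of an ordinary annuity.
# Example numbers are hypothetical.

def payment_from_pv(pv: float, rate: float, n: int) -> float:
    """P = r*PV / (1 - (1+r)^-n), with rate and n expressed per period."""
    return rate * pv / (1 - (1 + rate) ** -n)

def payment_from_fv(fv: float, rate: float, n: int) -> float:
    """P = r*FV / ((1+r)^n - 1), with rate and n expressed per period."""
    return rate * fv / ((1 + rate) ** n - 1)

# A 5-year loan of 20,000 at 6% annual interest, repaid monthly:
monthly_rate = 0.06 / 12
months = 5 * 12
print(f"Monthly loan payment: {payment_from_pv(20_000, monthly_rate, months):.2f}")

# Saving toward 10,000 in 3 years at 4% annual interest, with monthly deposits:
print(f"Monthly deposit needed: {payment_from_fv(10_000, 0.04 / 12, 36):.2f}")
```

Note how the annual rate is divided by 12 and the term converted to months, so that the rate per period and the number of periods match the monthly payment frequency.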
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9333168864250183, "language": "en", "url": "https://www.greensolartechnologies.com/blog/what-net-metering", "token_count": 443, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.00482177734375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:803a3d31-54e0-4383-83be-e80512d142e5>" }
Understanding Net Metering Let’s start with an exact definition. Net metering is a billing mechanism that credits solar energy system owners for the electricity they add to the grid. For example, if a residential customer has a solar power system on the home's rooftop, it may generate more electricity than the home uses during daylight hours. In essence, net metering is a system of debits and credits that help your utility company even out your bills. Based on the difference between how much energy your solar panels produce and how much energy you are consuming monthly, your utility bill either will be debited or credited. Credits build when your solar power system produces more energy than you are using and vice versa. Not all meters are created equal. A bi-directional meter is unlike a standard electricity meter. These meters are designed to measure electricity flow in two directions. When you connect a solar power system to the grid, you need a meter that can tell you not just how much energy you’ve consumed, but also how much you’ve sent back into the grid. Monitoring Your Solar Power System Consistent monitoring helps you regulate your energy usage. There are a few ways of reviewing your energy consumption and delivery back into the grid system. Some bi-directional meters will allow you to view the total amounts on the unit itself and another way to view this is through an online monitoring service. Not all solar power systems come with this feature built into their products, but it can be very useful for reviewing your current energy usage and allow you to make savvy decisions about how you use your energy. Not every home is eligible for net metering. This is due to certain states and utility companies who do not offer this feature. If you are wondering if your state and utility company offers net metering, visit here for a free solar quote and energy evaluation. We will provide you with a thorough solar energy estimate where you will learn how much solar energy for your home will cost you and if you are eligible for net metering or any cost-saving incentives. Also don't forget to learn more about solar at our Solar University. by Eddy Martinez & Ged Friedman
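To make the debit-and-credit idea above concrete, here is a simplified sketch of how monthly netting might be tallied. Actual billing rules differ by utility, and the production, consumption and price figures below are assumed for illustration only:

```python
# Simplified sketch of net-metering credits and debits.
# Real billing rules vary by utility; numbers and the rate are hypothetical.

rate_per_kwh = 0.15  # assumed retail rate
credit_kwh = 0.0     # banked surplus carried forward

monthly = [  # (kWh produced by the panels, kWh consumed by the home)
    (620, 540),  # sunny month: the surplus gets banked as a credit
    (300, 480),  # darker month: the banked credit is drawn down first
]

for produced, consumed in monthly:
    net = consumed - produced
    if net <= 0:
        credit_kwh += -net          # surplus is banked
        billed = 0.0
    else:
        drawn = min(credit_kwh, net)
        credit_kwh -= drawn
        billed = (net - drawn) * rate_per_kwh
    print(f"produced={produced} consumed={consumed} bill={billed:.2f} "
          f"banked credit={credit_kwh:.0f} kWh")
```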
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9446672201156616, "language": "en", "url": "https://www.odi.org/publications/4744-impact-economic-crisis-and-food-and-fuel-price-volatility-children-and-women-kazakhstan", "token_count": 412, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.130859375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:e7607bcb-6696-4d65-a7e5-0b0a42306074>" }
Kazakhstan is a well-resourced country with nevertheless relatively high poverty levels among both children and adults. In comparison with its immediate, poorer, neighbours Kazakhstan has better indicators of child wellbeing but, although there have been improvements in key indicators over the past 10 years, there is some way to go to bring these in line with countries with similar gross domestic product (GDP) per capita levels. Key issues of concern, such as infant and maternal mortality, child morbidity, access to health care and education, housing conditions and water supply are all being addressed, in principle, through national planning processes and public policy, in particular the National Development Strategy (Strategy 2030), the Target Social Assistance (TSA) programme (2005 onwards) and the Children of Kazakhstan programme, aimed at raising children’s living standards. These programmes have resulted in a steady increase in public social sector expenditures since 2002 and commitments to maintain such investments during this period of financial crisis in response to rising poverty rates after several years of significant poverty reduction. Nonetheless, Kazakhstan’s expenditure on health care and education are lower than for other countries with similar GDP. Additionally, better planning and policy implementation could make spending in the social sectors more cost effective, generating better outcomes for the resources invested. This is particularly so in the case of local governments, which are responsible for spending a significant share of resources. Policymaking functions remain concentrated at central level, making it difficult for sub-national governments to optimise spending and better link it with expected results. Similarly, sector ministries are still not successfully aligning sector policy and spending, a particular challenge in a context of changing needs and programme responses such as during this crisis. Taking this context into account, this report discusses what should be done in the present financial crisis to address obvious suffering; to prevent more people falling into poverty; to redress any backsliding in current poverty reduction trends; and to turn crisis into opportunity in order to reform policy and institutions, better enable implementation and bring Kazakhstan into line with its GDP potential.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9478369355201721, "language": "en", "url": "http://pulsamerica.co.uk/solar-panels-could-this-be-the-start-of-an-energy-saving-revolution/", "token_count": 952, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.052490234375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:e4ba2621-1087-429d-bec9-6ffc4953ec62>" }
Many consumers of energy have started to ask themselves if solar panels are worth the investment, and if they are able to save you money in the long term. Solar panel technology is relatively new in the consumer space and has started to see adoption increase in certain countries around the world. If you use a lot of energy, these can be considered an attractive proposition. To get the most benefit, you'll also want to make sure you're paying the lowest price possible for your gas and electricity. Ultimately, the total amount you can save from solar panels alone will depend on your usage statistics, where you're located in the country and how much the total installation cost will be.

How do solar panels help you save money on energy?

For solar panels to be a viable energy cost reduction option, you will need to live in your current property long enough to recover the initial installation costs. By using this technology, you'll be less reliant on electricity from the power grid as you'll be generating your own. Over the past decade, the cost of electricity has soared, so if you're able to generate your own power, you can save even more. Even while you're not at home, you can store any energy that has been produced and use it at a later time. Previously, some customers even had the option to sell this energy back to the grid, although this Government initiative is no longer available to new customers. Aside from helping you save money, solar panels can also help to increase your property's market value. Those in the property market may be more willing to pay the asking price, as this is a desirable feature, and it will only become more popular in the coming years. This is especially important when you consider that we're heading towards a more sustainable future. If you're a landlord, a tenant may be willing to pay a little extra as they will save this amount on their utility bills.

Is solar panel technology worth the investment?

The amount you will be able to save through using solar panel technology will depend on a number of different factors. We have outlined these below.

Start by looking at your gas and electricity bills for the past 12 months. If you see that you're consuming a lot of energy, your household may benefit from installing a system with around 5 kW capacity. This could help you generate roughly 18 kWh to 25 kWh a day. These figures are a rough estimate and will depend on how much sunlight is available and where you're located. Most residential solar panels currently available are rated at between 250 and 400 watts each. Domestic solar panel systems typically have a capacity of between 1 kW and 4 kW.

The energy you generate through your solar panels will always be used before you access power from the grid. This means that if you're primarily consuming energy whilst your panels are generating it, you will be less reliant on the grid, helping you reduce your bill. Most installations will include an energy storage facility. This will allow you to store energy and use it later on when you need it. For example, during the day you could be at work while energy is being generated, allowing you to make use of it later in the evening upon your return. The more panels you're able to fit on your roof, the more energy you will be able to generate. Having a system with larger storage capacity will also help you avoid wasting excess power. The size of your property will determine how many panels can be installed.
The amount of power you will be able to generate will depend on where you're located in the country. It's worth noting that solar panels don't necessarily need direct sunlight and can utilise normal daylight.

Initial cost of installation

One of the biggest barriers to entry for many interested customers will be the initial cost of installation. Typically, you can expect this to cost at least £5,000 in the UK. This figure could be much higher depending on what option you choose. It's worth checking if you're eligible for any Government grants. Hopefully, as the technology becomes more popular and manufacturing costs are reduced, this will become far more affordable in the near future.

In summary, installing a solar panel system in your home will initially be expensive. However, you will likely see savings over a longer period of time. Think about your household energy consumption, and where you live in the country, to see if you can take advantage of this modern technology. It may be worth waiting to see how things evolve rather than becoming an early adopter.
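To make the payback arithmetic concrete, here is a back-of-the-envelope sketch. It uses the article's lower-end installation cost, plus assumed values for daily output, self-consumption and electricity price, so treat the result as illustrative only:

```python
# Back-of-the-envelope payback estimate for a home solar installation.
# Installation cost is the article's lower-end figure; the other inputs are
# assumptions for illustration, and real results vary widely.

install_cost = 5_000       # pounds
daily_output_kwh = 20      # assumed average output for a ~5 kW system
self_used_share = 0.6      # assumed fraction of output used at home
price_per_kwh = 0.15       # assumed retail electricity price, pounds

annual_saving = daily_output_kwh * 365 * self_used_share * price_per_kwh
print(f"Estimated annual saving: £{annual_saving:,.0f}")
print(f"Simple payback: {install_cost / annual_saving:.1f} years")
```

Changing any of the assumptions, especially the share of the output you use yourself, moves the payback period considerably.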
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9674934148788452, "language": "en", "url": "https://www.avivaindia.com/insurance-guide/5-money-mantras-you-should-share-your-teenager-financial-success", "token_count": 760, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.142578125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:62e0fa1c-4e9c-41ee-8965-5bd16b1b4849>" }
5 Money Mantras You Should Share With Your Teenager For Financial Success Parents always wish the best for their children right from the minute they step into the world. Be it the best education, best clothes, best food and the best in life, parents go out of their way to make it happen. However, invariably, a few parents tend to miss out on imparting in them one of the most valuable life lessons, the importance of money. Children often look up to their parents for almost everything and financial lessons are an integral part of this learning journey. If you start early, therein lies a brilliant opportunity to inculcate a healthy attitude in your children towards saving and investing. Here are some quick tips you can share with your child that will surely help prepare them for life in the real world. 1. Get An Early Start Make sure that your kids get an early start because when children develop and hone financial skills from an early age, they'll be ready to face the financial challenges by the time they hit adulthood. Establish a good foundation right from the get-go by teaching your children about financial basics such as budgeting, saving and spending. 2. Monitor Spending Patterns Regularly Another important habit which you need to instil in your child is the value of money. Encourage your children to write down how much they spend and what they spend on so they know exactly where their money goes. Last but not the least, make them earn their stripes; instead of handing them cash, let them earn the pocket money by asking them to do odd-jobs around the house. This will help them understand an all-important lesson in life – money is earned and not just handed over. 3. Always Save Make your children recognize the importance of saving by asking them to make saving a habit. This can be done by teaching them to revisit their spending and savings goals at the end of every month. A great way of encouraging saving in children is by rewarding them; either by agreeing to put something towards it or by matching what they put aside. Moreover, when your children come of age, make them feel responsible by opening up a bank account which will help in sharpening their money management skills as well. Imparting the right education when it comes to saving and spending will provide a foundation that allows them to grow into responsible, successful adults. 4. Reap Benefits While showing them the benefits of savings is all well and good, unless they see some tangible benefits emerging from the savings habit, they wouldn’t take it seriously. Teach your children to strengthen their willpower by differentiating between wants and needs. If your child wants to buy a fancy bicycle, encourage them to set goals by making them work towards it. Help your child understand that by putting aside some of her allowance every day, she can someday use it to buy the bicycle she’s been so craving for. 5. Tax Savings It pays to save. The importance of tax planning should be imbibed in your children as soon as they form an understanding about it. Explain the advantages of proper tax planning and how it not only helps in reducing the tax liability but also in building up saving towards the various goals one has set at different life stages. While you’re at it, do explain the benefits and importance of life insurance and policies. The ‘Bottom Line’ Teaching your kids the value of money is extremely crucial in order to make them financially responsible for a secured future. 
Parents have a great deal of influence on their children, and children tend to learn a lot about how to handle money from their parents. Make sure to set a good example if you want them to emulate your habits. AN Dec 20/17
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9479742050170898, "language": "en", "url": "https://www.covenantwealthadvisors.com/post/bonds-vs-stocks-vs-mutual-funds", "token_count": 3447, "fin_int_score": 5, "fin_score_model": "en_fin_v0.1", "risk_score": -0.0576171875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:1ee8740f-5909-43db-9db1-ed745989e77a>" }
Bonds vs Stocks vs Mutual Funds: What You Need to Know

Updated: Mar 16

Everyone knows you shouldn't keep all your eggs in one basket. This is especially true when it comes to investing. But it's important to understand bonds vs. stocks vs. mutual funds if you want to preserve and grow wealth. When you're investing for retirement and other life goals, it's important to have different types of investments to achieve the returns you need to reach your goals. Bonds, stocks, and mutual funds are powerful components of a well-diversified portfolio. That's why it's important to understand what these investments are and how they differ.

Bonds

Bonds are investments designed to help governments or corporations raise money to finance projects. They can be viewed as a loan made by investors to the issuer. The investor does not receive stock ownership in the company, but they do receive an interest payment.

Example: Apple needs to raise $10 million to build more computers. They decide to offer a 5-year bond to investors to raise the money. You purchase the bond at the issue price and Apple pays you interest on the money paid for the bond. When the bond matures, Apple pays you back its value at maturity, known as the face value.

Bonds are "fixed income" assets, which means they pay interest at regular intervals until they reach maturity. They're called fixed income because the amount of the interest payments is fixed in advance. When you buy a bond, you're basically making a loan to the issuer. When you think of bonds vs stocks (we'll explain mutual funds a bit later), bonds are usually considered the safer of the two assets. Bonds are safer because corporations are required by law to pay back bond investors before stock investors in the event of bankruptcy. But that doesn't make bonds risk free. Bonds are rated for credit quality by a credit rating agency such as Moody's or Standard and Poor's to help investors gauge their risk. Investment-grade bonds typically have a rating of A, AA, or AAA.

Eight bond terms to know

Types of bonds

Bond issuers can be cities and states (municipal bonds), the US Treasury (government bonds), or government-affiliated organizations such as the FHA or SBA (agency bonds). When governments and government agencies need to raise money to finance debt, they can only issue bonds, which is a unique characteristic of bonds vs stocks vs mutual funds. Businesses also issue bonds (corporate bonds) instead of seeking a loan from a bank. Doing so is usually cheaper because the bond market has lower interest rates and better terms in many cases.

How much do you actually pay for bonds?

Most investors who buy stocks and mutual funds have a good idea of what they pay in commissions or expenses. What you may not realize is that bond dealers also charge commissions (known as markups), but these costs are rolled into the quoted price for the bond. Companies such as Merrill Lynch, Wells Fargo, and Davenport & Company in Richmond, VA all charge markups to their clients. The problem is that most clients don't know, and aren't told, the true cost of their bond purchase in advance. This can make buying bonds as an individual much more costly than meets the eye. Stocks and mutual funds are far more transparent.
Markups vary a lot, but Standard and Poor’s puts the average markup at about 1.2% for municipal bonds and 0.85% for corporate bonds. Some markups are as high as 5%! Given the relatively low yield of most investment-grade bonds in 2020, markups can have a huge impact on your overall returns. For individual investors seeking bond exposure, we almost always recommend that clients purchase bond mutual funds or ETFs instead as a way to reduce cost and improve diversification. More on this later. Where do bonds fit in your portfolio? A great approach to investing for retirement is to aim for growth and income. The idea is to achieve growth with your stocks and income and stability with your bonds. Bonds offer the potential to stabilize a diversified investment portfolio. The reason is that certain types of bonds can be very stable when stock markets decline. Your personal financial goals and preference for risk will dictate how much you may want to allocate toward bonds in your portfolio. Unlike bonds, when you buy stock, you buy ownership in a company and in effect tie your financial future to theirs. If the business does well by selling more of their products and services, you may benefit by seeing the value of your stock increase; if it does poorly, you risk losing some or all of your investment. Stocks tend to be riskier than bonds because you are not guaranteed that the stock will do well. But, you also have the opportunity to enjoy greater growth on your money. Companies sell stock for a lot of reasons. They may want to expand into a new market, develop new products, or even pay off debt. The first time a company sells stock, it’s called an initial public offering or IPO. Determining a “good” price for an individual stock is far from a precise science. That’s why you see wildly different analyst forecasts for the same stock. Picking individual stocks can be a risky business. If you choose a winner, however, the results can be amazing: A $10,000 investment in Google’s 2004 IPO would be worth over $300,000 today. Unfortunately, the vast majority of investors fail to be “good stock pickers”. Substantial research has shown that even the brightest professional investors are unable to consistently identify winning stocks in advance. That’s why we often recommend that our clients purchase diversified mutual funds or index funds instead. What does it mean to diversify your stock portfolio? People often confuse asset allocation and diversification, but they are two different things. Asset allocation can be defined as the right mix of stocks and bonds in your investment portfolio across different asset classes. Asset classes can be described as more narrowly defined segments of stocks and bonds. For example stocks may be broken down further into U.S stocks, non U.S. stocks, small stocks, and large stocks. Bonds may be broken down into short-term maturity bonds, high credit quality bonds, non-U.S. bonds, and U.S. bonds. The right mix of asset classes is your asset allocation. Diversification is choosing different investments within each asset class to spread the risk and boost returns. Here’s why it’s important to diversify your portfolio: Example: Charles is following the asset allocation strategy recommended by his financial advisor of 60% stocks and 40% bonds. From the outside, it looks like Charles is doing a good job balancing his risk. On closer inspection, however, Charles only owns 20 technology stocks. His biggest stock holdings include Google, Amazon, and Apple. 
His bond investments are from the same corporate issuers - Google, Amazon, and Apple! All of his investments are tied to the technology industry and he has too few holdings —his portfolio is not diversified. It may be much better from a risk and return perspective for Charles to further diversify his investments so that many different industries are represented in his portfolio. Moreover, he should own considerably more stock and bond holdings. While there is no guarantee, proper diversification may protect you against downturns in a particular sector or stock, and helps boost your returns with exposure to industries and markets with high growth potential. Understanding mutual funds In the bonds vs stocks vs mutual funds comparison, mutual funds sound the most complicated, but the concept is simple. In a mutual fund, investors pool their money to buy a collection or portfolio of assets. The money in the pool is managed by a fund manager who decides what assets to buy and sell based on the fund’s objectives. Mutual funds may own stocks, but they’re not the same as stocks. When you buy shares in a mutual fund, you don’t actually own shares of the stock it invests in, you own a piece of the fund itself. A mutual fund share price is called the net asset value (NAV), and it’s calculated by dividing the total value of the assets in the fund’s portfolio by the number of outstanding shares. Mutual funds aren’t traded on the stock exchange. When you place an order to buy or sell mutual fund shares, the order is filled after the market closes and the NAV is determined. Different types of mutual funds Mutual funds can invest in any asset class, so you can find bond funds, stock funds, money market funds, funds that invest in commodities such as precious metals or oil and gas, foreign exchange (forex) funds, real estate funds, and even cryptocurrency mutual funds. If you’re interested in exploring growth opportunities in markets with high barriers to entry, a mutual fund is a great way to get your feet wet. Stock funds are one of the most common fund types. They are grouped according to what the investments are based on, such as: Company size, i.e. large-cap or small-cap funds Sector or industry such as health care or technology Location—a single country (Japan, for example), a region (Europe) or global Investing style such as growth funds, value funds, and blended funds It’s possible to find a mutual fund for just about every investing style and objective. What about index funds and ETFs? An index fund is a type of mutual fund that tries to replicate the performance of an underlying stock index such as the Dow, the S&P 500, or London’s FTSE 100. Instead of hiring analysts to pick stocks for the fund, the fund manager simply buys the stocks on the index in roughly the same proportion as the underlying index. Exchange traded funds or ETFs are a type of investment that is similar to a mutual fund, however there are some key differences. For example, an ETF can also be indexed or it can be actively managed. Some invest in commodities; you can even buy ETFs backed by physical gold or silver bullion. ETFs trade on the exchange just like stocks, which means you can buy and sell them during the trading day. ETFs can be either passively or actively managed. What is active vs passive management? Mutual funds are either actively or passively managed. Index funds are passively managed; the fund manager’s job is to make sure the equities in the fund closely match the benchmark index. 
Passively managed funds aren’t out to “beat the market,” they simply want to generate the same returns as the underlying index. If the index declines, the fund manager doesn’t adjust the stock mix in an attempt to improve returns. Actively managed funds aim to beat the market. These funds are usually pegged to an underlying index to measure performance. For example, a fund’s objective might be to outperform the Russell 1000. Its management team relies on in-depth market research, analysis, and forecasting to pick stocks. Fund managers have to take more risk to generate higher returns, and there is more trading activity in these funds compared to index funds. When you’re looking at bonds vs stocks vs mutual funds for your retirement investment strategy, passively managed funds have historically outperformed their active counterparts a majority of the time. Standard and Poor’s produces a scorecard each year that shows how actively managed funds performed compared to their benchmark index. In 2019, 89% of all actively managed domestic mutual funds underperformed their benchmark over a 15-year period. In other words, if you put your money in a low-cost index fund, 9 times out of 10, you’d have better results than someone investing in a high-priced actively managed fund. Which is best: Bonds vs stocks vs mutual funds There’s no single asset class that’s best for every investor. You should base your investments on four criteria: Your age. Younger people have more time to recover if one of their investments doesn’t perform as expected. They can afford to be more aggressive in their stock and mutual fund choices. Length of time until you need the money. If you are saving for college and your child graduates high school in three years, you need safer investments—think bond funds, CDs, and cash—than someone saving for college in 20 years. Income generation. If you’re building a retirement portfolio, you want assets that generate income and preserve your nest egg. Bonds and dividend stocks are good options. Risk tolerance/willingness to tolerate decline. This goes to the heart of who you are as an investor. If you’re the sort of person who panics over a 10% swing in the market, even knowing recovery is likely in a well-diversified portfolio, you won’t be comfortable with an investment plan heavily weighted toward stocks. When it comes to risk in your portfolio, here’s my rule of thumb: Take your maximum tolerable 12-month decline and double it. That’s the percent of your portfolio you may consider investing in stocks and equity funds. The rest should be in safe assets such as bonds, bond funds, and money market funds. Example: Bill and Catherine are approaching retirement. They both agree that they would be very uncomfortable if their nest egg lost 25% in a year. Bill and Catherine may want to limit their stock exposure to no more than 50% of their retirement portfolio. Building your portfolio It takes time and effort to build a well-diversified portfolio; there are over 10,000 stocks availble worldwide and 8,000 different mutual funds. It’s a huge task to compare them all and find the ones that align with your values, goals, and investment objectives. Keeping expenses low is an essential part of building a portfolio that lets you retire with confidence. Management fees, transaction costs, tax liabilities all drag on performance. Even a 0.5% difference in returns has huge consequences over the long term. 
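A rough compounding sketch shows how large that gap becomes. The starting balance, gross return and cost levels below are assumed purely for illustration:

```python
# How a 0.5% annual cost difference compounds over the long term.
# Starting balance, return and cost levels are assumed for illustration only.

balance = 500_000        # assumed starting portfolio
gross_return = 0.06      # assumed annual return before costs
years = 30

low_cost = balance * (1 + gross_return - 0.002) ** years   # 0.2% all-in costs
high_cost = balance * (1 + gross_return - 0.007) ** years  # 0.7% all-in costs

print(f"Low-cost portfolio after {years} years:  {low_cost:,.0f}")
print(f"High-cost portfolio after {years} years: {high_cost:,.0f}")
print(f"Difference from a 0.5% cost gap:         {low_cost - high_cost:,.0f}")
```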
Once you build your portfolio, it needs regular attention to make sure your investments are performing as expected and to replace those that no longer match your objectives. It needs to be rebalanced periodically to make sure your portfolio is in alignment with your asset allocation strategy. At Covenant Wealth Advisors, we help you build a portfolio to help you achieve your investment goals. We take the time to get to know you as a person—find out what’s important to you—so your investments not only meet your financial needs, they align with your values. We also offer expert advisory and management services to make sure your investments continue to work for you. Covenant Wealth Advisors is an independent, fee-only advisory firm. We offer unbiased recommendations and transparent fees. If you’d like help building and growing your investments to help you reach your financial goals, get in touch for a free consultation. Author: Mark Fonville, CFP® Mark is a CERTIFIED FINANCIAL PLANNER™ and President of Covenant Wealth Advisors, a wealth management and fee-only financial planning firm in Williamsburg and Richmond, VA. Disclosures: Covenant Wealth Advisors is a registered investment advisor. Past performance is no guarantee of future returns. Investing involves risk and possible loss of principal capital.The views and opinions expressed in this content are as of the date of the posting, are subject to change based on market and other conditions. This content contains certain statements that may be deemed forward-looking statements. Please note that any such statements are not guarantees of any future performance and actual results or developments may differ materially from those projected. Please note that nothing in this content should be construed as an offer to sell or the solicitation of an offer to purchase an interest in any security or separate account. Nothing is intended to be, and you should not consider anything to be, investment, accounting, tax or legal advice. If you would like accounting, tax or legal advice, you should consult with your own accountants, or attorneys regarding your individual circumstances and needs. No advice may be rendered by Covenant Wealth Advisors unless a client service agreement is in place.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9607478380203247, "language": "en", "url": "https://www.willahjosephmudolo.com/porters-five-forces-framework/", "token_count": 785, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.047607421875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:c3358449-b161-4d63-a082-529df8b84093>" }
A vital task that a business has to perform continually is to analyse the competition to identify emerging threats and find ways to address them. Knowing who the competition is and how it affects the business is critical to the present and future life of a business. Whether the business is a one-person shop or a multinational corporation, analysing the competition is key to success. Understanding the competitive landscape helps a business to come up with a clear strategy. One of the tools that a business can make use of is Porter’s five forces framework (or model), so named after Michael E. Porter, that describes the five specific factors that have an impact on a company’s success. Mr. Porter believed that understanding the competitive intensity of an industry made it easier for firms to determine its attractiveness (profitability). Since the publication of the five forces, the model has become one of the most highly regarded strategy tools for businesses. The model has helped numerous entities understand their industries, and it is a popular reading topic for entrepreneurs across the world. Among them is Willah Joseph Mudolo, a co-founder and President of Global Operations for ADF Group. Mr. Mudolo is widely recognised as a business start-up specialist in emerging markets. The use of Porter’s five forces has gone beyond assessing competition, and firms often use it to understand whether new services or products will have the desired impact. By understanding where the power is in a business context, the model can be used to identify a firm’s strengths (to build on), weaknesses (to improve upon) and potential mistakes (to avoid). The five forces identified are competitive rivalry, buyer power, supplier power, threat of substitution and the threat of new entrants. They are explained further. This force explains the existing competition that a business faces in the market and their capacity to undercut the business. Where there are a lot of competitors of relatively equal size and power, the rivalry is bound to be stiff. The same is also noted in a slow-growth industry or where customers can switch between competitor products at little cost. Significant levels of rivalry are often characterised by price and advertising wars that can hurt a business’s profitability. Additionally, the intense competition is notable where leaving the industry is costly, forcing competitors to remain even if profit margins decrease. This force analyses the ability of customers in an industry to exert pressure on companies within it to lower prices for products and services. It happens when the buyer has numerous options to choose from so that if buyers come together, they can bargain for lower prices. For business strategists and entrepreneurs, the work lies in understanding the market and the client base they serve. A powerful customer base can negotiate for better deals while having independent customers can make it easier for the business to charge higher prices. For a company to produce, it requires raw materials, some of which are sourced from suppliers. Businesses know the importance of consulting and researching well to find the best suppliers to provide inputs at the best prices. However, in an industry where suppliers are few, they may have more power over the rates charged for the supply of inputs. Threat of Substitution Substitute products and services that customers can use in place of a business’s product offerings pose a significant threat to the business. 
Companies that provide products and services with no close substitutes are better placed to sell these at favourable terms, as opposed to firms that deal in goods with close alternatives available. The Threat of New Entrants An attractive industry is likely to have new entrants, and when there are too many new players, profitability is affected. If it takes little investment and effort to enter into a market and compete, then rivals can quickly take a firm’s place in the industry. If there are substantial barriers to entry, a firm can keep its position and make the most of it.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.938302755355835, "language": "en", "url": "http://admin.indiaenvironmentportal.org.in/reports-documents/economic-impact-floods-and-waterlogging-low-income-households-lessons-indore-india", "token_count": 241, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.08447265625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:8ccdc902-6aa4-4589-aac3-0f7140a3a46a>" }
The economic impact of floods and waterlogging on low-income households: lessons from Indore, India As Indian cities grow, urban planners must ensure that basic infrastructure and public services are provided on a sustainable and equitable basis. Access to amenities such as water, electricity, food, drainage, sewerage systems, solid waste disposal, healthcare and transportation are key to the smooth functioning of urban areas. Indore, like several other rapidly growing cities in India, faces the problem of ever-changing land use, the emergence of high-rise buildings and walled townships, and growing informal settlements across the metropolitan area. These developments render the urban poor vulnerable to disease, accidents, loss of assets and daily livelihood struggles, as well as exposure to severe economic and non-economic losses as a result of severe weather events. This study estimates the economic losses suffered by the urban poor in terms of assets and productivity due to climate-induced waterlogging and floods. It examines how the vulnerability of slum dwellers living in informal settlements is exacerbated by a lack of supportive institutional mechanisms, the nature of non-inclusive economic growth, the social exclusion of urban landscapes and discriminative access to public services.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9513497948646545, "language": "en", "url": "http://forestindustries.eu/content/pricing-natural-assets-could-spur-green-growth-world-bank", "token_count": 940, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1708984375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:b5055171-f9f9-49c5-8c64-60493bdfc823>" }
Pricing natural assets could spur green growth - World Bank Declining stocks of forests, farmland, fish and other natural resources threaten to derail economic growth around the world and curb progress against poverty, the World Bank warned on Wednesday. To prevent this, countries need to move toward more resource-efficient “green growth”, said Rachel Kyte, the bank’s vice president for sustainable development. “For the past 250 years, growth has come largely at the expense of the environment,” the bank said in a report. But at the rate natural resources are being used, “we are in danger of undermining the basis on which growth has been achieved,” Kyte warned. Changing that will require policy shifts, such as ending government subsidies that promote wasteful use of fuel and other resources, or investing more in green technologies and infrastructure like urban public transit systems, Kyte said. Equally important will be transforming the way GDP is used to measure economic progress, to put a value for the first time on currently free natural assets, like clean water and standing forests, and the benefits they provide, Kyte said. Coastal mangrove forests in Thailand, for example, are worth about $860 a hectare, the value of their wood. If that wood is cut down and the land turned over to shrimp farming, the value rises to some $9,500 a hectare, Kyte said. But coastal mangroves play a big role in protecting vulnerable communities from coastal flooding, as they trap and slow floodwaters – a function with no economic value up to now. Under a new pricing methodology recently approved by the United Nations, a hectare’s worth of standing mangroves in Thailand has a flood-control value of $16,000. When it comes to determining what to do with coastal mangroves, “you make a different set of decisions with those numbers,” Kyte said. So-called “natural capital accounting” could also put a value on the functions of protected forests, including their role in curbing erosion, providing clean water to cities, supporting wildlife and regulating the rain cycle to ensure enough rainfall for crops. Discussion of natural capital accounting - part of a system of proposed GDP changes called “GDP plus” - has been around for 40 years, Kyte said. But it is now gaining a wide range of new backers, including insurers, who are trying to figure out how to respond to extreme weather, which is being exacerbated by climate change, and the disasters and insurance payouts that come with it. Governments and aid agencies too are trying to find cost-effective ways to lessen the risks of weather and climate disasters – and cutting flooding by preserving mangroves rather than building dikes could be one of them. “It’s folly to keep spending these amounts of money year on year if there was something we could do to make development more resilient in the first place,” Kyte said. That’s particularly true at a time when economies are flat or shrinking, she said. Not all countries are eager to embrace natural capital accounting. Some are concerned that drawing up a balance sheet of natural assets could make them appear less attractive to investors than their competitors. Others argue that, even if a mangrove forest in Thailand earns a high societal value for providing flood protection, that may make little difference to a villager with a machete who’s more concerned about getting firewood for today than protecting her family from any future flood. 
Nonetheless a number of countries – including Colombia, Costa Rica, Madagascar and the Philippines - are already experimenting with the new natural resource accounting, and a few – Botswana, Australia and Spain – have pilot programmes underway. With its huge stores of natural resources, Africa in particular stands to benefit from tallying and protecting its assets, Kyte said. GDP as a measure of economic growth was created during a time of crisis in 1939, as countries facing World War II were trying to figure out how to pay for it, Kyte said. Now, as climate change brings new threats and resources grow scarcer, “we think we’re in a crisis today, of natural resources,” she said. With world leaders set to gather in Rio de Janeiro in June to discuss ways to promote green growth and make better use of the earth’s resources, promoting natural capital accounting may be a way “to move the ball forward”, Kyte said.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9480695724487305, "language": "en", "url": "https://byjus.com/commerce/emerging-modes-of-business/", "token_count": 1413, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.031494140625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:8809bbb2-8a32-4365-8a75-96a095716c44>" }
Gone are the days when we had to plan a visit to the market to buy a single commodity. Nowadays, everything is one click away: place an order from your phone, and the item gets delivered within a few minutes or a day. Online shopping owes its success to its convenience and simplicity, and it is made possible by two closely related concepts, commonly known as e-business and e-commerce. Let's understand each concept in detail.

Introduction to E-Business

E-business, commonly known as electronic or online business, is a business in which transactions take place online. In this transaction process, the buyer and the seller do not engage personally; the sale happens through the internet. In 1996, IBM's marketing and internet team coined the term "e-business".

Features of E-Business

Here are a few features of e-business:

- Easy setup
- No geographic barriers
- Flexible trading hours
- Cheap marketing policy
- No interaction between buyer and seller
- Delivery of goods takes extra time
- Transaction risk is more prominent than in traditional business
- People can buy any goods and services from anywhere and at any time

Advantages of E-Business

There are various e-business advantages, but the most notable points are mentioned below.

- Easy to Organize – An online business can be set up at home, provided the necessary software, the internet, and a device are available.
- More Economical – Online business is more affordable, as the cost required to set up a traditional business is much higher.
- No Geographical Barriers – There are fewer geographical boundaries in e-business, as anyone can buy anything from anywhere at any given time.
- Government Subsidies – Online businesses receive advantages from the government, as they promote digitalization.
- Flexible Trading Hours – Since the internet is available at all times, anybody can buy and sell goods or services through the business website at any given point.

Disadvantages of E-Business

Though e-business has many advantages, it also has certain disadvantages. Some of the barriers are mentioned below:

- No Personal Connection – There is no personal touch, and the customer cannot feel and touch the product when buying. This makes it difficult for the customer to verify the quality of the product, whereas in a traditional business the customer can make contact with the seller or salesperson and develop trust.
- Delivery Time – It takes time to deliver the products compared to a traditional business, where you see the product and buy it. This delivery duration often discourages customers from buying online. However, e-businesses like Amazon are promising one-day delivery times.
- Security Issues – In online business, people are often caught up in scams, as it is easy for hackers to obtain a customer's financial details.
Few examples of e-commerce are online shopping, online ticket booking, online banking, social networking, etc. The essential requirement to operate e-commerce is a website. After that, selling, marketing, advertising, and conducting transaction are done through the internet. Types of E-Commerce - Business-to-Business (B2B) – When the selling and buying of goods and services are between businesses. Manufacturer and wholesalers operate with this kind of electronic commerce. Example: Oracle, Alibaba, Qualcomm, etc. - Business-to-Consumer (B2C) – Here, the goods are commercially traded by the business to customer. Such as Intel, Dell etc. - Consumer-to-Consumer (C2C) – The commercial business is done between customer to customer. Example: OLX, Quickr etc. - Consumer-to-Business (C2B) – The business transaction happens between customer to the business. Introduction to Outsourcing It is a process where the business operation or particular business activity is given as a contract to the specialized agency. Most of the company outsource security, sanitation, pantry, household, etc. by making a formal agreement with that particular agency. The agency then assigns the workforce as required by the company and charge them for their assistance. Across the globe, outsourcing business is rising rapidly, and with its help, firms can focus on their core operations and gain more profits and enhance product quality. Advantages of Outsourcing Few advantages of outsourcing are given below. - Cost-Benefit – No need of hiring anyone in‐house permanently. Hiring costs are reduced by saving time and efforting on training. - Encourage Employment, Entrepreneurship, and Exports – It encourages entrepreneurship, employment, and exports in the nation from where the outsourcing is made. - Less Labour Cost – The cost of labour is cheaper than the host nation. For instance, In India, there is a significant skilled human resource. Therefore, the labour cost is much less expensive. - Passage to High‐quality Services – Only the skilled individual is given a particular task resulting in better service and few errors. - Low investment – The company do not have to invest in the latest software, infrastructure, and technology themselves, and let the outsourcing partner manage the complete infrastructure. - Enhanced Performance – The outsourcing results in improved productivity in the complementary areas of a company. Explore link: What is 4 p of Marketing? Disadvantages of Outsourcing - Less Customer-Centric – An outsourced merchant caters to multiple companies, so they lack concentration on an individual company’s tasks. - Security Threat – A company’s confidential news may be leaked, so there is a safety concern and may result in a company’s losses. - Inferior Services – Sometimes, outsourcing includes sub-standard quality service and extends of delivery time. - Ethical Problems – Outsourcing creates employment and generates capital for another nation instead of the origin country. - Lack of Communication – Can include disagreement in various steps of operation due to the lack of communication, delayed services, and poor quality. The above mentioned is the concept, that is elucidated in detail about ‘Emerging Modes of Business’ for the Commerce students. To know more, stay tuned to BYJU’S.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9361345171928406, "language": "en", "url": "https://eco-act.com/climate-neutral/becoming-a-climate-neutral-business/", "token_count": 390, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.0255126953125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:c5892d47-247f-49de-9336-cd30d9a150d0>" }
Becoming a climate neutral business

We know that, more than ever, companies are taking their impacts on the climate seriously. Research in 2016 demonstrated that 99 of the largest public companies in the UK are reporting carbon emissions in their annual reports. Reporting carbon is the first step on the journey towards climate neutrality, with the ultimate goal of a low carbon world. By aiming for carbon neutrality, companies are setting their ambitions to get as close to this goal as possible. Becoming a climate neutral business allows an organisation to reduce operational spend, increase resilience to changing legislation and ensure the business is future-proofed.

How can I have no negative impact on the environment?

Our guide to becoming a climate neutral business is for any organisation that wants to create long-term sustainable growth whilst limiting climate change impacts. Initiatives such as the United Nations Climate Neutral Now campaign and the International Air Transport Association's (IATA) commitment to carbon neutral growth by 2020 demonstrate the appetite to look towards climate neutrality as a target for industry and businesses.

What is climate neutrality?

The term climate neutrality indicates that an organisation or product has contributed no net greenhouse gas emissions to the atmosphere, i.e. its impact on climate change is zero.

This guide to becoming a climate neutral business outlines the straightforward steps to reduce your environmental impact:

- Understanding your carbon emissions and climate impacts
- Planning for the future based on your emissions and business needs
- Strategy and target setting
- Beginning on the path to carbon neutrality by reducing emissions
- How to ensure carbon neutrality - compensating for unavoidable emissions

This process is fairly simple, but the decision making behind the steps can be complex. Download our eBook today to find out some of the things to think about on your journey to becoming a climate neutral business. Over the next 4 weeks the lovely people of Carbon Clear will help you to understand the mechanisms available to reach climate neutrality in our weekly blog.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9429250955581665, "language": "en", "url": "https://equityways.com/2018/09/03/mergers-and-acquisitions/", "token_count": 912, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.046630859375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:f23094f3-c990-4e7f-9fad-4590eb9678f7>" }
Mergers and acquisitions (M&A) involve the process of combining two companies into one. The goal of combining two or more businesses is to try to achieve synergy – where the whole (the new company) is greater than the sum of its parts (the former two separate entities).

☑ Mergers occur when two companies join forces. Such transactions typically happen between two businesses that are about the same size and which recognize the advantages the other offers in terms of increasing sales, efficiencies, and capabilities. The terms of the merger are often fairly friendly and mutually agreed to, and the two companies become equal partners in the new venture.

☑ Acquisitions occur when one company buys another company and folds it into its operations. Sometimes the purchase is friendly and sometimes it is hostile, depending on whether the company being acquired believes it is better off as an operating unit of a larger venture.

⏩ Benefits of Combining Forces

1 Improved economies of scale. By being able to purchase raw materials in greater quantities, for example, costs can be reduced.
2 Increased market share. Assuming the two companies are in the same industry, bringing their resources together may result in larger market share.
3 Increased distribution capabilities. By expanding geographically, companies may be able to add to their distribution network.
4 Improved labor talent. Expanding the labor pool from which the new, larger company can draw can aid in growth and development.
5 Enhanced financial resources. The financial wherewithal of two companies is generally greater than one alone, making new investments possible.

⏩ Potential Drawbacks

1 Large expenses associated with buying a company, especially if it does not want to be acquired.
2 Higher legal costs, which can be exorbitant if a company does not want to be acquired.
3 The opportunity cost of having to forego other deals in order to focus on bringing two companies together.
4 The possibility of a negative reaction to a merger or acquisition, which drives the company's stock price lower.

TATA STEEL-CORUS: Tata Steel is one of India's biggest steel companies, and Corus was Europe's second-largest steel company. In 2007, Tata Steel took over the European steel major Corus for $12.02 billion, making the Indian company the world's fifth-largest steel producer. Tata Steel was a low-cost steel producer in a fast-developing part of the world, while Corus was a manufacturer of high-value products in a part of the world demanding value products. The acquisition was intended to give Tata Steel access to the European markets and to achieve potential synergies in the areas of manufacturing, procurement, R&D, logistics, and back office operations.
Imperial Energy comprises five independent enterprises operating in the Tomsk region, including two oil- and gas-producing enterprises. Oil and Natural Gas Corp. Ltd (ONGC) took control of Imperial Energy, a UK-based firm operating in Russia, for $1.9 billion in early 2009. This acquisition was the second-largest investment made by ONGC. If you liked this post, we suggest you check out our video on mutual fund categories according to SEBI and on what an equity mutual fund is.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9378466606140137, "language": "en", "url": "https://tvon.com/blogs/jewelry-blogs/diamond-appraisal", "token_count": 2225, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.01708984375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:2820e579-15e3-4d50-9abb-3797eb9cfd64>" }
With diamonds, the stakes are nearly always high. Mistakes or oversights when you buy can be very costly. Lack of adequate insurance after you buy can also be costly. Educating yourself about diamonds is therefore very important and that’s where we help. If you’ve already familiarized yourself with the types of diamonds available, you may want to know the market value of your diamond. You do this with a diamond appraisal. Jewelry appraisal usually requires the professional skills of a gemstone expert. But what do they look for when appraising a diamond? What is the procedure? What does it tell you about your diamond? And what can you use the appraisal for? The following diamond appraisal guide covers the key information you need to know from the start. What is a diamond appraisal? A diamond appraisal considers the quality and appearance of a diamond, with a view to determining its current monetary value. As well as its weight, shape, and general measurements, it takes into account the “4 Cs” of any diamond, namely: Your appraisal will not go into great detail about these factors. Instead, the appraiser will use their observations of these characteristics to include an estimate of the value of the diamond, based on the current market value. If the diamond is set into jewelry, an appraisal will also be able to consider the weight, purity, and styling of the ring or other piece of jewelry, before determining the value. A full diamond appraisal is normally arranged through two main sources: - The retailer - An independent diamond appraiser What happens during an appraisal? Gemologists are usually in charge of performing diamond appraisals. They follow a series of steps to create an appraisal document that can be used for a number of purposes, which are discussed later. These steps usually include: - Cleaning the diamond until it is completely clear - Checking for imperfections and inclusions using a microscope - Checking for the serial number (which is engraved in the diamond and requires a 20x magnifying glass or microscope) - Determining the color, using a daylight lamp and a special white cardboard - Checking the carat weight of the diamond using very sensitive scales (when the diamond is set, a “leveridge” is used as a counterweight) - Assessing the quality of the cut of the diamond - Determining the amount of fluorescence (gas content) - Estimating the value of the diamond - Preparing the appraisal document What information does a diamond appraisal include? The following information may be included in a typical diamond appraisal: - Certificate number - Name and address of the purchaser - Invoice number (proof of purchase) - Article number - Description of the diamond from the vendor - Weight of the diamond(s) - Color grade - Clarity grade (including a description of impurities and imperfections) - Cut: diamond shape and – if possible – the quality of that shape - Retail value - Name(s) of the appraiser(s) Why do you need a diamond or jewelry appraisal? As mentioned, mistakes with diamond jewelry can be very costly. A diamond appraisal is the best way to evaluate the true monetary value of a diamond and minimize the risks associated with owning it. It provides an accurate indication of the quality, condition, and value of the diamond and/or piece of jewelry so that you can arrange the right level of insurance for your own peace of mind. Say you want to insure your diamond ring… Firstly, your insurance company will almost certainly request an appraisal before insuring you. 
Besides, you need to know how much to insure it for in case it gets lost or stolen. Note that some insurers will insist on a re-appraisal of your diamond ring every couple of years because of the regular changes in the diamond market. Your appraisal will be based on the current value of your diamond ring – which can go up or down depending on the market. This also means that the appraisal may value your ring at more or less than what you paid for it, even if that purchase was quite recent. Make sure that you communicate the purpose of the appraisal when you order it as it may affect the estimated value of the jewelry. Does every diamond have an appraisal? Not every diamond or piece of diamond jewelry comes with an appraisal document. If you need an appraisal, you may need to request it or arrange it yourself through independent professionals. Many retailers will provide a detailed diamond appraisal automatically when you buy a piece of diamond jewelry from them. This will usually be suitable for insurance purposes. However, it is your prerogative to also get your diamond appraised independently. Professional diamond appraisers When you order an independent diamond appraisal, it will usually be performed by skilled and certified professional gemologists. They have access to the necessary testing equipment to accurately assess diamonds in sufficient detail to appraise them. However, do your homework first as not everyone who appraises diamonds is suitably qualified. Organizations like the Independent Certified Gemologist Appraisers and Certified Gemologist Appraisers of the American Gem Society are independent bodies that you can trust to arrange high-quality, professional appraisals. You can find a fuller list of independent diamond appraisers here on the IGA website. Most professionals from such associations have diplomas in gemology and advanced training in jewelry and diamond appraisals specifically. They are generally not associated with the sale of diamonds, so you can trust that you will receive a fair and accurate appraisal of your jewelry. Diamond appraisals vs diamond certificates When a professional and independent diamond appraisal is complete, you will receive a diamond appraisal document. This document will contain all the information necessary about the quality and value of your jewelry for insurance purposes from a trusted and reliable source. Though it is sometimes called a “diamond appraisal certificate”, this is quite misleading. A diamond certificate usually refers to something other than an appraisal. It’s important to be aware of the differences. You already know what a diamond appraisal covers and roughly how it’s put together. So, what is a diamond certificate? A diamond certificate is more detailed than an appraisal; and, unlike appraisals, every diamond should have one. The information it contains is compiled only by a diamond laboratory that specializes in the inspection of loose diamonds and gemstones. A certificate contains far more intricate information than a simple appraisal – for instance, a consideration of the internal quality of the diamond as well as the external appearance. Highly specialized instruments are used by technicians in order to magnify the diamond so that its physical nuances can be observed. They will grade and list all the key scientific information and key characteristics of the diamond (the four Cs) as well as any imperfections observed. Specialists will test and assess each characteristic. 
The information in the certificate will remain valid as long as the diamond itself because a diamond doesn’t change its characteristics unless it is damaged. As there is no estimation of the value in a diamond certificate, it is not dependent on market conditions. The laboratories used to test the diamonds and produce these certificates are usually independent of jewelers. The certificate services provided by IGI and GIA are good examples. Beware of diamond certificates issued directly by jewelers or diamond salespeople as, most times, they will not independently and professionally assess the diamonds. Instead, the certificate will come from the retailer’s own agents and therefore the grading conclusions cannot be trusted. Note that buying a diamond that has an appraisal but no certificate could be asking for trouble. You may end up paying too much because of the lack of accurate and independent grading. Note also that only polished diamonds are either certified or appraised – not rough diamonds. Diamond or jewelry appraisal cost A diamond/jewelry appraisal for insurance purposes should cost the same as for any other purpose. As already pointed out, an accurate, thorough appraisal is generally required for any worthwhile insurance (after all, what’s the point of insurance that doesn’t cover the cost of reimbursing lost jewelry?) So, the cost of a diamond appraisal is one that you’ll have to meet if you want to get your jewelry insured properly. Fortunately, it doesn’t cost the earth. And it doesn’t cost more for more valuable diamonds as you’re essentially paying only for a professional appraiser’s time. You should pay by the hour or a flat fee. If the appraiser wants to charge a percentage of the value of the diamond, it may be best to walk away as there may be a temptation for the value to be artificially driven up. Hourly rates should vary from around $50 to $150 depending on the level of experience of the appraiser and the complexity of the piece of jewelry. Can you use an online diamond appraisal calculator? Since the explosion of online jewelry sites, there are now more online diamond appraisal calculators than ever. With these, you usually select your diamond shape and the carat weight of your diamond, and provide details on the color and clarity and then hit GO for an instant estimate of value. These can be a fun way to arrive at a very rough idea of the value of your diamond. However, online diamond appraisal calculators are notoriously inaccurate and they should never replace a thorough, professional, and independent assessment. Summary of the benefits of a diamond appraisal A diamond appraisal is usually undertaken to establish the approximate value of the diamond or piece of jewelry that it is set in. There are several benefits associated with getting a diamond appraisal: - It helps you secure the right level of insurance - It provides you with valuable information about the appearance and quality of your diamond/jewelry - It can prevent you from selling your jewelry for a lower price than necessary - During the examination of a piece of jewelry, appraisers may find loose diamonds or other gemstones that you can fix before they fall off and you lose them How do arrange a diamond appraisal? The most important aspect of arranging a diamond appraisal is selecting a reputable organization to carry out the appraisal. The insurance company will insist on this. An appraisal may be issued for free when you buy a diamond with a retailer - but only with some retailers. 
You may need to research the types of organizations mentioned above, choose one yourself, pay the fee, and await your appraisal. Need a diamond appraisal? With the above guide, you should be clearer about what a diamond appraisal can and can’t do and how to go about organizing one.
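The online calculators described earlier combine a handful of inputs (shape, carat weight, color and clarity) into a single rough estimate. The sketch below shows one way such a tool might be wired together; every multiplier and the base price are invented placeholder values rather than real market data, and, as the guide stresses, nothing like this can substitute for a professional, independent appraisal.

```python
# Illustrative sketch only: a toy estimator in the spirit of the online
# "diamond appraisal calculators" described above. All numbers are
# hypothetical placeholders, not real market prices.

# Hypothetical adjustment factors for shape and for two of the 4 Cs.
COLOR_FACTOR = {"D": 1.30, "G": 1.00, "K": 0.70}
CLARITY_FACTOR = {"IF": 1.40, "VS1": 1.10, "SI1": 0.90, "I1": 0.60}
SHAPE_FACTOR = {"round": 1.00, "princess": 0.90, "oval": 0.92}

BASE_PRICE_PER_CARAT = 4000.0  # placeholder figure in USD, not a real quote


def rough_estimate(shape: str, carat: float, color: str, clarity: str) -> float:
    """Return a very rough value estimate from shape, carat, color and clarity."""
    factor = (SHAPE_FACTOR.get(shape, 0.85)
              * COLOR_FACTOR.get(color, 0.80)
              * CLARITY_FACTOR.get(clarity, 0.80))
    # Price per carat tends to rise with stone size, so scale it with carat weight.
    price_per_carat = BASE_PRICE_PER_CARAT * (1 + 0.5 * (carat - 1.0))
    return round(max(price_per_carat, 500.0) * carat * factor, 2)


if __name__ == "__main__":
    # Example: a 1.2-carat round diamond, G colour, VS1 clarity.
    print(rough_estimate("round", 1.2, "G", "VS1"))
```

A lookup table like this can only produce a ballpark figure: an appraiser's judgment of cut quality, fluorescence, condition and setting cannot be reduced to a handful of multipliers, which is exactly why the article recommends a professional appraisal for insurance purposes.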
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9455249309539795, "language": "en", "url": "https://www.entrepreneur.com/article/296059", "token_count": 1191, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.2373046875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:4e9fa03f-d90e-4bcc-b98b-bf5ae386f32b>" }
What You Need To Know Before Starting A Franchise Business In basic terms, a franchise is a license, or permission, to use the name, products, and trademark of an existing business. The party that grants the franchise is known as the franchisor and the party taking the franchise is known as the franchisee. Once the franchisor grants the franchise, the franchisee has the benefit of the franchisor's brand, can start selling the franchisor's products, and gains access to the franchisor's know-how, trademarks and other resources such as accounting services, logistical services and professional advisors. Well-known franchises include KFC, Pizza Hut, McDonalds, Hertz, Subway, Papa Johns, Pet Depot, and Broccoli Pasta and Pizza. Franchises represent a convenient way for someone to start their own business: almost everything needed to operate the business is either supplied or communicated by the franchisor. However, there are several pitfalls that can affect both franchisors and franchisees, and the purpose of this article is to cover some of the major issues. The franchise process is usually dictated by the franchisor, who lays down the procedure. A franchisee often has no choice but to accept the process required by the franchisor. A typical process comprises the following steps: - approach by the potential franchisee to the franchisor. - due diligence and vetting of the potential franchisee by the franchisor. - due diligence by the franchisee on the franchise. The franchisee must be confident that the franchise is profitable and suitable for him/her. - approval by the franchisor, often with certain conditions that need to be satisfied. - the franchisor sends the potential franchisee the franchise agreement to sign and return. - both parties start the franchise. The franchise agreement is a very important document and it governs the legal obligations of the franchisor and the franchisee. It can often be very lengthy and complex, and therefore both parties should take legal advice from lawyers specializing in franchises. If you are a franchisee, the document will often contain onerous requirements on you. You should obtain legal advice so that you are fully aware of what your obligations will be. The key terms are as follows: - Payment provisions: The franchise agreement will usually require various payments to be made to the franchisor during the franchise. They are usually the initial sum, the management fee, and the advertising fee. - Term: This states for how long the franchise is being granted. - Intellectual property: This deals with any trademarks, patents or copyright of the franchisor and the products/services, and how the franchisee is allowed to deal with them. - Supplies: It may be that the franchisee has to purchase all items sold by the franchise from the franchisor. - Confidentiality: This goes with the intellectual property provisions and would usually state that any information or document provided by the franchisor is to remain confidential. - Guarantee: Often the franchisee will be a company; that is, the business owner will set up and incorporate a company for running the franchise. If this is the case, the franchisor will often require the business owner of the franchisee company to give a personal guarantee.
It is essential that the business owner takes independent advice before giving any guarantee. - Accounting records: If the franchisor requires a management fee to be paid, then the franchise agreement will have detailed provisions regarding the accounting formalities of the franchise. The franchisee will have to ensure that a full set of records is kept, that a firm of competent and reputable accountants audits the franchise regularly, and that the franchisor has a right to inspect the accounts and see copies of invoices/receipts. - Employees: The franchisor might have particular requirements for employees, such as qualifications or particular experience, or they may have to pass a training program designed by the franchisor. - Software: If the equipment supplied by the franchisor uses particular software, the franchisor will normally require the franchisee to agree to the terms of the software license. - Property/premises: With some franchises, the franchisor will insist on acquiring the property and then leasing it to the franchisee. A related matter is the fitting out of the franchisee's premises: some franchisors insist on designing and then fitting out the premises to their specification. - Assignment: This part of the franchise agreement stipulates the process if the franchisee wishes to sell or dispose of the franchise. This may include provisions regarding the premises as well, but often it will just state what a third party wishing to buy the franchise has to do. - Non-competition: The franchisor will want to ensure that after the franchise has ended, the franchisee does not compete with the franchise or the franchisor. Therefore, the franchise agreement will have provisions restricting the franchisee from such activity, often covering both a geographic area and a period of time. At TWS Legal Consultants, we believe in partnering with our clients. With franchisors, we can be involved from the beginning of the process, sitting with our clients and other professionals to devise the most appropriate structure and documentation. Doing so will improve the chances of a good relationship between franchisor and franchisee, and thereby pave the way for a thriving business.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9506691098213196, "language": "en", "url": "http://hawaiifreepress.com/ArticlesMain/tabid/56/ID/21182/Hawaii-GE-Tax-Takes-8360-from-Family-of-Four.aspx", "token_count": 567, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1552734375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:688e83e3-f1db-4e6c-b211-2a33656f0da5>" }
States Where Americans Are Paying the Most Taxes by Evan Comen and Thomas C. Frohlich, 24/7 Wall St., February 16, 2018 In the U.S. federalist system, each state government decides how to generate revenue — that is, which taxes to collect, and how. No state tax code is identical and, largely as a result, what the average American pays annually in taxes varies from state to state. 24/7 Wall St. reviewed the tax burden of residents in each state — the portion of income that goes to state and local governments' taxes — from the report, "Facts & Figures 2017: How Does Your State Compare?" provided by tax policy research organization the Tax Foundation. These tax burdens do not include the federal taxes paid by all Americans regardless of state. According to the report, tax burdens in the 2012 tax season were as low as 6.5% in Alaska and as high as 12.7% in New York (Hawaii: 10.2%). In addition to federal, state, and local taxes, Americans pay taxes to other states. Out-of-state visitors pay sales taxes as tourists, investors pay capital gains taxes on investments in other states, and drivers filling up at gas stations in other states pay those states' excise taxes. For this reason, the tax burden is not always a perfect reflection of taxes collected. Approximately 78% of the taxes Americans pay go to their own state and local governments. The variation in tax burden between states is due largely to differences in each state's tax code. While most states collect income, property, and sales taxes, among several others, not all states collect all of these taxes, or at the same rates. High-tax-burden states collect more taxes and at higher rates, while lower-burden states collect fewer taxes and at lower rates. For example, in the 10 highest-burden states, individual income tax collections per capita in fiscal 2015 exceeded the national average of $967. By contrast, five of the 10 states with the lowest tax burden collect no income tax. Similarly, property taxes tend to exceed the national average in high-burden states, while they tend to be lower in states at the other end of the tax burden spectrum. Read the full report for the complete state-by-state rankings. Hawaii (13th-highest tax burden) - Taxes paid as pct. of income: 10.2% - Income per capita: $50,363 (18th highest) - Income tax collections per capita: $1,389 (10th highest) - Property tax collections per capita: $980 (17th lowest) - General sales tax collections per capita: $2,090 (the highest)
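The "tax burden" quoted above is simply state and local tax paid expressed as a share of income, so the Hawaii figures can be related to dollar amounts with a few lines of arithmetic. The sketch below uses only the 10.2 percent burden and $50,363 income per capita cited in the article; treating a family of four as four per-capita shares is a rough illustration, not an additional figure from the report.

```python
# Illustrative arithmetic using the Hawaii figures quoted above.
income_per_capita = 50_363          # dollars, as cited in the article
tax_burden = 0.102                  # 10.2% of income to state and local taxes

state_local_taxes_per_person = income_per_capita * tax_burden
# Rough illustration only: assumes each family member carries one per-capita share.
family_of_four = 4 * state_local_taxes_per_person

print(f"Per person:     ${state_local_taxes_per_person:,.0f}")   # ~ $5,137
print(f"Family of four: ${family_of_four:,.0f}")                 # ~ $20,548
```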
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9324907660484314, "language": "en", "url": "https://blog.coleintl.com/blog/blockchain-and-container-shipping", "token_count": 482, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0108642578125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:5a8c2950-08e4-4683-9328-fdd97b3a9610>" }
What is blockchain? Blockchain technology is a computer-based open-source system for undertaking and tracking transactions. With an agreed-to network of interconnected participants, a blockchain eliminates the need for third party oversight traditionally provided by a bank or online tracking portal. The concept was conceived of to support the international online currency, bitcoin. In blockchain, the management of all transactions – most often financial or logistical – takes place collectively by the network of participants in a peer-to-peer environment. Because all participants can see – and need to agree – before a transaction can be confirmed or modified, transparency is increased, which helps to: - reduce fraud and errors - reduce the time that goods spend in the transit and shipping process - improve inventory management and, ultimately - reduce waste and cost. Is it more secure? There are varying opinions, but most say yes. By storing data across the network and employing digital encryption, it is generally believed that the security risks associated with traditional centralized storage systems are reduced. Blockchain & Shipping Blockchain technology could be the next big thing to happen in the shipping industry. It provides an exciting opportunity to go paperless while allowing all parties (seller/buyer of cargo, ship owner, charterer, bank, agent, customs officials, port authority, etc.) to benefit from a transparent and secure online environment throughout the supply chain. Blockchain can allow all stakeholders to: - schedule and track physical transactions - exchange and store information - meet their contractual obligations - give and accept instructions and - securely exchange payments. Calculations, approvals and other transacting activities could eventually become automated and paperless, too. It’s already happening Earlier this year, shipping giant Maersk and software company IBM teamed up to conduct an end-to-end digitized supply chain pilot using blockchain technology. By all accounts, it was a success and both companies are optimistic about this system increasing efficiency and cutting costs of shipping. Maersk has already committed to moving and tracking 10 million of its 70 million container shipments by the end of 2017 using blockchain. This is new. We’re learning, too. Email us today to learn more about blockchain and the role it can play in modernizing your shipping business. Information provided by: Canadian Customs Consulting Dept. - Cole International
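The record-keeping idea described above, a shared, append-only ledger in which every entry is cryptographically tied to the entry before it, can be illustrated with a small sketch. This is a toy, single-machine hash chain: it leaves out the peer-to-peer agreement, digital signatures and access controls that a production platform such as the Maersk/IBM pilot would depend on, but it shows why tampering with an earlier shipping record is detectable.

```python
# Minimal illustrative hash chain: each record embeds the hash of the
# previous record, so tampering with any earlier entry is detectable.
import hashlib
import json
import time


def make_block(data: dict, prev_hash: str) -> dict:
    """Create a block whose hash covers its payload and its predecessor."""
    body = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body


def verify(chain: list) -> bool:
    """Check that every block still matches its stored hash and its link."""
    for i, block in enumerate(chain):
        expected_prev = "genesis" if i == 0 else chain[i - 1]["hash"]
        body = {k: v for k, v in block.items() if k != "hash"}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev_hash"] != expected_prev or block["hash"] != recomputed:
            return False
    return True


# Example: three invented shipment events appended to the chain.
chain = []
for event in [{"container": "MSKU123", "event": "loaded, Shanghai"},
              {"container": "MSKU123", "event": "departed port"},
              {"container": "MSKU123", "event": "customs cleared, Rotterdam"}]:
    prev = chain[-1]["hash"] if chain else "genesis"
    chain.append(make_block(event, prev))

print(verify(chain))          # True
chain[1]["data"]["event"] = "tampered"
print(verify(chain))          # False: the altered record no longer matches
```

In a real deployment the same check would be run independently by every participant in the network, which is what removes the need for a single trusted record keeper.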
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9514569640159607, "language": "en", "url": "https://blueskywa.com/blog/a-brief-history-of-bear-markets", "token_count": 1411, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.212890625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:06c05d5f-ff32-4f13-9a21-2640b1287833>" }
Any discussion of bear markets should begin with an explanation of what they are. Bear markets are generally defined as an extended drop in multiple broad market indices (e.g. S&P, Dow Jones, NASDAQ) of 20 percent or more. Like their bull market brethren, bears come in two varieties: (1) Secular, or long-term, and (2) Cyclical, or short-term. Interestingly, cyclical markets—bull and bear—can occur within secular markets. A word of caution, don’t confuse bear markets with corrections. Corrections may feature similar declines but they last for much shorter periods of time. Since 1900, there have been five secular bear markets with a duration between four and 20 years plus a smattering of cyclical bears. The time between bear markets has typically been three to five years. Clearly, bear markets are a regular occurrence and investors get paid for assuming risk to get through them. But why are they a necessary evil? NOTE: In the Federalist Papers, Alexander Hamilton once cautioned the citizenry about the “necessary evil” of fielding a national army. A similar analogy can be made when it comes to bear markets: They are a necessary evil* that investors must endure from time to time to earn a return. Bear markets signal a shift away from excessive optimism (irrational exuberance) which tends to drive the market to unrealistic and unsustainable return expectations and prices. When this happens, investors are forced to reevaluate the amount of return they expect for a reasonable level of risk. So, while bear markets can be painful, they do play an important role in calculating the risk premium investors expect when buying stocks. Risk and Expected Returns After the global market bottom in March 2009, we experienced a few stumbles including a debt fight/downgrade and 15 to 20 percent declines in the stock market. Thankfully, these recent declines have been short and shallow. In fact, we have been in a mini-bull market for the last four years. But, when bull markets come to an end—and they will—you are likely to suffer some financial pain. That’s when knowledge of risk and expected returns come into play. Expected returns are based on the amount of risk associated with an investment. On short-term treasury securities (the benchmark of risk-free assets) and some short-term corporate paper, returns are fairly predictable and reliable. If, however, you invest in equities, commodities or longer term bonds, returns are not guaranteed and the value will fluctuate over time. As a result, you must be willing to accept some risk in order to earn a return. It’s important to note that if a return is guaranteed, it is likely to be very low, unrealistic or both. Think about it, if bonds always pay four percent and stocks always return nine percent, you would never invest in bonds because there is no risk in stocks. In the real world, the volatility of returns among different asset classes is what determines risk and potential reward. This is especially true when buying individual companies. For example, in the stock market we attribute more risk to small companies than we do big companies. Not to say there isn’t risk in big companies (all of us remember Enron and WorldCom, two companies that disappeared almost overnight). But the mortality rate among small companies is pretty high, so they command more return in exchange for taking the investment risk. 
Likewise, value companies (companies with low earnings multiples that are largely out of favor with investors) also offer a higher potential return and therefore more risk. This was the case in 2009 when GM was being bailed out and Ford—while not a recipient of bailout money—was experiencing declining sales and had slashed its dividend. Because there was the real chance that the auto companies could go out of business, they had to offer the potential for more return in exchange for the amount of risk associated with owning them. On the flip-side, growth companies typically have lower potential expected returns because investors overpay for growth they assume will continue forever. But there is a finite amount of global GDP (think market share) so growth stocks can’t continue to increase in value at a sustained rate. A good example of this is Apple. Last summer, Apple became the largest company in the world in terms of market capitalization. As we told our clients who asked about it, once a company gets to be number one, there is usually only one direction to go: Down. And, Apple has come down significantly since August of last year. It’s critically important that you understand risk and reward, especially during bear markets or periods of high volatility when the discipline of investing really gets tested. If you are new to investing and didn’t experience the downturn of 2007-2009, or if you think that the Dodd-Frank bill or the Consumer Financial Protection Bureau will protect you from another Lehman Brothers or AIG, think again. Putting your money under your mattress won’t eliminate the risk of someone stealing it; and putting your money in a bank won’t protect you from inflation risk. Risk is everywhere. So the question becomes, “How do I combat volatility and survive bear markets?” First, you want to avoid panicking or worse yet abandoning ship. This happened recently when investors bailed out of Europe because they thought the euro was collapsing; Greece and Italy were going to slide into the Mediterranean, etc. But, European markets actually outperformed the U.S. last year. The best thing you can do is set goals and devise a realistic plan based on time-tested principles to achieve them. You need to visualize what the road looks like as you invest for the future. After all, if you drive from Los Angeles to Miami, you need a road map. And while the route you choose may have unforeseen detours, you generally continue to heading east to reach south Florida. The same is true when it comes to investing. If you believe that over time the stock market adequately compensates investors for the risk they take, you must stick with the investment plan you’ve devised. It is okay, even prudent, to make adjustments to the plan if the risk/reward ratio of various asset classes changes significantly. But you should avoid making short-term or wholesale changes in an effort to time the market. The bottom line is, if you want to achieve your long-term investment goals, you must do three things: (1) Have a realistic plan that takes emotion out of the equation; and make adjustments as market conditions warrant or as your goals change, (2) Don’t take on more risk than you are willing to accept, and (3) Avoid getting into a market timing scheme. If you do these things, you should find long-term success as an investor.
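The 20 percent-from-peak rule of thumb defined at the top of this piece translates directly into a simple drawdown check. The sketch below flags the points at which a price series has fallen at least 20 percent from its running peak; the index values are invented for illustration, and a real analysis would also consider how long the decline lasts, since a brief 20 percent dip is closer to a correction than a true bear market.

```python
# Illustrative check: flag where a series falls 20%+ below its running peak.
BEAR_THRESHOLD = 0.20

def drawdowns(prices):
    """Yield (index, drawdown) pairs, where drawdown is the fall from the peak so far."""
    peak = prices[0]
    for i, p in enumerate(prices):
        peak = max(peak, p)
        yield i, (peak - p) / peak

def bear_market_points(prices, threshold=BEAR_THRESHOLD):
    """Return the indices at which the drawdown is at or beyond the threshold."""
    return [i for i, dd in drawdowns(prices) if dd >= threshold]

# Hypothetical index levels (not real data).
index = [100, 108, 115, 112, 104, 96, 90, 88, 95, 103]
print(bear_market_points(index))   # [6, 7]: down 20%+ from the 115 peak
```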
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9357149600982666, "language": "en", "url": "https://essayintl.com/principle-of-employment-relationship-2080436", "token_count": 1077, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.2255859375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:66cb94bd-f037-4746-908a-93defcb9e160>" }
The paper 'Labor Market in Australia' is a great example of a Business Case Study. In 2008, Australia’ s population was 21 million while the GDP was greater than US$1 trillion. Out of 10 million civilians underemployment, 75 percent are working in the service industry while 20 percent are working in the manufacturing and construction industry. Australia's economy, however, remains highly dependent on export earnings from mining and agriculture, which employs only 5 per cent of total labor force (Greg, Russell and Nick, 2011). Over the last two decades, the Australian labor market has seen great improvements. There has been a creation of over 3.2 million jobs; this has witnessed the reduction of the unemployment rate to current4.8 percent (ABS, 2008). However, the Global Financial crisis has not been very good to the good state of labor markets in the country. Financial crisis usually brings about economic depressions, which are difficult to manage in the short run. The unemployment rate rose by 0.7% over time while the number of unemployed citizens rising by about 17.4% between 2008 and 2009. (ABS Labour Force, Australia, 2009). Since 1970, the country’ s economic base shifted to the service-based industries from the primary industries. With the increasing level of output from the manufacturing sector, employment has not been in correlation with sector growth. Within the period of November 1998 and the same month of 2008, the employment rate declined by 1.7% (ABS, 2008). Since 19th century, Australian’ s economic structure has shifted to a great extent and so have the guidelines that regulate it. The labor market has been changing with the dynamic environment. Several changes especially in the labor-management, the pressures of labor unions and the requirements of what constitutes an appropriate model of employee relations have evolved over time (ABS, 2008). MAJOR CHANGES IN THE LABOUR MARKET Globalization has been the major catalyst in the Australian labor market changes. The Australian government has amended its strategies by what means the economy ought to respond as a response to changes in the international economy. Various changes in the economy have necessitated changes in the labor market (Beardwell, 2010). To explain the changes in the labor market, political and economic trends are of much help. The various governments which have come in place try to bring different amendments in labor laws in order to achieve their promises to the electorates. PERIOD: 1970’ s This is a period of a turning point. Many changes were experienced during this period. The main employment change during this period is the increase in unemployment. With economic decline coupled with the change in policy from the old conservative approach, the labor market also had to change in line with the economy and political movements. The rate of unemployment during this period rose due to economic downturns (Lansbury, 2000). The economic slump had an implication that profits had to reduce to a level where employers lacked what to use in employees’ salaries. Therefore, downsizing was the only available option. The unemployment rate rose during this period to 6.3% (Bamber, 2011). This period also saw the introduction of structural changes in the means of production. There was a high introduction to the use of machinery and computers in the production and service delivery to customers (Gardner, 1997). 
The high reliance on technology during these times saw the further dismissal of unskilled workers who were unable to fully use the new technology in the working sector. The use of the new technology also reduced the labor force given the fact that technology is capital-intensive means of production and hence is faster and requires fewer employments than the other labor-intensive means of production (Lansbury, 2000). ABS. (2008). Labour force survey in Australia of november 2008. Coventry: Australia Bureau of Statistics. Beardwell, J. (2010). Human resource management approach.6th edition, New York: Financial times management. Beverley Rogers, C. (2010). Australian Fair Work Act 2009: With Regulations and Rules,1st edition, New South Wales : CCH Australia Limited. Derek Torrington. (2008). Human resource management. Jakarta: Financial times. Gardner, G. P. (1997). The Employment Relationship. 2nd edition,london: Macmillan Education AU. Greg J.Bamber, R. D. (2011). International and comparative employment relations.5th edition London: Sage puplications ltd. Lansbury, R. D. (2000). Workplace change and employment relations reform in Australia.4th edition,Sydney: university of Sydney. Marilyn Jane Pittard, P. W. (2007). Public Sector Employment in the Twenty-first Century. Canberra: ANU E Press. Peter Ackers, A. W. (2003). Understanding work and employment: industrial relations in transition, illustrated edition. oxford: Oxford University Press. Statistics, A. B. (2007). Year Book, Australia, Issue 89. canberra: Aust. Bureau of Statistics. Australian Industrial Relations Commission. (2006). Historical Overview. The Australian Industrial Relations Commission. Retrieved from
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9542555212974548, "language": "en", "url": "https://www.cashmatters.org/blog/bank-international-settlements-covid-19-cash-and-future-payments/", "token_count": 427, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.09619140625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:7b753503-6bcb-469a-9b3f-715937a35f9e>" }
The BIS Bulletin notes unprecedented concern about whether cash is safe to use in times of corona by analyzing the sharp rise of online searches on the matter. The report answers their questions with a collection of conclusions from medical experts, who have deduced that: "the probability of transmission via banknotes is low when compared with other frequently-touched objects." Ultimately, this BIS report highlights "the value of having access to diverse means of payments", and that means keeping cash an option. - The Covid-19 pandemic has fanned public concerns that the coronavirus could be transmitted by cash. - Scientific evidence suggests that the probability of transmission via banknotes is low when compared with other frequently-touched objects, such as credit card terminals or PIN pads. "A realistic assessment of the risks of transmission through cash is particularly important because there could be distributional consequences of any move away from cash." - To bolster trust in cash, central banks are actively communicating, urging continued acceptance of cash and, in some instances, sterilising or quarantining banknotes. Some encourage contactless payments. - Looking ahead, developments could speed up the shift toward digital payments. This could open a divide in access to payments instruments, which could negatively impact unbanked and older consumers. The pandemic may amplify calls to defend the role of cash – but also calls for central bank digital currencies. "If cash is not generally accepted as a means of payment, this could open a ‘payments divide’ between those with access to digital payments and those without. This in turn could have an especially severe impact on unbanked and older consumers. In London, one reporter (Hearing, 2020) has already noted the difficulties of paying with cash, and the consequences for the 1.3 million unbanked consumers in the United Kingdom. In many of the emerging market and developing economies where authorities have recently called for greater use of digital payments, access to such alternatives is far from universal. This could remain an important debate going forward, potentially asking for a strengthening of the role of cash."
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9657508134841919, "language": "en", "url": "https://www.rand.org/blog/2016/07/fixing-inequality-of-opportunity.html", "token_count": 669, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.46484375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:23cf6a75-10e7-4108-ab29-6b073e1d2d92>" }
In the wake of the 2008 financial crisis and the tepid recovery that has followed, concern about economic inequality has moved into the mainstream of political discussion. Recent research has also fueled concerns about how high and rising income inequality drives an inequality of opportunity for subsequent generations. In what has been dubbed the “Great Gatsby Curve” (or the “Line to Serfdom”), economist Miles Corak (PDF) found that countries with higher income inequality at a given point in time tend to have lower economic mobility between generations. In a separate set of studies, a team of economists from Harvard and the University of California at Berkeley found that, in the United States, metropolitan areas with smaller middle classes and higher income segregation were associated with lower economic mobility between generations. For families at the bottom of the economic ladder, this immobility could put the American Dream out of reach permanently. As most people would readily acknowledge, income and wealth inequality can be directly propagated across generations through inheritance, or the lack thereof. Furthermore, the policies used to address this direct channel, such as inheritance taxes, are fairly obvious and generally well understood. However, in a recent study, we looked into how economic inequality can also be indirectly propagated through investments in human capital (the term economists use to describe a person's accumulated knowledge and skills). For example, affluent parents can pay to send their children to high-quality schools, an option not often available to low-income parents. This disparity has the potential to become a vicious cycle of low income leading to fewer opportunities and vice versa across generations. But while inequality of opportunity adds complexity to the problem of income inequality, it may also offer solutions. For indirect human capital channels, we found a portfolio of policies that evidence indicates could reduce inequality of opportunity. Essentially, policies that narrow the gap in investments in children also reduce the gaps in outcomes between affluent and poor children. Improvements to education, such as additional spending on low-income students, particularly in early childhood, have been shown to result in improved outcomes later in life. Likewise, efforts that improve the health care of low-income children allow them to achieve more in school and thus earn more as adults. More specifically, policies designed to reduce poverty, such as the Earned Income Tax Credit or food stamps, can play a role in improving parental financial stability and therefore the stability of children's upbringing which is beneficial for child development. Additionally, novel ideas, like vouchers for families to move from low-income areas to higher-income areas, could reduce segregation by race and income and have been shown to improve children's earning potential as adults. As new studies warn of high and rising inequality, it is important to remember that commonsense approaches — from improvements in education and access to quality health care to reductions in poverty and increased integration — have been shown to provide young people with better opportunities. Policymakers of all stripes should keep these concepts in mind when stumping about America's economic problems and designing policies to solve them. Carter Price is a mathematician at the nonprofit, nonpartisan RAND Corporation. 
This commentary originally appeared on Spotlight on Poverty and Opportunity on July 6, 2016. Commentary gives RAND researchers a platform to convey insights based on their professional expertise and often on their peer-reviewed research and analysis.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9567921161651611, "language": "en", "url": "https://www.synergiafoundation.org/insights/analyses-assessments/black-swan-decade", "token_count": 1573, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.474609375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:ec2a38e7-b975-4fac-9755-126d179d4bef>" }
The deadly coronavirus is disrupting global supply chains. How will countries cope with this unexpected "decoupling" from China? COVID-19 and the global economy Hubei Province, the epicentre of the COVID-19 epidemic, is a densely populated manufacturing and transport hub. With heavy international travel and the onset of the Chinese New Year, conditions were ideal for the spread of the deadly virus, which would soon threaten to turn into a pandemic. China is a global supply chain hub, and disruption there undermines output elsewhere. Together with Hong Kong, China runs a trade surplus of $301 billion and accounts for about 16 per cent of global exports. It is clear that global supply chains, markets and economies have been affected. Even if the outbreak wanes and these negative demand and supply shocks fade into memory, the damage to China and the global repercussions will be lasting. The global growth rate is reported to have slowed from 3.6 per cent to 2.9 per cent. Epidemiological estimates state that global GDP is likely to shrink by $500 billion, which is the economic price tag for the aftereffects of the coronavirus: workplace absenteeism, reduced productivity, a decline in travel, distorted supply chains, and lower trade and investment. Impact on the economic ecosphere around China
China exported $539.5 billion in goods and services to the U.S. in 2018, according to the Office of the U.S. Trade Representative which is more than 21% of all imports into the country. Chinese tourists to the U.S. have increased 13 times more than in 2002, making China the largest foreign consumer of U.S. travel. A report from the Tourism Economics states that it is expected that the United States will lose about 1.6 million visitors from mainland China. Each Chinese visitor to the US spent an average of $6,500 in 2018 which is the highest among all internal visitors. China was to lower its tariffs for the import of millions of dollars’ worth of American farm products as a good will gesture to resolve the trade war. This is now likely to be affected. The Middle East China is the largest consumer of petroleum products in the world and a slump in its industry will directly impact oil producing nations. Saudi Arabian crude supply to China for March has been reduced by 500,000 barrels per day. As the virus spreads around the world, oil prices have sunk to $50 a barrel, with fears that things could get worse. The UAE earned substantial revenue from Chinese tourists who make up 6 per cent of total tourist inflow. Dubai’s renowned high-end shopping and resorts are seeing falling footfalls and sinking profits. Iran’s emergence as a hot zone for Coronavirus has made it complicated for the economy which is sliding deeper into recession. It is reeling from tough sanctions, which includes wide ranging restrictions on its economic bloodline the oil. The US sanctions have also worsened Iran's medical sector, which has struggled to keep up with soaring prices of medicines and medical instruments. Australia is the world’s most reliant economy on China with about a third of its exports going there. There are 160000 Chinese students which means one-third of the total fee of Australian $32 billion came them alone. International students contribute beyond just fees. They spend money on accommodation, food and other experiences. Other sectors that have been affected are fishing and tourism. Globalisation and Pandemics The world is one big market, for good or for bad. This was made amply clear by the crashing stock markets in all major economies as investors sensed the impending economic fallouts of the virus. It has been the worst week for Global share prices since the world financial crisis in 2008. The organization, resources, labour, sourcing and logistics are the key components that convert raw materials to finished products and move them to the end customer. The Coronavirus has put limitations at all levels and could create a further lag in the process timelines as quarantine guidelines get stricter. Eighty percent of the world’s goods move by ship, and demand for shipping is starting to slip noticeably. - The global economy is staring at a recession and the reason is China. The “World’s Factory,” touches the life of every working man, around the world. This global connection, hailed as a miracle of globalisation, is now proving to be a bane. Whether this experience will act as a further dampener on globalisation and “outsourcing” will only be known once the world has crushed Coronavirus. The voices against globalisation, especially in North America and Europe, are already getting more stringent. - A global epidemic reinforces the need for intelligent, data-driven supply chain systems for faster and effective decision-making. 
Artificial intelligence and machine learning foundation can trigger real-time alerts based on public data feeds. Supply chains can then proactively take measures to avoid the disruption. - China's One Belt One Road initiative intertwines 152 nations. China now has the dubious distinction of being home to pandemics, largely due to its unregulated exotic food markets. This innate fear could be a dampener on efforts to revitalise these ancient Chinese concepts. - The virus has surfaced when there is erosion of trust within and between countries. Catastrophic risks are occurring at a greater frequency and the world has to jointly devise global solutions to absorb such shocks. Image Design: Chris Karedan, Synergia Foundation
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9479666948318481, "language": "en", "url": "https://www.upcounsel.com/collective-bargaining-definition", "token_count": 1124, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.046142578125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:fead7fb4-66ad-4ef2-bbcd-5faf3fcb1857>" }
Collective Bargaining Definition: Everything You Need to Know Collective bargaining is when a group of working people, assisted by their unions, negotiate their employment contracts with their employer. Terms discussed include salary, perks, working hours, vacation time, health and safety, and work-life balance. The general aim of collective bargaining is to strike a deal with a company's management that addresses a number of issues in a specific workplace. This deal is a form of labor contract and is also known as a "collective bargaining agreement" (CBA). What Is a CBA? A CBA is the result of collective bargaining and is a legal agreement that specifies the policies that both management and labor have agreed to. This document usually contains a grievance procedure that details the steps aggrieved parties follow to resolve disputes over the contract, including in any event of employee discipline or termination. Why Collective Bargaining? Collective bargaining is widely considered the best means for negotiating better wages in the USA. Through this method, union members have negotiated higher wages, improved benefits, and safer workplaces. The Laws That Cover Collective Bargaining Employees in various industries are entitled to the right of collective bargaining under various laws. - The Railway Labor Act 1926 (RLA) grants collective bargaining rights to railroad workers, airline workers, and many other transportation workers. - The National Labor Relations Act 1935 grants rights to most other private-sector employees and treats collective bargaining as the "policy of the United States." - The National Labor Relations Act (NLRA) states that employees have the right to join unions and collectively bargain. This act prevents employers from interfering with or obstructing employees who want to form a union. - The National Labor Relations Board supplements and enforces the NLRA. - Other state and federal statutes, administrative agency regulations, and judicial decisions also apply. Collective Bargaining: Resolving Disputes If there is a dispute between the employee and the employer, then arbitration is a common method used to resolve the problem. State and federal law governs the use of arbitration. Although the Federal Arbitration Act does not apply to employment contracts, it is being increasingly applied to labor disputes by federal courts. Forty-nine U.S. states have adopted the Uniform Arbitration Act (1956) as state law. If labor disputes become legal battles, the National Labor Relations Board is the federal agency that deals with them. The board also takes enforcement action when violations occur. When Does Collective Bargaining Occur? Collective bargaining occurs when a group of employees enters negotiations with their employer over the details of a new or existing employment contract. Who Can Collectively Bargain? Not all employees in every industry sector are entitled to collectively bargain. Entitled to Collectively Bargain - Private Sector Employees: According to the NLRA, the majority of private sector employees can organize unions and participate in collective bargaining. Railway and airline employees are also entitled, under the RLA. - Federal Employees: Many federal employees can collectively bargain over a limited set of concerns under federal law.
- Government Employees: Entitled under state law. Human Rights Watch considers collective bargaining to be a right and regards preventing it as a violation of international human rights law. Not Entitled to Collectively Bargain Some people working in the private sector are not able to participate in collective bargaining. These include farm workers, domestic workers, independent contractors, supervisors, and individuals working for very small businesses. What Topics Can Employees Bargain Over? Employees are entitled to bargain over subjects that are considered mandatory to their employment contract. These generally relate to salary, working hours, pension schemes, healthcare, and workplace conditions. Employees are not entitled to bargain over things that are not considered mandatory to their contract, or over illegal subjects which violate the NLRA. The Collective Bargaining Process The collective bargaining process usually starts when employees meet as a union and make a list of demands. In the USA, this generally takes place between one employer and its employees. If the bargaining is happening in an industry such as hospitality or trucking, then sometimes an industry-wide or regional negotiation is necessary. For example, the collective bargaining agreement may affect employers who are in a certain city or across a whole industry. In the construction industry, collective bargaining may not need to happen because a project labor agreement (PLA) is in place before hiring workers, which sets the terms and conditions of employment for the project. What Happens When Management and Labor Don't Agree? If the two parties cannot come to an agreement, they can participate in a mediation process where a federal or private mediator helps them. Economic pressure in the private sector usually results in a strike or a lockout, but in the public sector, workers can only strike if the relevant law says that they can do so. If you are currently dealing with a situation that may result in collective bargaining, or you need any further advice, you can post your legal need on UpCounsel's Marketplace. UpCounsel only accepts the top 5% of lawyers to its site, and they come from schools such as Harvard Law or Yale. Our lawyers have an average of 14 years of legal experience, and this includes working with prestigious companies like Google and Twilio.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9648419618606567, "language": "en", "url": "http://ecoiner.org/switzerland-bitcoin-price-marketwatch.html", "token_count": 904, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.26953125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:73b0c3a2-d27d-46cb-87b2-db8bb9fc25ab>" }
Ethereum's blockchain uses Merkle trees, for security reasons, to improve scalability, and to optimize transaction hashing. As with any Merkle tree implementation, it allows for storage savings, set membership proofs (called "Merkle proofs"), and light client synchronization. The Ethereum network has at times faced congestion problems, for example, congestion occurred during late 2017 in relation to Cryptokitties. Several news outlets have asserted that the popularity of bitcoins hinges on the ability to use them to purchase illegal goods. Nobel-prize winning economist Joseph Stiglitz says that bitcoin's anonymity encourages money laundering and other crimes, "If you open up a hole like bitcoin, then all the nefarious activity will go through that hole, and no government can allow that." He's also said that if "you regulate it so you couldn't engage in money laundering and all these other [crimes], there will be no demand for Bitcoin. By regulating the abuses, you are going to regulate it out of existence. It exists because of the abuses." In 1998, Wei Dai published a description of "b-money", characterized as an anonymous, distributed electronic cash system. Shortly thereafter, Nick Szabo described bit gold. Like bitcoin and other cryptocurrencies that would follow it, bit gold (not to be confused with the later gold-based exchange, BitGold) was described as an electronic currency system which required users to complete a proof of work function with solutions being cryptographically put together and published. A currency system based on a reusable proof of work was later created by Hal Finney who followed the work of Dai and Szabo. In 2016 a decentralized autonomous organization called The DAO, a set of smart contracts developed on the platform, raised a record US$150 million in a crowdsale to fund the project. The DAO was exploited in June when US$50 million in ether were taken by an unknown hacker. The event sparked a debate in the crypto-community about whether Ethereum should perform a contentious "hard fork" to reappropriate the affected funds. As a result of the dispute, the network split in two. Ethereum (the subject of this article) continued on the forked blockchain, while Ethereum Classic continued on the original blockchain. The hard fork created a rivalry between the two networks. In October 2015, a development governance was proposed as Ethereum Improvement Proposal, aka EIP, standardized on EIP-1. The core development group and community were to gain consensus by a process regulated EIP. A few notable decisions were made in the process of EIP, such as EIP-160 (EXP cost increase caused by Spurious Dragon Hardfork) and EIP-20 (ERC-20 Token Standard). In January 2018, the EIP process was finalized and published as EIP-1 status turned "active". Alongside ERC-20, notable EIPs to have become finalised token standards include ERC-721 (enabling the creation of non-fungible tokens, as used in Cryptokitties) and as of June 2019, ERC-1155 (enabling the creation of both fungible and non-fungible tokens within a single smart contract with reduced gas costs). Basically, cryptocurrencies are entries about token in decentralized consensus-databases. They are called CRYPTOcurrencies because the consensus-keeping process is secured by strong cryptography. Cryptocurrencies are built on cryptography. They are not secured by people or by trust, but by math. It is more probable that an asteroid falls on your house than that a bitcoin address is compromised. 
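To make the Merkle-tree idea mentioned above a little more concrete, here is a minimal sketch of how a Merkle root can be computed over a list of transactions. It is an illustration only: a simple binary tree using SHA-256, whereas Ethereum itself uses Merkle Patricia tries keyed with Keccak-256, and the transaction strings below are made up.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    """Toy binary Merkle root; duplicates the last node when a level has an odd count."""
    if not leaves:
        return sha256(b"")
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Hypothetical transactions: changing any one of them changes the root,
# which is what makes compact set-membership (Merkle) proofs possible.
txs = [b"alice->bob:5", b"bob->carol:2", b"carol->dave:1"]
print(merkle_root(txs).hex())
```

A light client that stores only the root can then check that a given transaction is included by verifying a short chain of sibling hashes instead of downloading the whole block.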
A lot of people have made fortunes by mining Bitcoins. Back in the days, you could make substantial profits from mining using just your computer, or even a powerful enough laptop. These days, Bitcoin mining can only become profitable if you’re willing to invest in an industrial-grade mining hardware. This, of course, incurs huge electricity bills on top of the price of all the necessary equipment. Bitcoin prices were negatively affected by several hacks or thefts from cryptocurrency exchanges, including thefts from Coincheck in January 2018, Coinrail and Bithumb in June, and Bancor in July. For the first six months of 2018, $761 million worth of cryptocurrencies was reported stolen from exchanges. Bitcoin's price was affected even though other cryptocurrencies were stolen at Coinrail and Bancor as investors worried about the security of cryptocurrency exchanges.
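As a rough, back-of-the-envelope illustration of the mining economics described above, the sketch below compares a rig's expected daily revenue with its electricity cost. Every input figure (hash rates, block reward, price, power draw, tariff) is an assumed placeholder rather than current market data, so treat the output as illustrative only.

```python
# All numbers below are assumptions chosen for illustration, not real market data.
my_hashrate_ths      = 14.0          # TH/s of the hypothetical rig
network_hashrate_ths = 50_000_000.0  # assumed total network hash rate, TH/s
blocks_per_day       = 144           # roughly one block every 10 minutes
block_reward_btc     = 12.5          # assumed reward per block
btc_price_usd        = 8_000.0       # assumed exchange rate
rig_power_kw         = 1.4           # power draw of the rig
electricity_usd_kwh  = 0.12          # assumed electricity tariff

share_of_network = my_hashrate_ths / network_hashrate_ths
revenue_per_day  = share_of_network * blocks_per_day * block_reward_btc * btc_price_usd
cost_per_day     = rig_power_kw * 24 * electricity_usd_kwh

print(f"revenue/day: ${revenue_per_day:.2f}")
print(f"electricity/day: ${cost_per_day:.2f}")
print(f"margin/day: ${revenue_per_day - cost_per_day:.2f}")
```

With these particular placeholders the margin comes out close to zero, which is exactly the point made above: without cheap electricity and efficient hardware, mining is unlikely to be profitable.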
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9513678550720215, "language": "en", "url": "http://jlr.sdil.ac.ir/article_58297.html", "token_count": 311, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.359375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:daea2a0f-daee-4620-b82e-d73e1994e276>" }
Article title [English]: The impact of laws on estate acquisition on city real estate. In order to carry out the duties and responsibilities for which they are responsible, the state and municipalities, like natural and legal persons, enter into contracts with others when necessary; these contracts take forms such as sale, rent, compromise, mortgage, power of attorney, swaps, and the contracts referred to in Article 10 of the Civil Code. In addition, to implement their programs and projects they need to acquire and take possession of properties, which they do in accordance with existing laws and regulations; of course, they must make use of national and state property and lands. In many cases, because such property is insufficient, the acquisition of land and property belonging to private individuals becomes necessary. A conflict may therefore arise between the private interests of individuals and the public interest of the community. For sound and logical reasons, discarding or ignoring the public interest is not acceptable, but neither should this cause damage to individuals. Respect for the principle of autonomy and the principle of freedom of contract requires that the rights and wishes of individuals be considered; at the same time, compliance with these principles should not conflict with meeting public wants and needs. Hence the necessity of laws and regulations that take account of both the interests of property owners and the interests of society. It is therefore necessary to review the relevant laws and regulations and to consider to what extent the existing rules address this concern.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9503889679908752, "language": "en", "url": "http://www.captradinggroup.com/2018/07/", "token_count": 372, "fin_int_score": 5, "fin_score_model": "en_fin_v0.1", "risk_score": 0.314453125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:e5a002e5-82ca-4765-8ecd-b4949aa61da5>" }
Such methods of controlling monetary and credit markets are inherent in a socialist economy, where governments intervene directly in economic processes. Market-based monetary policy, by contrast, is a means by which the central bank influences the monetary sphere by creating certain conditions in the money market and the capital market. The main market-based instruments of monetary policy, through which central banks conduct monetary policy in one country or another, are: the implementation of open market operations; the establishment of minimum reserve requirements for banks; interest rate policy; operations in the foreign exchange market; and deposit operations of the central bank, among others. Open market operations as an instrument of monetary policy consist of the central bank buying and selling securities. The purchase of securities from commercial banks increases the resources of the latter, correspondingly increasing their lending capacity, and vice versa. This instrument first came into use in the twentieth century, in particular in the U.S. in the 1920s and in the UK in the 1930s, owing to the advanced development of the securities markets in those countries. The main types of securities used in open market operations are Treasury bills, interest-free treasury bills, bonds issued by national and local governments, bonds of individual private companies admitted to open trading, and some other first-rate short-term securities. Most often, central banks use government bonds. Open market operations are not deep-acting tools; their influence is quick and short-term. Nevertheless, their advantages are flexibility, efficiency, the central bank's autonomy in carrying them out, the ability to change the direction of their action quickly, and more. Regulation of reserve requirements, for its part, consists of establishing mandatory rules on the resources that commercial banks must keep with the central bank as a percentage of the funds they have raised.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9671822190284729, "language": "en", "url": "http://www.customs.gov.my/en/ci/Pages/ci_hist.aspx", "token_count": 4580, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.46875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:574d0ed5-a0b6-498c-b3b7-514540bfe905>" }
EARLY HISTORY OF TAX ADMINISTRATIONBefore the West ever set foot here, a tax administration system had actually existed, that is during the heyday of the Melaka and Johor-Riau Sultanate.In the era of the Melaka Sultanate, maritime and harbour laws existed along with matters pertaining to a tax structure involving the foreign and local merchants. During that period the tax collector and all tax-related matters were the responsibility of the Chief of the Exchequer:" ... the Chief of the Exchequer. (He) controlled all the revenue and Customs Officers and looked after the palace building and equipment".(R.J.W. Wilkinson "The Melaka Sultanate". JMBRAS VOL. XIII-Pt.2, 1935, p.31).The portfolio in charge of tax collection was the Harbour Master. He was entrusted by the king with the power to enforce rules and Harbour Laws.TAX ADMINISTRATION IN THE STRAITS SETTLEMENTS AND FEDERATED MALAY STATESWith the establishment of the Straits Settlements (which consisted of Singapore, Penang, Melaka, Labuan and Dinding in Perak) in 1826, tax administration were supervised by a Governor and a Council directly answerable to the Governor General in Calcutta, India who was in turn controlled by the Board of Governors of the East India Company.Even though the Straits Settlements had been established, a few tax structure and practices applied by the Malay chieftains were retained, for example the Tax Farming system. In this system, a lessee with the highest bid had the authority to collect tax. The lessee was given a license and was subject to specific rules. This facilitated the process of obtaining Excise Duty revenues.To prevent smuggling, particularly opium, from 1861 onwards the number of police personnel were reinforced and new recruits swelled the ranks.Even though a Customs and Excise Department had yet to exist, all customs activities were operated by a body called the Government Monopolies. This body was authorised to grant import license and process and sell certain goods such as opium, tobacco, arrack, cigarettes and matches.At that time, excise duty were imposed on such goods as rice-wine (samsu), toddy and locally made opium whilst customs tax was imposed on opium imported from China, tobacco, cigarettes, liquor and fire crackers.Government Monopolies, the body that controlled these customs and excise activities existed until 1937 whereby in that year the Straits Settlement Customs and Excise Department was officially launched as H.M. Customs and Excise. Following that, a Revenue Collection Branch and a Preventive Branch were set up to oversee customs and excise activities until 1937.A station called the Coast Post was set up to place Customs Officers (at that time called Revenue Officers) who will collect tax and monitor commercial / trading activities. The Customs Department collaborated with the Harbour Master, Post Master and Immigration Department to ensure a smooth day to day operations.In 1938 ship raiding were introduced to curb smuggling activities.H.M. Customs and Excise continued until 1948 up to the extant of the Malayan Union era whereby the Federation of Malaya Customs and Excise Department were then established, comprised of the entire Malay Peninsula (except Singapore).TAXATION SYSTEMS IN THE FEDERATED MALAY STATES Before British intervention in the Malay States and before the Resident System was introduced, there existed a tax administration in Pahang, Perak, Selangor and Negeri Sembilan managed by the Malay chieftains. 
At that time, the Malay States were divided into several provinces or districts with a chieftain authorised to collect tax over the people. Among the taxes imposed were:Rubber $1.00 per pikulAnimal skin $12.50 per pikulTin $4.00 per baharaRice $16.00 per koyanTobacco $2.00 per pikulOpium $4.00 per pikulOil 10% each type(C.R.J. Wilkinson, Paper on Malay Subjects, O.U.P., London, 1971, p.13)The British had taken over tax collection from the local chieftains with the introduction of the Resident system from 1870 to 1880s. Before the formation of State Councils in the states governed by the Resident System, tax collection relied on the discretion of each Resident who is also the Sultan’s Advisor. The State Council subsequently would determine all matters related to Customs tax in the states. Among the functions of the Council were:" ... Connection with the Government of the country influential natives and others with whom the Government may consult, regarding proposals for taxation, appointments, concessions, the institution or abolition of laws and other matters ...;"Among the new taxes were:Tin mines lease system with an export tax of $15.00 per bahara and 1/10 for other metals.Farming revenue -2.5% tax.Tax on imported opium.Systematic tax administration activities in the Malay States led to the establishment of the Customs Department. Customs stations were situated at river estuaries and state borders and were in charge of collecting duties on agricultural products, mining, alcoholic beverages, opium and gambier. The management system of the Customs Department varied from state to state.The management of the department and tax collection were carried out by the clerks in the District Office and State Treasury Office; therefore the Customs Department did not fully manage customs and excise duty. For example in Telok Anson, Taiping and Kuala Lumpur, tax were collected by the Government Treasurer, whilst at the ports, river estuaries as well as the borders of Perak, Selangor and Negeri Sembilan, tax collection were done by Customs Clerks who were directly responsible to the District Officers in those areas. In Selangor import tax on opium were collected by the Chinese People Affairs Protection Officers.Customs Union in the Malay States had not existed then. This led to complication in enforcing tariff on goods and differing tax rate in the different states. As a consequence there was a need to form a federation between the Malay States and this basically had been approved by the British Foreign Secretary and Straits Settlements Governor.With the formation of the Federated Malay States, there were efforts to integrate Customs matters between states. As an outcome, in 1904, a new legislation called the Goods Revenue Enactment Number II was enacted, with the purpose of controlling all Import tax revenue on alcoholic beverages. Under this legislation, the retailer was given a special license to import alcoholic beverages with a fixed rent payment.On the 1st of January 1907, a new post of Inspector of Trade and Customs was created. On the same date, a legislation called The Customs Regulations Enactment was introduced with the approval of all four States Legislative Board. In 1908 the title of Inspector of Trade and Customs had been changed to Commissioner of Trade and Customs.With the existence of this enactment, import and export tax schedules became uniform in the Federated Malay States. 
A complete integration occurred in 1920 with the establishment of one uniform Customs enactment for the Federated Malay States.With this integration, Customs stations at the borders were abolished and no tax was imposed if the goods brought from a state to another were from the union. To collect export duty on goods that were brought out by trains, tax collecting stations were established in Singapore in 1918, Prai (1919) and Melaka (1922). The stations were also tasked with collecting duties on imported goods.The establishment of the Customs Union in the Federated Malay States had harmonised all Customs regulations between the states. In fact, all Customs offices under the Trade Commissioner and Customs Department were responsible directly to the Chief Secretary of the Federated Malay States.In 1938, the title Commissioner of Trade and Customs was changed to Comptroller of Customs.THE ESTABLISHMENT OF CUSTOMS UNION IN THE MALAY PENINSULAIn 1931 during the Federated Malay States Rulers Conference or Durbar in Sri Menanti, Negeri Sembilan, the British High Commissioner, Sir Cecil Clementi proposed an expansion of the union. The proposition was based on the annual increase of import tax.Until the year 1932, Customs Tariff had already encompassed a majority of goods and preferential duty had to be created for goods coming from the British empire. The heavy reliance on import duty as a source of revenue for the Federated Malay States led Sir Cecil to opine:"Like the rest of the British Empire, the Malay States had become increasingly dependent on Customs import duties as their main source of revenue, and it was on this score that he strongly recommended the creation of a customs union embracing the whole of the Malay peninsula if trade is not to be intolerably cramped, and the interdependence of one territory upon another in matter of commerce ".(C.R. Emerson, Malaysia a study in Direct and Indirect Role, Out 1979, p.190).Henceforth he suggested an establishment of a Customs Union for the whole of the Malay Peninsula. This was so that the tariff growth in the Federated Malay States would not disturb the smooth trade transactions in the states.However, the Customs Union for the Malay Peninsula could only be established in 1946, that is with the formation of the Malayan Union in April, 1946, and the department was given the name Customs and Excise of Malayan Union. Nevertheless, with the dissolution of the Malayan Union in 1948, this department was reorganised. The Customs Department then did not only comprise of those under the Federated Malay States but it also included those under the administration of the Non-Federated Malay States and the Straits Settlements.THE ESTABLISHMENT OF THE CUSTOMS AND EXCISE DEPARTMENT OF THE FEDERATION OF MALAYAIn 1948, with the formation of the Federation of Malaya, the Customs and Excise Department were established for the whole of the Malay Peninsula. Under the Customs Ordinance 1952, this department was put under the control of the High Commissioner for Malaya and headed by a Comptroller of Customs as can be found since 1938. This lasted until the country achieved its independence in 1957.Section 138, Customs Ordinance, 1952 gave the Federation Council power to issue all rules and regulations on Customs affairs. 
The Customs main area at that time was the whole of the Peninsular of Malaya excluding Penang (to maintain its free port status).As a result from the formation of the Customs Union in the Malay States in 1948, there was a dire need to boost staff performance to fulfill the needs of the country which was on her way to independence. In 1956 a training center was formed in Bukit Baru, Melaka.When the Federation of Malaya achieved its independence on 31st August 1957, the organisational structure of the Customs and Excise Department was reshuffled again to fulfill the needs of an independent Malaya. Customs and Excise Department administration was assigned under the Finance Ministry led by a Customs and Excise Comptroller who was responsible to the Finance Minister.The department was divided into three zones based on three main trading centers. For the Northern Zone the base was in Penang and covered Kedah, Perlis and Perak. The Central Zone was based in Kuala Lumpur and its area encompassed Terengganu, Kelantan and Negeri Sembilan. Lastly, the Southern Zone was comprised of the remaining states of Johor, Pahang, Melaka and the Customs station in Singapore. Each Zone was led by a Senior Assistant Comptroller of Customs.DEVELOPMENT, PROGRESSION OF ROYAL MALAYSIAN CUSTOMS AND EXCISE DEPARTMENTOn 16th September 1963, the structure of the Customs and Excise Department administration was reshuffled again with the inclusion of Sabah, Sarawak and Singapore into the Federation of Malaysia. The Customs department was divided into three main territories, that is the Peninsular of Malaysia (at that time known as West Malaysia), Sabah and Sarawak, where each territory were led by a Regional Comptroller of Customs and Excise.On Tuesday, 29th October 1963, in the Dewan Tunku Abdul Rahman, Jalan Ampang, Kuala Lumpur, an auspicious event unfurled as the Customs and Excise Department was conferred the title Diraja / Royal by HRH Seri Paduka Baginda Di Pertuan Agong. This was an honor from the Government for the Department’s untold contribution to the country. It was a momentous occasion in the history of the Royal Customs and Excise of Malaysia.Amendment to the Customs Ordinance 1952, enforced on 1st October 1964, had annulled the posts of Revenue Officer and Junior Customs Officer, and in its stead new posts were introduced called Customs Officer, Senior Customs Officer and Chief Customs Officer. Beside that, this amendment also created the posts of Assistant Superintendent of Customs and Superintendent of Customs.1964 also saw an all local selection of Customs Officers upon service completion of the last two English officials.Even though the Peninsula of Malaysia, Sabah and Sarawak were contained in the Federation of Malaysia, each state still worked under separate Customs Ordinances and Duty Orders. These affected the movement of goods from one territory to the other thus creating a bumpy journey through the different bureaucracies. Following these difficulties the Comptrollers from the three territories met in mid 1967.A result of the meeting was the enactment of the Customs Act No. 62, 1967 that gave the whole of Malaysia a single Customs law. Consequently, the Indirect Tax Committee of the Treasury actively prepared a collective tariff for the three zones.In 1972, the Royal Customs and Excise Malaysia were involved in a restructuring exercise following a report by an expert from the International Monetary Fund (IMF). 
On 1st August 1972, the title for the head of Customs Department was changed to Director General of Customs and Excise; and two new posts were created, that is the Deputy Director General of Customs (Implementation) and Deputy Director General of Customs (Management). From that date onwards, the position of Regional Comptroller of Customs and Excise (West Malaysia) was abolished and replaced with one Regional Customs Director each for the three areas, North, Central and South which were previously led by a Senior Assistant Customs Director.In 1972, a revenue legislation called the Sales Tax Act 1972 was declared in the Government Gazette as Malaysia Law Act 64 and implemented on 29th February 1972. This tax, known as Sales Tax, was imposed on all imported and local products, except those exempted under Sales Tax (Exemption) Order 1972, or were produced by manufacturers exempted from being licensed under Sale Tax (Licence Exemption) Order 1972.Accordingly in 1975, the Government introduced yet another law called the Service Tax Act 1975. This enabled the Department to collect service tax from business premises that provided services and goods which were taxed under the Second Schedule, Service Tax Regulations, 1975.The enforcement of the Motor Vehicle Levy Act taking effect on the 1st January 1984 also contributed towards increasing the department’s revenue collection. With the enforcement, all motor vehicles ferrying certain goods either leaving or entering the country, notwithstanding laden or empty, (unless those exempted) will be levied.Until the year 1977, even though Malaysia had existed for 14 years, there were still a few minor legislations in the three Customs controlled territories which operated separately. Research was done so that only one legislation is used in the regions of the Malay Peninsula, Sabah and Sarawak. The movement was called the Harmonisation Movement. The first benefit reaped was that movement of goods from one territory to another will no longer be considered as import or export; and goods will only be taxed once, that is when the goods were imported or exported for the first time from Malaysia. This means business transactions between territories can proceed unhindered.On 19th December 1977, Penang was declared as a Main Customs Area. With that, Penang’s free port status was withdrawn. The year also saw the Department embracing the International Unit System or SI, converting everything into metric by the 1st January 1978 through the Customs Duty Order1978.The Customs Department had another restructuring exercise in 1979. Whilst continuing to be helmed by the Director General of Customs, he was now assisted by three Deputy Director Generals who would be responsible for the Implementation, Prevention as well as Management and Policy programmes. Organisational structure at the headquarters level was arranged based on regional activity whereby each activity was headed by a Director of Customs. The activities were the Prevention Division, Customs, Internal Tax, Research, Planning and Training, Revenue Collection as well as General Administration and Finance. The same applied at state level where each state in the Federation of Malaysia were led by a Director of Customs. In addition, the department also had a station in Singapore administered by a Federation Customs Tax Collector.In 1983 history was made when, for the first time ever, the Royal Malaysian Customs and Excise Department celebrated World Customs Day. 
The date 26th January was chosen to coincide with the 30th anniversary of the Customs Cooperation Council ( now known as the World Customs Organization or WCO for short). The first Customs Day celebration was inaugurated by the Honourable Minister of Finance on the compound of the Royal Customs and Excise Malaysia Training College (now Royal Malaysian Customs Academy).1987 saw Langkawi declared detached from the Main Customs Area and made the second free port in Malaysia after Labuan. Free port status was conferred on the island beginning 1st January 1987 to encourage tourism in Langkawi (which had not been developed accordingly) wherewith it would increase the standard of living of the local people.The system of classifying goods in Malaysia saw the dawn of a new era beginning 1st January 1988. Thus began the history of applying a new nomenclature system called The Harmonised Commodity Description and Coding System, or Harmonised System. The purpose of the Harmonised System was to create a standard and universal nomenclature system, in line with developments in modern technology and advancements in international trade.In the fast paced world of the domestic and foreign trade development, the department’s role is always at a threat. To equip the Department with the required work force in facing this challenge, an upsurge in knowledge level, skills and ability of the work force is crucial. Among the main steps towards achieving this is by developing the Royal Malaysian Customs College that had been established since August 1956 (formerly known as the Federal Customs Training Center, Federal Customs Training School and Royal Customs Training College).After several years of going through the research and planning process, in 1989 the college was expanded. Now it is known as the Royal Malaysian Customs Academy (Akademi Kastam Diraja Malaysia or AKMAL, meaning ‘perfect’).Beginning 1st January 1990 another event was chalked in the Customs annals. Perlis was created a new Customs administration. Previously, the Perlis Customs came under the auspices of the State Customs Director of Kedah/ Perlis with Alor Setar, Kedah as the headquarters. With the establishment of the new Customs administration, all activities and Customs affairs could be ran more smoothly in the state of Perlis.In 1995, the Royal Malaysian Customs and Excise Department once again reorganised its structure. At the top level management, the status quo was retained whereby the Director General of Customs, aided by his three deputies, spearheaded the Implementation, Prevention and Management Programme. A new programme was introduced called the Corporate Planning and Development Programme. However this programme could only be found at the Headquarters level. In tandem with that, Customs activities in the Headquarters were arranged thus: (i) Customs, Internal Tax and Technique Service- under the Implementation Programme.(ii) Preventive - under the Preventive Programme. (iii)Personnel and Administrative, Finance and Procurement, Management Information System and Revenue Accounting - under the Management Programme. (iv)Corporate Planning and AKMA - under the Corporate Planning and Development Programme.Each of these activities is led by a Director of Customs. The post of State Director of Customs in the states remained. The same goes for the Federal Customs Tax Collector in Singapore and Customs Advisory Minister in Brussels, Belgium. 
Beside that, to create a greater impact for the Department, the Public Relations Unit, Internal Audit Unit and Legal Affairs are assigned directly under the Director General of Customs.On 23rd October 1998, the Right Honourable Prime Minister who was also the Minister of Finance I, in his speech for the Budget of 1999 in the Dewan Rakyat, announced a levy on windfall profit imposed beginning 1st January 1999 to help the government secure added revenue. Windfall profit is a surplus profit whereby a higher selling price occurred as a consequence of the Ringgit depreciation riding on the backlash of the economic crisis that hit the country since the middle of 1997. The first commodity to be slapped by this levy is crude palm oil where levy is imposed when the price exceeds RM2 000 per tonne.EXPECTATION AND CHALLENGEThe Twentieth century has ended and with the dawn of the Twenty-first century, Malaysia had announced the year 2020 as the definitive year – the year to declare that this country had attained the status of a developed country. Verily, the Royal Malaysian Customs carries a big responsibility in realising Vision 2020.As the main revenue collector, the Customs Department not only must continue to contribute but it also has to increase revenue collection annually. These are done with a delicate balancing act so as not to jeopardise the performance of the industrial sector. In fact, the department has to ensure that whilst its control on the related industries is minimal but effective, it is also there to lend a helping hand and push and prod the industries to develop and prosper. At the same time, preventive work, especially in stopping the entry of negative elements that can threaten the country’s security or those that brought moral decay, must be executed continuously.Thus, the upcoming years promise a million and one event that will be a part of history. May all the events be of benefit one way or another as we tread the path to glory.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9587298631668091, "language": "en", "url": "https://6toplists.com/what-is-blockchain-technology/", "token_count": 1588, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.04296875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:cf59cec8-6c3f-4bf6-afaa-b7195d132468>" }
Since the invention of the internet, digital information has been sent, received and documented. Today, sending an email to the other side of the world is instantaneous and easy. What would you say if money transactions or any other digital data could be sent the same way? This is exactly why blockchain technology was invented! Blockchain was created to serve as an open ledger that holds digital information. It was invented by Satoshi Nakamoto, the name used by the person or group of people who started the system and earned the very first Bitcoin. With this technology, people can store information in an open and decentralized system. Since its creation, various new uses have emerged: for example, people can store information about a transaction, a contract or virtually any digital data. Bitcoin is also not the only cryptocurrency that uses blockchain technology; many copy-cat digital currencies have emerged, such as Ethereum, Litecoin, Stellar Lumens and many others. I know why you're here: to find out exactly what blockchain technology is and how it works. So let's not waste any time and jump straight in! The basics of blockchain technology explained. Blockchain technology is a peer-to-peer distributed open ledger that holds digital information. Basically, it is a chain of information that is distributed between all members of the network. The information is contained in blocks. Each time data is recorded with this system, a new block is added to the blockchain. A block holds the digital data and a link to the previous block it is connected to, so each block in a blockchain is linked to the block before and after it. This results in a system where everybody can see all of the transactions that have happened since the start of the network. The ledger is stored on all of the computers that are part of the network at the same time, meaning that it is available on many different computers around the world simultaneously, rather than in one central location. Anybody can become a part of the system; they only need a copy of the blockchain and the software that goes with it. The synchronization of the open ledger. As everybody has a copy of the ledger, there needs to be a synchronization process that validates one version of the ledger. If someone tries to add a new block, it will only be accepted if the majority of the other nodes in the network accept it. They solve a mathematical puzzle in order to make that decision: they look for a random key with which they can add the block to the blockchain and validate it. This is how cryptocurrencies are generated: as a reward for successfully validating the new block and adding it to the system. The first completed result is sent out to all of the connected nodes and they update the ledger to that version. If the majority agrees on the result, the changes are finalized and the new block is added. This is also when the reward (Bitcoin or another cryptocurrency) is awarded. The maintainers of the ledger (miners) constantly update the ledger with the new information.
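The block linking and puzzle solving described above can be sketched in a few lines of code. This is a toy model for intuition only; real networks use far more elaborate block formats, difficulty adjustment and peer-to-peer validation, and everything here (the field names, the difficulty, the example payloads) is made up.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine_block(data: str, prev_hash: str, difficulty: int = 4) -> dict:
    """Search for a nonce so the block's hash starts with `difficulty` zeros (toy proof-of-work)."""
    nonce = 0
    while True:
        block = {"time": time.time(), "data": data, "prev": prev_hash, "nonce": nonce}
        digest = block_hash(block)
        if digest.startswith("0" * difficulty):
            block["hash"] = digest
            return block
        nonce += 1

# Each block records the previous block's hash, so changing an old block
# invalidates every block that comes after it.
genesis = mine_block("genesis", prev_hash="0" * 64)
second = mine_block("alice pays bob 1 coin", prev_hash=genesis["hash"])
print(second["prev"] == genesis["hash"], second["hash"][:8])
```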
High computing power and a lot of electricity is necessary for the mathematical puzzle (key) to be solved. What is blockchain technology trying to solve? The main thing that the blockchain technology is trying to solve is money transfers. Today, if someone wants to send money to another person, a trusted third party is involved. These are usually banks where we keep our money. Not only that the transaction will be only finished in a couple of days, but there are also fees that need to be paid. The fees are the banks essentially taking a portion for their services (sending the money to the recipient). There’s also a human element which can be a flaw. Many banks do not work on holidays and weekends, which can cause the finalization of the transaction to take even longer. With the blockchain technology, there is no third party involved in the transactions. The money goes straight from point A to point B, if the transaction is validated. This means that there is no need to pay a third party and more money can be saved. Not only that the fees are a lot smaller, but the transaction time is also very fast. Other uses for the blockchain technology Blockchain can also be used for various other things too. It could help small business owners validate their products, for example. Anybody who is interested in the product could take a look at the block and see all the necessary information. They would see where the product came from, how it was made and any other data that is added. The blockchain technology is also a great way to store deeds or contracts, as it’s hard to make changes to an already existing block. This means that if you sell a house for $10,000, the price cannot be later changed in the contract to $5,000 by the buyer. Everybody can see the information that is stored in the block, but nobody can make changes to it. At least not in a simple and effective way. Click here and read about 30 additional uses for the blockchain technology! This system could simplify a lot of things in our lives and this is exactly why it’s important to know about it. Unfortunately, still many people are not familiar with what a cryptocurrency is or how it’s earned. If you want to educate yourself further, check out some of the pros and cons of Bitcoin. Is blockchain technology safe? Since the ledger is distributed all over the world, on many different computers, it is actually very safe. What a distributed open ledger means is that in order for it to completely disappear, all of the copies must be destroyed at the same time. As long as there is one node or a copy, it can be sent to all of the other nodes as soon as they are online. As cyber-crime is a very real threat, one must ask if the blockchain technology is safe from it. In reality, it is, very much so. Hackers could potentially hack the system, but they would have to hack numerous nodes (mining computers) at the same time. We are talking about hundreds of thousands of computers, since the majority has to agree on the change. This would be nearly impossible, so the blockchain technology is quite safe from hackers. We tried to make understanding the blockchain technology easy. Even though this is not a simple concept, hopefully this article helped you understand it a little better. There’s a lot of potential to this network. Money transactions could be made simple and third parties would not be necessary. This means a lot of time and money saved. If the blockchain technology sparked your interest, you might be thinking about investing into Bitcoin. 
If you are, check out what Bitley’s, the Bitcoin investment fund is offering! The blockchain technology is also considered to be a safe system. It’s all because of its decentralized and transparent nature. There is no one party holding all of the data and keeping it a secret. All of the transactions can be viewed and checked and the records are public. We hope that the series of articles that we have done on cryptocurrencies helps understand them a little bit better. Who knows, you might start investing and earning money through cryptocurrencies after hearing about it from us! If you want to educate yourself on Bitcoins or cryptocurrencies in general, take a look at these articles:
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9560859799385071, "language": "en", "url": "https://cointelegraph.com/tags/bitcoin-price/amp", "token_count": 173, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.11083984375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:5cd0ef72-4e39-47d5-93a7-072939978682>" }
Bitcoin Price News. How the Bitcoin price is established is an interesting phenomenon, and it differs widely from the way the price of ordinary money is set. First of all, despite popular belief, Bitcoin does have a cost price: it is set as a combination of expenses on electricity, transaction fees and the installation or purchase of software. However, the price of Bitcoin is not determined by its cost price; it is mostly driven by consumer demand. This causes huge fluctuations in the price of Bitcoin, as Bitcoin has no backing, and traders depend heavily on news about Bitcoin's price, which multiplies the volatility of the asset. As Bitcoin's market cap is nearly $159 billion, its price has become an important economic factor, attracting attention from financial institutions of various kinds and stimulating research into the behavioral factors that influence the price of Bitcoin and into ways of predicting its changes.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9476531147956848, "language": "en", "url": "https://dbs-partners.com/the-great-balancing-act-managing-the-coming-30-trillion-deficit-while-restoring-economic-growth/", "token_count": 2663, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.11767578125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:f5beadf7-692a-4478-b0b9-47fd9df2fda2>" }
The dual imperative of our time is to save lives and safeguard livelihoods—and governments around the world are pulling out all the stops to do so. The resulting ramp-up of relief and stimulus spending to unprecedented levels has occurred just as tax revenues have slumped. As a result, government deficits worldwide could reach $9 trillion to $11 trillion in 2020, and a cumulative total of as much as $30 trillion by 2023. Governments will need to find ways to manage these unprecedented deficits without crippling their economies. It is this challenge which creates the need for the great balancing act: managing the $30 trillion deficit while restoring economic growth. We believe that this can be done—but it will require governments and the private sector to work together like never before to lay the foundations for a new social contract and to begin shaping a postcrisis era of shared, sustainable prosperity. There is already concern that many countries will struggle to meet their commitments to creditors, sparking a debt crisis that would compound the economic crisis unleashed by COVID-19. 1. See, for example, New Atlanticist, "How to deal with the coming pandemic debt crisis?," blog entry by Hung Tran, May 11, 2020, atlanticcouncil.org. Yet even if governments do avoid defaults, record public-sector debt levels could seriously dampen economic recovery if not managed effectively. Increased debt-servicing costs could crowd out vital investments in areas such as infrastructure and reskilling. Decisions to "print money" at scale could prompt a rise in inflation. And a big rise in taxation could hamper business innovation and growth and harm countries' competitiveness. Any of these paths could lead to a vicious cycle in which both economic growth and public revenues are suppressed for years to come. But governments have more power than is commonly assumed to manage larger deficits and to ensure that they sustain sound public finances and economic competitiveness for their countries—and so foster a virtuous cycle instead. For example, there are opportunities to improve the effectiveness of tax collection, including the use of accelerated digitization. And careful spending reviews can reallocate budgets to the highest priorities while delivering savings through better procurement and fraud reduction. Potentially an even greater opportunity, and one that remains largely untapped, lies in creating transparency into governments' entire balance sheets, including assets such as land and property and state-owned enterprises. There is considerable scope in many countries to manage and monetize such assets more effectively, both to strengthen fiscal sustainability and to support broad-based economic recovery. There are also real opportunities to hone the design and the target of the massive relief and stimulus packages precipitated by the COVID-19 crisis. The measures announced to date amount to some $10 trillion worldwide, and this spending is likely to rise as governments move from immediate support to households and businesses toward fostering long-term economic recovery. Wisely structured stimulus measures—designed and implemented in partnership with the private sector—could help prepare workforces for a technology-driven future and improve the long-term competitiveness and resilience of key industries.
Indeed, we believe the crisis presents a historic opportunity for government and business to forge a new social contract for inclusive, sustainable growth. The world’s $30 trillion public-finance challenge Governments have announced more than $10 trillion in relief measures, primarily for households and businesses. Among the G-20 nations, the fiscal measures announced in the COVID-19 crisis to date amount to an average of 11 percent of GDP—three times that of the 2008–09 financial-crisis response. In some countries, stimulus packages have reached more than 30 percent of GDP. At the same time, the immediate shock of the crisis on companies and households, along with depressed GDP growth, is likely to reduce government revenues significantly. Worldwide, our analysis suggests that fiscal revenues could fall by between $3 trillion and $4 trillion (as much as 15 percent) between 2019 and 2020. GDP growth, along with government revenues, could take two or three years to recover to precrisis levels. Given the combination of record stimulus measures and steep reductions in revenues, governments are taking a range of steps to manage public finances, including budget reallocation. But the bulk of the gap is being closed through debt. Our analysis suggests that the world’s governments will experience a record global fiscal deficit in 2020 of between $9 trillion and $11 trillion—at least triple precrisis levels and equivalent to 12 to 15 percent of global GDP. By 2023, the world’s governments could face a cumulative fiscal deficit of between $25 trillion and $30 trillion (Exhibit 1). 2 2. These figures are based on McKinsey analysis, as of May 8, 2020, of the impact of a scenario (“A1”) in which there is virus recurrence, slow long-term growth, and a muted world recovery—considered the most likely scenario in a recent McKinsey global executive survey. For further detail on our scenario analysis, see Sven Smit, Martin Hirt, Penny Dash, Audrey Lucas, Tom Latkovic, Matt Wilson, Ezra Greenberg, Kevin Buehler, and Klemens Hjartar, “Crushing coronavirus uncertainty: The big ‘unlock’ for our economies,” May 13, 2020. As a result, sovereign-debt levels are likely to increase significantly across the world. The International Monetary Fund expects that sovereign debt in advanced economies will increase to 122 percent of GDP in 2020, up from a precrisis forecast of 105 percent. In emerging and middle-income countries, it is forecast to increase to 62 percent of GDP, up from 53 percent. 3 3. World economic outlook, April 2020: The great lockdown, International Monetary Fund, April 6, 2020, imf.org. How to manage record debt levels without crippling the economy Faced with sustained high levels of debt, governments will need to put a huge effort into managing deficits and debt-payment plans to maintain their creditworthiness and their ability to service their debt. Just as important, however, they will need to find the optimal ways to support economic recovery—at the national level, for individual companies, and for citizens. It is likely that governments will need to keep the focus on both of these dual objectives in a continuous balancing act over the next few years. That will limit their use of some of the traditional budget-balancing tools. For instance, our analysis suggests that an attempt to use austerity to close crisis-era government deficits would entail reducing public spending by about 25 percent—clearly a measure that few governments can contemplate. 
Likewise, using tax increases to fund the deficit would result in tax burdens rising by some 50 percent, which would severely limit corporate investment and reduce country competitiveness. Without a new approach, closing the cumulative 2020–23 fiscal deficit worldwide would require a 50 percent increase in tax revenues, or a 25 percent reduction in public spending. Instead, governments will need to find ways to improve their fiscal performance while maintaining their countries’ economic competitiveness as the foundation for sustained recovery. Alongside building excellence in debt issuance and management, governments can rethink their fiscal programs in a comprehensive manner, leveraging both income and balance-sheet measures (Exhibit 2). For a start, governments will need to adopt bold revenue-enhancement and cost-containment strategies. On the revenue side, they can use operational-performance levers to improve revenue collection. Previous McKinsey research has highlighted major opportunities for countries to improve the efficiency and effectiveness of their tax systems, including the use of well-planned digitization efforts. For example, governments can harness new data sources and analytics tools to recover around $1 trillion a year in fiscal leakage—both unpaid revenues and unjustified outbound payments. Turning to cost containment, governments can use spending reviews to reallocate budgets to the highest priorities while delivering savings through better procurement and fraud reduction. These steps can form part of a broader drive to improve public-sector productivity, which could save the world’s governments as much as $3.5 trillion a year, as our previous research has shown. Today, most governments change their spending allocations only marginally year over year, pointing to an opportunity to review and readjust spending much more decisively. Governments that have undertaken such reviews have often identified savings of around 10 percent or more of the target cost base, without sacrificing the scope or quality of services. At the same time, smarter procurement—via supply management, demand control, and processes such as e-tendering portals—can save governments around 15 percent of addressable spending while simultaneously boosting outcomes. An even greater opportunity, we believe, lies in partnering with the private sector to ensure that publicly owned assets—including land and property and state-owned enterprises—are valued properly, managed professionally, and securitized or monetized where appropriate. Armed with a new transparency on their balance sheets, governments can shape effective funding and asset-allocation strategies, and draw on the expertise of private-sector players such as the financial sector as they do so. These strategies could encompass a range of options, including using nonrecourse lending solutions such as public–private partnerships to finance capital-expenditure projects; selling nonstrategic assets, for example through land monetization; and leveraging existing reserves to manage the cost–risk ratio of government debt portfolios. The use of balance-sheet levers could unlock considerable value for governments, with minimal procyclical effects. Ensuring that fiscal stimulus drives rapid, inclusive recovery Beyond the imperative of securing sustainable public finances, governments will need to ensure that fiscal stimulus measures focus on the segments that support the recovery the most. 
And they will also need to find smart ways to use the current and planned support to accelerate economic transformations that are ongoing, or embed new shifts that will be needed. Given that we are still in the early days of the COVID-19 crisis, it is understandable that the world’s economic response to date has focused on relief; further interventions will likely be required to revive aggregate demand. In the United States, for example, the $3 trillion COVID-19 response has been allocated almost entirely to immediate relief measures. In contrast, the American Recovery and Reinvestment Act of 2009 allocated 55 percent of its total funding to stimulate industries and revive aggregate demand. Looking ahead, governments can work with the private sector to design stimulus measures that not only drive recovery but also support the long-term reimagination of economies and societies. For example, the COVID-19 crisis provides a prompt to accelerate government digitization and support companies to adopt new technologies—and thus strengthen productivity and citizen services. The shift to a contactless economy, driven by the pandemic, will contribute to this acceleration. Some countries have already seen individuals’ preference for contactless operations increase by 20 percent or more during the crisis, with the industries affected spanning payments, retail, food, accommodation, education, and healthcare. Governments, the private sector, and educational institutions can also look to the crisis as a catalyst to reskill workforces at speed and scale, with stimulus packages incentivizing a shift to a more productive and equitable economy. In Germany, for example, the recent Qualification Opportunities Act provides for government subsidies of companies’ employee-training programs; smaller businesses receive proportionally greater subsidies. Up to 100 percent of training costs for microbusinesses and up to 50 percent for small and medium-size enterprises are covered by the subsidies. Governments can also achieve other objectives, such as increasing registration of informal businesses and improving female participation in the economy, in return for financial support. There is room for smart, trust-based collaboration between government and business to rebuild and reimagine key sectors of the economy. There is also room for smart, trust-based collaboration between government and business to rebuild and reimagine key sectors of the economy. Sectors such as automotive manufacturing, construction, and transport have suffered significant disruption of demand and supply chains during the crisis and could require fundamental restructuring. Governments can work with industry associations and leading companies in such sectors to forge common strategic objectives and target joint investment to support reinvented business models, new agility, and greater competitiveness. The COVID-19 pandemic has already triggered large-scale increases in public borrowing, severely hampered economic growth, and disrupted key industries—and the crisis is far from over. In the months ahead, governments and the private sector will need to work together as never before to ensure the success of an epic balancing act: managing record levels of public debt while fostering broad-based economic recovery. Success would result in a new social contract that shapes a postcrisis era of shared, sustainable prosperity. Failure could lead to a sustained period of depression and austerity on a scale not seen since the 1930s. 
The stakes are high, and the need for bold, visionary leadership in the public and private sector has never been greater.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.961462140083313, "language": "en", "url": "https://employmenttribunal.claims/national-minimum-wage-national-living-wage/", "token_count": 973, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.11962890625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:8e06f5c7-6d8e-47fd-bb63-3e2265fadb23>" }
There is now in effect in the United Kingdom a National Living Wage (NLW) for those over 25 and a National Minimum Wage (NMW) for those under 25. This is the minimum amount of gross pay per hour that an employer must pay an employee depending upon their age. Rates of pay National Living wage for workers over 25 is £7.50 per hour (from 1 April 2017) National Minimum Wage – People aged 21 to 24 must be paid at least £7.05 per hour National Minimum Wage – Between the ages of 18 and 20 an employee must be paid at least £5.60 per hour National Minimum Wage – Between the ages of 16 – 17 an employee must be paid at least £4.05 per hour National Minimum Wage – Apprentices must be paid £3.50 per hour The remuneration being received by an employee will be made up of many items such as a basic wage, bonuses, incentives, in some cases accommodation, clothing allowances, shift allowances etc. What is allowed and what is not allowed in calculation of the basic minimum wage is dependent on each case and expert advice should be sought. The hours for which the employer must pay at least the national minimum wage are calculated differently according to the type of work done. There are four types of work: - Employees paid for working a set number of hours, or a set period of time, are doing timework - Employees who have a contract to work for a set number of basic hours each year in return for an annual salary paid in equal instalments (for example each week or each month), are doing salaried hours work - Employees paid according to the number of things they produce or the number of deals or sales that they make are doing output work. In this case there is an option for the employee to have a written agreement with the employer stating a ‘fair estimate’ of the number of hours they should work. - If the employees have to do a number of specific tasks, but do not have any set hours, they are doing unmeasured work. Again, there is an option for them to have a written agreement with their employer setting out the average number of hours they should work each day. Who is entitled to it? The following are entitled to NMW or NLW: - agency workers, - home workers - commission workers - part time workers - casual workers Who is not entitled to it? There are a number of people not entitled to the NMW or NLW. These include: - Self employed people; - Company Directors; - Family members living in the family home and undertake household tasks. Non payment of NMW or NLW If an employee is entitled to the national minimum wage, the employer cannot force him/her to accept a lower rate of pay. Even if the employee has signed a contract agreeing to receive a lower rate of pay, this will have no legal effect. If a worker is not being paid the NLW or NMW to which they are entitled they should lodge a grievance with their employer. If the employer does not rectify the situation, the worker can make a complaint to the HMRC who will investigate and can send a notice of arrears to the employer plus a penalty for not paying the correct level of wages. If the employee thinks that he/she is being paid less than the NMW or NLW, the employee has a right to see his/her records about it. As long as the employee makes his/her request in writing, the employer must by Law supply the records within 14 days. 
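As a quick illustration of how the hourly floors listed above apply in practice, the sketch below picks the applicable rate for a worker and estimates arrears for underpaid hours. It uses the April 2017 figures quoted above, simplifies the apprentice rules (in reality apprentices aged 19 or over who have completed their first year are due the rate for their age), and the worker in the example is invented; it is not legal or payroll advice.

```python
def minimum_hourly_rate(age: int, is_apprentice: bool = False) -> float:
    """Hourly floor in GBP, using the April 2017 rates quoted above (simplified)."""
    if is_apprentice:       # simplified: see the note above about older apprentices
        return 3.50
    if age >= 25:
        return 7.50         # National Living Wage
    if age >= 21:
        return 7.05
    if age >= 18:
        return 5.60
    return 4.05             # 16- and 17-year-olds

# Hypothetical example: a 23-year-old paid 6.50/hour for 160 hours in a month.
age, paid_rate, hours = 23, 6.50, 160
shortfall_per_hour = max(0.0, minimum_hourly_rate(age) - paid_rate)
print(f"Estimated arrears: £{shortfall_per_hour * hours:.2f}")
```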
If the employer refuses to let the employee see his/her records, the employee can lodge a complaint with an Employment Tribunal (which, if it upholds the complaint, will order the employer to pay compensation equivalent to 80 hours' pay at the minimum wage rate). Employees are legally protected against being sacked or victimised by employers over asserting their entitlement to the NMW or NLW and over any action they may take to enforce their rights through the Employment Tribunal. If the employee is owed back pay for non-payment of the NMW or NLW, the employee can bring a claim to the Employment Tribunal for unlawful deductions from wages (limited to a 2 year backdating period). If you are owed money for non-payment of the NMW or NLW or have otherwise suffered a detriment (including dismissal) because you have asserted your legal rights to be paid the NMW or NLW with your employer, contact one of our specialist No Win No Fee Employment Law Solicitors today by calling: 0800 612 9509 or by completing one of our online contact forms.
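As a rough illustration of the age bands listed above, the check an employer or worker would make can be sketched in a few lines of Python. This is an editorial sketch using the 2017 rates quoted in the text, not official guidance; it ignores apprenticeship age rules, accommodation offsets and the four types of work described above.

```python
# Illustrative only: 2017 rates quoted above.
RATES_2017 = [
    (25, 7.50),   # National Living Wage, workers 25 and over
    (21, 7.05),   # NMW, ages 21-24
    (18, 5.60),   # NMW, ages 18-20
    (16, 4.05),   # NMW, ages 16-17
]
APPRENTICE_RATE = 3.50

def minimum_hourly_rate(age: int, is_apprentice: bool = False) -> float:
    """Return the minimum gross hourly pay for a worker (2017 figures)."""
    if is_apprentice:
        return APPRENTICE_RATE
    for min_age, rate in RATES_2017:
        if age >= min_age:
            return rate
    raise ValueError("below school leaving age - the NMW bands above do not apply")

def is_underpaid(age: int, hourly_pay: float, is_apprentice: bool = False) -> bool:
    return hourly_pay < minimum_hourly_rate(age, is_apprentice)

# Example: a 22-year-old paid 6.80/hour is below the 7.05 band.
assert is_underpaid(22, 6.80)
```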
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9563501477241516, "language": "en", "url": "https://resource.wur.nl/nl/show/Rice-production-in-Southeast-Asia-can-keep-up-with-population-growth.htm", "token_count": 391, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.07177734375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:f701a69e-40e9-4110-ab9f-7308e3b8182a>" }
Rice farmers in Thailand, Vietnam and Indonesia are able to grow enough rice in the years to come to feed the growing population in Southeast Asia, provided they learn from farmers with high yields. In the Philippines, however, more rigorous measures are needed, report production ecologists in the European Journal of Agronomy this month. The researchers juxtapose the current rice production in four Asian countries and the maximum attainable harvest in order to calculate the 'yield gap' between actual and potential rice production. This gives a value of between 2 and 5 tonnes of rice per hectare, or a quarter to more than half of the maximum attainable yield. They also calculated the output of the most productive farmers in each area. The difference between the average production and the output from the best farmers is 1.2 to 2.6 tonnes of rice per hectare. If every rice farmer in Thailand, Vietnam and Indonesia were to produce as much as the most productive farmers, there would be enough rice in 2050 for these three major countries in Southeast Asia. This would imply that rice farmers with existing varieties and techniques are able to raise their yields. A prerequisite for this is that all farmers must have access to the needed knowhow and production tools, says Van Ittersum. He calls for programmes to be directed at enabling farmers to learn from one another and from researchers. Farmers with the highest yields are usually better educated and use fertilizers and labour more efficiently. Such a policy, however, is not adequate in the Philippines, conclude the researchers. With the current rice varieties and production techniques, rice production in this archipelago will increase by only 18 percent in the coming decades, insufficient to cater to the population growth. As such, structural changes are needed there, such as new technology, varieties, better transfer of knowhow and another market organization. The Philippines, like Indonesia, is already importing rice, while Thailand and Vietnam are rice exporters.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.8663156628608704, "language": "en", "url": "https://slideplayer.com/slide/5809911/", "token_count": 794, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.0164794921875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:9a07a748-8dbd-44a1-948b-6a3c45ae9665>" }
Presentation transcript: Animal Agriculture Economic Analysis: The National Overview — United Soybean Board, June 2013

Objectives: Gather information on and analyze the following: animal agriculture production, by state and by product; usage of soybean meal in animal agriculture; quantify the economic impact of animal agriculture at the state and national levels.

Animal Agriculture Database. Sources: National Agriculture Statistics Service (NASS), Census of Agriculture, Bureau of Economic Analysis, Dept. of Commerce. Content of Excel files: quantity and value of production, animal numbers; soybean meal use; economic impact: output, earnings, jobs, taxes.

Most US Animal Agriculture Expanded Between 2002 and 2012. The economic downturn beginning in 2007 impacted production, but production has bounced back. Cattle production is down 2.8% over the period. Pork production is up 22.7%. Broiler production is up 12.4%. Turkey production is up 0.7%. Egg production is up 6.5%. Milk production grew steadily, by 17.8%.

US Cattle & Calves Production, 2002-2012: cattle production is slowly declining. US Cattle & Calves Production Shifts, 2002-2012. US Hogs & Pigs Production, 2002-2012: hog and pig output is advancing. US Output Index, 2002-2012 (Cattle, Hogs, & Milk): all sectors, except cattle, experienced growth in volume. US Output Index, 2002-2012 (Broilers, Turkeys, & Eggs): all sectors experienced growth in volume.

Animal Agriculture Can Boost State Economies. Animal agriculture plays a major role in the economy of several states and is important for nearly all states. The impacts are many: states benefit from the direct investment and jobs, both at the farm level and at the slaughter, processing, and manufacturing levels; they also benefit from multiplier effects on other sectors. Animal agriculture is also the leading source of demand for soybean meal. Growth in output is usually concentrated in a small number of states for each species.

Multipliers Estimate Broader Economic Impact. Regional Industrial Multiplier System from the Department of Commerce. Output and earnings multipliers = $ of total output and household earnings created by $1 of production in an industry. Employment multiplier = # of jobs per $1 million. The four animal agriculture industries are beef cattle, dairy cattle, poultry and egg, and hogs and pigs/other.

Economic Impact of 2012 Animal Agriculture; Economic Impact, Continued; Impact of Changes in Animal Output, 2002-2012; Impact of Changes, Continued. (* States in blue contain estimates.)

Animal Agriculture is Important to Wisconsin. In 2012, Wisconsin ranked 6th in the nation in cash receipts from livestock sales. Wisconsin ranked 2nd in milk cash receipts with 13% of the value. Wisconsin also ranked 10th in cattle, 16th in hogs, and 11th in turkeys. Animal agriculture in Wisconsin consumes significant quantities of soybean meal.

Economic Multipliers for Wisconsin, 2012. Every million dollars of livestock product output in Wisconsin results in $1.88-2.58 million in total economic output in the state, generates $340,000-$460,000 in family income, and is responsible for 13-16 additional jobs.
Economic Impacts in Wisconsin. Animal agriculture in Wisconsin represents $15.6 billion in revenues, $2.9 billion in household income, about 111,000 jobs, and yielded an estimated $765 million in income taxes.

Wisconsin Output Index, 2002-2012: cattle dropped again in 2012; hogs are steady; milk is increasing, up over 20%; broilers are up 49%; eggs are up 19%.

Wisconsin Value of Production, 2012 ($1,000).
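The multiplier arithmetic quoted in these slides can be illustrated with a short sketch. The midpoint values below are assumptions drawn from the ranges shown above ($1.88-2.58 output, $340,000-$460,000 earnings, 13-16 jobs per $1 million of livestock output); the example input is invented.

```python
# Illustrative application of the Wisconsin livestock multipliers quoted above.
def apply_multipliers(livestock_output_millions: float,
                      output_mult: float = 2.23,    # midpoint of 1.88-2.58
                      earnings_mult: float = 0.40,  # midpoint of 0.34-0.46
                      jobs_per_million: float = 14.5):
    return {
        "total_output_millions": livestock_output_millions * output_mult,
        "household_income_millions": livestock_output_millions * earnings_mult,
        "jobs": livestock_output_millions * jobs_per_million,
    }

# Example: $100 million of livestock product output.
print(apply_multipliers(100))
```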
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9541875123977661, "language": "en", "url": "https://venturebeat.com/2019/03/10/self-driving-cars-could-soon-navigate-the-world-via-micropaymets/amp/", "token_count": 1640, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.09765625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:525f7822-4eb5-4ff8-bee3-37a8988ee25c>" }
With autonomous vehicles already debuting on our roads and in our airspace, you’ve probably grown accustomed to the idea of a new future for mobility. But what you may not have wrapped your head around yet is that these vehicles could create a whole new marketplace that makes transportation safer, greener, and more efficient. Mobility Open Blockchain Initiative (MOBI), a nonprofit consortium that counts the majority of the world’s large automakers as members, including BMW, GM, and Ford, as well as The World Economic Forum, Bosch, Denso, IBM, and Accenture, kicked off a three year “Grand Challenge” fueled by over $1 million in prizes. Winners were just announced for the first phase, focused on the building blocks of next-generation mobility networks and drawing competition from 23 teams across 15 countries. Connected machines and sensors have their own complex web of machine-to-machine communication. What they haven’t had is money to negotiate and transact among themselves. Vehicles increasingly collect vast troves of data that could help other machines make better decisions or drive new services that optimize the entire system — if only it were shared. Money can incentivize the sharing of this data, serving to align the interests of diverse participants in a new machine-to-machine economy — and ultimately help people and things move more efficiently. Corporations, entrepreneurs, and even governments are looking to the unique functionality of blockchains and digital currency to enable this future. “We envision a world in which vehicles can communicate their intentions with other vehicles and coordinate their behavior,” explains MOBI CEO Chris Ballinger. “Micropayments give machines an incentive, and whether it’s about a pothole, traffic, or other conditions, that data is valuable. Many of the useful things we can do with cars and other vehicles will require micropayments from one machine to another.” For example, when vehicles traveling near each other can safely form an ad hoc network and agree to share data in exchange for micropayments, a car could extend its perception farther than the range of its own sensors. It could “see around corners” — obtaining data, for example, that could anticipate a collision and adjust speed to avoid it. When not only vehicles but infrastructure is also equipped with digital currency wallets, the entire transportation ecosystem can work together to optimize flow. A city could offer incentives for using alternate routes or modes of transport to ease congestion and increase the longevity of aging physical infrastructure. If a commuter takes the bus or walks instead of driving or selects a less-trafficked route, for example, they could be rewarded with a payment. The cost of insurance could depend on how many passengers are in a car. Parking spaces could be found and paid for with less friction. A passenger in a rush to catch a flight could allow their vehicle to negotiate with and make payments to surrounding vehicles to obtain right of way. But the holy grail in mobility is to extract more value out of physical infrastructure. Infrastructure built to peak demand is often prohibitively expensive. And there are still traffic jams at peak periods. When connected vehicles have their own form of money, pricing mechanisms can smooth supply and demand instead of expensive infrastructure investments — and they can do so with more accuracy, breadth, and in higher fidelity than today’s toll roads. 
This “congestion pricing” would function like peak pricing on the electrical grid. This same principle could be applied to charge for the environmental impact of mobility choices, or “pollution pricing.” Blockchain-driven digital currency is uniquely suited for machine-to-machine transactions. It can manage complex rules called smart contracts, such as releasing payments to fit specific conditions in an escrow-like arrangement. Each machine or sensor can have a unique identity and a wallet associated with that identity that collects and makes payments. Without the need for intermediaries, the cost of a transaction can be reduced to a point at which micropayments are feasible. “When you can make a transaction valued at a penny, or even a tenth of a penny, it starts making sense to buy and sell small bits of data — the right of way on a road for the next block, or a three block rideshare, or windshield wiper data that can give insight on weather patterns, for example,” says Ballinger. “From where we are today, it’s hard to imagine all the business models that will become possible when data can be bought and sold in this ecosystem at such low friction.” Many are already working on the foundational elements of these new business models. Chorus Mobility won first prize of the Grand Challenge for its work using machine-to-machine payments to negotiate road space, use of infrastructure, and right of way. Oaken Innovations won second place for a pilot that demonstrated vehicles being charged for road use, congestion, and pollution in Portugal. Dovu, a startup backed by InMotion Ventures, Jaguar Land Rover, and Creative England (a fund backed by the UK government), is developing a protocol that could be used by local governments to influence how citizens move around a city. Participants are rewarded when they make changes to travel habits that lead to less congestion and lower emissions. The company recently announced a partnership with one of the UK’s leading transportation providers, Go-Ahead, to incentivize data sharing and changes in passenger behavior, first rolling out for the company’s Thameslink and Southern Rail services. DAV, based in Zurich, Switzerland, is building a protocol that allows transport assets to discover, communicate, and transact with each other without human intervention. For example, an autonomous drone could make decisions on its own based on parameters set up by its owner. The drone could hire itself out for geo-fenced missions. When its charge is low, it could negotiate and pay for services directly with charging platforms according to criteria set earlier — identifying the cheapest, or the most efficient. With each agreement, funds would move from one machine’s wallet into escrow, and with the completion of each task, from escrow into the wallet of another machine. Owners could empty these wallets at any time. “These are open networks that anyone can use,” says DAV CEO Noam Copel. “Businesses or consumers that own transport assets could build services on top — and they could earn income by putting those assets to work.” In a supply chain, goods moving from one mode of transport to another, often owned and operated by different companies, create a great deal of complexity. Increasingly robotized ships, ports, and rail lines could identify each other and transact among each other as goods make their way along the supply chain. 
At each step, money moves from one wallet to another without paperwork and accounting delays — and the blockchain safely and indelibly records the circumstances of the transaction. The stakes are high, and the potential impact wide-ranging. A shift to transaction-fueled mobility could affect nearly everyone who commutes to and from a job, and nearly every company’s supply chain. None of this is possible, or course, without the infrastructure and regulatory environment to support it. But there are other hurdles ahead. Innovators have made much progress in using blockchains to facilitate secure machine reputation and other identity data (essential to ensure that services and money are exchanged with the right machines), but the technology is still young. And importantly, it depends on getting to scale. “Putting a bus or car on a blockchain is easy. The hard problem is getting to network effects. This will be a team sport,” says Ballinger. Alison McCauley is an author and strategy consultant who has been researching and working with startups, corporations, and investors building blockchain-enabled applications. She is author of Unblocked: How Blockchains Will Change Your Business and CEO and Founder of communications and strategy consultancy Unblocked Future.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9761769771575928, "language": "en", "url": "https://www.economist.com/business/2003/01/02/in-the-balance", "token_count": 1190, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.24609375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:7499f7c1-737a-49a2-8b71-cbe7930dfaca>" }
JUST as Japanese firms did during their country's bubble in the 1980s, American firms spent the late 1990s borrowing up to their eyeballs. Once the bubble burst, Japanese firms continued to raise lots of debt, thus ensuring that the mess they were in got much worse, and remains horrid today (see article). Japanese corporate debt peaked only in 1995, five years after the Nikkei started its dizzying descent. Only in 1997 did debt begin to fall by much; and 13 years after the bubble popped, Japanese firms still have higher ratios of debt to equity than American firms. A growing number of Americans, having noticed the remarkable similarity between their bubble and the Japanese one, now fear that their firms' balance sheets will likewise continue to worsen, contributing to a prolonged economic slump in the world's biggest economy. Are they right to worry? The borrowing binge by American firms during the bubble was a big reason why five of the eight biggest bankruptcies in the country's history, involving some $375 billion of assets, occurred last year. All things being equal, the more debt a firm has compared with its equity (ie, the more leveraged it is), the riskier it is. Unlike dividends, interest has to be paid, even if the economy sours or interest rates soar. In a bubble, that risk is disguised by inflated share prices, which make leverage appear lower and less threatening than it is. Last autumn, in anticipation of worsening problems, overall spreads for investment-grade bonds rose, relative to Treasury bond yields, higher than they had ever done. The Merrill Lynch High Yield Index (ie, for “junk” bonds) rose to a spread of 12 percentage points over Treasuries. This, on the face of it, signalled a far worse outlook than was ever projected by Japan's famously inefficient capital markets. To take one by no means unique example, that 12-point spread over Treasuries is some 100 times the spread in 1997 on bonds issued by Chichibu Cement, a struggling firm with a lowly double-B rating, seven years after Japan's bubble burst. It is still too soon to be certain, but these life-threatening interest rates may have brought matters to a head, helping to bankrupt the worst-offending firms and encouraging the rest to start shaping up. A fast-growing number of American firms have started to reduce debt. In telecoms, Sprint has sold its directories business to pay off debt, Verizon some of its access lines, and SBC stakes in subsidiaries. In electric utilities and power production, Mirant, Dominion, Duke Energy and CMS have all sold assets or issued equity. So has Tyco, a big, troubled conglomerate. Household International, a big financial firm, first raised some $900m of new equity, and then agreed to be taken over by HSBC, a large British-based bank. There would have been far more activity, except that managers at many firms have so far chafed at selling cheaply in a buyer's market. Bond investors have of late been persuaded that many firms are now getting serious. Bonds issued by investment-grade firms have tightened against Treasuries by 80 basis points (hundredths of a percentage point) since October 15th, their widest level. This may not sound a lot, but is. Junk-bond spreads have also narrowed. True, many balance sheets are still in poor shape. And though fewer firms are defaulting than were doing so or threatening to do so a few months ago, the ratio of upgrades to downgrades by Moody's, a credit-rating agency, continues to decline. 
However, points out John Lonski, Moody's chief economist, net new borrowing by American firms has fallen sharply, as have repurchases of shares. Free cashflow—revenues that are not spent or distributed to shareholders—is rising sharply, not least because merger activity has been moribund. The result is more money that can be spent on cutting debts. Overall, then, it does appear that a corner has been turned, and that the turning point came soon after spreads in the corporate-bond market peaked. And this may not be coincidental. Indeed, there are good reasons for thinking that America's sophisticated corporate-bond market may be what saves its firms from the fate of their Japanese counterparts. There are good reasons for thinking that America's sophisticated corporate-bond market may be what saves its firms from the fate of their Japanese counterparts Most corporate finance in Japan came from the country's banks, especially the big ones, which were in effect guaranteed by the government. (They still are, although that guarantee may be worth rather less now that the guarantor itself is up to its neck in debt.) With only a couple of important exceptions, big Japanese companies were not allowed to go bust. Interest rates on loans were generally low and rarely varied according to differences in the credit-worthiness of borrowers. This indifference among Japanese creditors to the sorts of factors that so agitated investors in America's corporate bond market last autumn continued long after the bubble in stocks and land had popped: banks—and the government—did not want to let companies go bust. Thus they continued to lend, throwing good money after bad. Borrowing remained absurdly cheap even for the many firms that heartily deserved to die. Only in 1997 did credit spreads start to rise somewhat and become a bit more differentiated. But that was too little, too late. Here is a positive thought, then, with which to start the new year. Just because America and Japan have each experienced a stockmarket bubble does not mean that their companies must suffer equally badly in the aftermath of the burst. Anyway, fingers crossed. This article appeared in the Business section of the print edition under the headline "In the balance"
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9587356448173523, "language": "en", "url": "https://www.hbsfinancialgroup.net/create-a-budget-in-college", "token_count": 989, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.03759765625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:f9fd31cd-c6d1-48e3-a848-d858284afd2c>" }
Create A Budget In College

Should I Create a Budget In College? – Absolutely, Learn How Here

If you don't know how to create a budget, or haven't been advised just how important it is to have one, then this article is for you. If this is the first time that you are on your own and you aren't able to get a handle on the various school expenses, then you may find college life difficult. Not being able to manage your money responsibly can cause financial difficulties, increased debt, and other concerns that you really don't need to have. You may be a very talented individual, especially in the field of study that you are pursuing, and college is expected to help you to take advantage of those talents. You need to expand your mind and learn to function as an independent and responsible adult. It's very important that you are able to focus on your education, and at the same time, enjoy the social life and extracurricular activities while in college. To accomplish that, you will need to create a budget that contains a flexible spending plan. That means your variable and fixed expenses must at least equal the financial resources that are available to you. Hopefully, for you, the resources will exceed the expenses. However, it's important that you create a budget that is not only realistic, but is flexible so that changes can be made if necessary. Creating a budget does not require a doctorate degree in math. As a matter of fact, that's the easy part. Following your budget is another matter. If this is your first budget, you'll soon see what a challenge it can be. Once you have your budget created, remember, it's not cast in concrete. If you missed an item or didn't calculate enough, then you need to make a change. Do your very best to follow your budget or you may soon discover that your money is gone, and you've dug a hole for yourself that will cause additional stress. Generally, the category that causes the most problems when you create a budget is discretionary expenses. The best place to start to create a budget is to determine the amount of funds that you have available, and the expenses that are fixed for that school year. The fixed expenses shouldn't vary. Start listing the income sources such as a part time job, work study programs, and don't forget money from mom and dad. If you're fortunate enough to have income from a trust, be thankful. Also list any student loans, grants, or other forms of financial aid. Cash gifts also qualify. Some examples of fixed expenses are tuition and related fees, plus room and board if you live on campus. If you are staying off campus, you will probably have rent and utilities. Added to this list of fixed expenses are also books, supplies, and equipment such as a laptop. If you have an auto loan, it will be in this category. Auto insurance and also medical insurance premiums will be a fixed expense, as well as your cell phone bill. When you have completed the list of fixed expenses, subtract that total from the gross income to arrive at your disposable income. That remainder has to cover all variable expenses such as food, snacks, and other personal expenses, such as laundry, haircuts, etc. You will need to factor in social activities and recreation as well. Gas and maintenance for the car, plus any parking fees are included in these expenses also.
We hope that after covering the variable expenses, there will be discretionary income that should go to savings and/or an emergency fund to cover some future expense that is sure to pop up. You need to track your spending carefully during this time to make certain that you are following your budget. We recommend to our clients that they purchase a good personal budget software program that will make it so much easier to get a handle on all of those expenses. This program will guide you every step of the way and will teach you how to properly maintain a budget. Most importantly, it will also teach you to eliminate debt as quickly as possible, especially credit cards that can lead you into a difficult financial situation quickly. You should have a good idea after you finish your first or second semester, of how much is needed in each category, and where you can make changes to adjust. Don’t forget to keep track of all of your spending, and take the matter of budgeting seriously. When you create a budget, you can’t afford to fail. There’s no re-taking that test!
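The arithmetic described above — list your income sources, subtract fixed expenses, then cover variable spending and save what is left — can be sketched in a few lines. Every figure below is an invented example, not a recommendation.

```python
# Illustrative semester budget: all figures are made-up examples.
income = {"part_time_job": 3500, "parents": 2500, "student_loan": 5000}
fixed_expenses = {"tuition_and_fees": 5500, "room_and_board": 2800,
                  "books_and_supplies": 400, "phone": 150}

disposable = sum(income.values()) - sum(fixed_expenses.values())
print(f"Disposable income for the semester: ${disposable}")

# Whatever is left must cover variable spending (food, laundry, gas,
# social activities); anything beyond that goes to savings or an
# emergency fund, as the article recommends.
variable_budget = {"food": 600, "personal": 250, "social": 300, "gas_parking": 200}
discretionary = disposable - sum(variable_budget.values())
print(f"Left for savings / emergency fund: ${discretionary}")
```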
{ "dump": "CC-MAIN-2020-29", "language_score": 0.972527265548706, "language": "en", "url": "https://www.hbsfinancialgroup.net/teaching-children-about-money", "token_count": 849, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.0017547607421875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:56d31ff5-9058-42b9-bfab-952493ae51c2>" }
Teaching Children About Money

Why Teaching Children About Money is Important

Teaching children about money and how to manage it early on will prevent them from having to go through the debt vortex so many Americans find themselves sucked into. Schools are lacking in any real, useful knowledge about money management, so the responsibility is on you to teach your children. The earlier they learn, the more comfortable and wise they become about personal money management. Be proactive in teaching your children about money, as well as your grandchildren. Children are old enough to begin to learn about money when they are old enough to ask for things like candy or toys. Make sure that what you teach them is age appropriate. While it is never a bad idea to let them look over your shoulder as you are comparing flight fares and hotel rates, keep their lessons fun and light. Make sure their money lessons can be applied and used in their life. Consider this time to be laying the groundwork for future money management success. Every child has daily chores. You can be teaching children money lessons in their chores in the following ways.

Teaching Children About Money & Money Management Is Critical

- Allowance 411. Children need money in order to learn how to manage it properly. When children earn something for their work, they also gain a sense of accomplishment and responsibility. Pick an amount that you are comfortable with giving them. Small children are happy to have $1, while older children may expect more. Make sure you show that you value the work your child is doing. While it may be difficult, do not set restrictions on what they can spend their money on. They have to understand the consequences of spending money on junk. They will get tired of wasting their money sooner or later. Make sure they are aware that when their money is gone, they will not receive any more until their allowance rolls around again. Do not give in and give them money before their allowance. Sometimes it takes a little suffering to make wiser decisions.
- Start saving. Children should learn the value of a savings account early on. Getting used to the saving aspect right from the start will avoid so many financial mistakes down the road. Open a savings account with your child. If they have a particular goal in mind to spend their money on, explain that the savings account will keep their money safe. Give children the exact dollar amount their goal is going to cost and set a time frame when he or she will need to reach their goal. Help them figure out how much they need to put into savings each allowance period. Make sure that amount is deposited into their account. Give your child a small notebook to track their growing savings. When their goal is met, take them to the bank to withdraw their money. Celebrate their money management success. You can even reward them if you would like. It is important that children have a positive anchor, or association, with saving money and meeting financial goals.
- Let children help you pay the bills. They will make the association of bills with getting to spend time with you. The more positive thinking you can create around money management, the fewer money hurdles your child will have later in life. Allow your child to look at bank statements, utility bills, and even a credit card bill. Explain to them ways to manage your money better.
When they get to peek into the adult world of money management, they have a more informed view of the real world. Being grown-up is not all about doing what you want, as children think. Allow them to see that being grown-up carries responsibilities. Stress the importance of always taking care of your responsibilities. Make sure children have a true understanding of income and expense. It is truly my hope that the children growing up today will not fall into the same debt trap so many of us did. Make sure you are a good example when teaching children about money by showing them how you manage your own. Children learn more by observing than by verbal teachings. You can also set up money activities for kids to generate more interest.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9321092963218689, "language": "en", "url": "https://www.tnah.com/hers-or-hes-breakdown-different-energy-rating-scales", "token_count": 659, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.09716796875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:24e2efa7-ad18-4e47-becb-d2f21da568d2>" }
HERS or HES? A Breakdown of Different Energy Rating Scales The amount of money a home owner spends on utility costs to operate their house can have a significant impact on their monthly budgets and what they can comfortably afford, so it’s important to be able to speak to how energy efficient your buildings are to potential clients. With several energy rating systems out there, however, let’s take a look at how to decipher what each number means. You can get a general sense of how energy efficient the home might be using the following rating systems. Residential Energy Services Network (RESNET) created the Home Energy Rating System (HERS), which is based on an index where the lower the number, the more efficient the house. By conducting blower door and duct leakage tests, the HERS Rater compares the home to a reference home (a model home that is the same type, size and shape) for a relative score. As an example, a HERS Index Score of 0 is a net-zero energy home that produces as much energy as it uses. A home compliant with the 2006 International Energy Conservation Code (IECC) would receive a score of 100. In comparison, a home with a HERS Index Score of 60 is 40% more efficient than the 2006 IECC-compliant home. The Department of Energy (DOE) developed the Home Energy Score (HES) as a low-cost way to estimate a home’s energy usage. The HES scale is typically used for existing homes (whereas HERS is often used for new homes) and ranges from 1-10, with a higher score indicating lower energy use. HES estimates the home’s total energy use, not energy use per square foot, so a larger home will most likely score lower on the scale than a smaller home. The score is a gauge of how much energy the home might use, and is determined after a walk-through of the home. The assessor collects around 50 data points, such as insulation grade, window type, and information on the heating/cooling system. Why do these scores matter, and how do they fit into the broader market? The HES can help Federal Housing Administration (FHA) borrowers take out a larger loan if the home has a higher score, indicating that the house is more energy efficient and, therefore, the owner is expected to have lower utility costs. Although the HES is a low-cost and reliable way to get an idea of a home’s energy usage, it cannot be used to comply with the IECC. The Energy Rating Index (ERI) is used as a performance path to comply with the IECC. The HERS Index can be used for this performance path, so while more expensive than HES, it can be advantageous to builders in order to comply with the IECC. For more information about NAHB’s sustainable and green building programs, contact Program Manager Anna Stern. To stay current on the high-performance residential building sector, follow NAHB’s Sustainability and Green Building team on Twitter. *All articles are redistributed from NAHBnow.com*
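As a quick illustration of the HERS arithmetic described above, the index can be read as a percentage relative to the 2006 IECC reference home, which scores 100. The sketch below is an editorial illustration following that convention, not part of the original article.

```python
def hers_relative_to_reference(hers_score: float) -> str:
    """Interpret a HERS Index Score against the 2006 IECC reference home (score = 100)."""
    delta = 100 - hers_score
    if delta > 0:
        return f"{delta:.0f}% more efficient than the 2006 IECC reference home"
    if delta < 0:
        return f"{-delta:.0f}% less efficient than the 2006 IECC reference home"
    return "equivalent to the 2006 IECC reference home"

# The example from the text: a score of 60 reads as 40% more efficient,
# and a score of 0 corresponds to a net-zero energy home.
print(hers_relative_to_reference(60))   # 40% more efficient ...
print(hers_relative_to_reference(130))  # 30% less efficient ...
```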
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9611712098121643, "language": "en", "url": "http://www.startupinnovation.org/research/crowdfunding/", "token_count": 376, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.087890625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:8bda90c1-d1aa-4d2c-b11b-c3620f9651ff>" }
Though it has roots in fields like microfinance and peer-to-peer lending, crowdfunding represents a potentially radically new way for budding entrepreneurs to get initial funding for their projects. As a subject that brings together my interest in entrepreneurship and my studies of internet communities, I have been studying the impact of crowdfunding, including how potential entrepreneurs raise money from the crowd, and what causes them to succeed. A few major points from my research so far:

- Delivery and fraud. Crowdfunding has been remarkably free from fraud, even though over 75% of projects deliver late. In my paper on the subject, I find that less than 1% of the funds in crowdfunding projects in technology and product design go to projects that seem to have little intention of delivering their results. I think this is in large part due to the influence of community on the crowdfunding process. The result is that (as covered in this great CNN piece) project creators almost always try to deliver, often at great cost to themselves.
- How to get funding. Successful projects in crowdfunding seem to have the same characteristics as successful VC-backed projects: they demonstrate plans on how to deliver, show prototypes, indicate outside support by including quotes from journalists, and reference other successful work that the proposers have done in the past.
- The double-edged sword. Crowdfunding makes potential entrepreneurs accountable to early customers or investors in a way that can cause problems, as well as benefits. Crowdfunding creates an obligation to deliver on a particular project, rather than allowing entrepreneurs the flexibility to change direction if they learn new things or find new opportunities.
- Advice. For those seeking crowdfunding, this image shows how various factors affect the chance of success, both in the campaign and the long term. More in this paper.

For more on crowdfunding, there are details in my recently published paper.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9299761056900024, "language": "en", "url": "https://artspace-jhb.co.za/for-the-us-the-proliferation-of-smart/", "token_count": 4127, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.00006914138793945312, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:fd1e6e55-0304-4d30-adc2-3cb6b95a7e61>" }
For organizations of all sizes, data management has shifted from an important competency to a critical differentiator that can determine market winners and has-beens. Fortune 1000 companies and government bodies are starting to benefit from the innovations of the web pioneers. These organizations are defining new initiatives and reevaluating existing strategies to examine how they can transform their businesses using Big Data. In the process, they are learning that Big Data is not a single technology, technique or initiative. Rather, it is a trend across many areas of business and technology.

Big Data refers to technologies and initiatives that involve data that is too diverse, fast-changing or massive for conventional technologies, skills and infrastructure to address efficiently. Said differently, the volume, velocity or variety of data is too great. But today, new technologies make it possible to realize value from Big Data. For example, retailers can track user web clicks to identify behavioral trends that improve campaigns, pricing and stockage. Utilities can capture household energy usage levels to predict outages and to incent more efficient energy consumption. Governments and even Google can detect and track the emergence of disease outbreaks via social media signals. Oil and gas companies can take the output of sensors in their drilling equipment to make more efficient and safer drilling decisions.

'Big Data' describes data sets so large and complex they are impractical to manage with traditional software tools. Specifically, Big Data relates to data creation, storage, retrieval and analysis that is remarkable in terms of volume, velocity, and variety:

- Volume. A typical PC might have had 10 gigabytes of storage in 2000. Today, Facebook ingests 500 terabytes of new data every day; a Boeing 737 will generate 240 terabytes of flight data during a single flight across the US; and the proliferation of smart phones, the data they create and consume, and sensors embedded into everyday objects will soon result in billions of new, constantly-updated data feeds containing environmental, location, and other information, including video.
- Velocity. Clickstreams and ad impressions capture user behavior at millions of events per second; high-frequency stock trading algorithms reflect market changes within microseconds; machine to machine processes exchange data between billions of devices; infrastructure and sensors generate massive log data in real-time; on-line gaming systems support millions of concurrent users, each producing multiple inputs per second.
- Variety. Big Data isn't just numbers, dates, and strings. Big Data is also geospatial data, 3D data, audio and video, and unstructured text, including log files and social media.

Traditional database systems were designed to address smaller volumes of structured data, fewer updates or a predictable, consistent data structure. Traditional database systems are also designed to operate on a single server, making increased capacity expensive and finite. As applications have evolved to serve large volumes of users, and as application development practices have become agile, the traditional use of the relational database has become a liability for many companies rather than an enabling factor in their business.
Big Data databases, such as MongoDB, solve these problems and provide companies with the means to create tremendous business value. Powerful Big Data solutions with less effort — MongoDB offers products and services that help you reduce effort and risk and get to production faster.

Big Data for the Enterprise

With Big Data databases, enterprises can save money, grow revenue, and achieve many other business objectives, in any vertical.

- Build new applications: Big data might allow a company to collect billions of real-time data points on its products, resources, or customers – and then repackage that data instantaneously to optimize customer experience or resource utilization. For example, a major US city is using MongoDB to cut crime and improve municipal services by collecting and analyzing geospatial data in real-time from over 30 different departments.
- Improve the effectiveness and lower the cost of existing applications: Big data technologies can replace highly-customized, expensive legacy systems with a standard solution that runs on commodity hardware. And because many big data technologies are open source, they can be implemented far more cheaply than proprietary technologies. For example, by migrating its reference data management application to MongoDB, a Tier 1 bank dramatically reduced the license and hardware costs associated with the proprietary relational database it previously ran, while also bringing its application into better compliance with regulatory requirements.
- Realize new sources of competitive advantage: Big data can help businesses act more nimbly, allowing them to adapt to changes faster than their competitors. For example, MongoDB allowed one of the largest Human Capital Management (HCM) solution providers to rapidly build mobile applications that integrated data from a wide variety of disparate sources.
- Increase customer loyalty: Increasing the amount of data shared within the organization – and the speed with which it is updated – allows businesses and other organizations to more rapidly and accurately respond to customer demand. For example, a top 5 global insurance provider, MetLife, used MongoDB to quickly consolidate customer information from over 70 different sources and provide it in a single, rapidly-updated view.

Selecting a Big Data Technology: Operational vs. Analytical

The Big Data landscape is dominated by two classes of technology: systems that provide operational capabilities for real-time, interactive workloads where data is primarily captured and stored; and systems that provide analytical capabilities for retrospective, complex analysis that may touch most or all of the data. These classes of technology are complementary and frequently deployed together.

Operational and analytical workloads for Big Data present opposing requirements, and systems have evolved to address their particular demands separately and in very different ways. Each has driven the creation of new technology architectures. Operational systems, such as the NoSQL databases, focus on servicing highly concurrent requests while exhibiting low latency for responses operating on highly selective access criteria. Analytical systems, on the other hand, tend to focus on high throughput; queries can be very complex and touch most if not all of the data in the system at any time.
Both systems tend to operate over many servers operating in a cluster, managing tens or hundreds of terabytes of data across billions of records.

Operational Big Data

For operational Big Data workloads, NoSQL Big Data systems such as document databases have emerged to address a broad set of applications, and other architectures, such as key-value stores, column family stores, and graph databases, are optimized for more specific applications. NoSQL technologies, which were developed to address the shortcomings of relational databases in the modern computing environment, are faster and scale much more quickly and inexpensively than relational databases.

Critically, NoSQL Big Data systems are designed to take advantage of new cloud computing architectures that have emerged over the past decade to allow massive computations to be run inexpensively and efficiently. This makes operational Big Data workloads much easier to manage, and cheaper and faster to implement.

In addition to user interactions with data, most operational systems need to provide some degree of real-time intelligence about the active data in the system. For example, in a multi-user game or financial application, aggregates for user activities or instrument performance are displayed to users to inform their next actions. Some NoSQL systems can provide insights into patterns and trends based on real-time data with minimal coding and without the need for data scientists and additional infrastructure.

Analytical Big Data

Analytical Big Data workloads, on the other hand, tend to be addressed by MPP database systems and MapReduce. These technologies are also a reaction to the limitations of traditional relational databases and their lack of ability to scale beyond the resources of a single server. Furthermore, MapReduce provides a new method of analyzing data that is complementary to the capabilities provided by SQL.

As applications gain traction and their users generate increasing volumes of data, there are a number of retrospective analytical workloads that provide real value to the business. Where these workloads involve algorithms that are more sophisticated than simple aggregation, MapReduce has emerged as the first choice for Big Data analytics. Some NoSQL systems provide native MapReduce functionality that allows for analytics to be performed on operational data in place. Alternately, data can be copied from NoSQL systems into analytical systems such as Hadoop for MapReduce.

Overview of Operational vs. Analytical Systems

                  Operational          Analytical
  Latency         1 ms – 100 ms        1 min – 100 min
  Concurrency     1000 – 100,000       1 – 10
  Access Pattern  Writes and Reads     Reads
  Queries         Selective            Unselective
  Data Scope      Operational          Retrospective
  End User        Customer             Data Scientist
  Technology      NoSQL                MapReduce, MPP Database

Cloud Computing

Cloud computing refers to a broad set of computing and software products that are sold as a service, managed by a provider and delivered over a network. Infrastructure-as-a-Service (IaaS) is a flavor of cloud computing in which on-demand processing, storage or network resources are provided to the customer. Sold on-demand with limited or no upfront investment for the end-user, consumption is readily scalable to accommodate spikes in usage. Customers pay only for the capacity that is actually used (like a utility), as opposed to self-hosting, where the user pays for system capacity whether it is used or not.

As compared to self-hosting, IaaS is:

- Inexpensive. To self-host an application, one has to pay for enough resources to handle peak load on an application, at all times. Amazon discovered that before launching its cloud offering it was using only about 10% of its server capacity the vast majority of the time.
- Tailored. Small applications can be run for very little cost by taking advantage of spare capacity. Bandwidth, processing and storage capability can be added in relatively small increments.
- Elastic. Computing resources can easily be added and released as needed, making it much easier to deal with unexpected traffic spikes.
- Reliable. With the cloud, it's easy and inexpensive to have servers in multiple geographic locations, allowing content to be served locally to users, and also allowing for better disaster recovery and business continuity.

Overall, cloud computing provides improvements to agility and scalability, together with lower costs and faster time to market. However, it does require that applications be engineered to take advantage of this new infrastructure; applications built for the cloud need to be able to scale by adding more servers, for example, instead of adding capacity to existing servers.

On the storage layer, traditional relational databases were not designed to take advantage of horizontal scaling. A class of new database architectures, dubbed NoSQL databases, are designed to take advantage of the cloud computing environment. NoSQL databases are natively able to handle load by spreading data among many servers, making them a natural fit for the cloud computing environment. Part of the reason NoSQL databases can do this is that related data is always stored together, instead of in separate tables. This document data model, used in MongoDB and other NoSQL databases, makes them a natural fit for the cloud computing environment.

In fact, MongoDB is built for the cloud. Its native scale-out architecture, enabled by 'sharding,' aligns well with the horizontal scaling and agility afforded by cloud computing. Sharding automatically distributes data evenly across multi-node clusters and balances queries across them. In addition, MongoDB automatically manages sets of redundant servers, called 'replica sets,' to maintain availability and data integrity even if individual cloud instances are taken offline. To ensure high availability, for instance, users can spin up multiple members of a replica set as individual cloud instances across different availability zones and/or data centers. With MongoDB Atlas, both the infrastructure and the storage layer are delivered as a service. Rather than managing the deployment of replica sets or sharded clusters, MongoDB Atlas automates these operational tasks for the end user. Learn more about MongoDB Atlas.

Combining Operational and Analytical Technologies; Using Hadoop

New technologies like NoSQL, MPP databases, and Hadoop have emerged to address Big Data challenges and to enable new types of products and services to be delivered by the business. One of the most common ways companies are leveraging the capabilities of both systems is by integrating a NoSQL database such as MongoDB with Hadoop.
The connection is easily made by existing APIs and allows analysts and data scientists to perform complex, retroactive queries for Big Data analysis and insights while maintaining the efficiency and ease-of-use of a NoSQL database.
NoSQL, MPP databases and Hadoop are complementary: NoSQL systems should be used to capture Big Data and provide operational intelligence to users, and MPP databases and Hadoop should be used to provide analytical insight for analysts and data scientists. Together, NoSQL, MPP databases and Hadoop enable businesses to capitalize on Big Data.

Considerations for Decision Makers
While many Big Data technologies are mature enough to be used for mission-critical, production use cases, the field is still nascent in some regards. Accordingly, the way forward is not always clear. As organizations develop Big Data strategies, there are a number of dimensions to consider when selecting technology partners, including:
1. Online vs. Offline Big Data
2. Software Licensing Models
3. Community
4. Developer Appeal
5. Agility
6. General Purpose vs. Niche Solutions

1. Online vs. Offline Big Data
Big Data can take both online and offline forms. Online Big Data refers to data that is created, ingested, transformed, managed and/or analyzed in real time to support operational applications and their users. Big Data is born online. Latency for these applications must be very low and availability must be high in order to meet SLAs and user expectations for modern application performance. This includes a vast array of applications, from social networking news feeds to analytics to real-time ad servers to complex CRM applications. Examples of online Big Data databases include MongoDB and other NoSQL databases.
Offline Big Data encompasses applications that ingest, transform, manage and/or analyze Big Data in a batch context. They typically do not create new data. For these applications, response time can be slow (up to hours or days), which is often acceptable for this type of use case. Since they usually produce a static (vs. operational) output, such as a report or dashboard, they can even go offline temporarily without impacting the overall goal or end product. Examples of offline Big Data applications include Hadoop-based workloads; modern data warehouses; extract, transform, load (ETL) applications; and business intelligence tools.
Organizations evaluating which Big Data technologies to adopt should consider how they intend to use their data. Those looking to build applications that support real-time, operational use cases will need an operational data store like MongoDB. Those that need a place to conduct long-running analysis offline, perhaps to inform decision-making processes, will find offline solutions like Hadoop an effective tool. Organizations pursuing both use cases can do so in tandem, and they will sometimes find integrations between online and offline Big Data technologies. For instance, MongoDB provides integration with Hadoop.

2. Software Licensing Models
There are three general types of licenses for Big Data software technologies:
Proprietary. The software product is owned and controlled by a software company. The source code is not available to licensees. Customers typically license the product through a perpetual license that entitles them to indefinite use, with annual maintenance fees for support and software upgrades. Examples of this model include databases from Oracle, IBM and Teradata.
Open-Source. The software product and source code are freely available to use. Companies monetize the software product by selling subscriptions and adjacent products with value-added components, such as management tools and support services. Examples of this model include MongoDB (by MongoDB, Inc.) and Hadoop (by Cloudera and others).
Cloud Service. The service is hosted in a cloud-based environment outside of customers' data centers and delivered over the public Internet. The predominant business model is metered (i.e., pay-per-use) or subscription-based. Examples of this model include Google App Engine and Amazon Elastic MapReduce.
For many Fortune 1000 companies, regulations and internal policies around data privacy limit their ability to leverage cloud-based solutions. As a result, most Big Data initiatives are driven with technologies deployed on-premise. Most of the Big Data pioneers are web companies that developed powerful software and hardware, which they open-sourced to the larger community. Accordingly, most of the software used for Big Data projects is open-source.

3. Community
In these early days of Big Data, there is an opportunity to learn from others. Organizations should consider how many other initiatives are being pursued using the same technologies and with similar objectives. To understand a given technology's adoption, organizations should consider the following:
- The number of users
- The prevalence of local, community-organized events
- The health and activity of online forums such as Google Groups and StackOverflow
- The availability of conferences, how frequently they occur and whether they are well-attended

4. Developer Appeal
The market for Big Data talent is tight. The nation's top engineers and data scientists often flock to companies like Google and Facebook, which are known havens for the brightest minds and places where one will be exposed to leading-edge technology. If enterprises want to compete for this talent, they have to offer more than money.
By offering developers the opportunity to work on tough problems, and by using a technology that has strong developer interest, a vibrant community and an auspicious long-term future, organizations can attract the brightest minds. They can also increase the pool of candidates by choosing technologies that are easy to learn and use — which are often the ones that appeal most to developers. Furthermore, technologies that have strong developer appeal tend to make for more productive teams who feel they are empowered by their tools rather than encumbered by poorly designed, legacy technology. Productive developer teams reduce time to market for new initiatives and reduce development costs as well.

5. Agility
Organizations should use Big Data products that enable them to be agile. They will benefit from technologies that get out of the way and allow teams to focus on what they can do with their data, rather than how to deploy new applications and infrastructure. This will make it easy to explore a variety of paths and hypotheses for extracting value from the data and to iterate quickly in response to changing business needs.
In this context, agility comprises three primary components:
Ease of Use. A technology that is easy for developers to learn and understand — either because of the way it's architected, the availability of tools and information, or both — will enable teams to get Big Data projects started and to realize value quickly. Technologies with steep learning curves and fewer resources to support education will make for a longer road to project execution.
Technological Flexibility. The product should make it relatively easy to change requirements on the fly — such as how data is modeled, which data is used, where data is pulled from and how it gets processed as teams develop new findings and adapt to internal and external needs. Dynamic data models (also known as schemas) and scalability are capabilities to seek out.
Licensing Freedom. Open-source products are typically easier to adopt, as teams can get started quickly with free community versions of the software. They are also usually easier to scale from a licensing standpoint, as teams can buy more licenses as requirements increase. By contrast, in many cases proprietary software vendors require large, upfront license purchases, which make it harder for teams to get moving quickly and to scale in the future.
MongoDB's ease of use, dynamic data model and open-source licensing model make it the most agile online Big Data solution available.

6. General Purpose vs. Niche Solutions
Organizations are constantly trying to standardize on fewer technologies to reduce complexity, to improve their competency in the selected tools and to make their vendor relationships more productive. Organizations should consider whether adopting a Big Data technology helps them address a single initiative or many initiatives. If the technology is general purpose, the expertise, infrastructure, skills, integrations and other investments of the initial project can be amortized across many projects. Organizations may find that a niche technology is a better fit for a single project, but that a more general-purpose tool is the better option for the organization as a whole.
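To make the "online" half of this split concrete, here is a minimal sketch of MongoDB used as an operational Big Data store, written in Python with the pymongo driver. The connection string, database, collection and field names are hypothetical placeholders rather than anything prescribed by the text above, and the hand-off to an offline system is only indicated in a comment.

```python
# Minimal sketch: MongoDB as the online/operational Big Data store.
# Assumes a local MongoDB instance and the pymongo driver (pip install pymongo).
from datetime import datetime, timedelta
from pymongo import MongoClient, DESCENDING

client = MongoClient("mongodb://localhost:27017")   # hypothetical connection string
events = client["bigdata_demo"]["clickstream"]      # hypothetical database/collection

# Dynamic data model: documents in the same collection need not share a schema.
events.insert_one({"user": "u42", "action": "view", "page": "/pricing",
                   "ts": datetime.utcnow()})
events.insert_one({"user": "u42", "action": "purchase", "sku": "SKU-9",
                   "amount": 49.00, "ts": datetime.utcnow()})

# Operational query: a low-latency lookup of the kind a live application serves.
recent = events.find(
    {"user": "u42", "ts": {"$gte": datetime.utcnow() - timedelta(hours=1)}}
).sort("ts", DESCENDING)
for doc in recent:
    print(doc["action"], doc.get("page") or doc.get("sku"))

# Long-running, retroactive analysis over the full history would typically be
# handed off to Hadoop or an MPP warehouse (e.g. via the MongoDB Connector for
# Hadoop) rather than run against the operational store.
```

The point of the sketch is simply the division of labour described above: low-latency reads and writes stay in the NoSQL store, while batch analysis over the full history is pushed out to an offline system.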
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9120700359344482, "language": "en", "url": "https://eponline.com/articles/2012/01/05/renewable-energy-sees-explosive-growth-in-2011.aspx", "token_count": 648, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1767578125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:f936f9f7-2942-4b99-976f-31ca6941dcde>" }
Renewable Energy Sees Explosive Growth in 2011
According to the most recent issue of the Monthly Energy Review by the U.S. Energy Information Administration (EIA), with data through September 30, 2011, renewable energy sources continue to expand rapidly while substantially outpacing the growth rates of fossil fuels and nuclear power. For the first nine months of 2011, renewable energy sources (i.e., biomass/biofuels, geothermal, solar, water, wind) provided 11.95 percent of domestic U.S. energy production. That compares to 10.85 percent for the same period in 2010 and 10.33 percent in 2009. By comparison, nuclear power provided just 10.62 percent of the nation's energy production in the first three quarters of 2011 -- i.e., 11.10 percent less than renewables.
Looking at all energy sectors (e.g., electricity, transportation, thermal), renewable energy output, including hydropower, grew by 14.44 percent in 2011 compared to 2010. Among the renewable energy sources, conventional hydropower provided 4.35 percent of domestic energy production during the first nine months of 2011, followed by biomass (3.15 percent), biofuels (2.57 percent), wind (1.45 percent), geothermal (0.29 percent) and solar (0.15 percent). (On the consumption side, which includes oil and other energy imports, renewable sources accounted for 9.35 percent of total U.S. energy use during the first nine months of 2011.)
Looking at just the electricity sector, according to the latest issue of EIA's Electric Power Monthly, with data through September 30, 2011, renewable energy sources (i.e., biomass, geothermal, solar, water, wind) provided 12.73 percent of net U.S. electrical generation. This represents an increase of 24.73 percent compared to the same nine-month period in 2010. By comparison, electrical generation from coal dropped by 4.2 percent while nuclear output declined by 2.8 percent. Natural gas electrical generation rose by 1.6 percent.
Conventional hydropower accounted for 8.21 percent of net electrical generation during the first nine months of 2011 -- an increase of 29.6 percent compared to 2010. Non-hydro renewables accounted for 4.52 percent of net electrical generation (wind - 2.73 percent, biomass - 1.34 percent, geothermal - 0.40 percent, solar - 0.05 percent). Compared to the first three quarters of 2010, solar-generated electricity expanded in 2011 by 46.5 percent, wind by 27.1 percent, geothermal by 9.4 percent, and biomass by 1.3 percent.
"Notwithstanding the recession of the past three years, renewable energy sources have experienced explosive rates of growth that other industries can only envy," said Ken Bossong, executive director of the SUN DAY Campaign. "The investments in sustainable energy made by the federal government as well as state and private funders have paid off handsomely, underscoring the short-sightedness of emerging proposals to cut back on or discontinue such support."
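One small point worth making explicit is that the "11.10 percent less than renewables" figure is a relative comparison of the two shares, not a difference in percentage points. A quick check with the article's own numbers (the variable names below are ours):

```python
# Reproduce the article's comparison of nuclear vs. renewable shares (Jan-Sep 2011).
renewables_share = 11.95   # % of domestic U.S. energy production
nuclear_share = 10.62      # % of domestic U.S. energy production

gap_in_points = renewables_share - nuclear_share        # 1.33 percentage points
gap_relative = gap_in_points / renewables_share * 100    # ~11.1% less than renewables

print(f"Nuclear trails renewables by {gap_in_points:.2f} points "
      f"({gap_relative:.1f}% in relative terms)")
```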
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9680585861206055, "language": "en", "url": "https://itep.org/enid-news-eagle-a-low-tax-state-for-only-some-oklahomans/", "token_count": 176, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.412109375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:c61ae3e0-0a8f-4312-aedd-02ed09dac487>" }
While Oklahoma has a reputation as a low tax state, poor and middle-income Oklahomans are actually paying a greater share of their income in taxes than the national average, while the richest 5 percent of households — with annual incomes of $194,500 or more — pay less. That’s why Oklahoma ranks among the 10 worst states for tax inequality in the newly updated Who Pays report from the Institute on Taxation and Economic Policy (ITEP). The analysis evaluates major state and local taxes, including personal and corporate income taxes, property taxes, sales and other excise taxes. It finds that the poorest Oklahoma households pay 2.1 times as much of their incomes in taxes as the wealthiest 1 percent. In Oklahoma, the poorest 20 percent of households pay the 5th highest taxes as a share of their incomes — 13.4 percent — in the country.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9673730134963989, "language": "en", "url": "https://ontariograinfarmer.ca/2016/10/01/ontarios-2016-winter-wheat-crop/", "token_count": 1096, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.02294921875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:3228c400-3202-4d1e-8e6c-e5adcd97732a>" }
IMPLICATIONS OF A LARGE HARVEST
IT'S A MEGA-YEAR for Ontario wheat acreages. In 1981, harvested acres of winter wheat in the province totalled 504,000, and about 20 years later in 2002, it wasn't much different (580,000). However, a few years later it started to increase, hitting over one million acres four times between 2006 and 2013. In 2016, 975,000 acres of wheat were harvested. On the yield front, the average Ontario winter wheat yield was 51.6 bushels per acre (bu/ac) in 1981, and reached nearly 84 bu/ac last year. This year, the harvest was as high as 90 bu/ac.
"Over the last few years, more acres of wheat have been planted in Ontario due to good prices and more farmers diversifying their rotation and sticking to their rotation," says Todd Austin, manager of wheat marketing at Grain Farmers of Ontario, about long term wheat acreage trends. "In both 2013 and 2014, wet fall weather delayed soybean harvest so that farmers could not get the wheat in, but last year they were able to."
Several classes of winter wheat are planted in Ontario, but Austin says farmers mostly plant soft red winter wheat (about 80 per cent) because it provides the most marketing opportunities. Dana Omland, grain merchandising manager for Ontario at Ceres Global Ag Corp. in Guelph, says there is also less production risk with this class of wheat, which makes it an attractive option.
It's not only Ontario that is increasing wheat production. China, India, Russia, Australia, Ukraine, France, Germany, and the U.S. are all planting larger amounts of wheat, Austin notes. India and China don't export much — Russia, however, does. Reuters news agency reported in June that Russia's grain crop could reach 110 million metric tons this year, the second-largest harvest ever and up from 105 million in 2015. "Russia is selling a fair amount into North Africa and the Mid-East," says Austin. "They tend to offer good prices and have better proximity to those end markets. The ruble has devalued against the U.S. dollar, so it's cheap in U.S. dollars."
Ontario has not exported much winter wheat in past years compared to domestic markets, only a little to the U.S. and a little overseas, but about half the soft red winter harvest is now exported, Omland notes.
The prices farmers will get this year for wheat will depend on a few factors. Austin explains that over the last two years, global wheat prices were lower than Ontario's and that they were weak in general due to a large worldwide inventory. In addition, he says wheat trades in sympathy with corn, and the large U.S. corn production last year meant lower prices for corn and wheat. "It's a poor year in the U.S. this year for wheat, but they still have a lot in their inventory from last year," Austin adds. "Yes, demand grows each year, but so does global supply. Lower prices due to these factors can stimulate demand, but the question is when does that happen? Commodity prices are cyclical and the low points will be low enough that buyers come back, demand surges and prices go up, but that's likely a while away."
He also observes that, “whenever there is intense global competition, normally we see a rise in technical barriers to trade, customers demanding tighter provisions, and more scrutiny for mycotoxins, allergens, and other quality-related issues. We’re hopeful that doesn’t come to pass, those strategies to winnow out higher-quality wheat, but it may happen this year with the large volumes out there. The good news is that demand looks to be strong, especially in the feed market, because wheat is becoming competitive there with corn.” Omland agrees. He notes that because of the drought in Ontario this summer, some traders are anticipating the corn yield to be lower by 25 bu/ac. “This reduced supply of corn has the potential to soak up any excess of Ontario wheat,” he notes. “Three months ago, traders were wondering where it might go, but this corn harvest drop will likely mean more potential for wheat to work its way into the feed market.” In terms of how much wheat to plant this fall, Austin advises growers to stick to their crop rotation to get the full benefits of all that provides. “The other thing is to look for price jumps in the market and take advantage of them,” he says. •
{ "dump": "CC-MAIN-2020-29", "language_score": 0.958274245262146, "language": "en", "url": "https://www.caixabankresearch.com/en/economics-markets/public-sector/good-education", "token_count": 597, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1845703125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:6cd1851f-c527-45f4-8df2-79ca94daca15>" }
Public policies aimed at reducing poverty and inequality often face a dilemma between equity and efficiency. But a good education policy has ample potential to improve a workforce’s productivity and also promote social mobility by giving young children equal opportunities in early education. It is therefore a powerful tool to transform society, if not the most important. An education system needs to be of good quality to make the most of all its potential. Particularly at a time such as the present, when we must adapt to a digital revolution that is transforming the productive system and consequently the skills and abilities demanded by the labour market. It is important what, how and when people learn. A 21st-century education system cannot teach the same as the last century. It must especially teach people how to learn. Many studies have stressed the importance not only of cognitive skills such as language, communication, information processing, numeracy and logic, but also non-cognitive or soft skills such as the power of concentration and planning, perseverance, self-control and interpersonal relations. Knowledge needs to be passed on but so do approaches to working, organising oneself and learning. And also values. Regarding how to learn, there are many different myths regarding the best way to teach. For instance, available evidence suggests that variables such as class size and the amount of resources devoted to the system (in both cases within certain limits) do not affect the quality of education to any great extent. By far the most important factor for a successful education system is teacher quality. Those countries with the best systems, such as Singapore, Finland and Korea, can attract and retain the best talent by offering attractive careers, continued training and social prestige for the teaching profession. Parents are also important, especially in terms of the time devoted to their children in activities such as reading and talking. A good work-life balance is therefore essential for them to have such time available. Lastly, regarding the when, several research studies have shown the importance of investing in children’s education from birth up to five years of age to ensure equal opportunities. The first five years of learning have a huge influence on children’s potential as students and adults. The Nobel prize-winner for Economics, James Heckman, has estimated that investing in the most disadvantaged segments of the population at this age has a return of between 7% and 10%. Few public investments can offer better. These are just some of the things we know. But the quest for excellence in education must be an ongoing process supported by painstaking research. Constant innovation and evaluation are required. The most successful countries approach education like the field of medicine: pilot tests are carried out to evaluate innovations (in the what, how and when) and changes proven to be effective are adopted. This is the best way to adapt to continual change. In education, resistance to change or unwarranted change has a huge cost in terms of equity and efficiency. A cost we can ill afford. 30 April 2017
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9074194431304932, "language": "en", "url": "https://www.yourarticlelibrary.com/business/over-capitalization-meaning-causes-and-effect-of-over-capitalization/27978", "token_count": 1162, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1748046875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:2ccff1af-4667-4d99-8098-ce0eab07620b>" }
Over-Capitalization: Meaning, Causes and Effect of Over-Capitalization! Meaning of Over-capitalization: It is the capitalization under which the actual profits of the company are not sufficient to pay interest on debentures and borrowings and a fair rate of dividend to shareholders over a period of time. In other words, a company is said to be over-capitalised when it is not able to pay interest on debentures and loans and ensure a fair return to the shareholders. We can illustrate over-capitalisation with the help of an example. Suppose a company earns a profit of Rs. 3 lakhs. With the expected earnings of 15%, the capitalisation of the company should be Rs. 20 lakhs. But if the actual capitalisation of the company is Rs. 30 lakhs, it will be over-capitalised to the extent of Rs. 10 lakhs. The actual rate of return in this case will go down to 10%. Since the rate of interest on debentures is fixed, the equity shareholders will get lower dividend in the long-run. There are three indicators of over-capitalisation, namely: (a) The amount of capital invested in the company’s business is much more than the real value of its assets. (b) Earnings do not represent a fair return on capital employed. (c) A part of the capital is either idle or invested in assets which are not fully utilised. Causes of Over-Capitalisation: Over-capitalisation may be the result of the following factors: (i) Acquisition of Assets at Higher Prices: Assets might have been acquired at inflated prices or at a time when the prices were at their peak. In both the cases, the real value of the company would be below its book value and the earnings very low. (ii) Higher Promotional Expenses: The company might incur heavy preliminary expenses such as purchase of goodwill, patents, etc.; printing of prospectus, underwriting commission, brokerage, etc. These expenses are not productive but are shown as assets. The directors of the company may over-estimate the earnings of the company and raise capital accordingly. If the company is not in a position to invest these funds profitably, the company will have more capital than is required. Consequently, the rate of earnings per shares will be less. (iv) Insufficient Provision for Depreciation: Depreciation may be charged at a lower rate than warranted by the life and use of the assets, and the company may not make sufficient provisions for replacement of assets. This will reduce the earning capacity of the company. (v) Liberal Dividend Policy: The company may follow a liberal dividend policy and may not retain sufficient funds for self-financing. This may lead to over-capitalisation in the long-run. (vi) Inefficient Management: Inefficient management and extravagant organisation may also lead to over-capitalisation of the company. The earnings of the company will be low. Effects of Over-capitalisation on Company: An over-capitalised company may suffer from the following ill consequences or disadvantages: (i) The shares of the company may not be easily marketable because of reduced earnings per share. (ii) The company may not be able to raise fresh capital from the market. (iii) Reduced earnings may force the management to follow unfair practices. It may manipulate the accounts to show higher profits. (iv) Management may cut down expenditure on maintenance and replacement of assets. Proper amount of depreciation of assets may not be provided for. (v) Because of low earnings, reputation of the company would be lowered. 
Effects of Over-capitalisation on Shareholders: Over-capitalisation is disadvantageous to the shareholders because of the following reasons: (i) Over-capitalisation results in reduced earnings for the company. This means the shareholders will get lesser dividend. (ii) Market value of shares will go down because of lower profitability. (iii) There may be no certainty of income to the shareholders in the future. (iv) The reputation of the company will go down. Because of this, the shares of the company may not be easily marketable. (v) In case of reorganisation, the face value of the equity share might be brought down. Effects of Over-capitalisation on Society: The effects of over-capitalisation on the society are as follows: (i) The profits of an over-capitalised company would show a declining trend. Such a company may resort to tactics like increase in product price or lowering of product quality. (ii) Return on capital employed is very low. This means that financial resources of the public are not being utilised properly. (iii) An over-capitalised company may not be able to pay interest to the creditors regularly. (iv) The company may not be able to provide better working conditions and adequate wages to the workers. Remedies for Over-capitalization: In order to correct the situation caused by over-capitalisation, the following measures should be adopted: (i) The earning capacity of the company should be increased by raising the efficiency of human and non-human resources of the company. (ii) Long-term borrowings carrying higher rate of interest may be redeemed out of existing resources. (iii) The par value and/or number of equity shares may be reduced. (iv) Management should follow a conservative policy in declaring dividend and should take all measures to cut down unnecessary expenses on administration.
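As a quick sanity check of the numerical illustration in the "Meaning" section above, the following sketch reproduces the arithmetic in Python; the function and variable names are ours, and Rs. 1 lakh is taken as Rs. 100,000.

```python
# Worked example from the article: profit Rs. 3 lakh, expected return 15%,
# actual capitalisation Rs. 30 lakh.
LAKH = 100_000

def capitalisation_check(profit, expected_rate, actual_capitalisation):
    fair = profit / expected_rate                  # capitalised value of earnings
    excess = actual_capitalisation - fair          # over-capitalisation, if positive
    actual_rate = profit / actual_capitalisation   # return actually earned
    return fair, excess, actual_rate

fair, excess, actual_rate = capitalisation_check(
    profit=3 * LAKH, expected_rate=0.15, actual_capitalisation=30 * LAKH)

print(f"Fair capitalisation: Rs. {fair / LAKH:.0f} lakh")    # Rs. 20 lakh
print(f"Over-capitalised by: Rs. {excess / LAKH:.0f} lakh")  # Rs. 10 lakh
print(f"Actual rate of return: {actual_rate:.0%}")           # 10%
```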
{ "dump": "CC-MAIN-2020-29", "language_score": 0.969328761100769, "language": "en", "url": "https://answers.yahoo.com/question/index?qid=20130529140619AAedM7R", "token_count": 1112, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.2236328125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:cac9626e-7672-4961-a7c9-fd929dc88396>" }
costs of low inflation?
I only know the costs of INFLATION but I can't find the costs of LOW INFLATION. There must be some...

Favorite Answer (juicebox, Lv 5, 7 years ago):
One disadvantage of low inflation is that it hinders the ability of the central bank to fight severe recessions. To explain this, you should consider the following general points first:
a) The central bank fights recessions by lowering real interest rates.
b) The real interest rate is equal to the nominal interest rate minus inflation (r = i - π).
c) The real interest rate is what firms and households normally consider for investment and consumption decisions; a lower real interest rate will induce firms and households to borrow and spend more, thus increasing the number of people who are hired to produce those things (say a company borrowing money to purchase machinery, which will have to be replaced or made by the providers of that machine; this will require labor, which will be hired. The laborers will get paid, and will spend on other products; there will in turn be additional people hired to meet the demand the laborers have generated for those products.)
d) The central bank can control real interest rates because prices are slow to adjust for various reasons.
So remember this equation again: r = i - π. If the central bank reduces i (the nominal interest rate), and prices are very sticky (that is, π is very slow to adjust relative to the increase in the money supply that reduces i), then the reduction in i will translate to a reduction in r (the real interest rate), which will increase employment and output.
Ok, so those are the points. Now, say inflation is 2% coming into a severe and nasty recession; let's call it the Lesser Depression (comparing it to the Great Depression of the 30's). And let's say the nominal interest rate is 5%. The severe recession will make the central bank reduce the nominal interest rate all the way to zero, and given that inflation is 2%, the real interest rate is negative 2%. And given that inflation will come down more due to so many workers being out of work and not spending as much as they could to make the economy produce what it could potentially, inflation will probably slowly go down to 1%, making the real interest rate negative 1%. The recession the US experienced in the late 2000's required a much lower real interest rate (yes, I know, that's pretty surprising) than that to get the economy speedily back to where it normally should be.
So what would have happened if inflation was, say, 4% coming into the recession? Well, the nominal interest rate would have probably been 7%. The central bank would have reduced it to 0 to fight the recession. The real interest rate would then be negative 4%, and given that inflation would probably decline a bit, it would still be around negative 2% to negative 3%. This would have had a much stronger positive impact on the economy and helped the recession to be milder and shorter.
Another disadvantage could be that low inflation will have perverse consequences for an economy in which a giant and important section of that economy (say, many many households that had borrowed a lot of money) has a lot of debt on their hands. Low inflation will hinder their ability to pay down that debt effectively and in a timely manner. The reason is that the value of debt households have (say, a $500,000 loan held by a household that used it to buy their house) does not change with inflation. So if prices, and hence wages and the income people receive, rose 10 percent in a year, the value of that debt ($500,000) would not change with it. This would leave the household with a higher income relative to that debt, and make it easier to service it. In effect, the value of the debt the household holds has eroded due to higher prices in the economy. This leaves the household less constrained by their debt, and enables them to spend more, thus buoying the economy. If inflation is persistently low and their debts sufficiently large, the reverse process occurs and the economy will remain depressed for a long period of time.
Another disadvantage is that low inflation is really close to outright deflation; that is, a decrease in the price level. Deflation is generally bad for an economy, because it increases the burden of outstanding debt, which can have bad consequences for financial institutions like banks. (This is what happened in the Great Depression: a precipitous fall in the price of many agricultural products meant that farmers could not earn enough money to pay back their loans to agricultural banks, which left those banks in big trouble.) Deflation also increases real interest rates, which serves to worsen a recession and thus reinforce the deflation that is occurring. Deflation also makes people tend to put off purchases of stuff, since they continuously expect the prices of those products to decrease further in the future. It therefore makes people want to sit on cash, which in modern economies is really really bad.
Hope this helped.

Answer (Anonymous, 7 years ago):
The most well-established cost of low inflation is unemployment. It is demonstrated by the Phillips curve, which slopes downward between inflation and unemployment. To prevent it, the Fed has turned to inflation targeting of 2-3%. If inflation falls below that, it is possible to increase QE and lower interest rates.
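For readers who want the first answer's arithmetic spelled out, the sketch below applies the r = i - π shorthand to the two scenarios it describes (entering the downturn with 2% versus 4% inflation, with the nominal rate cut to zero). The figures are the answer's own; the function and labels are ours.

```python
# r = i - pi : real rate = nominal rate minus inflation, as used in the answer above.
def real_rate(nominal, inflation):
    return nominal - inflation

scenarios = {
    "low inflation (2% entering, drifting to 1%)":    [(0.0, 2.0), (0.0, 1.0)],
    "higher inflation (4% entering, drifting to 3%)": [(0.0, 4.0), (0.0, 3.0)],
}

for label, path in scenarios.items():
    rates = [real_rate(i, pi) for i, pi in path]
    print(label, "-> real rates of", [f"{r:+.0f}%" for r in rates])

# low inflation    -> real rates of ['-2%', '-1%']  (less stimulus available)
# higher inflation -> real rates of ['-4%', '-3%']  (more stimulus at the zero bound)
```

The comparison simply restates the answer's point: the higher the inflation rate going into the recession, the more negative the real rate the central bank can engineer once the nominal rate hits zero.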
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9531989097595215, "language": "en", "url": "https://www.cato.org/policy-report/mayjune-2019/landmark-breakthrough-great-depression", "token_count": 509, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.220703125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:73d58d29-649e-4733-8826-e42da705044f>" }
The history of monetary policy and economics is inextricably tied up in the history of the Great Depression. The causes and cures of the economic cataclysm that began in 1929 have been the battlefield on which much of the debate between free-marketeers and central planners has been fought for nearly a century. Through the Depression itself and for many years afterward, the Keynesian interpretation of its causes reigned supreme as the widely accepted conventional wisdom. In 1963, Milton Friedman and Anna Schwartz upended that consensus with their seminal Monetary History of the United States, laying the blame firmly at the feet of the Federal Reserve. In this interpretation, the "Great Contraction" — in which a third of the country's money supply was destroyed over a short timespan — converted a garden-variety recession into the Great Depression.
Gold, the Real Bills Doctrine, and the Fed adds crucial new insight into how and why the Fed enabled this disaster, from two leading monetary historians: Thomas Humphrey, a 34-year research economist at the Federal Reserve Bank of Richmond, together with Richard Timberlake, emeritus professor of economics at the University of Georgia and an adjunct scholar at the Cato Institute. Their keen insight provides an explanation not only of what the Fed did wrong but also of the flawed theory that drove policymakers to make such a monumental mistake.
The so-called real bills doctrine was a widespread theory in the late 19th and early 20th centuries that held that money should only be created through "real bills" that represent transactions of real goods and services in the economy. This theory was one of the cornerstones of the Federal Reserve Act of 1913. Erroneous, though largely innocuous under normal circumstances, this doctrine was the theoretical basis for the Fed's refusal to counteract the collapse of the money supply at the onset of the Depression.
Sen. Phil Gramm, economist and former chair of the Senate Banking Committee, offered glowing praise for this book, writing, "In my opinion, this book is the most important book written on the Great Depression since Friedman and Schwartz." Friedman himself praised an early manuscript prior to his death in 2006, saying that the authors' "emphasis of the Real Bills Doctrine complements in an important way [our] analysis of why Fed policy was so 'inept.' ... We did not emphasize, as in hindsight we should have, the widespread belief in the Real Bills Doctrine."
Purchase print or ebook copies of Gold, The Real Bills Doctrine, and the Fed at cato.org/store.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9756587147712708, "language": "en", "url": "https://www.comparefuelcards.co.uk/insight-analysis/parties-parcels-and-pizzas-the-gig-economy-and-fuel-cards/", "token_count": 231, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.28515625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:af6bc28e-a35d-4df2-b8be-32441c5c2945>" }
The gig economy, defined as self-employed workers paid for each specific job they do, whether that's delivering a takeaway, a parcel or dropping people off at a party, has evolved quickly over the last 15 years, transforming the way people earn a living. In the gig economy, instead of a regular wage, workers get paid for the "gigs" they do, such as a food delivery or a car journey. While many people actively choose this way of working, the gig economy isn't without controversy. As people are deemed to be self-employed, they don't enjoy the same protections and benefits that someone in a full-time or part-time role receives. This has resulted in many high-profile court cases where gig economy workers have challenged their employers for the same rights as full-time staff such as sick pay, holiday pay, the minimum wage and employer pension contributions. As the gig economy continues to grow and companies such as Uber and Deliveroo become an even bigger part of our everyday lives, the people who work in the sector will be looking for smarter ways to pay for their fuel to maximise their income.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9416970014572144, "language": "en", "url": "https://www.icomparefx.com/currency-pair/usd-bbd/", "token_count": 1160, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.00177001953125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:3364527a-ccdc-474a-9c32-4e817bb9740d>" }
While Barbados is a relatively small country, it accounts for a fair amount of money flowing in and out of its shores, from and to the United States. In 2016, around U.S. $61 million entered Barbados from the United States as remittances. Around U.S. $3 million was sent in the opposite direction in the same year. Trade of goods between both countries during 2017 accounted for more than U.S. $574 million.
The U.S. dollar, in its current form, was adopted as the official currency of the United States in 1792. Now, it is used as official currency in other places too. These include the Caribbean, two British Overseas Territories, Turks and Caicos Islands, the British Virgin Islands, and Zimbabwe. Its unofficial use is prevalent in numerous places, some of which include Haiti, Belize, Panama, Costa Rica, Myanmar, Nepal, and Cambodia. Apart from being the world's most commonly traded currency, the U.S. dollar is also the most preferred reserve currency globally. Its share of the global forex market turnover was 87.6% in April 2016. Forex market estimates suggest that over U.S. $5 trillion is traded internationally each day.
|Nicknames||Buck, moolah, paper, dough, dead presidents, bones, greenback, green|
|Bank notes||$1, $2, $5, $10, $20, $50, $100|
|Coins||1c, 5c, 10c, 25c, 50c, $1|
The first dollar-denominated currency introduced in Barbados, in 1882, came in the form of private banknotes. Some of the private banknotes also came with denominations in pound sterling from 1920. After the British West Indies dollar was introduced in 1949, the Barbadian dollar was officially linked to currencies of other British Eastern Caribbean territories. Private dollar-denominated banknotes in Barbados were issued for the last time in 1949. In 1965, use of the British West Indies dollar in Barbados was replaced by the Eastern Caribbean dollar. The Barbadian dollar in its current form was adopted as the official currency of Barbados in May 1972. In 1973, it replaced the East Caribbean dollar at par. From July 1975, the value of the Barbadian dollar has been pegged to the U.S. dollar.
|Currency symbol||Bds$, $|
|Bank notes||$2, $5, $10, $20, $50, $100|
|Coins||1, 5, 10, 25 cents, $1|
U.S. Dollar / Barbadian Dollar Historical Rates
The Barbadian dollar's value has been pegged to the U.S. dollar since July 1975, at the rate of U.S. $1 = Bds$2. While most merchants and businesses in Barbados offer this fixed rate, exchanging currency at airports might result in lower rates coupled with service fees.
As long as the Barbadian dollar's peg to the U.S. dollar stays in place, there will be little to no fluctuation in the USD/BBD exchange rate. However, the pair might experience slight volatility at the trading level, as has happened previously. For instance, the USD/BBD exchange rate has experienced some fluctuation from mid 2012, with the Barbadian dollar trading in between Bds$1.9700 and Bds$2.0566 against the U.S. dollar.
USD/BBD in the last five years
|Date||U.S. $1 =|
|1 July, 2013||Bds$1.9850|
|1 July, 2014||Bds$2.0082|
|1 July, 2015||Bds$2.0050|
|1 July, 2016||Bds$2.0050|
|1 July, 2017||Bds$2.0016|
USD/BBD in the last five months
|Date||U.S. $1 =|
|1 April, 2018||Bds$2.0016|
|1 May, 2018||Bds$2.0007|
|1 June, 2018||Bds$2.0016|
|1 July, 2018||Bds$2.0016|
|1 August, 2018||Bds$2.0016|
What Affects USD/BBD Rates?
There will be little change in the USD/BBD exchange rate as long as the Barbadian dollar remains pegged to the U.S. dollar. However, a revision in the peg may come about owing to a range of factors, some of which include major economic changes in both countries, as well as their interest rates, gross domestic products (GDPs), and trade balances. Given Barbados’ reliance on the U.S. for trade and investment, it is unlikely that there will be a revision in the USD/BBD peg anytime soon. If you plan to send money from Barbados to the United States or the other way around, it is important that you look beyond the USD/BBD exchange rate on offer. Other factors you need to take into account include the fees you need to pay, the turnaround time, as well as available payment and transfer methods.
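Because the peg described above fixes U.S. $1 at Bds$2, converting between the two currencies is a single multiplication or division. The helper below is a minimal sketch using that official rate; the function names are ours, and a real transfer would also need to account for provider fees and margins, which the article flags as the factors worth comparing.

```python
# USD <-> BBD at the official peg of U.S. $1 = Bds$2 (in place since July 1975).
PEG = 2.0

def usd_to_bbd(usd):
    return usd * PEG

def bbd_to_usd(bbd):
    return bbd / PEG

print(usd_to_bbd(250))    # 500.0 -> Bds$500 for a U.S. $250 transfer
print(bbd_to_usd(1000))   # 500.0 -> U.S. $500 for Bds$1,000

# Market quotes have occasionally strayed inside a narrow band around the peg
# (roughly Bds$1.97 to Bds$2.06 per U.S. dollar since mid-2012), so a quoted
# rate far from 2.0 usually reflects fees or margin rather than the peg itself.
```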
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9403572082519531, "language": "en", "url": "https://www.itsuptous.org/blog/what-happens-when-debt-ceiling-isnt-raised", "token_count": 723, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.333984375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:541f76dd-7ca3-4cd6-bd7f-21afff883895>" }
In February 2018, the Trump administration signed legislation suspending the ceiling until the end of the first quarter of 2019. This decision allows Congress plenty of spending room through March 1, 2019. A month after the suspension, the U.S. debt had already exceeded the $21 trillion mark.
What happens when the debt ceiling is suspended?
Essentially, the ceiling limits how much debt can be incurred by the government to sustain its operations. When a suspension occurs, the capping of debt is essentially "turned off" for one year, and the government can spend as needed until the period of suspension expires. The amount borrowed during the suspension gets added to the legal debt limit. The Bipartisan Policy Center notes when the ceiling is reinstated, the debt will be close to $22 trillion.
What happens when the ceiling isn't raised?
If the debt ceiling is reached and not raised, the U.S. Treasury is unable to issue or auction any more Treasury bills, bonds or notes. Routine and ongoing government expenses can only be paid as incoming tax revenues are received. Without the ability to expand beyond tax revenues, the Treasury Department must decide which debts to pay and postpone. This creates a ripple effect because if there isn't enough cash to go around:
- Federal employees can be furloughed.
- Federal pension payments aren't sent.
- Foreign lenders don't get paid.
- Interest on the national debt cannot be paid.
Essentially, a government shutdown occurs. If various segments of the national debt cannot be paid, the government must make some choices. Since the law prevents any borrowing from Social Security and Medicare, it turns to federal employee retirement funds. Or, the government can withdraw money it keeps on hand, up to $800 billion, from the Federal Reserve Bank.
U.S. will default on its debts without a ceiling increase
Congress must raise the ceiling so the U.S. government doesn't default on its debt. If a default does occur, three things may happen:
- The government could no longer make monthly payments, affecting the ability to pay past and future obligations.
- The yields of Treasury notes sold on the secondary market will rise, creating higher interest rates.
- A market panic occurs, and owners of U.S. Treasury securities will dump their holdings, dropping the dollar's value.
Ultimately, there are consequences either way. While debts need to be paid and raising the ceiling avoids nonpayment, fixing short-term financial conundrums, continuously increasing the cap leads to long-term fiscal problems, such as the United States' current whopping $21 trillion debt.
Cause and effect of capping the ceiling
The ceiling caps the amount of money the government can borrow, reducing the risks of incurring higher debt. The Treasury Department indicates if the limit is not raised, the money runs out, creating "catastrophic economic consequences."
- Government has suspended the ceiling five times since 2013.
- Raising the ceiling has historically led to rapid increases in national debt.
- Not lifting the ceiling but suspending it instead also causes problems, because no limits are placed on government spending during suspensions.
Essentially, taxpayers don't know how much borrowing Congress approves during the time of suspension. As we know, government spending easily gets out of control.
Up to Us focuses on ways millennials can raise awareness and take action to counter the future negative effects created by current short-sighted government fiscal policies.
The organization encourages college students to take leadership roles and help find solutions for a fiscally sound future for the U.S.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9734802842140198, "language": "en", "url": "https://www.laurenclarklaw.com/articles/for-many-owing-student-loans-the-situation-is-bleak/", "token_count": 636, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.279296875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:a01710cf-a932-4099-9182-79ea34c59e94>" }
For many owing student loans, the situation is bleak For many years going to college was seen as a necessary step to getting the best and highest paying jobs. However, over the years, the expenses of higher education have ballooned exorbitantly, forcing students to finance their education through student loans. At graduation, many students have six-figure loans to repay. Add to that the weak economy and bleak job market that many graduates face and it can be a recipe for financial disaster for many. Because of their inability to find a good paying job-or any job at all-many graduates find themselves having to file bankruptcy to relieve them of their debts. However, for the vast majority of them, their student loans will be next to impossible to discharge. A history of bankruptcy and student loans It was not always so difficult to discharge student loans in bankruptcy. Before the mid-1970s, student loans could be discharged as easily as credit card debt. After hearing a few reports of doctors and lawyers who discharged their student loans in bankruptcy, a resentful Congress passed a law in 1976 requiring students to wait at least five years before their loans could be discharged. In 1998, wishing to protect taxpayers from students walking away from their loans, Congress made federal student loans nondischargeable in bankruptcy. After intense lobbying by for-profit companies, in 2005 Congress toughened the rules for private student loans, making such loans much more difficult to discharge. Private student loans can be discharged in what are rare and extreme circumstances. Under the bankruptcy code, a private student loan can be discharged if the loan would impose an “undue hardship” on the borrower. However, Congress never defined what an “undue hardship” is, so the definition has been left to the courts. Nationwide, some courts disagree on the definition of “undue hardship”, but in most cases, the borrower must show: - The borrower has made good-faith efforts to repay the loan by finding a job, minimizing expenses etc. - The borrower cannot maintain a minimal standard of living based on current income and expenses - The borrower’s current financial situation is likely to continue for most of the loan’s repayment period Needless to say, it is currently very difficult for a borrower to successfully claim undue hardship, as the law requires the borrower to essentially prove that he or she cannot pay the loan and likely will never be able to. Unfortunately, unless Congress chooses to amend the bankruptcy laws or the Supreme Court formulates a new standard, the law is unlikely to change. Although some members of Congress have introduced bills in the past that would allow students to discharge private student loans, the bills have never gained traction. Consult an attorney Although student loans are difficult to discharge in bankruptcy, most other types of debt do not carry the same restrictions. If you find yourself overburdened by student loans, bankruptcy may allow you to restructure or discharge other types of your debt, freeing up more funds for student loan obligations. An experienced bankruptcy attorney can recommend a debt relief option that would be best for your circumstances.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9479609131813049, "language": "en", "url": "https://www.metro-magazine.com/10007190/how-transit-agencies-can-prepare-for-a-driverless-world", "token_count": 1980, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.0673828125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:d76d609f-de97-466a-b64b-3cf1742ab86e>" }
Disruption is nothing new in the transportation sector. Historically, transportation innovation has been a catalyst for broader societal shifts that dramatically change how people live and work. Today, we are on the verge of the biggest transportation disruption in history, which is being driven by business model innovation. Mobility as a Service (MaaS) is the term often used to describe this trend. The MaaS model is largely driven by the sharing economy, which has already disrupted other industries such as music, television, hospitality — and now transportation. Many autonomous vehicle (AV) prognosticators have focused their attention on how driverless technologies will impact the choices of consumers, from car ownership to vehicle design. Less has been written, however, about how AVs will impact public transit. Agencies need to lead a broader conversation — not just within the industry — about their futures. Due to the planning horizon that they work within and the pressing needs of daily operations, it’s important to start that conversation now while staying grounded in what will realistically change over the next few decades. Preparing for an autonomous future Changes in how people choose to commute are all but guaranteed, but how much change and whether it will be directly impacted by AVs is still an open question. There are theories that the arrival of AVs will echo the e-scooter “Armageddon” that flooded sidewalks and streets in cities across the country over the last few years. In 2018, the National Association of City Transportation Officials said more than 38.5 million trips were taken on 85,000 scooters in 100 different cities in the U.S. That’s a lot of disruption in a short period of time (the first Bird scooter landed in Santa Monica in Sept. 2017). E-scooters may not represent an existential threat to public transportation and could prove to be a “last mile” solution for transit, but there are interesting parallels between them and how AVs might come to market. For example, venture-backed firms were able to dump massive numbers of e-scooters onto select U.S. cities seemingly overnight. In the same way, it’s conceivable that well-capitalized startups could deploy an entire fleet of AVs for hire in a city in a very short period of time, dramatically altering the mobility landscape. This scenario doesn’t feel far-fetched given Silicon Valley’s appetite for risk and willingness to burn cash if there’s an opportunity for reward. Several local cities, with support from transit, have already started looking for ways to limit traffic, and one of the factors in their decisions is the potential increase in AVs. The Los Angeles County Metropolitan Transportation Authority, for example, plans to explore a congestion-pricing model to reduce inner city traffic while generating funds for public transportation. (This pricing system has already found success in London and will be introduced in New York City within the next two years.) Embracing the opportunity of AVs On the other end of the spectrum from those who are preparing for AVs in a more reactive way, some cities are actively embracing the technology and building it into their long-term plans. One of the agencies at the forefront of this movement is the Jacksonville Transportation Authority (JTA), which is aggressively pursuing driverless shuttles and buses to replace its fixed guideway system. The agency hopes to bring public transit service to more people using this model. 
An automated system, such as the one proposed by JTA, could facilitate flexible routes, address first-mile/last-mile service issues that have plagued transit agencies for decades, and provide more transportation options for the elderly and disabled throughout the region. Anticipating regulatory uncertainty Like Uber and e-scooters, AVs have and will continue to invite regulatory scrutiny. This is inevitable with disruptive technologies. Nearly every constituency of city governments will have an opinion on the matter, from citizen groups concerned about safety to road bikers and cab drivers. This means transit agencies will have to make plans with a high degree of uncertainty about how regulators will react. Since the adoption of AVs has moved closer to reality, there has been no shortage of legislative efforts by states to address the issue. According to the National Conference of State Legislatures, 29 states have enacted legislation related to AVs since 2018. As the technology matures, so will the laws. This year, members of Congress called on stakeholders in the self-driving car industry to help draft a bill aimed at expediting the adoption of AVs. A prior version of this bill was shot down last year over a lack of safety protections. As we inch closer to the self-driving era, the need to regulate AVs will only intensify. As is the case with many new technologies, AV legislation is currently fragmented across states and the federal government. Ultimately, Congress should provide a strong, national framework for the industry. Until that happens, public transportation agencies will have to read the proverbial tea leaves to anticipate the most likely regulatory regime. A framework for thinking ahead Transit agencies have an opportunity — and obligation — to factor AVs into their short and long-term plans. To succeed, it’s important to create a framework for success. 1. Establish foresight Anticipating what could happen is a pragmatic approach to preparing for the unknown, providing agencies with the ability to adapt as external forces reshape the industry. Look for signals. Agencies seeking success in an AV world should foster a conversation about plausible futures informed by proper data analytics, and then begin surveying the industry for potential opportunities. Figure 1 illustrates the various stages that can be realistically anticipated as companies look for signals that change is coming. Utilizing data to spot trends offers organizations the ability to create entirely new markets on their terms. Predictive mobility will become a valuable tool for agencies across the U.S., especially those who utilize AV technology to the fullest. Data will become currency as firms measure and predict mobility demand, explore behavioral regularities, quantify service reliabilities and ultimately tailor personalized predictive services on a large scale. Create distance. When looking for potential opportunities, most agencies only look a few years ahead. Organizations that are truly focused on succeeding in the AV era need to adjust their scope to at least 10 to 20 years in the future. Doing so enables them to imagine scenarios outside the current confines of their industry or product. Create the future in layers. Making plans for the future in an uncertain environment requires a layered approach. Begin with an overarching view of the world, examining the potential role AVs can play on a global scale. From there, zoom in to look at the future of the market in a particular city or state. 
Continue zooming in to the personal level, which is ultimately where consumer decisions are made that impact the transportation industry. This exercise can be done for each of the horizons pictured in figure 1, in order to anticipate how each stage of AV evolution could impact the individual. 2. Grow capabilities Understanding the range of ways that the transportation industry might change is not enough. The ability to actually adapt is paramount as a driverless future approaches. The agencies that are able to understand the scale of change and adapt accordingly will be the success stories. Investing in capabilities is a major undertaking, but incorporating this process into the regular activities of the agency will help to make the process manageable in a step by step process. Step 1: To better serve its riders and innovate in an AV world, agencies should articulate its position on autonomous and electric vehicles. Without a clear vision, and buy-in from executive leadership, operations will continue to function in silos and perpetuate a non-harmonious approach to the market. Step 2: Once the vision is developed, transit agencies should prioritize short, medium and long-term opportunities and take a comprehensive portfolio approach to innovation. Step 3: Develop an operating model to make the vision a reality. Agencies should assess how they need to evolve operationally to execute short, medium and long-term goals. Step 4: Prototype pilots to test and learn. Much like JTA’s AV program, transit agencies should pursue pilot projects and partnerships to prototype offerings, test them and learn new insights. From there, agencies can decide to pivot if necessary. 3. Create a bias for commercialization All roads ultimately lead to commercialization. When it comes to AVs, this process is best approached through a methodology that allows organizations to explore efficiently, think iteratively and act quickly. Successful commercialization is enabled by a series of processes that validates the problem or opportunity, relies on rapid but inexpensive testing and ultimately leads to a better financial resolution. Companies are already doing this, and public transportation agencies need to do the same. For example, Dominos and Ford teamed up in 2018 to test how pizzas would be delivered with driverless vehicles. Using regular, human-operated sedans, Dominos completely cut off human-to-human interaction during the delivery process. What Dominos found was that the last 50 feet were the most challenging — getting the food from the car into the customers’ hands. The test provided Ford and Dominos with valuable data and insight into exactly how AVs will function in their respective industries. Agencies that start imagining and strategizing around an autonomous tomorrow will be primed to lead their organizations even as rapid industry change becomes the norm. If AV technology lives up to the expectations of industry experts, it has the potential to radically alter the transportation choices that people make. But will it radically change how public transportation agencies operate? The decisions that agencies are making now will determine the answer to that question.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9673416614532471, "language": "en", "url": "https://24plusnews.co.uk/category/public-health/", "token_count": 5376, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.5, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:72bbe7bb-ce3f-4988-88d7-940bc3b8ff8f>" }
Steve Brine is a former Public Health Minister, and is MP for Winchester.
Last year, the Government announced plans to make England smoke-free by 2030, building on previous initiatives such as the ban on smoking inside public places, and the introduction of plain packaging for cigarettes and hand rolling tobacco. A year on, we have yet to see enough detail of how it intends to achieve its ambition, and there are real concerns that England could miss this target unless further clarity is provided.
The UK vaping industry, estimated to be worth more than £1 billion to the economy, can be a valuable partner in helping to deliver on the Government's objectives and address the smoking cessation plateau. Regulators and health experts in the UK have already acknowledged that vaping could play a crucial role in reducing smoking rates, providing smokers with an effective tool to quit altogether.
During my time as Public Health Minister, the Government laid out its plan for adopting a harm reduction strategy, aimed at maximising smoking cessation among adults and minimising uptake by young people. This policy was driven by previous research conducted by Public Health England, which found vaping to be at least 95 per cent less harmful than smoking cigarettes. Since then, a clinical trial led by Queen Mary University of London found that vape products were almost twice as effective as patches and gum – known as nicotine replacement therapies – at helping smokers to quit.
With vaping, there's no combustion, no smoke, no tar as found in traditional tobacco products. While vaping is not without risk, and we lack the long view afforded by decades of research science, we know it allows smokers to receive nicotine without the cancerous toxins produced by combustible tobacco. However, to encourage smokers to try vaping, they need to have confidence that the products they choose are safe.
Recent negative media coverage means trust in the vape category has declined. For example, statements originating in the USA said that vapers could be at greater risk of contracting Covid-19, claims which are wholly unsubstantiated. This could deter smokers from transitioning to vaping, a significantly less harmful nicotine delivery method. Such developments represent an opportunity for the Government to reappraise the regulatory landscape and improve product quality across the industry, thereby increasing consumer confidence in vape products as a mechanism for addressing public health concerns.
One of the simplest and most effective ways to achieve this would be to regulate non-nicotine products intended for vaping (specifically 'shortfills' – or 'make your own'), which are not currently captured by the Tobacco and Related Product Regulations in the same way as nicotine-containing products are. This would not only improve consumer confidence, but would ensure the UK retains its position as a global leader in the regulation of vape products and could support the Government's public health objectives.
As the country moves through this unprecedented period of social upheaval, all efforts must be made to ensure that smoking rates do not rise again. The recent ban on menthol cigarettes will support this effort, provided regulators enforce it and challenge tobacco manufacturers who continue to flout the rules. Greater efforts must also be made to inform the public about the benefits of vaping and how it has already helped thousands of people to reduce their tobacco use.
To be clear: if the question is ‘should non-smokers start vaping?’ the answer should and will remain no. But if we’re talking about smokers who are struggling to quit, then vaping is undoubtedly one of the better options in their toolkit. Government and industry therefore need to recognise the opportunity they currently have, and that future regulation must evolve to reflect societal changes. Ending rough sleeping poses a particular challenge in a free society. That is because it is not only a matter of making help available, but of persuading those who need it, to accept it. Another complication is that the help required goes beyond accommodation. The lack of a bed to sleep in is invariably a symptom rather than the cause of an individual’s difficulties. The coronavirus prompted greater urgency for the Government to take action. Ministers had already outlined in February a determination to find a long term solution – with the assistance of Dame Louise Casey. Though this issue is a moral disgrace and source of national shame the numbers involved are relatively small. The latest snapshot survey for those sleeping rough on one particular night last autumn came up with a figure of 4,266. The BBC gave a figure of 28,000 (based on FOI requests to local authorities) of different people who had slept rough at one stage or another over 12 months. How many have come off the streets during the coronavirus crisis? 15,000 have been provided emergency accommodation – though not all of those were rough sleepers. Some are from hostels and shelters which have had to close due to social distancing rules. Others will be those who would otherwise have got by as “sofa surfers”. There will also be those escaping domestic violence. However, there might also be around 5,000 who came straight from the streets. What is impressive is how high the acceptance rate has been from the rough sleepers offered a room. Many have been surprised it has been so high. Only a few hundred are thought to have spurned an offer. It could be the attraction of a hotel rather than a more humble shelter. It could be fear of the coronavirus. Then there is the tough choice that getting food – or the money to buy food – while staying on the streets would be harder. As noted, coercion is not available, but the tone of encouraging people to accept help has been emphatic rather than passive. Amidst the statistical fog, a couple of points emerge. Firstly, that in proportion to the population, the number of rough sleepers was already tiny. The population of England is 56 million. It follows that accommodating them is a relatively modest claim on the public purse. Providing for others – children, pensioners, the unemployed, the disabled – are vastly more costly items. Secondly, that the already small number sleeping on the streets before the pandemic has fallen substantially. Dame Louise says in an interview for The Big Issue: “I was due to do a review into rough sleeping and homelessness but we have all been turned upside down by Covid-19. The primary motivation so far was led by Covid-19 to do an extraordinary thing in unprecedented times, which was to say, “Let’s just get everyone in.” We had everybody getting on the phone to hotels, getting [charities] St Mungo’s, Thames Reach and Look Ahead in London to stand up enough staff to literally in a couple of weeks add to the estate in London by 2,000 beds. “We were chasing the virus just trying to stay ahead of it. When the inquiry eventually comes saying: “How did you do it? Why did you do it? 
And what choices did you make?” We just went for it, everybody went for it. We had to get everybody in, we cannot have people dying on the streets. And we cannot have people dying in communal night shelters and that is the prospect that we were facing. We need to be clear that right now we are dealing with this extraordinary situation where 15,000 people have been accommodated at this time. “I’m not saying that we don’t want to work out how do we not return to the situation that we have seen in the last few years. But our primary purpose so far has been to keep people safe. That will remain our primary purpose, but at the same time we feel that we should see this as an opportunity to think that we can get something extraordinary out of this but that will take an extraordinary effort. The homelessness sector itself and the wider community also needs to think, at this horrific time in our nation’s history, what they can do to help as opposed to what they call on the government to do.” Jeremy Swain, the Government’s adviser on homelessness, was also interviewed. He said: “I was involved with Housing First in the 1990s and I’m a big fan, but the problem is there is a slight danger that we think that everybody in those hotels at the moment needs wraparound support and they need it for a long time. What we need to be doing, as well as getting people into housing, is to get people into work. And that is what they are wanting. That’s what they want – when I was at Thames Reach and you put out the questionnaires, 75 per cent of people wanted the services to help them get jobs. Consistently it is bottom of the list for the homelessness sector when for the people themselves it is top of the list.” That is the tricky part. Amidst Government spending of £850 billion a year, funding an extra 5,000 hostel beds is a footling item. (That’s even before we consider the £10 billion a year we give to charity, often to help the homeless.) Getting those who have taken a wrong turn in life back on the path to proud, independent, and responsible existence is harder. Getting a job would be a pretty obvious ambition. Often that will mean overcoming such afflictions as drug addiction, alcoholism, and mental illness. When I was a councillor in Hammersmith and Fulham I found that very little specialist accommodation was provided – even though the Council had a very substantial Public Health budget which was largely wasted. Many of those in emergency accommodation have been put up in hotels that would otherwise be empty. It is welcome that hotels are going back to normal business as the economy reopens. That does mean that alternative places to stay are needed – though some hotels are extended their contracts for emergency accommodation. Some universities have made rooms available in their halls of residence – after all college authorities need the money and these rooms would otherwise be empty at present. Some YMCA hostels have single rooms. Then councils have managed to find rooms for some in the private rented sector. In the long term though, the Government plans new hostel places for 6,000. Much of this will be for specialist housing to cater for particular medical conditions. That will be crucial for these unfortunate souls to have their lives turned around. “Never let a good crisis go to waste,” declared Winston Churchill. The signs are encouraging with respect to the impact of the pandemic on rough sleeping. 
A passive response from the authorities to those sleeping in shop doorways and along underpasses is no longer acceptable. Most of those people have already made some reconnection with society and there is every chance that it will not be broken. When I borrowed the interviewer’s chair for the Moggcast earlier this month, I took the opportunity to ask about the Government’s approach to the nightlife industry. My concern was that as lockdown gradually eases, there was a danger that particular groups or sectors risked getting left behind, trapped in a system which is gradually getting less onerous for society as a whole. Of course, clubs aren’t the only part of the cultural sector under threat: some theatres are already closing. And it isn’t difficult to see why the Government isn’t in a hurry to let nightspots re-open, as their high-footfall, low-margin business models are almost uniquely ill-suited to the era of social distancing. But clubs pose a challenge which things like theatres don’t, namely that young people seem decidedly unwilling simply to wait for the Prime Minister’s say-so to go out. Instead, frustrated clubbers are helping to fuel a dramatic resurgence in illegal raves. (Wildcat stagings of popular plays and musicals are not yet in evidence.) This isn’t entirely a new phenomenon. The UK rave scene has endured, albeit with a much lower profile, since its Nineties heyday, sustained by a backbone of amateur enthusiasts and privately-owned soundsystems. These events occasionally get shut down by the police but are no scourge on society. Yet there is a big difference between this semi-private fringe and a party scene which replaces shuttered clubs outright. Larger crowds of less-experienced party-goers means an increased likelihood of injury and crime, not to mention much greater disruption to nearby communities. If this situation continues over the summer, it also becomes more and more likely that organised crime will start moving into this space. Such groups can clear huge sums off ticket sales, use their events to push drugs, and have the infrastructure to rebound from equipment seizures or other setbacks in ways the amateurs can’t. Worse still, if dire industry predictions do come true and hundreds or thousands of nightlife venues shut their doors, the gangsters moving into the party scene could be well-positioned to buy up vacant clubs and move into the official scene when Covid restrictions are finally eased. Speaking on LBC today, the Prime Minister was pressed on the timeline for opening various businesses, including gyms. But there is little sign that clubs, which probably lack much of a constituency at Westminster, are on the Government’s radar: Resident Advisor notes that the ‘Our Plan to Rebuild’ document mentions them only once. This needs to change. There may not be a good answer. It is indeed difficult to imagine how such venues could operate with social distancing in place. But this must be weighed against not only the relatively low risk Covid-19 poses to the young, but the obvious fact that they appear ready and willing to take those risks with or without the Government’s permission. The question isn’t whether people will go out this summer; it’s who profits. Chris Whitehouse leads the team at his public affairs agency, The Whitehouse Consultancy and is a papal Knight Commander of Saint Gregory. Lockdown gave an unprecedented character this year to the major celebrations of the great Abrahamic faiths. 
Those in the Jewish community endured Passover unable to join with family, friends and their wider community to celebrate the escape of the people of Israel from slavery in Egypt. Those of Muslim beliefs found themselves daily breaking their Ramadan fast alone, not together; and approached the culmination of that celebration, Eid, at best in small household groups rather than with communal rejoicing. The Christian faiths marked the Last Supper on Maundy Thursday; the passion, crucifixion, and death of Jesus on Good Friday; and the resurrection of their Christ on Easter Sunday, without the usual community support in the dark hours or the joyous celebrations of the greatest day in the Christian calendar. No amount of digital alternatives – Zoom meetings, live-streaming of services, on-line communal singing of religious songs – can really substitute for the mutual support in a time of crisis that comes from being together both physically and emotionally with those who share values and beliefs. All those whose beliefs and cultural traditions involve them coming together to pray, to worship and to be in social communion have suffered as they endured separation from their wider communities; but for those, in particular, whose faith is nurtured through holy sacraments, their separation from what they believe to be the source of grace has been particularly painful. Gathering in supportive worshipping communities and maintaining those horizontal relationships with other people is important. But for those whose beliefs involve a sacramental tradition, the vertical relationship to God comes through access to his grace in the sacraments (for example, of holy communion and confession), and to deny them that access is to starve them of the spiritual nurturing and sustenance their faith teaches them to crave. For those Christians for whom the sacrament of communion, central to the mass, is the beating heart of their faith, being present at that sacrifice only remotely has not, for many, felt like participation. On the contrary, it has exacerbated the sense of separation. For a church founded on the blood of martyrs, persecuted, tortured, and executed for their subversive beliefs, it has been particularly uncomfortable to see the doors of our Christian churches locked when they could, and should, have remained open to allow private prayer and socially distanced participation in services. That Westminster Cathedral and Westminster Abbey have remained closed, doors locked to keep out their faithful, whilst the local Sainsbury's and Tesco have remained open, delivering socially-distanced access to physical food and drink, has been to exacerbate that pain of separation. Why a Warburton's white medium sliced loaf, but not the bread of life itself? That church leaders surrendered to this position at the outset of lockdown was perhaps understandable given the sense of crisis and uncertainty that prevailed at that time, but the closure could and should have been only temporary whilst practical precautions were introduced. It was not for our political masters to decide on the importance to the faithful of access to spiritual sustenance compared to other goods and services.
This plague has claimed many lives, including those of ministers of religion, and for their passing we mourn; but that they may have spent their final weeks denied the opportunity to share the sacraments with and to minister to the spiritual needs of their flocks must have been a cause of frustration and anguish to many. Not to hide behind locked doors did they tread the long and difficult path to religious ministry, but to share the love of God with his people and to be with them in their times of need. Where was the priest to baptise my new grandchild? To marry my daughter whose wedding was postponed? To hear my confession and grant me absolution? To offer the sacrifice of mass and to let me take a personal, risk-assessed decision as to whether I should receive holy communion? To give the last rites to friends of faith who have died during the pandemic? To comfort my elderly and vulnerable mother, alone and fearful in her home? For many people, these things are not just rituals, they are the building blocks of faith, the foundation upon which their lives, their families, their values, and their political views are based. Many are understandably frustrated, indeed angry, that these needs have been ignored. Faith leaders will have had troubled consciences about these decisions; and there is no desire to exacerbate their doubts and fears; but their redemption can come only through them learning from these tragic few months, and by them making plans for the future so that when the next plague comes they are ready, their lamps are full of oil, and their wicks trimmed. Church doors closed for a few hours for a deep clean and some social distancing sticky tape is acceptable; those doors being locked for 15 weeks is not. It must never happen again. Cllr Maria Higson represents Hampstead Town ward in Camden. She works professionally as a strategist for a major London teaching hospital. As the Covid-19 crisis moves to its next phase, the conversation is already turning to restarting elective care, and the lives taken indirectly in shutting these essential services. However, with ongoing pressures exacerbated by the impacts of the virus, now is a unique opportunity to innovate healthcare provision – not simply to go back to a system which was already struggling to cope. Public goodwill towards the NHS has never been higher. If the weekly clapping (accompanied by cheering, pan-banging and bell-ringing) doesn’t show this, the speed at which 750,000 citizens volunteered is surely a strong indicator. The idea of awarding the NHS the George Cross is neither unwelcome nor surprising. This goodwill is not misplaced; the NHS has stood up to the test of the Coronavirus with aplomb. To take just one example: by April 3rd, the necessary workforce, equipment, and space for over 2,500 additional adult critical care beds was found – an increase of over 50 per cent on pre-virus UK levels (and this excludes the Nightingale hospitals). This precious resource has provided headroom throughout the crisis, with over two thousand beds reported as available during the peak. However, once the crisis is over and the media has moved on (following in the footsteps of Brexit coverage), what next for the NHS? The pressures faced are as stark as ever, and the macro-trends are concerning. The UK population age 65 and over is due to grow 45 per cent by 2050; the average health spend of this additional 5.7 million people will be over four times as much as those aged 0-64. 
The potential impact on public health expenditure is enormous under any scenario, and that’s before considering the social care implications of our ageing population. Covid-19 adds long-term pressures both directly and indirectly. Of the thousands of intensive care survivors, up to 45 per cent may require rehabilitation support. In parallel, projections show up to two million people becoming unemployed following the crisis, with serious implications for physical and mental health. We will also need to contend with the backlog of elective care not provided during the pandemic. Given existing, predicted and Coronavirus-related pressures, we cannot simply insist that the NHS goes back to its old practices; we need our non-virus healthcare services to resume, but in a different way. However, if we really want change in our healthcare services, we need to do more than talk about “transformation”; we need to truly shift the mindset of politicians, professionals, and the public to NHS services. During Covid-19, service innovation suddenly became possible at break-neck speed. For years, the NHS has been calling for a greater prevalence of remote consultations, allowing patients to be seen quicker and without the risk of attending hospitals; where these had previously been resisted, they have now become commonplace. The NHS App – launched and rapidly expanded under the tech-loving leadership of Matt Hancock – saw a 111 per cent increase in registrations in March 2020. Patients have embraced new service models; this shift needs to stick long after the Coronavirus is over. The causes of this recent rush towards remote care are clear: closed services, constricted travel, and concern of contracting the virus in healthcare environments. However, as these drivers subside, we need to consider what was stopping people from shifting to them pre-virus. A core issue of remote GP consultations is that residents can still only register with one practice at a time – which means that signing up to an app-based service such as Babylon cuts you off from face-to-face GP care completely. However, an app can’t measure blood pressure, take samples, or listen to your chest (at least not yet). Surely the most effective model for an individual’s care would be a hybrid one, in which remote appointments could be used where possible, with the back-up option of requesting a visit to a local surgery; this is not an option under the current restrictions. The one-registration rule was created to allow for a single location of health records, but now that technology allows people to hold their own records – readily accessible on their mobiles – it’s time we scrapped it. Whereas remote GP services are readily available but not necessarily taken up by patients, remote hospital outpatient services are often not even available as an option. Many hospitals have started to implement new models such as telephone or video appointments and community clinics, but the pace of change pre-COVID was frustratingly slow. In 2019, the Shelford Group of leading hospital trusts wrote that change should be driven, in part, at regional and national levels. Whilst many hospitals have created innovative solutions, it is prohibitively expensive to expect each organisation to invest individually in the development and implementation of these schemes. 
The national Outpatients Transformation Programme – still yet to be formally established – must be an NHS priority, either to provide much-needed support to existing partnerships or to lighten the load by sharing best practice and cost. Funding and resources will need to both enable and follow these new structures. Critically, there will be a large infrastructure cost. The ‘NHS England Med Tech Funding Mandate’ makes organisations responsible for investing in innovation expected to deliver same-year savings, but central funding schemes must be increased and made more readily accessible for major investments. Implementing new technologies will require a workforce with additional skills and an open conversation between professionals and politicians to tackle our existing workforce shortage. It will also require a shift in where the workforce sits. The benefits of at-home consultations will only be maximised if follow-up care can also be done in the community or even better in the home as well. In February 2020 just 21.1 per cent of nurses and health visitors worked in the community – down from 24.2 per cent in February 2010. A lack of community nurses contributes to the centralisation of care into hospital settings. We must act to reverse this trend. Finally, we cannot expect all models to work first time round – successful entrepreneurs often have failures amongst their successes, and we need to give the NHS room to take risks as it improves. One anaesthetist summed up much of the issue in describing the need for “permission” (and a common understanding of it) to try new ways of doing things, and the average tenure of an NHS Trust CEO is just three years – not enough time to implement a major transformation. Politicians need to provide professionals with the air-cover to innovate. Of course, these are just some of the changes needed to help our NHS services to survive. To truly alleviate the pressure, we need to improve public health, and Boris Johnson is absolutely right to be launching a national anti-obesity drive. However, whilst we’re starting on that journey – which will surely be decades-long – we must continue to protect our NHS past Covid-19 by ensuring it is free to make the step-change towards sustainability it desperately needs.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9094212651252747, "language": "en", "url": "https://blogs.lse.ac.uk/mec/2019/01/17/kuwaits-hundred-million-dollar-supply-management-opportunity/", "token_count": 2288, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.23828125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:9052bf9e-171f-4440-93dd-39d4e936b346>" }
by Yousef Abdulsalam

The supply chain of the health sector – the manufacturing, distribution, procurement and consumption of medical supplies – has severely lagged other supply chains in terms of efficiency, innovation and technologies. Supply expenses generally constitute about 15 percent of total hospital expenditure. Though this is dwarfed by payroll's 50 percent (or higher) share of expenditure, supply expenses are the second largest cost category and present far greater opportunities for decreasing expenses and increasing resource efficiency. In Kuwait's public health sector, an estimated 327 million KWD (£821 million) is spent on medical supplies, according to the Ministry of Health's Annual Health Report for 2015. Pharmaceuticals account for almost 60 percent of this spend. In this segment, Kuwait misses a multi-million-dollar cost-saving opportunity every year: generic drugs substitution.

Generic Substitution: The Hundred Million Dollar Opportunity

Generic drugs are clinically-identical alternatives to branded drugs that enter the market after the patents on their branded counterparts expire. Although generic drugs are 50 to 80 percent cheaper, they generally exhibit equal efficacy, safety, and quality. Generics are cheaper as manufacturers do not have to factor the cost of research, development and marketing into the drug's price. Unlike 'copycats' in other industries, where such a practice is usually illegal and carries a perception of lower quality products, governments encourage dispensing generic medication over their branded counterparts. Even the brand-name companies stand to profit from the generic drug market and, in fact, produce about 50 percent of generic drugs. In most mature healthcare systems, generics account for a significant portion of drug prescriptions. For example, in the United Kingdom, 81 percent of drugs are generically dispensed. In the United States, 89 percent of dispensed drugs (US$3.9 billion) were generics. On the other hand, generic drugs account for only 21.6 percent of total volume (and only 12 percent in terms of sales value) of prescribed drugs in Kuwait, according to a 2016 report. Kuwait's peers, the Gulf Cooperation Council (GCC) countries, show similar rates of generic drug penetration in the pharmaceuticals market. For example, in Saudi Arabia and the United Arab Emirates, approximately 16 percent and 19 percent of drug dispenses were for generics, respectively. It seems that the allure of international brands in the GCC is not limited to apparel, shoes and accessories.

Table 1: Generic Drugs Market Penetration
| Country | Generic Market Share (Volume) | Generic Market Share (Value) | Source |
| United States | 89% | 26% | Association for Accessible Medicines (2017) |
| United Kingdom | 83% | 33% | NHS Business Authority Service (2017) |
| Average of 27 OECD Countries | 52% | 25% | OECD (2017) |
| Kuwait | 21.6% | 12.3% | IMS Health (2016) |
| United Arab Emirates | 19% | - | Dubai Exports, Pharmaceutical Sector Report (2018) |
| Saudi Arabia | 16% | - | Generics & Biosimilars Initiative (2015) |

Let us assume generic drugs dispensing volume in Kuwait increases to 50 percent and that generics are, on average, half the cost of their branded counterparts. Based on Kuwait's current spending patterns on pharmaceuticals, it can save roughly 30 million KWD (about US$100 million) annually without any capital investments or negative implications to healthcare quality if these assumptions can be met.
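The arithmetic behind that figure is simple enough to sketch. The short calculation below is illustrative only: the total supply spend, the 60 percent pharmaceutical share and the 21.6 percent generic volume share are the figures cited above, while the 50 percent target share and the one-half price ratio are assumptions rather than observed data.

```python
# Back-of-the-envelope sketch of the generic-substitution saving (illustrative only).
total_supply_spend = 327e6   # KWD, Ministry of Health Annual Health Report (2015)
pharma_share       = 0.60    # pharmaceuticals' share of supply spend
generic_now        = 0.216   # current generic share of volume (IMS Health, 2016)
generic_target     = 0.50    # assumption: target generic share of volume
price_ratio        = 0.50    # assumption: generic price as a fraction of the branded price

pharma_spend = total_supply_spend * pharma_share

def blended_cost_factor(generic_share):
    """Average cost per unit of volume, relative to an all-branded basket."""
    return (1 - generic_share) + generic_share * price_ratio

# Infer the all-branded cost base that reproduces today's spend, then re-price
# the same volume at the target generic share.
all_branded_base = pharma_spend / blended_cost_factor(generic_now)
spend_at_target  = all_branded_base * blended_cost_factor(generic_target)

saving = pharma_spend - spend_at_target
print(f"Estimated annual saving: {saving / 1e6:.0f} million KWD")
# -> about 31 million KWD, i.e. roughly US$100 million a year
```

On these assumptions the sketch lands at roughly 31 million KWD a year, consistent with the figure quoted above. The real number would move with the actual price gap between branded and generic products, which varies considerably from molecule to molecule (see Table 2).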
And this is only a conservative estimate of how much can be saved via generic substitution. To demonstrate the cost saving potential of generic substitution at the item-level, let us consider Prilosec: the original patented brand of omeprazole, a drug prescribed to manage stomach acid and developed by the UK-based pharmaceutical company AstraZeneca. In the United States, each box sold for $3.27 before its patent expired in 2002. Today, the generic version of Prilosec sells for $0.08. The most common brands of omeprazole prescribed in Kuwait – Gasec and Losec – also happen to be twice as expensive as some equivalent alternatives. Similarly, there are over ten different brand names of paracetamol registered in Kuwait. Paracetamol is a drug used to treat mild pain and is more widely recognised as either Panadol or Tylenol. From a clinical perspective, these two products (and their generic counterparts) are identical.

Table 2: Brand Names and Pricing Comparison for Omeprazole, 20mg, 14 Capsules/Tablets (identical chemical compound, dose, and units per package)
| Brand Name | Manufacturer | Price in Kuwait* (KWD) |
| Gasec Gastrocaps | Acino Pharma AG | 5.510 |
| Risek | Julphar Gulf Pharmaceutical Ind. | 4.960 |
| Minisec | Kuwait Saudi Pharm. Industries Co. | 4.460 |
| Omiz | Tabuk Pharm. Manufacturing Co. | 3.260 |
| Omeprex | Saudi Arabian Japanese Pharm. Co. | 3.100 |
| Omezyn | Oman Pharm. Co. | 3.100 |

With significant price mark-ups, how do brand drugs continue to sell long after their patents expire? The short answer: pharmaceutical corporations exploit multiple supply chain and marketing tactics to maintain the profitability of their products long after patents expire. A few of these tactics are discussed below.

To encourage expensive research investments in new drugs and treatments, governments grant exclusive selling rights to manufacturers of new drugs for periods that can last for 20 years. Exclusivity during the patent period, mixed with large investments in marketing and advertising over many years, induces strong brand recognition and loyalty – equally so among patients, pharmacists and physicians.

Supplier Reputation and Relationships

Public health systems and institutions don't buy single products from suppliers but enter into multi-year contracts for numerous different products. Not all products in a trusted supplier's purchase order will be optimally priced. Supply chain alliances must consider product costs as well as supplier reliability and reputation. For example, what is the manufacturer's historical product defect rate? Can a local manufacturer of a generic drug produce and deliver the quantities needed consistently? Can a local manufacturer respond to demand or supply shocks as effectively as GlaxoSmithKline or Pfizer? In addition, suppliers often build strong relationships directly with physicians, getting them to 'pull' the products down the supply chain.

Brand products invest much more in their packaging and marketing. But does packaging make such a difference? Consider bottled water manufacturers, who have mastered the art and science of packaging. They all sell a virtually identical product: clean water. Never mind all the misleading (and largely unscientific) claims that some brands make about their product's 'rejuvenating' properties or the 'natural' source of their water. Forty percent of bottled water is plain tap water packaged more appealingly. Yet, dozens of successful bottled water brands continue to thrive and demand a wide range of prices. I digress. Packaging matters.
Branded drugs attempt to maintain a lead over generics by 're-inventing' their out-of-patent drugs to signal novelty and/or apply for a new patent. Earlier, I mentioned ten different paracetamol brands registered in Kuwait. Across these brands, there are over 100 registered strength-form versions and minor variations of paracetamol. An example: Panadol Advance and Panadol Extra Advance both have 500mg of paracetamol per tablet. Added to the Panadol Extra tablet is just 65mg of caffeine per tablet. For reference, the average coffee cup has about 100mg of caffeine. The 'Extra' in Panadol Extra Advance adds an additional 46 percent to the price.

What can be done?

Policy Makers & Managers

Policy makers have the largest role to play. The Ministry of Health should actively pursue the wider adoption of generic drugs in the healthcare system. While the Ministry of Health publishes the official prices of all drugs registered in Kuwait, this alone does not incentivise physicians and pharmacists to change their prescription habits in Kuwait's 'all-expenses-paid' healthcare system. Creating price transparency and awareness regarding drug prices is only a first step. More policies that push for product standardisation and price ceilings on hospital supply inventories are necessary. The United Arab Emirates recently took an important step towards dispensing generic drugs. The Department of Health (DoH) mandated that pharmacies at health facilities dispense generic medicines (where one is available), effective 1 September 2018. Patients retain the option to procure the branded version but need to pay (out-of-pocket) the price difference between the branded product and a reference price published by the DoH. This strategy may well increase cost awareness as patients confront stark price differences when comparing alternatives.

Physicians are the gatekeepers of pharmaceutical drugs, being the ones who select the drugs for the patients to consume. While physicians receive ample training regarding the clinical properties of prescription drugs, they have little awareness about the costs and the supply chain implications of their decisions. At a minimum, physicians should stay in touch with the changing pharmaceuticals landscape, as brand drugs lose patent protection and new alternatives emerge. This information may be sought from pharmacists, smartphone apps, or hospital procurement specialists.

While patients have little control over their prescriptions, they should be better informed about the clinical equivalency of generic drug alternatives. False perceptions about a generic drug's inferior quality due to lower price and lack of recognisable brand name need to be overcome. Perhaps it ought to be a physician's responsibility to educate their patients about the viability of generic drugs. Unfortunately, a recent study shows that pharmacists and physicians in Kuwait hold the same sceptical views as the general public, with less than half of surveyed respondents believing that generics are as effective as brand drugs.

Generic Substitution: One of many supply management opportunities in healthcare

Mature healthcare systems have long since controlled the costs of drug spending by shifting more volume toward generic drugs. Their current efforts are aimed towards a similar opportunity in the medical devices supply chain, but one that presents a far greater challenge.
It takes much more research and evidence to establish clinical equivalency of medical device alternatives (stents, in the field of cardiology, present an interesting case study), and price transparency has been much harder to establish in the medical devices industry. For Kuwait and the GCC countries, these opportunities and challenges are far off the horizon, as generic substitution currently presents a far simpler opportunity that has yet to be exploited. Yousef Abdulsalam is an Assistant Professor at Kuwait University’s College of Business Administration in the Information Systems and Operations Management Department. In 2018 he was a Visiting Fellow at the Middle East Centre, where his research investigated the health sector supply chain of Kuwait.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.961482584476471, "language": "en", "url": "https://forums.wincustomize.com/405417/peak-oil-is-upon-us", "token_count": 403, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.28515625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:1fc8e3eb-4c81-4a37-8396-c0d2ba130d93>" }
Plus, there are other problems - the producing countries are increasing domestic consumption, which means exports are falling even faster than production. Many countries have recently turned from exporters to importers - when everyone imports, who will be the exporter? Peak oil is not about running out of oil. It is about reaching a global maximum of production and entering an era of terminal production decline. US oil has peaked this way, Brent has peaked too, so why should global production be any different? Peak oil is about running out of cheap oil - by switching from the old, vast, easily accessible Middle Eastern fields that could produce a barrel for $5 or so to sources like deep-sea oil that produce from $60 up, the change will be noticeable. As for shales, tar sands and other sources - the key term here is EROI, energy return on investment. You can get quality, light, sweet crude from tar sands, but you must first heat the material and then refine it - a process that requires energy itself. That means you end up with a smaller net gain. Just for comparison - early large fields had an EROI of about 100:1; oil has since declined to roughly 20:1-10:1. Tar sands may be as low as 3:1 - that means about a third of the energy produced is swallowed by the extraction and upgrading process itself, versus roughly a tenth for 10:1 oil, so you have to produce (and invest) considerably more to end up with the same net energy gain as with oil. Naturally, sources with better EROI are developed first, since they are cheaper, so we end up with badly accessible sources that are expensive to produce. If you examine past recessions, most of them were preceded by oil price spikes - most recently in 2008. Oil prices also drive food prices higher, and that destabilizes the poorer regions where people are already living in great poverty - that's what's going on in Egypt and Libya now. As for the hydrogen economy - it's far too wasteful and inefficient to be scaled up. Read here, for example:
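To put some rough numbers on the EROI point above - purely as an illustration, since the 100:1, 10:1 and 3:1 figures are ballpark and published estimates vary widely - here is a quick sketch. The benchmark of "10 gross units of 10:1 oil" is an arbitrary choice for comparison, not data.

```python
# Quick EROI illustration (ballpark figures only; real estimates vary widely).

def net_fraction(eroi):
    """Share of gross energy output left after paying the energy cost of production."""
    return 1.0 - 1.0 / eroi

sources = {"conventional oil": 10.0, "tar sands": 3.0}

for name, eroi in sources.items():
    nf = net_fraction(eroi)
    print(f"{name:16s} EROI {eroi:4.0f}:1 -> {nf:.0%} net, {1 - nf:.0%} eaten by production")

# Gross production needed to deliver the same net energy to society,
# taking the net output of 10 gross units of 10:1 oil as the benchmark.
net_target = 10.0 * net_fraction(sources["conventional oil"])   # = 9 units net
for name, eroi in sources.items():
    gross = net_target / net_fraction(eroi)
    invested = gross / eroi
    print(f"{name:16s} needs {gross:4.1f} gross units ({invested:3.1f} invested) "
          f"to deliver {net_target:.0f} net units")
```

On those numbers, tar sands hand about a third of their gross output straight back to the production process, and they need roughly 35% more gross production - and several times the upfront energy investment - to deliver the same net energy as 10:1 conventional oil.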