Columns: content (string, 190 to 75.1k characters), score (string, 1 to 6 characters), source (1 string class), Column 4 (string, 33 to 205 characters)
What is Childhood Soft Tissue Sarcoma? Childhood soft tissue sarcoma is a disease in which cancer cells begin growing in the soft tissue in a child's body. The soft tissues connect, support and surround the body parts and organs, and include muscles, tendons, connective tissues, fat, blood vessels, nerves and synovial tissues (that surround the joints). Cancer develops as the result of abnormal cell growth within the soft tissues. Types of Childhood Soft Tissue Sarcoma There are many types of soft tissue sarcoma; they are classified according to the type of soft tissue they resemble. Types include: tumors of fibrous (connective) tissue (desmoid tumor, fibrosarcoma); fibrohistiocytic tumors (malignant fibrous histiocytoma); fat tissue tumors (liposarcoma); smooth muscle tumors (leiomyosarcoma); blood and lymph vessel tumors (angiosarcoma, hemangiopericytoma, hemangioendothelioma); synovial (joint) tissue tumors (synovial sarcoma); peripheral nervous system tumors (malignant schwannoma); bone and cartilage tumors (extraosseous osteosarcoma, extraosseous myxoid chondrosarcoma, extraosseous mesenchymal chondrosarcoma); combination tissue type tumors (malignant mesenchymoma); and tumors of unknown origin (alveolar soft part sarcoma, epithelioid sarcoma, clear cell sarcoma). Risk Factors Soft tissue sarcoma is more likely to develop in people who have the following risk factors: Specific genetic conditions. Certain genetic syndromes, such as Li-Fraumeni syndrome, may put some people at a higher risk for developing this disease. Radiation therapy. Children who have previously received radiation therapy are at a higher risk. Viruses. Children who have the Epstein-Barr virus, as well as AIDS (acquired immune deficiency syndrome), are also at a higher risk. Common Symptoms: a solid lump or mass, usually in the trunk, arms or legs; other symptoms depend upon the location of the tumor and whether it is interfering with other bodily functions; soft tissue sarcoma rarely causes fever, weight loss or night sweats. If your child has any of these symptoms, please see his/her doctor. Diagnosing Childhood Soft Tissue Sarcoma If symptoms are present, your child's doctor will complete a physical exam and will order additional tests to find the cause of the symptoms. Tests may include chest x-rays, biopsy, a CT (or CAT) scan and/or an MRI. Once soft tissue sarcoma is found, additional tests will be performed to determine the stage (progress) of the cancer. Treatment will depend upon the type, location and stage of the disease. Treatment Options Once the diagnosis of cancer is confirmed, and the type and stage of the disease have been determined, your child's doctor will work with you, your child, and appropriate specialists to plan the best treatment. Current treatment options may include surgery, radiation therapy or chemotherapy.
0.8876
FineWeb
["What is Childhood Soft Tissue Sarcoma", "Types of Childhood Soft Tissue Sarcoma", "Diagnosing and Treating Childhood Soft Tissue Sarcoma"]
I would like to draw attention to three aspects of Brad Gregory’s The Unintended Reformation, a book whose courage and ambition I applaud, if for no other reason than that it exemplifies what an engaged form of historiography (and humanistic inquiry more generally) can and should do. The first aspect has to do with the commercialization and commodification of knowledge in post-Reformation modernity and how it impacts advanced inquiry today. From it follows my second concern, which lies with the indebtedness of Gregory’s own narrative to the fruits of modern, disciplinary and specialized inquiry. Finally, I wish to take up the question of whether Gregory’s historiographical approach might be seriously compromised by the apparent absence of a focused hermeneutical engagement with the major voices (theological, philosophical, political, economic, etc.) widely credited with shaping the landscape of post-Reformation modernity, both secular and religious. At every turn, The Unintended Reformation appears driven and directed by an unapologetically normative and internally cohesive Catholic view of our world and of what, somewhat vaguely, Gregory capitalizes as “Life Questions.” As Ian Hunter, in his hard-hitting critique, maintains, Gregory’s “narrative of the modern world is precommitted to the historical centrality of the Catholic and Protestant churches,” and his “portrayal and solution to the problem of modern cultural pluralism is thus wholly internal to his own confessional-intellectual position.” With this I agree, though I find that state of affairs to be much less of a problem than does Hunter. It would be a problem only if one of two things applied: 1) that one could show how any such account could be written free of all such “precommitments” (T. Nagel’s “view from nowhere”), or 2) that a different set of commitments would better serve our understanding of the issues. The first of these positions I consider to be logically indemonstrable and utopian; the second can only ever be adjudicated through an ongoing contest of narratives, in which case one still has reason to be grateful for Brad Gregory’s impassioned and erudite account of the genesis of modernity. The critiques that have appeared so far, both on this forum and elsewhere, would thus serve a valuable function, provided we recognize that their various insights are dialectically generated by the very account from which they dissent. For Peter Gordon, the gauntlet that Gregory throws down for modernity is simply unacceptable: “What philosophical or historical arguments could convince us that this [i.e., the pre-Reformation Catholic] ideal was special? And why should we not continue to believe that our own ideals only need to be realized with greater fidelity?” (italics mine). Curiously, the Catholic framework is charged with having to legitimate itself by “arguments,” whereas our dogged adherence to secular modernity is naturalized (Paul Silas Peterson speaks of a “soft consensus,” for instance) as resting on unflagging “belief” in its continued progress. Ironically, then, the aspirational stance of a modern liberal-secular culture intent on confronting challenges, overcoming crises, and making continual progress is not particularly different from the trajectory that Aquinas maps out for our progression toward a visio beatifica. 
One suspects that Gregory’s most acerbic critics actually tend to argue from within a set of precommitments of their own, and that their axioms are at least as inexplicit and unexamined as his, and probably rather more so. In the fifth chapter (“Manufacturing the Goods Life”), Gregory traces the distant and not-so-distant origins of the “prevailing consumerist cycle of acquire, discard, repeat” that, in the industrialized West and the booming economies of China, Japan, and Southeast Asia, has been the norm for some time. As regards the raw facts of modern consumerism at least, I suspect that most of his critics in the academy share his misgivings. It is certainly striking how little dissent has been voiced about this chapter by Gregory’s otherwise unsparing critics within the academy. Those working at the American Enterprise Institute or the Heritage Foundation, on the other hand, almost certainly would dissent. Inevitably, our “sense” of what is true and good is internal to the genealogies of inquiry and the institutional frameworks from within which we develop our claims. That the world today shows signs of systemic trouble and pervasive abuse, almost all of it driven by profit motives, few in the secular academy will deny. Broach such topics as the corrosive impact of corporate money on politics; the strong, and by now palpable, correlations among a rapacious global capitalism, environmental degradation, and global warming; the widespread immiseration of workers in Nike’s sweatshops and Foxconn’s factories; children around the globe pressed into service as soldiers, sex-slaves, or no-pay laborers; a global arms trade vigorously pursued by almost all “developed” nations, etc.—and intense verbal hand-wringing and professions of moral indignation are sure to ensue. Yet anyone taking an uncompromising theological and normative view of the manifest abasement of our life-world as “so much raw material awaiting the imprint of human desires” and critiquing modernity as one pervasive ethical and ecological miscarriage will quickly find himself on the receiving end of similar vituperations. Ironically, some of the more severe polemics against The Unintended Reformation thus end up confirming one of Gregory’s central claims, namely, that “the secular academy is the domain of Weberian facts, not values—except, contradictorily, for the one hegemonic and supreme value that no judgments about competing truth claims pertaining to values or morality should or can be made.” Indeed, the disagreement between Gregory and his (il)liberal critics seems to be less about whether “religious truth claims made by billions of people are excluded from consideration on their own terms in nearly all research universities” (italics mine) than whether they should be. Part of what makes The Unintended Reformation such a courageous and intriguing, if also deeply vulnerable, work is that its author has extended his indictment of modern (Western) “hyper-pluralism” and its deeply impoverished and fragmented idea of community into a critique of the modern university; and here, Gregory is surely not alone. For the past two decades or so, the majority of those working in the humanities and the interpretive social sciences have witnessed the value of focused and sustained learning and the integrity of fields being progressively diluted and frittered away by an increasingly separate class of professional administrators.
The prevailing impression is one of administrative hubris and a top-down, micro-managerial approach intent on fitting academic research on the Procrustean bed of donor-driven funding models and neo-utilitarian criteria of “relevance.” Is it not, then, at least plausible that one should want to inquire into the deep genealogy that has caused higher education to be redefined as a corporate endeavor, and knowledge as some amorphous “experience” pragmatically peddled in the academic marketplace? Has not the conflict of faculties identified by Kant expired amid the sheer indifference and parallel trajectories followed by countless fields and sub-fields of specialized inquiry? And is Gregory not right to worry “how the kinds of knowledge thereby gained in different disciplines might fit together, or whether the disciplines’ respective, contrary claims and incompatible assumptions might be resolved”? Can we really afford not to ask any questions concerning the ends of knowledge? Do we not ignore at our own peril Augustine’s distinction between the intrinsically normative intellectual virtue of studiositas and a strictly procedural, agnostic quest for new information (curiositas)? Today’s students’ self-image as consumers effectively prevents them from approaching study as a potentially transformative process and thus prevents them from grasping the historically contingent and strangely incoherent formation of the universities to which they flock. Instead, they are invited to have fleeting and often random “exposures” to what, at my home institution, are euphemistically called “areas of inquiry”—areas whose intellectual premises, Brad Gregory notes, are as incommensurable as their ends are uncoordinated and unarticulated. Yet criticize the desolate and confused culture of learning as it prevails at many private and public institutions as symptomatic of modernity’s deep-seated incoherence, and the same pattern of angry dismissal of such views as dogmatic, reactionary, and narrow is bound to resurface. To be sure, there is no shortage of well-researched and mostly dispiriting accounts of the excesses of the corporate university and the, by now, almost complete transformation of learning from a quest for meaning into a sort of intellectual tourism (see, e.g., the books by A. Delbanco, D. Bok, S. Collini, M. Edmundson, G. Tuchman, and many others). Yet, with few exceptions (such as Brad Gregory’s closing chapter or the work of his colleague Mark Roche), all these analyses conceive of the dramatic shifts in higher education over the past decade or two as some peculiar “crisis” that has suddenly and inexplicably erupted within the academy. To his critics, Gregory’s contention that these developments might be linked to the distant genesis of liberal and secular pluralism as it arose in the wake of the Reformation is rather too disconcerting. Hence contemporary academia alights on the idea of a specific and limited “crisis” (as in “The Crisis of the Humanities”), a term eagerly employed because it allows us to frame as a capricious and supposedly remediable mishap what, in fact, may have deep and potentially irreversible sources in the distant past. All this takes us to the second area of concern, which has to do with how Brad Gregory’s own “genealogical” approach relates to the prevailing historicist forms of inquiry that have shaped the social sciences since the age of Durkheim and Weber. 
Yet to raise that question is to confront something of a paradox in Gregory’s decidedly negative appraisal of post-Reformation culture. For his overall argument and, more particularly, his account of the deepening incoherence in the culture of advanced learning appear, at times, to be contradicted by his own procedures. First, one might wish that The Unintended Reformation had opened with a more explicit methodological reflection on how Gregory’s ability to tell his long and complex story about the pathway into our secular and hyper-pluralist modernity pivots on certain assumptions about causation in the overlapping domains of politics, economics, education, religious culture, and so on. At times, Gregory seems on the verge of giving us such an account, such as when, in his final chapter, he laments the de-centered and disjointed nature of modern historical and humanistic inquiry; and here I am for the most part inclined to concur. Yet at the same time, his “genealogical” narrative would have been altogether impossible without the abundant fruits of modern disciplinary and specialist forms of inquiry. Since Gregory has distilled his narrative overwhelmingly from an abundance of secondary literature (economic, theological, philosophical, etc.) that he often synthesizes to brilliant effect, his indictment of the “definitional secularity and highly specialized character […] of knowledge in the Western world today” leaves one with the impression that he might be biting the hand that feeds him. Whether Catholic, secular, or something else yet, we are all moderns and, as such, find our intellectual flourishing to be paradoxically bound up with modernity’s institutional and intellectual resources. This fact remains in force even as we struggle to achieve a perspective on modernity’s own presuppositions, which, one should think, not only ought to be acknowledged by the author of The Unintended Reformation, but should also temper the serene confidence with which many of his fiercer critics embrace the very modernity that Gregory calls into question. My third and final concern has to do with something strangely absent from The Unintended Reformation, namely, a patient hermeneutic engagement with the major voices who were, themselves, the catalysts and self-styled advocates of the process of modernization (Luther, Bacon, Descartes, Hobbes, Locke, Hume, A. Smith, Kant, Hegel, and Mill, to name but a few). What makes this absence so strange is that it was precisely the strength of the Thomist model of disputatio to tie the validity of any reasoned position to its ability to prove itself under hermeneutic scrutiny over against competing arguments. In contrast with modern (post-Baconian) method, which argues from principle and, in so doing, tends to exclude rather than engage competing views, a Scholastic model of reasoning was wholly bound up with patient hermeneutic attention to other accounts of God, man, and world. To be sure, a book traversing such a broad swathe of history as The Unintended Reformation may not be able to give us detailed readings of the countless works and voices that have helped shape our successive understanding of the “modern.” Yet the pattern of rare and brief snippets of primary text carefully tailored to support the point Gregory is keen to make at that moment tends to drain these voices of their complex and, not infrequently, internally conflicted nature.
Even as The Unintended Reformation tells a passionate, dispiriting, and often-persuasive story, it seems oddly removed from the voices that have wrought and, however inadequately, conceptualized the major shifts in our understanding of human agency, responsibility, notions of the good life, etc., that Gregory finds so troubling in post-Reformation Western societies. Close exegesis would show writers such as Locke, A. Smith, Kant, Hegel, or Darwin to have been, at times, genuinely troubled by the implications of their own arguments, as well as by their increasingly frayed and tenuous outlook on the theological traditions (Calvinist, Pietist, Lutheran, Anglican, et al.) that, well into the nineteenth century, continued to shape moral and spiritual lives in powerful ways, though perhaps more at the level of material practice than abstract inquiry. Because it lacks the kind of hermeneutic attention that is so richly on display in Gregory’s earlier monograph, Salvation at Stake (2001), the genealogical account put forward in The Unintended Reformation frequently seems overly schematic or “highly stylized” (Peter Gordon). Perhaps the most obvious case in point would involve his frequent characterization of “late medieval Christianity [as] an institutionalized worldview” and similar references to “the traditional Christian view,” a framework supposedly first weakened by Duns Scotus’ univocalism and then decisively overthrown by the Reformation. Here Gregory’s narrative rests too exclusively on a rather abstract and monolithic understanding of medieval Christianity, one distilled almost exclusively from its theological and philosophical propositions but surprisingly inattentive to how these often abstract contents were realized in extraordinarily complex and regionally diverse liturgical and sacramental practices. While the existence of pre-Reformation heterodox movements is acknowledged in passing, Gregory does not consider to what extent their presence might complicate either his notion of medieval Christianity as an “institutionalized worldview” or his claim that the decisive break with that view only occurred after 1500. Henricians, Waldensians, Franciscan Spiritualists, Joachite Millenarians, Lollards, Hussites, Conciliarists, and many other movements were driven, in their various ways, by a profound desire to understand more fully the Christianity in whose sacramental and liturgical practices and theological self-understanding they so controversially participated. Eamon Duffy, who surprisingly receives only one mention in Gregory’s account, has offered a powerful account of the beauty and intricacy with which pre-Reformation spirituality was realized in fifteenth-century English parish life. Emmanuel Le Roy Ladurie’s pioneering study of Montaillou had drawn another rich and vivid picture, showing just how regionally varied, peculiar, and often heterodox the “institutionalized worldview” of medieval Christianity really was. It is precisely in the material practices of religious communities that we witness a hermeneutic of what a “sacramental worldview” concretely meant, something of far greater reality and presence than can be captured by any body of theological propositions. Invariably, though, the gravitational pull of The Unintended Reformation is away from social and material historiography and toward philosophical theology, which itself, in the two centuries leading up to the Reformation, becomes a far more fragmented enterprise than Gregory acknowledges.
Instead, the emergence of univocal forms of theological predication—a line of argument lately revived in stridently absolutist terms by the “Nottingham School” (particularly by John Milbank and Catherine Pickstock)—seems to receive disproportionate emphasis. In passing, one should also note that Richard Cross (incidentally a colleague of Brad Gregory at Notre Dame) has seriously challenged the allegedly pivotal impact of Duns Scotus’ univocalism on the genesis of modern thought. To be sure, somewhere in late Scholasticism—and here I would think, above all, of Ockham, Gabriel Biel, and Nicholas of Autrecourt—we do, indeed, observe a momentous shift, whereby God is placed on the same ontological plane as all created, particular existence, and where a quasi-legalistic theological concern with divine omnipotence (potentia absoluta) trumps, and implicitly jeopardizes, the rational order (potentia ordinata) wrought by God. The impression is that of a conspicuous failure on the part of late Scholastic theologians to imagine and embrace God’s absolute transcendence as post-Nicaean Patristic thinkers all the way up to Anselm and Aquinas had been able to do. Yet even if one were to restrict one’s understanding of medieval Christianity to its theological and philosophical superstructure, a book pursuing aims as ambitious as those of The Unintended Reformation owes its readers a more fully realized account of how, specifically, this momentous conceptual shift came about. In the end, it is the lack of specific hermeneutic engagement with the complex intellectual and material world of pre-Reformation religious culture that renders Gregory’s meta-historical approach perversely modern in its own right. With good reason, Gregory places great emphasis on the effects of “radical” (anti-magisterial and increasingly anti-ecclesiastic) Protestantism, a phenomenon that, so runs his thesis, “has distorted our understanding of the Reformation as a whole and obscured its relationship to contemporary hyperpluralism.” Still, however novel and disconcerting one may find the proliferation of religious views and spiritual ideas, so intensely felt and argued already in mid-seventeenth-century Puritan England, Catholic Christianity itself has, likewise, proven to be an internally complex formation, and that in at least two ways. First, as an attempt to build a viable form of community on something as mysterious and overwhelming in its implications as the incarnation, Christ’s atonement, and the resurrection, and as one of the first movements (along with Stoicism) to argue that a just moral order had to be universalizable, Christianity obviously had its work cut out for it. Viewed less as a body of “settled” theological propositions than as a rich, variegated, and sometimes contested array of liturgical and sacramental practices, Catholic theology from late antiquity into the high middle ages constitutes less an apologetics of these practices than an ongoing and necessarily imperfect quest for understanding their theological intent and spiritual efficacy. At the same time, beginning with Patristic thought, Christianity understands itself as, above all else, an attempt at engaging with the secular—rather than either withdrawing from or anathematizing it—namely, by assisting individuals and communities with achieving stability, integrity, and orientation in their daily pursuit of life in a created, finite, and profoundly uncertain saeculum.
Neither before nor after the Reformation was the secular understood as an alien realm inopportunely and illegitimately intruding into Christian life and disordering its outlook on the central “Life Questions.” Rather, the saeculum has always been the reality toward which Christian practice and thought is oriented in its attempt to reimagine it as a just, responsible, and sustainable community. This I take to be the point made by Adrian Pabst, who demurs at Gregory’s “false divide between a purely secular and an exclusively religious perspective.” Newman’s view of Christianity as an idea whose development over the preceding eighteen hundred years had gradually, though still not conclusively, clarified the implications of its central mysteries thus recognizes (rightly, I think) that struggle and contest are integral to human knowledge, no less in the area of religion than in that of things finite. Long before, Augustine had offered a forensic account of human psychology that shows this agonistic pattern to be integral to our will and our entire constitution as human beings. Peter Gordon has shrewdly pointed out how at some point or other in Gregory’s account every epoch (including medieval Catholicism) is said either to “have failed” or to “be failing.” What prompts these dispiriting appraisals is an underlying expectation of a definitive and just world, and a desire to draw a final line under the balance sheet of two thousand years of Western history. Yet wanting to locate the eschaton in this world is, itself, an intrinsically secular desire and, to say the obvious, very much at odds with any known form of Catholic theological reasoning. If modernity’s sin of choice is pride (as indeed I think it is), Brad Gregory’s most grievous fault may be that of despair.
0.623
FineWeb
["The Commercialization of Knowledge", "The Genealogy of Modernity", "Hermeneutic Engagement with Major Voices"]
The local environment surrounding Zr atoms in thin films of nanocrystalline zirconia (ZrO2) has been investigated using the extended x-ray absorption fine structure (EXAFS) technique. These films, prepared by ion beam assisted deposition, exhibit long-range structural order of the cubic phase and high hardness at room temperature without chemical stabilizers. The local structure around Zr probed by EXAFS indicates a cubic Zr sublattice with O atoms located on the nearest tetragonal sites with respect to the Zr central atoms, as well as at highly disordered locations. A similar Zr local structure was also found in a ZrO2 nanocrystal sample prepared by a sol-gel method. Variations in local structures due to thermal annealing were observed and analyzed. Most importantly, our x-ray results provide direct experimental evidence for the existence of oxygen vacancies arising from local disorder and distortion of the oxygen sublattice in nanocrystalline ZrO2. These oxygen vacancies are regarded as the essential stabilizing factor for the nanostructurally stabilized cubic zirconia. Soo, Y. L.; Chen, P. J.; Huang, S. H.; Shiu, T. J.; Tsai, T. Y.; Chow, Y. H.; Lin, Y. C.; Weng, S. C.; Chang, S. L.; Wang, G.; Cheung, Chin Li; Sabirianov, Renat F.; Mei, Wai-Ning; Namavar, Fereydoon; Haider, Hani; Garvin, Kevin L.; Lee, J. F.; Lee, H. Y.; and Chu, P. P., "Local Structures Surrounding Zr in Nanostructurally Stabilized Cubic Zirconia: Structural Origin of Phase Stability" (2008). Physics Faculty Publications. 1.
0.5359
FineWeb
["Zirconia", "EXAFS Technique", "Nanocrystalline Structure"]
With January being International Brain Teaser month – Didn’t know that existed? Well now you know! – We decided to have some fun and put together a list of 10 brainteasers that are bound to leave you puzzled (excuse the pun)! Whether you decide to complete them over the weekend or whilst you’re bored at work, we’d love to hear how you did! Feel free to drop us a line and let us know how many you managed to complete on your own! And no cheating!! Good luck! - Each of the following sets of letters can be made into a real word by adding three letters to the beginning, and the same three letters in the same order to the end. For example, ANGLEM can have ENT added at the start and the end to become ENT + ANGLEM + ENT = ENTANGLEMENT. - I went on a trip last week. The traffic was moderate and the journey took two and a half hours. On the return journey, the traffic was similar, but I made it back in 150 minutes. Why? - Imagine you are in a room with no doors, windows or anything. How do you get out? - Can you complete these 5-letter words? _ _ _JO Q_ _ _A - At a recent painting competition, Eileen's rendition of a Constable was not last. Jenny only just managed to avoid last place and came third. The lady who painted a Monet was very successful and took first place. Ada beat the lady who painted the Taylor, and the lady who painted the Van Gogh beat Vera. Can you determine who painted what and who won? - Before Mt. Everest was discovered, what was the highest mountain in the world? - If you were running a race and you passed the person in 2nd place, what place would you be in now? - Count the number of times that the letter F appears in the following sentence: “Finished files are the result of years of scientific study combined with the experience of years.” - Which word, if pronounced right, is wrong, but if pronounced wrong is right? - What four-digit number has digit 2 smaller than digit 4, which is two thirds of digit 1, which is two thirds of digit 3, which is three times digit 2? Answers: - ENT + ERTAINM + ENT = ENTERTAINMENT; RED + ISCOVE + RED = REDISCOVERED; ING + ROW + ING = INGROWING; ANT + IOXID + ANT = ANTIOXIDANT; MIC + ROCOS + MIC = MICROCOSMIC - The times are the same: two and a half hours is 150 minutes. - Painting competition: 1. Ada – Monet; 2. Eileen – Constable; 3. Jenny – Van Gogh; 4. Vera – Taylor. - Mt. Everest. It just wasn’t discovered yet. - You would be in 2nd place. - How many letters F did you count? Three? Wrong, there are six! Almost everyone guesses three. Why? It seems that the brain cannot correctly process the word "OF". The letter F usually makes the "f" sound, like in "fox". However, in the word "of", it makes a "v" sound. Somehow, your brain overlooks the word "of" as it scans for the sound of "f". - Digit 3 can only be 3, 6 or 9 (as it's 3 x Digit 2). Digit 2 can only be 1, 2 or 3. Digit 1 can only be 2, 4 or 6. But, since Digit 4 is two thirds of Digit 1, Digit 1 must be 6, making Digit 4 4 and Digit 3 9, and Digit 2 3 – so the number is 6394.
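For the four-digit puzzle, the deduction above can also be double-checked by brute force. The short Python sketch below (not part of the original post) simply tests every digit combination against the stated conditions and confirms the unique answer.

```python
# Brute-force check of the four-digit puzzle: digit 2 < digit 4,
# digit 4 = 2/3 * digit 1, digit 1 = 2/3 * digit 3, digit 3 = 3 * digit 2.
solutions = [
    (d1, d2, d3, d4)
    for d1 in range(1, 10)          # leading digit can't be 0
    for d2 in range(10)
    for d3 in range(10)
    for d4 in range(10)
    if d2 < d4 and 3 * d4 == 2 * d1 and 3 * d1 == 2 * d3 and d3 == 3 * d2
]
print(solutions)  # [(6, 3, 9, 4)] -> the number 6394
```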
0.952
FineWeb
["Brain Teasers", "Puzzles", "Riddles"]
Lucena, Raquel and Fresno, Fernando and Conesa, Jose Carlos and Palacios Clemente, Pablo and Seminóvski Pérez, Yohanna and Wahnón Benarroch, Perla. Vanadium-Doped In and Sn Sulphides: Photocatalysts able to use the whole visible light spectrum. In: "2012 MRS Spring Meeting & Exhibit", 09/04/2012 - 13/04/2012, San Francisco, CA (EEUU). pp. 1-2. Using photocatalysis for energy applications depends, more than for environmental purposes or selective chemical synthesis, on converting as much of the solar spectrum as possible; the best photocatalyst, titania, is far from this. Many efforts are being pursued to make better use of that spectrum in photocatalysis, by doping titania or using other materials (mainly oxides, nitrides and sulphides) to obtain a lower bandgap, even if this means decreasing the chemical potential of the electron-hole pairs. Here we introduce an alternative scheme, using an idea recently proposed for photovoltaics: intermediate band (IB) materials. It consists of introducing into the gap of a semiconductor an intermediate level which, acting like a stepping stone, allows an electron to jump from the valence band to the conduction band in two steps, each one absorbing one sub-bandgap photon. For this the IB must be partially filled, to allow both sub-bandgap transitions to proceed at comparable rates; must be made of delocalized states to minimize nonradiative recombination; and should not communicate electronically with the outer world. For photovoltaic use the optimum efficiency so achievable, over 1.5 times that given by a normal semiconductor, is obtained with an overall bandgap around 2.0 eV (which would be near-optimal also for water photosplitting). Note that this scheme differs from the doping principle usually considered in photocatalysis, which just tries to decrease the bandgap; its aim is to keep the full bandgap chemical potential while also using lower-energy photons. In the past we have proposed several IB materials based on extensively doping known semiconductors with light transition metals, checking first of all with quantum calculations that the desired IB structure results. Subsequently we have synthesized in powder form two of them: the thiospinel In2S3 and the layered compound SnS2 (having bandgaps of 2.0 and 2.2 eV respectively), where the octahedral cation is substituted at a ≈10% level with vanadium, and we have verified that this substitution introduces in the absorption spectrum the sub-bandgap features predicted by the calculations. With these materials we have verified, using a simple reaction (formic acid oxidation), that the photocatalytic spectral response is indeed extended to longer wavelengths, being able to use even 700 nm photons, without largely degrading the response for above-bandgap photons (i.e. strong recombination is not induced) [3b, 4]. These materials are thus promising for efficient photoevolution of hydrogen from water; work on this is being pursued, the results of which will be presented.
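As a quick back-of-the-envelope check on the wavelengths quoted above, photon energy in eV is roughly 1239.84 divided by the wavelength in nm. The short Python sketch below (illustrative only, not taken from the paper) shows that a 700 nm photon carries about 1.77 eV, i.e. less than the 2.0–2.2 eV bandgaps of In2S3 and SnS2, which is why absorbing it requires the intermediate band acting as a stepping stone.

```python
# Photon energy E (eV) = h*c / (lambda * e) ≈ 1239.84 / wavelength_nm
def photon_energy_ev(wavelength_nm: float) -> float:
    return 1239.84 / wavelength_nm

for nm in (700, 620, 564):   # ~620 nm and ~564 nm correspond to 2.0 eV and 2.2 eV
    print(nm, "nm ->", round(photon_energy_ev(nm), 2), "eV")
# 700 nm -> 1.77 eV: below both bandgaps, so only usable via the intermediate band
```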
0.8116
FineWeb
["Photocatalysts", "Intermediate Band Materials", "Solar Spectrum Conversion"]
RAM Ultrasonic Humidifier The RAM Ultrasonic Humidifier allows the grower to increase the humidity of the growing environment at a rate of up to 400 ml/hour. Complete with a 5L tank, this humidifier is fully adjustable and easy to use. To get the RAM Ultrasonic Humidifier to perform at its best, here are some tips: - Never restrict the mist flow, as this will force it back into the body of the humidifier. - Remove the humidifier from excessively humid areas where it is not required. - Never try to fill the tank from the top. - When refilling the tank, make sure that you always empty any water that remains inside first. This will prevent a build-up of residue salts that can shorten the life of the humidifier. - Wipe clean the water chamber in the base with a soft damp cloth. Never scrape these parts or use detergents.
0.8074
FineWeb
["RAM Ultrasonic Humidifier", "Humidifier Maintenance", "Growing Environment"]
53. Gita Doctrine [To Ramdas Gandhi] Do you understand the meaning of Ramagita [an abridgement of the Gita prepared by Gandhiji for Ramdas Gandhi]? The essence of it is devotion (bhakti) and its fruits. Pure devotion must lead to detachment (anasakti) and wisdom (jnana). If it doesn’t, it is not devotion, but mere emotionalism. Wisdom means the power of distinguishing right from wrong. If literary studies fail to invest a person with this power, they are nothing but pedantry. When Ramagita is understood in this sense, it rids us of all anxiety and impatience. (Recorded on December 6, 1932; ibid., p. 307; translated from the Gujarati.)
0.5149
FineWeb
["Gita Doctrine", "Devotion and Detachment", "Wisdom and Spiritual Growth"]
Comfortable, Discreet Treatment with Invisalign As certified Invisalign® providers, Dr. Eugene D. Stanislaus and Dr. Lisa Reid can help patients realign their smiles without the use of traditional metal braces at Brooklyn Heights Dental®. These clear, virtually invisible aligner trays are made from smooth, BPA-free plastic and can help our patients in Brooklyn Heights and across New York City inconspicuously achieve straighter and more attractive smiles. During your treatment, you will be given a series of customized trays to be worn consecutively two weeks at a time in order to gradually realign your teeth into proper position. The number of trays given to you will depend on the extent of dental misalignment, your aesthetic concerns, and your treatment goals. Traditional Braces vs. Invisalign Traditional metal braces are made from stainless steel and involve brackets being bonded to the front of teeth, connected by arch wires, then secured with ligature elastics. During routine office visits, the dentist will apply pressure to the brackets by tightening the arch wires in order to gradually move teeth into proper alignment. Overall treatment usually takes anywhere from 18 to 36 months. Although traditional metal braces are still one of the more common types of orthodontics chosen today, there are many downsides to consider when debating Invisalign vs. traditional braces, including: - Food restrictions - Irritation to soft tissue, such as mouth sores - Potential for lip and gum lacerations - Potential damage to the front surface of teeth - Noticeable appearance - Difficulty cleaning teeth and gums effectively Invisalign trays are clear, comfortable, and virtually invisible, providing an excellent solution for adult patients interested in discreetly realigning their smile. Each tray should be worn for at least 22 hours a day, but can be removed for easy cleaning and eating, so you can still enjoy all your favorite foods. Trays can also be removed for a special event, such as an important presentation, wedding, graduation, or class reunion. Because you receive all the aligner trays upfront, follow-up appointments are required less often than with traditional braces. Depending on the severity of your dental misalignment, treatment can generally be completed in about six to 12 months. Are You a Candidate? Candidates for Invisalign should have good overall health and be looking to address cosmetic and oral health concerns, such as: - Gapped or widely spaced teeth - Open bite - Overcrowded or overlapping teeth - Crooked teeth - Temporomandibular joint (TMJ) disorder Additionally, patients should have realistic expectations of their results. Some dramatic enhancements, such as lengthening the appearance of short or worn teeth, might only be possible with porcelain veneers, dental bonding, or dental crowns. Why a Straighter Smile is Important When teeth are misaligned, the condition can make it difficult to keep teeth adequately cleaned. Even if you follow a disciplined oral health care program, teeth that overlap or are crooked can easily trap food and bacteria. This can lead to plaque buildup, tooth decay, cavities, and gum disease. Misaligned teeth can also cause improper jaw placement, which can lead to jaw clenching, teeth grinding, poor nutrition, TMJ disorder, and pain in your neck and shoulders. Straightening your smile not only provides many aesthetic and emotional advantages, but it can protect the long-term health of your teeth and body. 
Research has proven that the health of your smile can affect your overall physical health. When tooth decay and gum disease are left unchecked, it not only leads to tooth loss and costly restorative procedures, but it can lead to an increased risk of diabetes and heart disease. A straighter smile equals a healthier body. How Does Invisalign Work? Invisalign orthodontic treatment uses a series of specially designed clear aligners to straighten teeth. When used one after the other, these aligners will continue your orthodontic treatment by applying the necessary pressure to properly straighten teeth. Computer imaging and bite impressions of your mouth are used to prepare the Invisalign aligners to meet your specific needs. At our practice, the progress these virtually invisible braces achieve will be continuously monitored by either Dr. Stanislaus or Dr. Reid. This will ensure you receive the most beneficial effects and optimal results. Invisalign aligners are custom molded to fit snugly against your teeth for an appearance that is hardly noticeable. Many people will not even know you have anything in your mouth, unless you tell them. Invisalign aligners should generally be worn at all times, except during meals. The number of aligners used depends on your individual needs, and the duration of treatment can be expected to be less than traditional braces. Benefits and Drawbacks of Invisalign The obvious benefit of virtually invisible orthodontics is that the trays are completely discreet. This in itself is an enormous improvement over traditional metal braces, which many people associate with adolescence. Our patients frequently decide to use Invisalign virtually invisible braces because they would rather have crooked teeth than endure the appearance and discomfort of metal braces. With Invisalign, adult orthodontia is now more attractive than ever. By choosing to realign your smile with Invisalign, you can: - Achieve a straighter, healthier, more beautiful smile - Prevent tooth decay, gum disease, and TMJ disorder - Improve your self-confidence and quality of life - Achieve a youthful appearance - Improve your oral health and dental function Despite its many benefits, Invisalign is not right for everyone. Most notably, the orthodontic option may not be appropriate for certain types of bite issues and severe malocclusion. During a consultation, we will make your treatment goals our number one priority. If your doctor does not feel that Invisalign trays can provide the absolute best results, we will discuss your options. Today's braces are more comfortable and often feature more discreet materials, such as tooth-colored brackets and clear wires. Contact Brooklyn Heights Dental Today A beautifully aligned smile can prompt you to smile and laugh more openly, and impact your life in several additional ways. If you would like to learn more about Invisalign treatment at Brooklyn Heights Dental, schedule a consultation with Dr. Stanislaus or Dr. Reid today. You can reach us using our simple online form or by calling (718) 857-6639. "I've been very happy with my experiences at Dr. Stanislaus's office, and highly recommend him to anyone in the New York City area. He and his staff go above and beyond to do the best job possible."-R.B. (Patient)
0.699
FineWeb
["Invisalign Treatment", "Traditional Braces vs. Invisalign", "Benefits of Invisalign"]
An enormous specimen, weighing up to 5kg (11lbs), with a body length of up to 40cm (16in) and a leg span of one metre (3.3ft), the Birgus latro is the world’s largest terrestrial arthropod. The coconut crab is so-named due to its ability to climb palm trees and break into coconuts with its pincers. The coconut crab, which can live for up to 30 years, mainly inhabits the forested coastal areas of the islands of the South Pacific and Indian Oceans. A mostly nocturnal crustacean, it hides during the day in underground burrows. Although coconut crabs mate on dry land, as soon as the eggs are ready to hatch the female releases them into the ocean. Once hatched, the young will visit the ocean floor in search of a shell before coming back to dry land. Once ashore, the coconut crab permanently adapts to life on the land – so much so that it would drown in water because it has developed branchiostegal lungs and special gills more suited to taking oxygen from the air than from water. The fact that the coconut crab spawns at sea is the main reason for its widespread distribution as currents carry the larvae far afield. Still, the coconut crab remains an endangered species because it is considered a delicacy and is collected as food.
0.7718
FineWeb
["Coconut Crab Characteristics", "Habitat and Behavior", "Life Cycle and Conservation"]
By Jeffrey Hoffmeister Artificial intelligence (AI) has become the latest technology to upend the status quo. However, unlike other industries, healthcare’s adoption and implementation of AI are still in their infancy, in part due to many providers still updating their systems and processes for the new technology. Nonetheless, the momentum is building, and AI and the Internet of Medical Things (IoMT) are poised to revolutionize the healthcare industry. A recent analysis from Accenture shows growth in the AI health market is expected to reach $6.6 billion by 2021 and key clinical health AI applications can potentially create $150 billion in annual savings for the US healthcare economy by 2026. Additionally, Harvard Business Review found the application of AI to administrative processes could add a potential annual value of $18 billion by 2026. Based on what we know so far, it’s clear AI can provide the healthcare industry with a unique opportunity not only to offer tools and insights that can vastly improve patient care, but also to improve its bottom line. AI has the power to see patterns in research studies, detect ailments faster and provide more in-depth education. However, despite all the benefits and advantages of AI, some providers remain skeptical and hesitant to implement solutions, and are concerned about the challenges of AI in the healthcare industry. First rule of AI in medicine: Do no harm Since AI relies mainly on data collection, if the data isn’t accurate, the AI solution is blamed. In healthcare, AI solutions that rely on deep learning capabilities can lead to incorrect patterns being identified and thus incorrect diagnoses – such as false positive results. Despite these trepidations, providers who are pro-AI argue that this technology is actually much faster and more accurate than humans and only provides us with even more opportunities to succeed and streamline tasks. Other AI fears relate to job loss – much like the argument made across all industries against AI. Automation of processes will certainly make some roles obsolete, but for many positions within healthcare and caregiving, machines and computers will be responsible for one role, not the many hats worn by healthcare providers. Take radiologists, for example. Deep learning solutions can help them identify areas of interest within a scan, but that’s not all radiologists do. AI solutions are simply a supplement to their duties and can allow them to spend more time focused on patients and providing value-based care. AI equals real results Depending on the industry, AI delivers different types of results. In the healthcare industry, AI solutions can assist in improving health outcomes, especially when it comes to breast imaging. According to Breastcancer.org, nearly one in eight U.S. women will develop invasive breast cancer during her lifetime. However, two-thirds have the potential to be saved through early detection and progressive treatments. In response, many medical facilities worldwide are turning to Digital Breast Tomosynthesis (DBT) technology solutions as their preferred method for screening and diagnostic mammography in order to do just that – detect and diagnose women with early-stage breast cancer. But just like all technology, there are also some challenges radiologists face when utilizing DBT. For example, detection of breast cancer using DBT involves interpretation of massive data sets, which can be extremely time consuming for radiologists.
A 3D mammogram, or DBT, produces hundreds of images, while 2D digital mammography exams produce only four images. While DBT ultimately provides greater clarity and detail, it also requires radiologists to spend significantly more time reviewing and interpreting breast exams. However, this is where technology comes into play. Radiologists can leverage innovative AI and deep learning solutions to help reduce their DBT interpretation time and improve reading workflow. This is due to the capabilities of particular AI solutions, as some tools can automatically highlight areas for radiologists that might appear concerning, calling attention to spots that might need to be reviewed more cautiously. These capabilities are especially imperative for radiologists today, as many report experiencing burnout. Although more research needs to be done on AI, in the medical imaging industry it’s clear these solutions are fundamentally changing the way radiologists and other healthcare providers do their jobs. In order for providers to overcome their concerns associated with AI, they must first carefully research and consider the right AI solution for their individual needs. As the healthcare industry as a whole continues to transform, it’s easy to believe that providers who utilize and understand the unique capabilities of AI solutions will perform above the rest. About Jeffrey Hoffmeister: As VP, medical director at iCAD, Jeffrey has participated in developing mammographic AI solutions for 25 years. He has provided clinical insight to engineering and marketing teams and managed the design and implementation of clinical studies for FDA approval of mammographic AI products, from iCAD’s first mammography CAD product, SecondLook, in 2002 to iCAD’s most recent digital breast tomosynthesis AI solution, PowerLook Tomo Detection. iCAD, a global leader in medical technology providing innovative cancer detection and therapy solutions, is the manufacturer of the first and only FDA-approved concurrent-read cancer detection solution for breast tomosynthesis, iCAD’s PowerLook Tomo Detection. Utilizing a trained algorithm developed through deep learning, the system automatically analyzes each tomosynthesis plane and identifies suspicious areas. These images are then blended into a 2D synthetic image, providing radiologists with a single, highly sensitive, enhanced image from which they can easily navigate the tomosynthesis data sets.
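The highlighting step described above can be pictured as simple post-processing of a model's per-pixel suspicion scores. The Python sketch below is a generic, hypothetical illustration of that idea only – it is not iCAD's PowerLook algorithm – and it assumes a 2D score map has already been produced by some detection model; the threshold and sizes are made-up parameters.

```python
import numpy as np
from scipy import ndimage

def flag_suspicious_regions(score_map: np.ndarray, threshold: float = 0.8, min_box_pixels: int = 20):
    """Return bounding boxes (y0, x0, y1, x1) of connected regions scoring above `threshold`."""
    mask = score_map >= threshold                 # keep only high-suspicion pixels
    labeled, _ = ndimage.label(mask)              # group them into connected regions
    boxes = []
    for region in ndimage.find_objects(labeled):  # one slice pair per region
        ys, xs = region
        if (ys.stop - ys.start) * (xs.stop - xs.start) >= min_box_pixels:  # discard tiny boxes
            boxes.append((ys.start, xs.start, ys.stop, xs.stop))
    return boxes

# Toy example: low random background plus one synthetic high-scoring patch.
rng = np.random.default_rng(0)
score_map = rng.random((256, 256)) * 0.5
score_map[100:120, 60:90] = 0.95
print(flag_suspicious_regions(score_map))  # -> [(100, 60, 120, 90)]
```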
0.5815
FineWeb
["Artificial Intelligence in Healthcare", "AI Applications in Medical Imaging", "Benefits and Challenges of AI in Healthcare"]
Research output: Contribution to journal › Journal article. Journal publication date: 03/1991. Journal: Semiconductor Science and Technology. Number of pages: 5. A non-quantum interference mechanism in the current-voltage characteristics of double-barrier semiconductor heterostructures is suggested. It is based on the admixing of independent contributions of channels corresponding to different values of the electron momentum along the plane of the device. The disturbing of the coherence of channels due to non-parabolicity of the conduction band and a magnetic field parallel to the plane causes a modulation of the amplitude of resonant oscillations (resembling beats) in devices with a wide interbarrier space. In the simplest cases the amplitude of oscillations depends on the voltage drop V and magnetic field H as sin(V^{3/2})/V^{3/2} and J_1(H)/H respectively. Our consideration gives an interpretation of the experiments in a parallel magnetic field.
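To make the two scaling laws above concrete, here is a small, purely illustrative Python sketch (not from the paper; V and H are treated as dimensionless) that evaluates the stated envelopes sin(V^{3/2})/V^{3/2} and J_1(H)/H, whose decaying oscillations produce the beat-like modulation described.

```python
import numpy as np
from scipy.special import j1  # Bessel function of the first kind, order 1

V = np.linspace(0.1, 30.0, 600)   # dimensionless stand-in for the voltage drop
H = np.linspace(0.1, 30.0, 600)   # dimensionless stand-in for the magnetic field

amp_vs_voltage = np.sin(V**1.5) / V**1.5   # sin(V^{3/2}) / V^{3/2}
amp_vs_field = j1(H) / H                   # J_1(H) / H

# Both envelopes oscillate with an amplitude that decays as their argument grows.
print(amp_vs_voltage.max(), amp_vs_field.max())
```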
0.5053
FineWeb
["Double-barrier semiconductor heterostructures", "Non-quantum interference mechanism", "Current-voltage characteristics"]
The PHPC300 has 12 blades. Each blade is a sealed box with 3M Novec liquid in it, which has a boiling point of about 50 degrees Celsius. So the liquid near the CPU will evaporate and then be circulated to a condenser, where it is turned back into a liquid. We can use this technology in China to achieve a PUE of 1.05. See more videos from ISC’14.
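For context on the 1.05 figure: Power Usage Effectiveness (PUE) is total facility power divided by the power delivered to the IT equipment itself, so 1.05 means only about 5% overhead for cooling and power delivery. The tiny Python sketch below uses made-up illustrative numbers, not measurements from a PHPC300 deployment.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# e.g. 1000 kW of IT load plus 50 kW of cooling/overhead gives the quoted figure
print(pue(total_facility_kw=1050.0, it_equipment_kw=1000.0))  # -> 1.05
```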
0.9635
FineWeb
["PHPC300", "Liquid Cooling Technology", "Power Usage Efficiency"]
Kent ISD coordinates Safety, Violence & Bully Prevention Education services for Kent, Ionia and Montcalm Counties. Safety Education is part of the Michigan Model for Health for students K-12. The materials and lesson plans were guided by the Centers for Disease Control and aligned with national and state standards. The curriculum is part of the Michigan recommended school curriculum for students K-12. Professional development, technical assistance and resources are available for participating schools. Bully Prevention is a new state mandate requiring schools to provide instruction in the following: how to define and identify the characteristics of a bully, how to avoid bullying situations, and how to proactively manage a bullying situation. The law provides a framework for educators to identify curriculum that meets the needs of individual districts. The Michigan Model for Health offers lessons to meet the state mandate. Reproductive Health Institute The Reproductive Health Institute, sponsored by Kent ISD, promotes the development of positive relationships and the prevention of sexual assault and violence. Capturing Kids' Hearts Capturing Kids' Hearts is a teaching and learning process designed to build relationships and high-performing teams. The program promotes safe classrooms and schools and bully prevention. The strategies hold students and educators accountable for their attitudes and behaviors. For information on Capturing Kids' Hearts, contact Steve Dieleman, [email protected] For more information visit http://www.flippengroup.com For more information on safety curriculum, reproductive health education and bully prevention, contact Dr. Cheryl Blair, [email protected]
0.6326
FineWeb
["Safety Education", "Bully Prevention", "Reproductive Health"]
The module that appears most suitable for you is called HTML import. This module will divide one single large HTML document into a structured Drupal book where the heading-level hierarchy is respected. This module works with HTML exported from Word, HTML documents converted from PDF, as well as HTML documents exported from Adobe InDesign. This makes it possible to use a single HTML page, created in a word processing program and saved as HTML, or other structured HTML, to create a multipage Drupal book in a single step (a rough sketch of this heading-based splitting appears at the end of this answer). You will, however, need to collate the output from all your .docx files into a single document and convert it to HTML before you can use HTML import in a single step. In case your requirements are beyond what HTML import is capable of, below is a list of all the (usable) Drupal modules I am aware of that can be used to bulk import HTML (and some other formats) into a Drupal site: - HTML import - Imports a single HTML page created from MS Word and splits it up into a hierarchical Drupal book structure consisting of a set of interlinked Drupal nodes. - HTML2Book - This also imports a single HTML page and splits it up into a hierarchical Drupal book structure consisting of a set of interlinked Drupal nodes, but requires you to clean up the tag soup created by MS Word first. - Import HTML - Imports all of an existing, static HTML site into a Drupal site as nodes. - Feeds - Imports or aggregates data as nodes, users, taxonomy terms or simple database records. - Migrate - Provides a flexible framework for migrating content into Drupal from other sources, including HTML. - Node import - Allows users to import content (node, user, taxonomy) from CSV or TSV files. Requires you to convert from HTML to CSV. (No Drupal 7 version.) Of these, the simplest to use are HTML import and HTML2Book. However, both can only handle a single HTML page. For more complex conversion tasks, where you need to convert a large static HTML site made up of several interlinked pages, one of the other modules may be more suitable. Also: whatever tool you use, never experiment with bulk import on a production site. Unless you're a lot more clever than me, there are going to be false starts and botched imports. Cleaning up thousands of nodes damaged by an import gone wrong is not fun. Always experiment with bulk import on a throwaway staging site that can just be discarded wholesale when things go wrong. Transfer the settings and do the import on the production site only when you're sure you have a working set-up.
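To make the splitting idea concrete, here is a rough, hypothetical sketch (not the HTML Import module itself, which runs as PHP inside Drupal) of how a single Word-exported HTML file can be cut into heading-delimited sections. It assumes BeautifulSoup (bs4) is installed and that the export uses ordinary h1–h3 headings; the filename is a placeholder.

```python
# Rough sketch: split one HTML document into (heading, fragment) sections by heading level.
from bs4 import BeautifulSoup

def split_by_headings(html: str, levels=("h1", "h2", "h3")):
    """Yield (heading_text, html_fragment) pairs, one per heading-delimited section.
    Content that appears before the first heading is skipped in this simple sketch."""
    soup = BeautifulSoup(html, "html.parser")
    current_title, buffer = None, []
    for el in (soup.body.children if soup.body else soup.children):
        name = getattr(el, "name", None)
        if name in levels:                       # a new section starts here
            if current_title is not None:
                yield current_title, "".join(buffer)
            current_title, buffer = el.get_text(strip=True), []
        elif current_title is not None:          # accumulate everything under the current heading
            buffer.append(str(el))
    if current_title is not None:
        yield current_title, "".join(buffer)

with open("word-export.html", encoding="utf-8") as fh:   # placeholder filename
    for title, body in split_by_headings(fh.read()):
        print(title, len(body))
```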
0.757
FineWeb
["Drupal Modules", "HTML Import", "Bulk Import Tools"]
Virginia Institute of Marine Science virtualspecies is a freely available package for R designed to generate virtual species distributions, a procedure increasingly used in ecology to improve species distribution models. This package combines the existing methodological approaches with the objective of generating virtual species distributions with increased ecological realism. The package includes 1) generating the probability of occurrence of a virtual species from a spatial set of environmental conditions (i.e. environmental suitability), with two different approaches; 2) converting the environmental suitability into presence-absence with a probabilistic approach; 3) introducing dispersal limitations in the realised virtual species distributions and 4) sampling occurrences with different biases in the sampling procedure. The package was designed to be extremely flexible, to allow users to simulate their own defined species-environment relationships, as well as to provide a fine control over every simulation parameter. The package also includes a function to generate random virtual species distributions. We provide a simple example in this paper showing how increasing ecological realism of the virtual species impacts the predictive performance of species distribution models. We expect that this new package will be valuable to researchers willing to test techniques and protocols of species distribution models as well as various biogeographical hypotheses. Distribution Models; Performance; Niche; Responses; Predictions; Platform; Impacts; Climate; Spiders; Bias Leroy, B; Meynard, CN; Bellard, C; and Courchamp, F, "virtualspecies, an R package to generate virtual species distributions" (2016). VIMS Articles. 800.
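As a conceptual illustration of step 2 above – converting environmental suitability into presence-absence with a probabilistic approach – here is a short sketch written in Python rather than R. It is not the virtualspecies API; the logistic parameters alpha and beta are illustrative stand-ins for the kind of species-environment relationship a user would define.

```python
# Conceptual sketch only: draw presence/absence from a logistic probability of occurrence.
import numpy as np

def suitability_to_presence(suitability: np.ndarray, alpha: float = -0.05, beta: float = 0.5, rng=None):
    """beta is the suitability at which P(presence) = 0.5; a negative alpha gives the
    usual increasing curve (higher suitability -> higher probability of presence)."""
    rng = rng or np.random.default_rng()
    p = 1.0 / (1.0 + np.exp((suitability - beta) / alpha))
    return (rng.random(suitability.shape) < p).astype(int)

suitability = np.random.default_rng(1).random((100, 100))  # stand-in for a suitability raster
presence_absence = suitability_to_presence(suitability)
print(presence_absence.mean())  # realised prevalence of the virtual species
```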
0.9919
FineWeb
``` [ "Species Distribution Models", "Virtual Species Distributions", "Ecological Realism" ] ```
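The virtualspecies abstract above describes a generic workflow: turn environmental values into a suitability score, convert suitability into presence/absence probabilistically, then sample occurrences with some bias. A rough numpy sketch of that idea follows; it is not the R package's API, and the Gaussian response curve, grid size and bias rule are invented purely for illustration.

```python
# Sketch of the virtual-species workflow described above: environmental suitability
# from a made-up response curve, probabilistic conversion to presence/absence,
# then a crudely biased sample of occurrences. Not the R package's interface.
import numpy as np

rng = np.random.default_rng(42)

# Fake environmental raster: mean annual temperature on a 50 x 50 grid.
temperature = rng.uniform(0, 30, size=(50, 50))

# 1) Environmental suitability: Gaussian response around an optimum of 18 degrees.
suitability = np.exp(-((temperature - 18.0) ** 2) / (2 * 4.0 ** 2))

# 2) Probabilistic conversion: treat suitability as the probability of occurrence.
presence = rng.random(size=suitability.shape) < suitability

# 3) Biased sampling of occurrences: only cells with high suitability are sampled.
occupied = np.argwhere(presence & (suitability > 0.5))
sample_idx = rng.choice(len(occupied), size=min(100, len(occupied)), replace=False)
occurrences = occupied[sample_idx]

print(f"occupied cells: {presence.sum()}, sampled occurrences: {len(occurrences)}")
```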
Hasidism—a spiritual revival movement associated with the founding figure of Israel Ba’al Shem Tov (Besht, c. 1700–1760), which began in Poland in the second half of the eighteenth century and became a mass movement of Eastern European Jewry by the early decades of the nineteenth—has been celebrated as nothing less than a “feminist” revolution in early modern Judaism. The first to depict it in this light was Samuel Abba Horodezky (1871–1957) who, in his four-volume Hebrew history of Hasidism, first published in 1923, claimed that “the Jewish woman was given complete equality in the emotional, mystical, religious life of Beshtian Hasidism” (vol. 4, 68). Horodezky’s account underlies virtually every subsequent treatment of the subject, whether in the popular, belletristic and semi-scholarly literature on the history of Hasidism, or in such works, mostly apologetic and uncritical, as have set out to discover and catalogue the achievements of prominent women throughout pre-modern Judaism. Notably, until relatively recently, Hasidic scholarship has totally ignored the subject, implicitly dismissing it as either marginal or insufficiently documented to permit serious consideration.
0.511
FineWeb
["Hasidism", "Feminism in Judaism", "Hasidic Scholarship"]
General process of radiometric dating Thus this essay, which is my attempt at producing such a source. Contents: The half-life of a radioactive isotope is defined as the time it takes half of a sample of the element to decay. I've been doing some reading about radiometric dating and I've come across an interesting find. If anybody has any additional information on this, that would be great. When I first got involved in the creationism/evolution controversy, back in early 1995, I looked around for an article or book that explained radiometric dating in a way that nonscientists could understand. Young-Earth creationists -- that is, creationists who believe that Earth is no more than 10,000 years old -- are fond of attacking radiometric dating methods as being full of inaccuracies and riddled with sources of error. A nuclide of an element, also called an isotope of an element, is an atom of that element that has a specific number of nucleons. In general the dating works IF any process that happens after death affects carbon-12 and carbon-14 equally. We are interested in the ratio of C-12 to C-14 in the sample. Essentially they have the same chemistry, so you would expect all chemical and biological processes to treat them the same. If they can begin to comprehend that it is random and spontaneous, they end up feeling less nervous about the whole thing. Radioactive decay involves the spontaneous transformation of one element into another. Radioactivity and radioactive decay are spontaneous processes.
0.514
FineWeb
["Radiometric Dating", "Radioactive Decay", "Isotopes"]
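The half-life definition in the radiometric dating passage above corresponds to a simple decay formula, N(t)/N0 = (1/2)^(t / half-life). A tiny worked example, assuming only carbon-14's roughly 5,730-year half-life:

```python
# Remaining fraction of a radioactive isotope after time t, given its half-life:
# N(t) / N0 = (1/2) ** (t / half_life)
def remaining_fraction(t_years, half_life_years):
    return 0.5 ** (t_years / half_life_years)

C14_HALF_LIFE = 5730  # years, approximate value for carbon-14

for t in (5730, 11460, 20000):
    print(f"after {t:>6} years: {remaining_fraction(t, C14_HALF_LIFE):.3f} of the C-14 remains")
```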
Would you like to check out a different programme while following your own? Or perhaps you’d like to delve deeper into your own field of expertise? A minor is an excellent way of adding your own flavour to your study. What is a minor? A minor is a structured package of topics with which you can broaden your knowledge and competencies or focus more sharply on a specific area. Most minors last a semester and will earn a student 30 credits, which is often equal to the total elective quota. However, given that for some programmes this total quota is only 15 credits, every minor can be ‘half followed’. In terms of their degree of difficulty, minors are mainly suitable for third-year bachelor’s students. Why follow a minor? - A minor represents an excellent opportunity to look beyond the confines of your own programme. - All faculties offer a broad selection of minors, in total more than 50. This means you will have a good chance that you can follow a topic that interests you for a whole semester. - If you follow an in-depth minor, you can specialise in your own field of expertise. - A well-chosen minor will boost your chances of being admitted to a master’s programme. There are several special minors, and these contribute to various programmes: - Brain and Cognition - Human Evolution - Entrepreneurship for Society - Sustainable Development Check out all minors offered by Leiden University. Joint minors offered by Leiden, Delft and Rotterdam How would you like to follow a minor at a different university? We offer a range of multidisciplinary minors together with TU Delft and Erasmus University Rotterdam: - Responsible Innovation - Safety, Security & Justice - Geo-resources for the Future - Frugal Innovation for Sustainable Global - African Dynamics
0.6155
FineWeb
``` [ "What is a minor?", "Why follow a minor?", "Types of minors" ] ```
How Do Astronauts Lift Weights in Space? As astronauts spend more and more time in space, their bodies degenerate. Gravity doesn't exist as it does on earth, and so there isn't the same amount of resistance from weights. During space flight, astronauts experience a force of gravity one-millionth as strong as we experience on earth. In such conditions, a bench press or Bowflex would be little more than a prop with which to record some amazing YouTube videos. With nothing to simulate the resistance of free weights, an astronaut could lose muscle mass and bone density. One study found that after a six-month stay in space, astronauts lost 15 percent of the mass and 25 percent of the strength in their calves. It's for that reason that NASA spent a lot of time and money creating a fancy machine complete with sensors, pistons, cables, computers, balancing devices, and lots of high-grade metal. They named it the Advanced Resistance Exercise Device, or aRED for short. Astronauts simply call it "The Beast." Vacuum cylinders allow it to mimic free weights, and different settings allow astronauts to reconfigure the machine to do any one of 29 exercises—from dead lifts to curls. They have the potential to push their limits—the max setting for bar exercises is equal to 600 pounds on earth. The aRED was set up in the Space Station in 2009, and offered double the max resistance of the previous exercise machine. For more on "The Beast," check out this recent post by astronaut Don Pettit.
0.831
FineWeb
["Astronaut Fitness in Space", "The Advanced Resistance Exercise Device", "Space Workout Challenges"]
Award-winning Drama resources designed to enthuse and engage learners Daydream Education offer a range of colourful and informative Drama resources for key topics such as writing, directing, designing, acting and theatre and are designed to work alongside the National Curriculum, covering Key Stages 2, 3 and GCSE. Our vibrant and eye-catching resources engage pupils in the study of drama, allowing them to apply their knowledge, creativity and skills. Our visual and attention grabbing drama posters have been designed to engage and enthuse pupils. The charts are a great way of improving understanding whilst also bringing the classroom to life. The posters combine clear and concise curriculum-based content with striking images and graphics to engage pupils and reinforce learning. These colourful and engaging Drama revision guides are concise and informative learning aids that will give pupils the confidence to undertake independent learning and revision. The Pocket Posters simplify key topics, breaking them down into easy to digest chunks of information to help enhance the learning process for students of all abilities. The reference books contain 30 of our award-winning Drama posters.
0.8591
FineWeb
["Drama Resources", "National Curriculum", "Drama Revision Guides"]
The Food Technology, Safety and Quality Diploma Program is designed to help students develop a hands-on approach to food safety and quality, while allowing them the opportunity to earn food safety certification. The program incorporates the specialized knowledge and skills required to implement the fundamental principles of quality assurance. Additionally, the program is designed to be equally useful for entry-level or ongoing career development in the food and beverage industry. This program is equally beneficial to those who want to complement their pre-existing skills and knowledge by upgrading. The AAPS's integrated approach affords students the foundation and knowledge to develop, implement, and maintain a food safety program using tools such as GMPs and HACCP to ensure that products comply with national and international safety and quality standards. Diploma in Food Technology, Safety and Quality Courses Include: Food safety certifications are necessary to work in many food and beverage related environments, including for food and beverage servers, people who work in food preparation and people who work in an institutional setting, as well as large-scale food manufacturing. Many employers also have a preference for candidates who can meet a high caliber of skills and standards for food safety and quality. Additionally, possession of the right qualifications can also help a person with industry experience advance their career, including training others. Students who are registered in the Food Technology, Safety and Quality Diploma Program are eligible to attend the training session "Food Handling Certificate" free of charge ($70 Value) and obtain the certificate issued by the City of Toronto. AAPS graduates from the Food Safety and Quality Program pursue careers in
0.6324
FineWeb
["Food Technology", "Food Safety", "Quality Assurance"]
“The gardens of ancient Egyptian nobility, the walled gardens of Persian settlements in Mesopotamia and the gardens of merchants in medieval Chinese cities indicate that early urban peoples went to considerable lengths to maintain contact with nature.” Robert Ulrich The oldest garden in the archaeological record (The Hanging Gardens of Babylon were 100 years earlier) is formal and Persian from 500BC. One can still see remains of the geometric plan, its white columns, can trace pavilions and the archaeologists have revealed waterwheels. We know where Cyrus the Great used to sit on his throne. They grew some bulbs, tulips and hyacinths, but mainly figs, palms, and pomegranates. The rose was cultivated by the Greeks though they did not develop gardens perhaps because like Thoreau they felt no need, surrounded by natural environment. That is until Persian invasions of 5th C BC, when gardens imitating Persian pleasure gardens were fashioned and poetry and literature from this period first mention flowers. The rose was a favourite for its ‘botanical, medicinal, cosmetic and symbolic attributes’. Epicurus, was credited with the creation of the first rose garden in Athens . . . No known description of Epicurus’s garden exists, but it can be assumed that, apart from roses, plantings included irises, lilies, violets and herbs. It lasted 450 years and which budded other gardens throughout the Greek empire, the Garden of Epicurus. A garden that emphasised praxis over theory, that is always giving you choices, that is a struggle, that is unending.
0.6667
FineWeb
```json [ "Ancient Gardens", "Persian Gardens", "Greek Gardens" ] ```
The Philistines in Canaan and Palestine The Philistines who, in the 12th century BCE and under Egyptian auspices, settled on the coast of Palestine, are counted among the Sea Peoples by most researchers. Egyptian inscriptions call them “Peleset.” Much suggests that they are of Greek origin. It is conceivable that the Philistines were in fact Mycenaeans and involved in the wars, but not on the side of the initial attackers. Current State of knowledge During the 12th century BCE, the Philistines settled on the fertile coast of Palestine. They founded five city states (Ashdod, Ashkelon, Ekron, Gath and Gaza) which then formed a confederation. At first, these city states were still under the auspices of Egypt. When Egyptian power waned at the end of the 12th century, the Philistines assumed the hegemony in the region. Palestine is named after their inhabitants as “Land of the Philistines.” The origin of the Philistines has not yet been fully clarified. The majority of researchers consider them to have been among the Sea Peoples, where they appear as “Peleset.” The Philistines could thus have come from the Aegean islands or the Greek mainland. Other researchers consider the Philistines to be a Sea People, too, but assume the west and south coasts of Asia Minor as their areas of origin. Friends of Egypt The name Palestine already appears in Luwian stone inscriptions in the North Syrian city of Aleppo during the 11th century BCE. It is therefore virtually impossible to derive the topographic term from the arrival of the Peleset. It is possible that researchers will disentangle the terms Palestine, Philistine and Peleset in the near future. Philistine ceramics are very similar to contemporary Greek pottery. And the Old Testament says that the Peleset came from Crete. This would suggest that Mycenaean Greeks were involved in the Sea Peoples’ invasions. However, the Peleset were given the right to settle in the most fertile and thus most valuable areas of Palestine that had until then been under Egyptian control. Not only did the Egyptian government let the Peleset settle, but it also gave them rights and responsibilities. It is hard to imagine that barbarian people, who had launched a hideous attack on Egypt shortly prior to that, would have been granted these benefits. Also, a Greek participation in the actual Sea Peoples’ invasions is not consonant with the generally amicable relations between the New Kingdom in Egypt and Mycenae. It is quite conceivable that the Philistines derived from the Peleset and that those in return were Mycenaean Greeks from Crete and from the Greek mainland. Since the Mycenaeans fought against the coalition of Luwian states, they were likely to be political allies of Egypt. Hence, they may have received the best settlement sites in Canaan as a reward for their vigor and victory. The Peleset thus did not belong to the coalition of Luwian petty states. The reason that they are still counted among the Sea Peoples is that their retaliations contributed massively to the destruction during the crisis years. Eißfeldt, Otto (1936): “Philister und Phönizier.” Der alte Orient 34 (3), 1-41. Finkelstein, Israel (2000): “The Philistine Settlements: When, Where and How Many?” In: The Sea Peoples and Their World: A Reassessment. Eliezer D. Oren (ed.), The University Museum, University of Philadelphia, Philadelphia, 159-180. Nibbi, Alessandra (1972): The Sea-Peoples: A Re-examination of the Egyptian Sources. Church Army Press and Supplies, Oxford, 1-73. 
The time has come when all our ideas about the so-called Sea Peoples should be set aside and the text re-examined in a fundamental way, as a whole. Alessandra Nibbi 1972, Preface
0.7773
FineWeb
``` { "topics": [ "The Philistines in Canaan and Palestine", "Origin of the Philistines", "Philistine Settlements and Relations with Egypt" ] } ```
This report discusses the design and implementation of an OFDM modem for a simplex communication between two PCs over a frequency-selective channel. First a brief introduction is provided by explaining the background and the specification of the project. Then the report deals with the system model. Each block of the OFDM system is described (IFFT/FFT, cyclic prefix, modulation/demodulation, channel estimation, bit loading). In the following section, the system architecture is analysed. The transmission protocol is explained in detail. Then, the DSP implementation is discussed. Finally, the results are provided in the last chapter.
0.5587
FineWeb
["Introduction to OFDM Modem", "System Model and Architecture", "DSP Implementation and Results"]
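The transmitter and receiver blocks listed in the OFDM abstract above (IFFT, cyclic prefix, FFT) can be sketched in a few lines of numpy over an ideal channel; the subcarrier count and prefix length below are arbitrary choices, not parameters taken from the report.

```python
# Minimal OFDM transmit/receive sketch: QPSK symbols -> IFFT -> cyclic prefix,
# then prefix removal -> FFT recovers the symbols over an ideal channel.
# Parameter values are arbitrary and are not taken from the report.
import numpy as np

rng = np.random.default_rng(0)
n_subcarriers, cp_len = 64, 16

# Random QPSK symbols, one per subcarrier.
bits = rng.integers(0, 2, size=(n_subcarriers, 2))
qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# Transmitter: IFFT to the time domain, then prepend the cyclic prefix.
time_signal = np.fft.ifft(qpsk)
tx = np.concatenate([time_signal[-cp_len:], time_signal])

# Receiver (ideal channel): drop the prefix, FFT back to the subcarriers.
rx = np.fft.fft(tx[cp_len:])

print("max reconstruction error:", np.max(np.abs(rx - qpsk)))
```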
What are the common conventions of navigation, what are uncommon conventions? We started by splitting the question in two and listing the common conventions of navigation; - Navigation bar - Links bar. - Navigation indicator. - Title or logo link. - Side bar. - Photographic links. - Gridded photographic links - Often found on portfolio websites. - Side scroll arrow. - Infinite scroll. - Drop down menu. - Search bar. Next, we listed the uncommon conventions of navigation; - Interactive link - play to navigate. - Random page generation - Stumbleupon. - Navigation bar - Placed at bottom of page. - Infinite upwards scroll. - Multi-directional single page scrolling. - Loading menus/pages. - Google street view navigation. After creating our group lists we discussed the different methods of navigation that people had generated. After covering points relating to what makes a websites navigation successful we created a 'Flow diagram' to visually communicate the paths viewers will take when accessing our websites. Finally, towards the end of the session we drew out the underlying grid structure of our five examples collected for last weeks task. For each site we had to answer the following question; How do these structures help/hinder the design of the website? - The structure is very simple, focusing the audiences attention on background image and surrounding links. - The website is functional and easy to navigate, the links placed at the top of the composition so viewer attention is attracted to them. - The page is aesthetically pleasing as there is an effective balance between imagery and links. - Again, the structure is very simple and directs viewer focus to the important date. - Similar to the first example, this website also places its links across the top of the sites composition. - The structure creates negative space at either side of the websites elements, this focuses viewer attention on the images and information displayed. - Furthermore, the structure also centrally places an image browser on the page, this improves the sites functionality as the purpose of the website is to showcase and promote some upmarket villas. - Minimal aesthetic reflects the style of villa the site is promoting. - Infinite scroll website. - Structure creates negative space that focuses the viewers attention on important page elements, this helps the audience navigate the page improving the sites functionality. - Negative space is key to ensuring the website looks balanced. - Colours have been used to help viewers distinguish between pages, this is useful as the website utilizes an infinite scroll feature. - Infinite scroll website. - Structure helps guide the viewers eye across important page elements, this improves the flow and functionality of the site. - Different fonts have been used to attract the viewers attention, this also places importance on the key parts of the text to help communicate the sites message instantly. - The minimal aesthetic focuses the viewer attention on design elements such as the logos, type and imagery.
0.9953
FineWeb
["Common Conventions of Navigation", "Uncommon Conventions of Navigation", "Website Navigation Design"]
Update: Using LEDs for Instrument Cluster Warning Lights 941046 Traditionally, automotive instrument clusters have been illuminated using incandescent bulbs. Failures of these bulbs are a source of warranty and repair costs. LED lamps can reduce these costs. In addition, LED lamps have a number of other benefits for instrument cluster designs. This paper describes the design of a hybrid instrument cluster using LED lamps for telltale warning lights and bulbs for gauge lighting. This approach provides a very cost effective design since most telltale warning lights can be illuminated with a single LED. This approach eliminates three quarters of the bulbs in a typical design. Previous SAE papers, 870211 and 900474, have also discussed the use of LED lamps for telltale lighting. These papers used very specialized lamps. This paper discusses several industry standard LED lamp packages in both through-hole and SMT package configurations. Lighting measurements have been done at instrument cluster depths of 10 to 41 mm, to cover most applications.
0.8821
FineWeb
["Instrument Cluster Design", "LED Lamp Benefits", "Automotive Lighting Solutions"]
If I am convicted of Theft or an offense considered in Texas to be a Crime of Moral Turpitude, what effects will it have on me in the future? There are several considerations involved with a theft-related conviction for a Crime of Moral Turpitude. First, there is a potential Loss of Occupational Licensing rights and employment. This means many jobs may not hire you based on this type of conviction. Second, there is the likely Impeachment of Testimony if you ever need to testify in ANY court proceeding. This means as a defendant or witness in a criminal case, a civil case (if you sue over a car accident), or family court for divorce or custody hearings. Conviction of a theft-related crime of moral turpitude can potentially (and usually will) be used against a witness for up to 10 years after the conviction, or after release from confinement based on that crime, in any proceeding to discredit their testimony. For instance, if you sustain a theft-related conviction that is considered a Crime of Moral Turpitude -- even a misdemeanor shoplifting theft -- that prior conviction can be used against you in any case, even unrelated and non-criminal, to discredit your testimony by showing that you are a dishonest person. This can be based solely on the conviction itself, without explaining facts or circumstances. For more information, see Rule 609, Impeachment By Evidence Of Conviction Of Crime. Consequences for Immigration. Federal statutes use the term "Crimes Involving Moral Turpitude" to indicate crimes which can lead a non-citizen resident alien to become deportable. If a non-citizen resident alien is charged with an offense that may be considered a crime of moral turpitude, it is very likely that deportation proceedings will follow the criminal case.
0.5865
FineWeb
["Occupational Licensing and Employment", "Impeachment of Testimony", "Consequences for Immigration"]
From time to time, cats will sneeze and it is pretty normal. Just like humans, sneezing helps to clear out their nasal passage if there is something that gets in the way. It is often just a response to some sort of irritant found in the nasal passage. Cats sneeze a lot like humans do, by releasing air at a really high pressure through the nose and mouth. It may be startling, but it is normal. The problem is, excessive sneezing is not normal and should be looked into right away. If your cat starts sneezing and it just won’t go away, or if they have any other symptoms that go along with the sneezing, then a trip to the vet may be in order. They will be able to run tests and look for underlying causes and determine what type of treatment, if any, is necessary. Table of Contents Causes of Sneezing and Excessive Sneezing in Cats There are so many different causes of sneezing in cats, so it is often times difficult to tell exactly why your cat is sneezing. A veterinarian will be able to take a look at their overall health and symptoms and will then determine what tests are necessary to diagnose properly. First, they may start by taking a swab from their nose, mouth or eyes and send it to the lab to rule out infection. Infection is actually one of the most common causes of sneezing in cats. Other common causes include allergens and irritants. Most of the time, if your cat is sneezing it is due to an upper respiratory infection. This is much like the common cold to us humans. It is much more common in kittens or young cats that have been in a shelter environment. Getting kittens vaccinated early can prevent many of these infections from even coming on in the first place. Viral infections are also commonly associated with sneezing. For example, the feline herpes virus can cause sneezing, and stress typically makes the flare ups worse. Feline calicivirus is another virus that affects the respiratory tract, and it can not only cause sneezing, but it can also lead to pneumonia and other more serious respiratory issues if you don’t get it under control quickly. Other types of infections that cause sneezing may include: - feline immunodeficiency virus, or FIV - feline infectious peritonitis - feline leukemia Infection is not the only cause of sneezing in cats. Allergens and irritants can also be culprits. If your cat doesn’t sneeze too often, then infection may not be the cause. Try to take note of your cat’s sneezing and see if you notice any patterns. Say, for instance, your cat sneezes when they are leaving the litter box. This may be due to the dust that comes up from the litter. Sometimes, it is a simple fix, like buying a dust free litter, but other times it may not be as easy to spot the irritants. Here are a few common allergens and irritants that may make your cat sneeze: - perfumes and fragrances - pest control sprays - certain types of cat litter - cigarette smoke - cleaning products Sure, there are others, but these are definitely the most common. Typically, if your cat is suffering from allergies, they will also have a rash or itchy skin that goes along with it. Other Symptoms that May Come Along with Sneezing If your cat is sneezing, chances are that is not their only symptom. If it is just a simple sneeze, you probably don’t have anything to worry about. 
If you notice any of these other symptoms, it may mean infection or another serious condition: - discharge from the eyes - swelling around eyes - excessive discharge - discolored (green or yellow) discharge - decreased appetite - weight loss - poor coat - difficulty breathing or swallowing - vomiting or diarrhea How to Know When to Go to the Vet If your cat just sneezes occasionally, you don’t have much to be concerned about. Just remember, if it starts to get worse, you need to call your vet. If your cat only has mild symptoms, it may be okay to just keep a close watch on him for a couple of days. Be sure to keep him inside so that you can note any changes in his health or behavior. If your cat has been sneezing off and on for a couple of days, it may be time to get them in for a check-up. Other more serious concerns that require prompt medical attention may include: - continuous sneezing over a period of a few days - sneezing blood Your vet will be able to determine the cause of the sneezing and start your cat off on a course of treatment so that they will be back to themselves in no time! Treatment Options for Excessive Sneezing in Cats Treating sneezing in cats isn’t always easy. It really depends on the cause of the sneezing. You can’t just give a cat one simple treatment option to deal with the sneezing, because that doesn’t help the root cause of the sneeze. Sneezing is just a symptom, it is not an actual illness. Your vet will offer a treatment option that will help to correct the root cause of the sneezing, in turn getting rid of the sneezing. They may recommend getting a humidifier for your cat to help them be more comfortable. Treatment options may include antibiotics or decongestants. As you can see, sneezing and excessive sneezing in cats are two different things. If you think that your cat is sneezing more often than he should, then it may be time to give the vet a call and see what they think. Don’t let an underlying condition go unnoticed just because you overlooked some sneezing! It may seem simple, but the root cause could be severe. Get it checked out just in case. They say some people are ‘dog people’ and others are ‘cat people’. I’m a cat person! I got my first cat when I was in the 2nd grade. I had to beg my mom to let me keep him. He was an orange tabby, and I have been partial to them ever since! We currently have three cats. Being a cat person, I am always trying to learn more about why cats do the things they do. Cats are such loving animals, but they can be so fickle. I guess I can kind of relate to their behavior, and that is probably what attracts me to them.
0.7014
FineWeb
["Causes of Sneezing in Cats", "Symptoms of Excessive Sneezing in Cats", "Treatment Options for Excessive Sneezing in Cats"]
Benefits Of Green Roofs The installation of a green roof (or wall) can prove beneficial in a range of ways, both ecologically and financially. Ecological benefits of green roofs There are many ecological benefits when creating a green roof. The benefits are usually increased with greater substrate depths - the benefits associated with installing shallower, extensive green roofs are on the whole far more modest than those offered by intensive ones due to the smaller range of vegetation that can be grown, the reduction of water retention and many other factors. - Lower carbon footprint due to reduced heating and air conditioning demand. This is achieved by adding mass and thermal resistance value. - As green roofs and walls reflect less solar radiation and absorb less heat than regular roofs and walls, they have the effect of reducing the urban heat island effect. Urban heat island effect decreases air quality and increases the production of pollutants such as ozone. It all also decreases water quality as warmer waters flow into area streams and put tress on their ecosystems. - Creation of habitats for animals and insects. Green roofs cool and humidify the surrounding air, creating a beneficial microclimate in urban areas. Planted roofs and walls can compensate for ‘green’ areas lost in building development, often making a big difference in planning permission approval, especially in green zones. - Absorption of carbon dioxide and pollutants. The vegetation in green roofs bind dust and toxic particles helping to filter out smog. Nitrates and other harmful materials are absorbed by the plants and within the substrate filtering out these pollutants and heavy metals. This also helps improve local air quality, which can benefit both humans and animals. - In the case of intensive green roofs it may also be possible to grow food or crops. - Water management. Depending on the green roof design, the immediate water run-off can be reduced considerably, by up to 90%. This has been proven to greatly reduce stress on drainage systems and in turn localised flooding. This can help your rainwater management system, greatly reducing construction costs. - Noise protection. Plants and trees provide natural sound insulation; they can reduce reflective sound by up to 8 dB. They have been proven very effective in noisy areas. - Green roofs can help ensure that new developments are designed to adapt to climate change. - The ecological and environmental benefits outlined above are usually increased with greater substrate depths - the benefits associated with installing shallower, extensive green roofs are on the whole far more modest than those offered by intensive ones due to the smaller range of vegetation that can be grown. Another important benefit offered by the intensive variety of green roof is the possibility of using the roof space (in the case of flat roofs with safe roof access) for leisure activities, such as gardening. Financial benefits of green roofs There are several practical, business-scenario benefits to green roofs. Among other things new rebates and tax deductions crop up that could help ease the financial burden. Green roofs can help community and state costs, a study completed by researchers at the University of Michigan found that greening 10% of Chicago roofs would result in public health benefits of between $29.2 and $111 million, due to cleaner air. 
Whilst this doesn’t help in individual project costs, the economic benefits of green roofs to our government will most likely only become more apparent as further studies are taken into consideration. The financial benefits to the individual project are as follows: - Potentially reduced energy bills for both heating and cooling, as green roofs and walls insulate buildings from sunlight and cold air (though the exact amount saved is unknown.) - Potential increase in property value. Due to the nature of a green roof and their aesthetic qualities. - It has been suggested that the lifespan of the building’s roof and walls can be increased (in some cases, doubled). This is due to the protection from temperature differentials and the resulting expansion and contraction that normally shortens a roof’s lifespan. They also provide protection from severe weather damage. - The presence of green roofs and walls in business premises will also present a positive image to customers and boost a company’s green credentials. - It has also been suggested that in temperate climates green roofs can reduce the risk of fire (though this view has been contested.)
0.9355
FineWeb
```json [ "Ecological benefits of green roofs", "Financial benefits of green roofs", "Practical benefits of green roofs" ] ```
Economic Behavior in the Face of Resource Variability and Uncertainty McAllister, RRJ and Tisdell, JG and Reeson, AF and Gordon, IJ, Economic Behavior in the Face of Resource Variability and Uncertainty, Ecology and Society, 16, (3) Article 6. ISSN 1708-3087 (2011) [Refereed Article] Policy design is largely informed by the traditional economic viewpoint that humans behave rationally in the pursuit of their own economic welfare, with little consideration of other-regarding behavior or reciprocal altruism. New paradigms of economic behavior theory are emerging that build an empirical basis for understanding how humans respond to specific contexts. Our interest is in the role of human relationships in managing natural resources (forage and livestock) in semiarid systems, where spatial and temporal variability and uncertainty in resource availability are fundamental system drivers. In this paper we present the results of an economic experiment designed to explore how reciprocity interacts with variability and uncertainty. This behavior underpins the Australian tradable grazing rights, or agistment, market, which facilitates livestock mobility as a human response to a situation where rainfall is so variable in time and space that it is difficult to maintain an economically viable livestock herd on a single management unit. Contrary to expectations, we found that variability and uncertainty significantly increased transfers and gains from trade within our experiment. When participants faced no variability and uncertainty, trust and reciprocity took time to build. When variability and uncertainty were part of the experiment, trust was evident from the onset. Given that resource variability and uncertainty are key drivers in semiarid systems, new paradigms for understanding how variability shapes behavior have special importance.
0.996
FineWeb
["Economic Behavior", "Resource Variability", "Uncertainty"]
The present study was designed to investigate whether the presence of asymptomatic malaria parasitaemia in pregnant women would compromise their ability to respond to a full dose of tetanus toxoid immunization during their antenatal clinic visits. Hence, 90 apparently healthy pregnant women who had completed the tetanus toxoid immunization during the current pregnancy were recruited at the antenatal clinic and were divided into two groups based on the antenatal record of malaria parasitaemia during the immunization period. Sixty (66.7%) of the pregnant women were seroreactive for Plasmodium falciparum histidine-rich protein-2 (HRP-2) while 30 (33.3%) were seronegative for Plasmodium falciparum HRP-2. The malaria parasite density range for the seroreactive group was between 322 and 1045 parasites per ml of blood. The blood concentration of tetanus toxoid antibody in the seroreactive and seronegative HRP-2 groups of pregnant women did not show any significant difference (p>0.2). This result showed that the presence of asymptomatic Plasmodium falciparum malaria parasitaemia in the pregnant women during the immunization schedule did not compromise their ability to respond to tetanus toxoid immunization. Hence asymptomatic malaria may not contribute to the prevalence of neonatal tetanus in Nigeria; however, there is a need to treat these pregnant women for asymptomatic malaria when detected in order to reduce the burden of malaria on them.
0.7654
FineWeb
```json [ "Malaria Parasiteamia in Pregnant Women", "Tetanus Toxoid Immunization", "Asymptomatic Malaria Impact on Immune Response" ] ```
Pumpkin seed oil is rich in zeaxanthin, which protects the retina and slows the progress of macular degeneration. In 2003, the Medical Research Council Environmental Epidemiology Unit at the University of Southampton in England announced that zeaxanthin “may be far more important in preventing or stabilizing macular degeneration than previously realized.” Learn more about zeaxanthin. Benign Prostatic Hyperplasia: When pumpkin seed is taken along with saw palmetto, symptoms of benign prostatic hyperplasia (BPH) can be reduced. Scientists have noted that the benefit may arise from some of the contents of pumpkin seed, such as plant sterols, zinc, and fatty acids. Learn more about benign prostatic hyperplasia.
0.7654
FineWeb
["Macular Degeneration", "Benign Prostatic Hyperplasia", "Pumpkin Seed Oil Benefits"]
In his argument, Ian Leslie makes his argument about how we decide to use technology and not about technology itself. Leslie says that, “machines are for answers; humans are for questions”. By this Leslie means that without humans asking the questions then there is no need for computers. If humans stop asking questions then there is no need for answers. Leslie focuses his argument around the need for humans and technology to work together to improve each other’s potential. The two are rather dependent on each other. One cannot successfully operate without the other. Leslie ponders this inquiry question to try and persuade his audience that technology is not the problem and is not making humans stupid, but rather we are the ones making our own selves stupid. Leslie describes the “information gap” as being “when you know just enough to know that you don’t know everything, you experience the itch to know more”. If someone believes they know everything then there is no need to ask any more questions. Leslie believes in keeping the “information gap” open. If the gap was closed then there would be no questions asked and all curiosity would be lost. Like Leslie’s argument of a codependent relationship between humans and technology, the “information gap” must be open in order for this relationship to work. For example, in order for new technology to be discovered questions must be asked about what can be made better or what still needs to be invented. From personal experience, whenever I have a question that needs to be answered and I don’t want to have to do extensive research, I go to the Internet for my answer. If the “information gap” were closed then I would never get any answers in a fast and efficient way. In comparison to Nicholas Carr’s article, both articles agree on the fact that humans are becoming “stupid”. However, they differ on the reasons why humans are becoming “stupid”. Carr believes that humans are relying more on computers and technology to do everything for us. However, Leslie believes that there should be a relationship between the computer and humans. Leslie insists that computers help spark and answer questions brought up by humans. Without humans and their questions, there is no need for answers or computers. On the one hand, Carr believes humans are relying too heavily on computers, while on the other hand, Leslie believes there needs to be a stable relationship between humans and computers. In my opinion, I believe that Leslie’s argument is more persuasive than Carr’s. Leslie imposes a need of a relationship between computers, whereas Carr insists we are relying too heavily on computers and technology. I believe that humans would be more likely to better a relationship with computers than to turn back to the olden ways of conducting research by reading through numerous books. Leslie’s argument is more likely to resonate with humans today than Carr’s argument.
0.8909
FineWeb
```json [ "The Relationship Between Humans and Technology", "The Importance of the Information Gap", "Perspectives on Human Intelligence and Technology Use" ] ```
Writing an instructable is like telling a story, so the process of drawing up the instructions follows the same path as any other DIY making project, only when you start to draw it up you realise that you have to resolve things that don't quite work. This way the finished instructable can be a bit better. I started drawing up my instructions because I make them into downloadable project sheets for my website dadcando, and the extra bandwidth needed to serve up high-quality photos was too much. As it happens, drawing a picture of each step is a lot more work, but then the end result is very nice. To do it this way you will need: - some drawing skill (but not much really) - a vector drawing package (I'll explain more later) - probably a digital camera or phone camera - a decent idea for an instructable
0.8799
FineWeb
```json [ "Creating an Instructable", "DIY Project Planning", "Technical Requirements" ] ```
Keywords: lock, lock up, key, ignition Found 1 variant for this sign (click on video to enlarge): As a Noun [Often the movement is repeated] the part of a door, a drawer, or a suitcase which you use to keep it shut and to prevent other people from opening it. To open it, you must first put a key in it and turn it. English = lock. [Often the movement is repeated] a specially shaped piece of metal which you place in a lock and turn in order to open or lock a door, a drawer, or a suitcase; a specially shaped piece of metal that you turn, for example to wind up a clock. English = key. [Often the movement is repeated] the keyhole in the dashboard of a car in which you insert a key in order to start the engine. English = ignition. As a Verb or Adjective To put a key into a lock and turn it so that it cannot be opened. English = lock. To make sure all the doors and windows of a building are properly closed and locked so that burglars cannot get in. English = lock up. To insert a key into the ignition of a car and turn it so that the engine starts.
0.9745
FineWeb
``` { "topics": [ "Locks", "Keys", "Ignition" ] } ```
Researchers in the field of psychology have found that one of the best ways to make an important decision, such as choosing a university to attend or a business to invest in, involves the utilization of a decision worksheet. Psychologists who study optimization compare the actual decisions made by people to theoretical ideal decisions to see how similar they are. Proponents of the worksheet procedure believe that it will yield optimal, that is, the best decisions. Although there are several variations on the exact format that worksheets can take, they are all similar in their essential aspects. Worksheets require defining the problem in a clear and concise way and then listing all possible solutions to the problem. Next, the pertinent considerations that will be affected by each decision are listed, and the relative importance of each consideration or consequence is determined. Each consideration is assigned a numerical value to reflect its relative importance. A decision is mathematically calculated by adding these values together. The alternative with the highest number of points emerges as the best decision. Since most important problems are multifaceted, there are several alternatives to choose from, each with unique advantages and disadvantages. One of the benefits of a pencil and paper decision-making procedure is that it permits people to deal with more variables than their minds can generally comprehend and remember. On the average, people can keep about seven ideas in their minds at once. A worksheet can be especially useful when the decision involves a large number of variables with complex relationships. A realistic example for many college students is the question "What will I do after graduation?" A graduate might seek a position that offers specialized training, pursue an advanced degree, or travel abroad for a year. A decision-making worksheet begins with a succinct statement of the problem that will also help to narrow it. It is important to be clear about the distinction between long-range and immediate goals because long-range goals often involve a different decision than short-range ones. Focusing on long-range goals, a graduating student might revise the question above to "What will I do after graduation that will lead to a successful career?" What does the passage mainly discuss? A. A method to assist in making complex decisions. B. A comparison of actual decisions and ideal decisions. C. Research on how people make decisions. D. Differences between long-range and short-range decision making.
0.7937
FineWeb
```json [ "Decision Making", "Problem Solving", "Optimization" ] ```
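The worksheet procedure in the decision-making passage above boils down to a weighted sum: rate each alternative on each consideration, multiply by the consideration's importance, and add. A toy sketch with invented alternatives, considerations and numbers, just to make the arithmetic concrete:

```python
# Toy decision worksheet: score each alternative by summing weighted ratings,
# the add-up-the-values procedure described in the passage above.
# Alternatives, considerations and all numbers are invented for illustration.
weights = {"career growth": 5, "cost": 3, "enjoyment": 4}

alternatives = {
    "specialized training": {"career growth": 4, "cost": 3, "enjoyment": 2},
    "advanced degree":      {"career growth": 5, "cost": 1, "enjoyment": 3},
    "travel abroad":        {"career growth": 2, "cost": 2, "enjoyment": 5},
}

def total_score(ratings):
    # Multiply each rating by the importance weight of its consideration, then sum.
    return sum(weights[c] * r for c, r in ratings.items())

# The alternative with the highest total emerges as the "best" decision.
for name, ratings in sorted(alternatives.items(), key=lambda kv: -total_score(kv[1])):
    print(f"{name:22s} {total_score(ratings)}")
```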
Teacher Performance Appraisal for Experienced Teacher – APS003 Reviewed/Revised: September 2015 The Waterloo Catholic District School Board believes the primary aim of a staff appraisal process is to ensure the professional growth of each of its teachers through recognition of professional achievement and positive contributions to the system. To assist experienced teachers in the successful achievement of their goals the Board is committed to the use of the Performance Appraisal of Experienced Teachers (2007) issued by the Ministry of Education. Evaluation within the Board is based on the following assumptions: - Educators within the system are competent. - Educators want to increase their professional effectiveness. - Educators wish to be involved in a co-operative evaluation process as part of a professional learning community. The performance appraisal system for an experienced teacher applies to members of teachers’ bargaining units as defined in Part X.1 of the Education Act and temporary teachers. It is not applicable to teachers new to the profession as of 2006, occasional teachers, continuing education teachers, supervisory officers, administrators, or instructors in teacher-training institutions.
0.7686
FineWeb
["Teacher Performance Appraisal", "Evaluation Process", "Professional Growth"]
SUBSTANTIVE CHANGE REPORTING POLICY Longwood University recognizes the importance of compliance with the Substantive Change for Accredited Institutions of the Commission on Colleges policy statement of the Southern Association of Colleges and Schools (SACSCOC, 2011), which requires the university to report all substantive changes accurately and in a timely manner to the Commission on Colleges (COC). This policy exists specifically to establish, clarify and communicate the requirement that all university changes deemed to be "substantive" must be approved by the President and Board of Visitors, with subsequent notification to and/or approval by the COC for the university’s regional accrediting body, the Southern Association of Colleges and Schools (SACS). SACS accredits the university and its programs and services, wherever they are located or however they are delivered. The SACSCOC is recognized by the United States Department of Education as an agency whose accreditation enables its member institutions to seek eligibility to participate in federally funded programs. SACS requires accredited institutions to follow the substantive change procedures of the COC. In order to retain accreditation, the university is required to comply with SACS and COC procedures concerning substantive changes. While the purpose of this policy is to document the approval and transmittal process to SACS, new, revised or discontinued degrees and establishment of distance learning sites may also require reporting and prior approval from the State Council of Higher Education for Virginia (SCHEV). The requirements of both agencies must be met; compliance with one does not constitute compliance with the other. This policy is primarily designed to address academic programs and curricular issues; although other defined substantive changes are also covered. Branch Campus: A location of an institution that is geographically apart and independent of the main campus of the institution. A location is independent of the main campus if - The location is permanent in nature. - The location offers courses in educational programs leading to a degree, certificate, or other recognized educational credential. - The location has its own faculty and administrative or supervisory organization and has its own budgetary and hiring authority. Source: SACSCOC. - Degree Completion Program: A program typically designed for a non-traditional undergraduate population such as working adults who have completed some college-level course work but have not achieved a baccalaureate degree. Students in such programs may transfer in credit from courses taken previously and may receive credit for experiential learning. Courses in degree completion programs are often offered in an accelerated format or meet during evening and weekend hours, or may be offered via distance learning technologies. Source: SACSCOC. - Distance Education: A formal educational process in which the majority of the instruction (interaction between students and instructors and among students) in a course occurs when students and instructors are not in the same place. Instruction may be synchronous or asynchronous. A distance education course may use the internet; one-way and two-way transmissions through open broadcast, closed circuit, cable, microwave, broadband lines, fiber optics, satellite, or wireless communications devices; audio conferencing; or video cassettes, DVD’s, and CD-ROMs if used as part of the distance learning course or program. Source: SACSCOC. 
- Dual Degree: Separate program completion credentials each of which bears only the name, seal, and signature of the institution awarding the degree to the student. Source: SACSCOC. - Educational Program: A coherent course of study leading to the awarding of a credential (i.e., a degree, diploma or certificate). Source: SACSCOC. - Geographically Separate: An instructional site or branch campus that is located physically apart from the main campus of the institution. Source: SACSCOC. - Joint Degree: A single program completion credential bearing the names, seals, and signatures of each of the two or more institutions awarding the degree to the student. Source: SACSCOC. - Level: SACSCOC’s taxonomy categorizes institutions by the highest degree offered. Longwood University is designated as a Level III institution because it offers the master’s degree as the highest degree. - Merger/Consolidation: SACSCOC defines a consolidation as the combination or transfer of the assets of at least two distinct institutions (corporations) to that of a newly-formed institution (corporation), and defines a merger as the acquisition by one institution of another institution's assets. For the purposes of accreditation, consolidations and mergers are considered substantive changes requiring review by the Commission on Colleges. (Examples include: a senior college acquiring a junior college, a degree-granting institution acquiring a non-degree-granting institution, two junior or senior colleges consolidating to form a new institution, or an institution accredited by the Commission on Colleges merging with a non-accredited institution). Source: SACSCOC. - Notification: A letter from an institution’s chief executive officer or his/her designated representative to the SACSCOC president to summarize a proposed change, provide the intended implementation date, and list the complete physical address, if the change involves the initiation of an off-campus site or branch campus. Source: SACSCOC. - Procedure One: SACSCOC procedure associated with a substantive change that requires SACSCOC notification and approval prior to implementation. Changes under Procedure One require notification, a prospectus or application, and may involve an on-site visit. Source: SACSCOC. - Procedure Two: SACSCOC procedure associated with a substantive change that requires SACSCOC notification prior to implementation. Source: SACSCOC. - Procedure Three: SACSCOC procedure associated with approval of a consolidation/merger. Source: SACSCOC Significant Departure: A program that is not closely related to previously approved programs at the institution or site or for the mode of delivery in question. To determine whether a new program is a significant departure, it is helpful to consider the following questions: - What previously approved programs does the institution offer that are closely related to the new program and how are they related? - Will significant additional equipment or facilities be needed? - Will significant additional financial resources be needed? - Will a significant number of new courses be required? - Will a significant number of new faculty members be required? - Will significant additional library/learning resources be needed? - Will the CIP code change? Source: SACSCOC, SCHEV. Substantive Change: A significant modification or expansion of the nature and scope of an accredited institution. According to SACS, a substantive change includes: - Any change in the established mission or objectives of the institution. 
- Any change in legal status, form of control, or ownership of the institution. - The addition of courses or programs that represent a significant departure, either in content or method of delivery, from those that were offered when the institution was last evaluated. - The addition of courses or programs of study at a degree or credential level different from that which is included in the institution’s current accreditation or reaffirmation. - A change from clock hours to credit hours. - A substantial increase in the number of clock or credit hours awarded for successful completion of a program. - The establishment of an additional location geographically apart from the main campus at which the institution offers at least 50 percent of an educational program. - The establishment of a branch campus. - Closing a program, off-campus site, branch campus or institution. - Entering into a collaborative academic arrangement such as a dual degree program or a joint degree program with another institution. - Acquiring another institution or a program or location of another institution. - Adding a permanent location at a site where the institution is conducting a teach-out program for a closed institution. - Entering into a contract by which an entity not eligible for Title IV funding offers 25% or more of one or more of the accredited institution’s programs - Teach-Out: The process by which the university provides instructional and academic support services to students enrolled at a site that has been closed and/or in a program that has been discontinued. The teach-out process often extends well beyond the closing of a site or program to allow time for enrolled students to complete their programs in a reasonable amount of time. - Teach-Out Agreement: A written agreement between institutions that provides for the equitable treatment of students and a reasonable opportunity for students to complete their program of study if an institution, or an institutional location that provides 50 percent or more of at least one program offered, ceases to operate before all enrolled students have completed their program of study. Such a teach-out agreement requires SACSCOC approval in advance of implementation. Source: SACSCOC. This policy applies to all university officers who can initiate, review, approve and allocate resources to any changes, including those to academic and non-academic programs and activities that may be considered a substantive change according to SACSCOC Policy for Substantive Changes for Accredited Institutions. Within academic areas, such changes can originate with individuals or groups of faculty members, Department committees, Department Chairs, Deans and Associate Deans, the Vice President for Academic Affairs, Faculty Senate, or any other area reporting to the Vice President for Academic Affairs. In those areas outside of Academic Affairs, potential substantive changes may arise in individual units, among supervisors in each area, executive management teams within Vice Presidential areas, or with the Vice Presidents or Cabinet. Further, the need for a potential substantive change may come to the attention of the President or those in his/her direct reporting line. Each individual hereby designated is required to be familiar and comply with this policy. As the University pursues structural and programmatic changes, all of those changes deemed to be "substantive" changes require approval by the President, Board of Visitors and the SACSCOC. 
The University will follow the substantive change procedures of SACSCOC, and inform the SACSCOC of such changes and proposed changes in accord with those procedures. Regardless of the origination point, all substantive changes must be tracked and reported under this policy. - SACS Accreditation Liaison: The Vice President for Academic Affairs serves as the SACS Accreditation Liaison. In the years between accreditation reviews, the liaison is responsible for ensuring the timely submission of annual institutional profiles and other reports as requested by the Commission. The liaison is responsible for the accuracy of all information submitted to the Commission and for ensuring ongoing compliance with Commission standards, policies, and procedures beyond reaffirmation. During the Reaffirmation Cycle, the liaison serves on the SACSCOC Reaffirmation Leadership Team and oversees all staffing aspects of the Reaffirmation process. The liaison is responsible for internal and external monitoring of substantive change progress, and responsible for reporting final change status. - Vice Presidents: The Vice Presidents are responsible for ensuring that their respective areas bring forward any potential substantive changes under this policy. - President: The President, with the SACS Accreditation Liaison, is responsible for the accuracy of all information submitted to the COC and for ensuring ongoing compliance with COC standards, policies, and procedures beyond reaffirmation. The President is also responsible for oversight and final reporting of substantive changes to SACSCOC. - Sanctions: If Longwood University fails to follow SACSCOC procedures for notification and approval of substantive changes, its total accreditation may be placed in jeopardy. For that reason, the sanction for failure to follow this university policy must be sufficient to avoid such failure. If an academic program, unit or officer initiates a substantive change without following the procedures outlined in this policy, the President or Vice President for Academic Affairs may direct the immediate cancellation or cessation of that change, with due regard for the educational welfare of students, when it is discovered. In areas outside of Academic Affairs, the same sanction may be applied by the President or relevant Vice President. Reviewed and Approved by Cabinet, September 12, 2012. Approved by Faculty Senate, October 4, 2012. Approved by the Board of Visitors, December 7, 2012.
0.5855
FineWeb
```json [ "Substantive Change Policy", "Accreditation and Compliance", "University Procedures and Responsibilities" ] ```
Mariner 10 was the first mission to employ gravity assist, the technique of using the gravity of a planet to alter a spacecraft's speed and trajectory on the way to its target planet; it flew by both Venus and Mercury, snapping photos and collecting data. Mariner 10 was also the first spacecraft to visit Mercury, a feat that wouldn't be repeated for more than 30 years, and its data revealed a surprising magnetic field and a metallic core comprising about 80 percent of Mercury's mass. Its instruments included: - Imaging system - Electrostatic analyzer - Electron spectrometer - Triaxial fluxgate magnetometer - Extreme ultraviolet spectrometer - Infrared radiometer
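The gravity-assist idea described above can be illustrated with a short vector calculation: in the planet's frame of reference the flyby only rotates the spacecraft's velocity, but adding the planet's own orbital velocity back in changes the spacecraft's speed relative to the Sun. The sketch below is a simplified, coplanar illustration with made-up numbers; it is not a reconstruction of Mariner 10's actual trajectory, and the function name and values are our own assumptions.

```python
import numpy as np

def gravity_assist(v_sc, v_planet, turn_angle_deg):
    """Planar gravity-assist sketch (illustrative only, not mission data).

    v_sc, v_planet: heliocentric velocities (km/s) as 2-D vectors.
    turn_angle_deg: angle by which the flyby rotates the spacecraft's
    planet-relative velocity (set by flyby altitude and planet mass).
    Returns the outgoing heliocentric velocity.
    """
    v_rel_in = v_sc - v_planet                      # velocity relative to the planet
    theta = np.radians(turn_angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    v_rel_out = rot @ v_rel_in                      # same speed, new direction
    return v_planet + v_rel_out                     # back to the Sun-centered frame

# Made-up illustrative numbers, not Mariner 10 values.
v_sc = np.array([32.0, 5.0])        # spacecraft heliocentric velocity, km/s
v_planet = np.array([35.0, 0.0])    # planet's heliocentric velocity, km/s
v_out = gravity_assist(v_sc, v_planet, 40.0)
print("speed before: %.2f km/s, speed after: %.2f km/s"
      % (np.linalg.norm(v_sc), np.linalg.norm(v_out)))
```

Because the planet-relative speed is unchanged while its direction rotates, the heliocentric speed can either rise or fall depending on the flyby geometry; Mariner 10 used its Venus flyby to drop inward toward Mercury.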
0.9979
FineWeb
```json [ "Mariner 10 Mission", "Spacecraft Instruments", "Planetary Exploration" ] ```
This study, as the initial part of a broader project, aimed to collect background data about pesticide use in the West Bank. In line with international norms, results have shown that pesticide usage is greater in areas of intensive and high-value crop cultivation. More pesticides are used for crops grown under plastic than for those grown in open irrigation systems. The survey reveals widespread problems in both usage and disposal of pesticides. Fourteen of the pesticides used in the West Bank are either suspended, cancelled or banned by the World Health Organization. Most of the labels continue to be in Hebrew, a language that most of the farmers don't read. There is little, if any, extension help available to farmers. Storage and disposal of pesticides seem to be less than adequate, as is understanding of the dangers of pesticide use. Most of the farmers interviewed expressed the belief that they were developing immunity to pesticide toxicity through usage. Encouraging signs are that the farmers interviewed were very interested in learning more about pest control. A relatively high number of farmers in Palestine said that they read and followed advice given in agricultural publications. This suggests that training, using documentation in combination with onsite demonstrations, will be possible. Furthermore, 55% of farmers interviewed recognized that there are beneficial organisms in the soil, though only just over half of this number recognized that pesticides were harmful to these organisms. This understanding of the importance of maintaining ecological balance represents a significant basis for IPM training. Clearly, improving farmers' understanding of the ecological system with which they are working is vital. This includes an improved understanding of the importance of soil organisms and of pest-predator relationships, as well as an understanding of the concept of economic threshold. Farmers, residents in areas near farms and consumers all need to be more aware of toxicity levels. Education is the key to coming to terms with the problems of pesticide usage in the West Bank.
0.8373
FineWeb
```json [ "Pesticide Use in the West Bank", "Pesticide Management and Safety", "Integrated Pest Management (IPM) Training" ] ```
This paper discusses tetrahedra with rational edges forming a geometric progression, focussing on whether they can have rational volume or rational face areas. We examine the 30 possible configurations of such tetrahedra and show that no face of any of these has rational area. We show that 28 of these configurations cannot have rational volume, and in the remaining two cases there are at most six possible examples, and none have been found. Journal of Number Theory Vol. 128, Issue 2, p. 251-262
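For readers who want to experiment with these questions computationally, the rationality conditions reduce to exact arithmetic on the squared edge lengths: a face has rational area when Heron's expression 16A^2 is the square of a rational, and the tetrahedron has rational volume when the Cayley-Menger determinant divided by 288 is the square of a rational. The sketch below is only an illustration under our own assumptions: it checks a single labelling of the six edges a, ar, ..., ar^5 rather than all 30 configurations treated in the paper, the helper names and the ratio r = 21/20 are invented, and it is not the authors' code.

```python
from fractions import Fraction
from math import isqrt

def is_rational_square(x: Fraction) -> bool:
    """True if x is the square of a rational number."""
    if x < 0:
        return False
    n, d = x.numerator, x.denominator
    return isqrt(n) ** 2 == n and isqrt(d) ** 2 == d

def det(m):
    """Exact determinant by Laplace expansion (fine for a 5x5 matrix)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def face_16A2(a2, b2, c2):
    """16 * (triangle area)^2 from squared side lengths (Heron's formula)."""
    return 2 * (a2 * b2 + b2 * c2 + c2 * a2) - a2 ** 2 - b2 ** 2 - c2 ** 2

# One labelling of the edges a*r^k (k = 0..5) onto vertex pairs of tetrahedron 0123.
a, r = Fraction(1), Fraction(21, 20)
lengths = [a * r ** k for k in range(6)]
pairs = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
d2 = [[Fraction(0)] * 4 for _ in range(4)]
for (i, j), length in zip(pairs, lengths):
    d2[i][j] = d2[j][i] = length ** 2

for i, j, k in [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]:
    q = face_16A2(d2[i][j], d2[i][k], d2[j][k]) / 16
    print("face", (i, j, k), "has rational area:", is_rational_square(q))

# Cayley-Menger determinant equals 288 * V^2 for a genuine tetrahedron.
cm = det([[0, 1, 1, 1, 1]] + [[1] + d2[i] for i in range(4)])
if cm <= 0:
    print("this labelling/ratio does not form a non-degenerate tetrahedron")
else:
    print("V^2 =", float(cm / 288), "| volume rational:", is_rational_square(cm / 288))
```

Looping such a check over edge orderings and a grid of rational ratios is a quick way to see, informally, why the search described in the paper turns up no rational-volume examples; it is of course not a proof.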
0.9481
FineWeb
```json [ "Tetrahedra with Rational Edges", "Rational Volume", "Geometric Progression" ] ```
Question: What do I do if online incident reporting is not right for me? Answer: If your incident is an emergency, call 911. If it is a non-emergency, call 972-744-4801. Question: What if this happened in another city? Can I file a report using this online police citizen reporting system? Answer: No. If a crime took place outside of the City Limits of Richardson, please call the police department for that city. Question: What is physical evidence? Answer: Physical evidence would include anything that was left behind by the suspect(s) who committed the crime. In addition, it would include any CLEAN, DRY, SMOOTH surface that the suspect(s) are believed to have touched. Question: What is a known suspect? Answer: A known suspect means that you or someone else knows the identity of the suspect(s) or where to find the suspect(s) who committed the crime. This would include the license plate number of the vehicle the suspect(s) were in.
0.5661
FineWeb
["Filing a Report", "Evidence and Suspects", "Emergency Contacts"]
A paper on human rights and humanitarian interventions International interventions in and massive human rights violations that lead to humanitarian and human rights emergencies the humanitarian, human rights,. The human rights consequences of peacekeeping interventions in council of human rights policy (ichrp), working paper, human rights: the humanitarian. Just war and humanitarian terrorism and violation of human rights should be noted that the few humanitarian interventions that have been successful,. The legality of humanitarian intervention the issue is ‘hot’ because the concept of human rights is on the it also examines ‘humanitarian interventions. Legal aspects of humanitarian intervention for humanitarian purposes and can human rights be protected by the since humanitarian interventions cause and. The protection of human rights in humanitarian crises a joint background paper by ohchr and unhcr iasc principals, 8 may 2013 a objective. Humanitarian intervention essay example free essay on humanitarian intervention “forcible self-help by states to protect human rights,” iowa law review 53, 24. Intervening in syria and the humanitarian case: what does the of foreign military interventions on human-rights grounds in humanitarian interventions. Is that forcible action to stop serious human rights of humanitarian intervention, this paper tries to of humanitarian interventions that. Hypothesis: that despite the incidents where humanitarian interventions humanitarian intervention and the violation of human rights and humanitarian. Intervention: when, why and how post-cold war era by the human rights discourse and modern threats to aims of humanitarian interventions is to.Human rights data & technology the practice of humanitarian intervention of those instances of internationalized rule that we call ‘interventions’. Burundi and the future of humanitarian abuses of those rights this paper concludes that a of humanitarian intervention human rights. Drawing upon examples of post-cold war interventions, discuss the way in which humanitarian interventions have (or have not) contributed to the spread of human rights. Artigo humanitarian interventions: a critical approach fernando josé ludwig 1 abstract this paper aims to confront the manifold aspects of “humanitarian” intervention along with the conceptualization of national sovereignty. Humanitarian interventions, human rights are considered the prime responsibility of this example humanitarian intervention essay is published for educational. Humanitarian intervention essay writing service, custom humanitarian intervention human rights makes internal human rights law universal and humanitarian. The aim of this paper is to trace significant influence on the practise of humanitarian interventions throughout human rights being promoted at the. Humanitarian interventions in the service of international organizations human rights humanitarian intervention my paper offers one view in. - Fighting for the moral cause state motivations for humanitarian while some humanitarian interventions the effect of human rights ngos on humanitarian. - Given the realities of complex emergencies the separation of human rights and humanitarian action has become an obstacle to responding more adequately to this type of crisis: the issue is no longer whether there should be a human rights-based approach to relief, but rather how to give effect to it. 
The history of humanitarian intervention in theory and humanitarian interventions in ‘anti-slavery courts and the dawn of international human rights. Politics of int law - humanitarian intervention and human rights - essay example. Humanitarian intervention and pretexts for war (humanitarian interventions have of internationally recognized human rights sean d murphy, humanitarian inter. Free humanitarian intervention sovereign states committing human rights abuse to - humanitarian interventions have been an argument in the.
0.9802
FineWeb
```json [ "Human Rights", "Humanitarian Intervention", "International Law" ] ```
100 ways to make $1000+ An Easy Way - make quick money without spending any money - How to Make Regular Income from Digitalpoint Forums? - $30 + An Hour The Easy Way. - Making $20+/Daily from Yahoo Answers with only 15 minutes work. - EARN $100 PER DAY ONLINE - Promotional Methods Make $100 per day ONLINE. Download 6 E-Books & make unlimited ways to make $$$
0.8673
FineWeb
["Making Money Online", "Quick Income Methods", "Online Income Strategies"]
Family history can influence the chances of having twins to some extent, but on that basis alone we cannot predict whether you will definitely have twins or not. As you are under treatment for polycystic disease, if you want to conceive, stop taking contraceptive pills and take ovulation induction drugs on your doctor's advice. If you take the ovulation induction drugs in the early part of the cycle, meaning from the 2nd to the 7th day of your period, there is a chance that more follicles will mature, which can increase the possibility of twins.
0.8326
FineWeb
["Having Twins", "Poly Cystic Disease Treatment", "Conception and Ovulation Induction"]
On this page: Who is Štepán Hulík? Štepán Hulík is a Czech film historian and screenwriter. - born on (30 years ago) in Uherské Hradiště - nationality: Czech Republic - profession: Screenwriter - film written: Burning Bush Online dictionaries and encyclopedias with entries for Štepán Hulík
0.7041
FineWeb
``` { "topics": [ "Štepán Hulík", "Film History", "Screenwriting" ] } ```
Taking research methodology seriously involves thinking about how we design research projects, how and why we use particular methods, and wider issues about the nature of research. The topics pages on this site seek to address these different areas by the exploring the conceptual, political and ethical dimensions of research on religion, key issues in research design and a range of methods and approaches that researchers studying religion might use. Each topic page has a downloadable discussion paper or structured exercise which introduces key issues. The topic pages also provide additional resources, including sample studies, bibliographies, or links to other relevant websites. They have been designed to be a useful reference point for individual researchers as well as providing materials for courses in research methodology and the study of religion.
0.9988
FineWeb
```json [ "Research Design", "Research Methods", "Ethical Dimensions of Research" ] ```
Acute Otitis Externa Medications Definition of Acute Otitis Externa: The outer ear and ear canal are inflamed, infected or irritated, a condition commonly known as swimmer's ear. Drugs associated with Acute Otitis Externa The following drugs and medications are in some way related to, or used in the treatment of, Acute Otitis Externa. This service should be used as a supplement to, and NOT a substitute for, the expertise, skill, knowledge and judgment of healthcare practitioners. Synonym(s): Ear Infection, acute outer; Otitis Externa, acute; Swimmer's Ear, acute
0.9998
FineWeb
["Definition of Acute Otitis Externa", "Drugs associated with Acute Otitis Externa", "Synonym(s) of Acute Otitis Externa"]
Acute lung injury (ALI) is a severe disease characterized by alveolar neutrophilia, with limited treatment options and high mortality. Experimental models of ALI are key in enhancing our understanding of disease pathogenesis. Lipopolysaccharide (LPS) derived from gram-negative bacteria induces neutrophilic inflammation in the airways and lung parenchyma of mice. Efficient pulmonary delivery of compounds such as LPS is, however, difficult to achieve. In the approach described here, pulmonary delivery in mice is achieved by challenge with aerosolized Pseudomonas aeruginosa LPS. Dissolved LPS was aerosolized by a nebulizer connected to compressed air. Mice were exposed to a continuous flow of LPS aerosol in a Plexiglas box for 10 min, followed by 2 min conditioning after the aerosol was discontinued. Tracheal intubation and subsequent bronchoalveolar lavage, followed by formalin perfusion, were next performed, which allows for characterization of the sterile pulmonary inflammation. Aerosolized LPS generates a pulmonary inflammation characterized by alveolar neutrophilia, detected in bronchoalveolar lavage and by histological assessment. This technique can be set up at a small cost with few appliances, and requires minimal training and expertise. The exposure can thus be routinely performed at any laboratory, with the potential to enhance our understanding of lung pathology. 18 Related JoVE Articles! Implantation of Fibrin Gel on Mouse Lung to Study Lung-specific Angiogenesis Institutions: Boston Children's Hospital and Harvard Medical School. Recent significant advances in stem cell research and bioengineering techniques have made great progress in utilizing biomaterials to regenerate and repair damage in simple tissues in the orthopedic and periodontal fields. However, attempts to regenerate the structures and functions of more complex three-dimensional (3D) organs such as lungs have not been very successful because the biological processes of organ regeneration have not been well explored. It is becoming clear that angiogenesis, the formation of new blood vessels, plays key roles in organ regeneration. Newly formed vasculatures not only deliver oxygen, nutrients and various cell components that are required for organ regeneration but also provide instructive signals to the regenerating local tissues. Therefore, to successfully regenerate lungs in an adult, it is necessary to recapitulate the lung-specific microenvironments in which angiogenesis drives regeneration of local lung tissues. Although conventional in vivo angiogenesis assays, such as subcutaneous implantation of extracellular matrix (ECM)-rich hydrogels (e.g., fibrin or collagen gels or Matrigel - ECM protein mixture secreted by Engelbreth-Holm-Swarm mouse sarcoma cells), are extensively utilized to explore the general mechanisms of angiogenesis, lung-specific angiogenesis has not been well characterized because methods for orthotopic implantation of biomaterials in the lung have not been well established. The goal of this protocol is to introduce a unique method to implant fibrin gel on the lung surface of a living adult mouse, allowing for the successful recapitulation of host lung-derived angiogenesis inside the gel.
Since implanted biomaterials release and supply physical and chemical signals to adjacent lung tissues, implantation of these biomaterials on diseased lung can potentially normalize the adjacent diseased tissues, enabling researchers to develop new therapeutic approaches for various types of lung diseases. Basic Protocol, Issue 94, lung, angiogenesis, regeneration, fibrin, gel implantation, microenvironment Automated Measurement of Pulmonary Emphysema and Small Airway Remodeling in Cigarette Smoke-exposed Mice Institutions: Brigham and Women's Hospital - Harvard Medical School, University of Cambridge - Addenbrooke's Hospital, Brigham and Women's Hospital - Harvard Medical School, Lovelace Respiratory Research Institute. COPD is projected to be the third most common cause of mortality world-wide by 2020(1) . Animal models of COPD are used to identify molecules that contribute to the disease process and to test the efficacy of novel therapies for COPD. Researchers use a number of models of COPD employing different species including rodents, guinea-pigs, rabbits, and dogs(2) . However, the most widely-used model is that in which mice are exposed to cigarette smoke. Mice are an especially useful species in which to model COPD because their genome can readily be manipulated to generate animals that are either deficient in, or over-express individual proteins. Studies of gene-targeted mice that have been exposed to cigarette smoke have provided valuable information about the contributions of individual molecules to different lung pathologies in COPD(3-5) . Most studies have focused on pathways involved in emphysema development which contributes to the airflow obstruction that is characteristic of COPD. However, small airway fibrosis also contributes significantly to airflow obstruction in human COPD patients(6) , but much less is known about the pathogenesis of this lesion in smoke-exposed animals. To address this knowledge gap, this protocol quantifies both emphysema development and small airway fibrosis in smoke-exposed mice. This protocol exposes mice to CS using a whole-body exposure technique, then measures respiratory mechanics in the mice, inflates the lungs of mice to a standard pressure, and fixes the lungs in formalin. The researcher then stains the lung sections with either Gill’s stain to measure the mean alveolar chord length (as a readout of emphysema severity) or Masson’s trichrome stain to measure deposition of extracellular matrix (ECM) proteins around small airways (as a readout of small airway fibrosis). Studies of the effects of molecular pathways on both of these lung pathologies will lead to a better understanding of the pathogenesis of COPD. Medicine, Issue 95, COPD, mice, small airway remodeling, emphysema, pulmonary function test Using Continuous Data Tracking Technology to Study Exercise Adherence in Pulmonary Rehabilitation Institutions: Concordia University, Concordia University, Hôpital du Sacré-Coeur de Montréal. Pulmonary rehabilitation (PR) is an important component in the management of respiratory diseases. The effectiveness of PR is dependent upon adherence to exercise training recommendations. The study of exercise adherence is thus a key step towards the optimization of PR programs. To date, mostly indirect measures, such as rates of participation, completion, and attendance, have been used to determine adherence to PR. 
The purpose of the present protocol is to describe how continuous data tracking technology can be used to measure adherence to a prescribed aerobic training intensity on a second-by-second basis. In our investigations, adherence has been defined as the percent time spent within a specified target heart rate range. As such, using a combination of hardware and software, heart rate is measured, tracked, and recorded during cycling second-by-second for each participant, for each exercise session. Using statistical software, the data is subsequently extracted and analyzed. The same protocol can be applied to determine adherence to other measures of exercise intensity, such as time spent at a specified wattage, level, or speed on the cycle ergometer. Furthermore, the hardware and software is also available to measure adherence to other modes of training, such as the treadmill, elliptical, stepper, and arm ergometer. The present protocol, therefore, has a vast applicability to directly measure adherence to aerobic exercise. Medicine, Issue 81, Data tracking, exercise, rehabilitation, adherence, patient compliance, health behavior, user-computer interface. Isolation of Mouse Respiratory Epithelial Cells and Exposure to Experimental Cigarette Smoke at Air Liquid Interface Institutions: Harvard Medical School, University of Pittsburgh. Pulmonary epithelial cells can be isolated from the respiratory tract of mice and cultured at air-liquid interface (ALI) as a model of differentiated respiratory epithelium. A protocol is described for isolating and exposing these cells to mainstream cigarette smoke (CS), in order to study epithelial cell responses to CS exposure. The protocol consists of three parts: the isolation of airway epithelial cells from mouse trachea, the culturing of these cells at air-liquid interface (ALI) as fully differentiated epithelial cells, and the delivery of calibrated mainstream CS to these cells in culture. The ALI culture system allows the culture of respiratory epithelia under conditions that more closely resemble their physiological setting than ordinary liquid culture systems. The study of molecular and lung cellular responses to CS exposure is a critical component of understanding the impact of environmental air pollution on human health. Research findings in this area may ultimately contribute towards understanding the etiology of chronic obstructive pulmonary disease (COPD), and other tobacco-related diseases, which represent major global health problems. Medicine, Issue 48, Air-Liquid Interface, Cell isolation, Cigarette smoke, Epithelial cells Mouse Pneumonectomy Model of Compensatory Lung Growth Institutions: Cincinnati Children's Hospital Medical Center. In humans, disrupted repair and remodeling of injured lung contributes to a host of acute and chronic lung disorders which may ultimately lead to disability or death. Injury-based animal models of lung repair and regeneration are limited by injury-specific responses making it difficult to differentiate changes related to the injury response and injury resolution from changes related to lung repair and lung regeneration. However, use of animal models to identify these repair and regeneration signaling pathways is critical to the development of new therapies aimed at improving pulmonary function following lung injury. The mouse pneumonectomy model utilizes compensatory lung growth to isolate those repair and regeneration signals in order to more clearly define mechanisms of alveolar re-septation. 
Here, we describe our technique for performing mouse pneumonectomy and sham pneumonectomy. This technique may be utilized in conjunction with lineage tracing or other transgenic mouse models to define molecular and cellular mechanism of lung repair and regeneration. Medicine, Issue 94, Pneumonectomy, Compensatory Lung Growth, Lung Injury, Lung Repair, Mouse Surgery, Alveolarization Methods to Evaluate Cytotoxicity and Immunosuppression of Combustible Tobacco Product Preparations Institutions: Wake Forest University Health Sciences, R.J. Reynolds Tobacco Company. Among other pathophysiological changes, chronic exposure to cigarette smoke causes inflammation and immune suppression, which have been linked to increased susceptibility of smokers to microbial infections and tumor incidence. Ex vivo suppression of receptor-mediated immune responses in human peripheral blood mononuclear cells (PBMCs) treated with smoke constituents is an attractive approach to study mechanisms and evaluate the likely long-term effects of exposure to tobacco products. Here, we optimized methods to perform ex vivo assays using PBMCs stimulated by bacterial lipopolysaccharide, a Toll-like receptor-4 ligand. The effects of whole smoke-conditioned medium (WS-CM), a combustible tobacco product preparation (TPP), and nicotine were investigated on cytokine secretion and target cell killing by PBMCs in the ex vivo assays. We show that secreted cytokines IFN-γ, TNF, IL-10, IL-6, and IL-8 and intracellular cytokines IFN-γ, TNF-α, and MIP-1α were suppressed in WS-CM-exposed PBMCs. The cytolytic function of effector PBMCs, as determined by a K562 target cell killing assay was also reduced by exposure to WS-CM; nicotine was minimally effective in these assays. In summary, we present a set of improved assays to evaluate the effects of TPPs in ex vivo assays, and these methods could be readily adapted for testing other products of interest. Immunology, Issue 95, Tobacco product preparation, whole smoke-conditioned medium, human peripheral blood mononuclear cells, PBMC, lipopolysaccharide, cell death, secreted cytokines, intracellular cytokines, K562 killing assay. Robotic Ablation of Atrial Fibrillation Institutions: Charité — Universitätsmedizin Berlin, Campus Virchow, University Hospital Zurich. Background: Pulmonary vein isolation (PVI) is an established treatment for atrial fibrillation (AF). During PVI an electrical conduction block between pulmonary vein (PV) and left atrium (LA) is created. This conduction block prevents AF, which is triggered by irregular electric activity originating from the PV. However, transmural atrial lesions are required which can be challenging. Re-conduction and AF recurrence occur in 20 - 40% of the cases. Robotic catheter systems aim to improve catheter steerability. Here, a procedure with a new remote catheter system (RCS), is presented. Objective of this article is to show feasibility of robotic AF ablation with a novel system. Materials and Methods: After interatrial trans-septal puncture is performed using a long sheath and needle under fluoroscopic guidance. The needle is removed and a guide wire is placed in the left superior PV. Then an ablation catheter is positioned in the LA, using the sheath and wire as guide to the LA. LA angiography is performed over the sheath. A circular mapping catheter is positioned via the long sheath into the LA and a three-dimensional (3-D) anatomical reconstruction of the LA is performed. 
The handle of the ablation catheter is positioned in the robotic arm of the Amigo system and the ablation procedure begins. During the ablation procedure, the operator manipulates the ablation catheter via the robotic arm with the use of a remote control. The ablation is performed by creating point-by-point lesions around the left and right PV ostia. Contact force is measured at the catheter tip to provide feedback of catheter-tissue contact. Conduction block is confirmed by recording the PV potentials on the circular mapping catheter and by pacing maneuvers. The operator stays out of the radiationfield during ablation. Conclusion: The novel catheter system allows ablation with high stability on low operator fluoroscopy exposure. Medicine, Issue 99, Atrial fibrillation, catheter ablation, robotic ablation, remote navigation, fluoroscopy, radiation exposure, cardiac arrhythmia Community-based Adapted Tango Dancing for Individuals with Parkinson's Disease and Older Adults Institutions: Emory University School of Medicine, Brigham and Woman‘s Hospital and Massachusetts General Hospital. Adapted tango dancing improves mobility and balance in older adults and additional populations with balance impairments. It is composed of very simple step elements. Adapted tango involves movement initiation and cessation, multi-directional perturbations, varied speeds and rhythms. Focus on foot placement, whole body coordination, and attention to partner, path of movement, and aesthetics likely underlie adapted tango’s demonstrated efficacy for improving mobility and balance. In this paper, we describe the methodology to disseminate the adapted tango teaching methods to dance instructor trainees and to implement the adapted tango by the trainees in the community for older adults and individuals with Parkinson’s Disease (PD). Efficacy in improving mobility (measured with the Timed Up and Go, Tandem stance, Berg Balance Scale, Gait Speed and 30 sec chair stand), safety and fidelity of the program is maximized through targeted instructor and volunteer training and a structured detailed syllabus outlining class practices and progression. Behavior, Issue 94, Dance, tango, balance, pedagogy, dissemination, exercise, older adults, Parkinson's Disease, mobility impairments, falls Measuring Respiratory Function in Mice Using Unrestrained Whole-body Plethysmography Institutions: Monash Institute of Medical Research, Monash Medical Centre, Animal Resource Centre, Perth, Australia, Wake Forest Institute for Regenerative Medicine. Respiratory dysfunction is one of the leading causes of morbidity and mortality in the world and the rates of mortality continue to rise. Quantitative assessment of lung function in rodent models is an important tool in the development of future therapies. Commonly used techniques for assessing respiratory function including invasive plethysmography and forced oscillation. While these techniques provide valuable information, data collection can be fraught with artefacts and experimental variability due to the need for anesthesia and/or invasive instrumentation of the animal. In contrast, unrestrained whole-body plethysmography (UWBP) offers a precise, non-invasive, quantitative way by which to analyze respiratory parameters. This technique avoids the use of anesthesia and restraints, which is common to traditional plethysmography techniques. This video will demonstrate the UWBP procedure including the equipment set up, calibration and lung function recording. 
It will explain how to analyze the collected data, as well as identify experimental outliers and artefacts that results from animal movement. The respiratory parameters obtained using this technique include tidal volume, minute volume, inspiratory duty cycle, inspiratory flow rate and the ratio of inspiration time to expiration time. UWBP does not rely on specialized skills and is inexpensive to perform. A key feature of UWBP, and most appealing to potential users, is the ability to perform repeated measures of lung function on the same animal. Physiology, Issue 90, Unrestrained Whole Body Plethysmography, Lung function, Respiratory Disease, Rodents The Bovine Lung in Biomedical Research: Visually Guided Bronchoscopy, Intrabronchial Inoculation and In Vivo Sampling Techniques There is an ongoing search for alternative animal models in research of respiratory medicine. Depending on the goal of the research, large animals as models of pulmonary disease often resemble the situation of the human lung much better than mice do. Working with large animals also offers the opportunity to sample the same animal repeatedly over a certain course of time, which allows long-term studies without sacrificing the animals. The aim was to establish in vivo sampling methods for the use in a bovine model of a respiratory Chlamydia psittaci infection. Sampling should be performed at various time points in each animal during the study, and the samples should be suitable to study the host response, as well as the pathogen under experimental conditions. Bronchoscopy is a valuable diagnostic tool in human and veterinary medicine. It is a safe and minimally invasive procedure. This article describes the intrabronchial inoculation of calves as well as sampling methods for the lower respiratory tract. Videoendoscopic, intrabronchial inoculation leads to very consistent clinical and pathological findings in all inoculated animals and is, therefore, well-suited for use in models of infectious lung disease. The sampling methods described are bronchoalveolar lavage, bronchial brushing and transbronchial lung biopsy. All of these are valuable diagnostic tools in human medicine and could be adapted for experimental purposes to calves aged 6-8 weeks. The samples obtained were suitable for both pathogen detection and characterization of the severity of lung inflammation in the host. Medicine, Issue 89, translational medicine, respiratory models, bovine lung, bronchoscopy, transbronchial lung biopsy, bronchoalveolar lavage, bronchial brushing, cytology brush Assessment of Motor Balance and Coordination in Mice using the Balance Beam Institutions: California Institute of Technology. Brain injury, genetic manipulations, and pharmacological treatments can result in alterations of motor skills in mice. Fine motor coordination and balance can be assessed by the beam walking assay. The goal of this test is for the mouse to stay upright and walk across an elevated narrow beam to a safe platform. This test takes place over 3 consecutive days: 2 days of training and 1 day of testing. Performance on the beam is quantified by measuring the time it takes for the mouse to traverse the beam and the number of paw slips that occur in the process. Here we report the protocol used in our laboratory, and representative results from a cohort of C57BL/6 mice. This task is particularly useful for detecting subtle deficits in motor skills and balance that may not be detected by other motor tests, such as the Rotarod. 
Neuroscience, Issue 49, motor skills, coordination, balance beam test, mouse behavior Diagnostic Ultrasound Imaging of Mouse Diaphragm Function Institutions: The Ohio State University College of Medicine, Oakland University. Function analysis of rodent respiratory skeletal muscles, particularly the diaphragm, is commonly performed by isolating muscle strips using invasive surgical procedures. Although this is an effective method of assessing in vitro diaphragm activity, it involves non-survival surgery. The application of non-invasive ultrasound imaging as an in vivo procedure is beneficial since it not only reduces the number of animals sacrificed, but is also suitable for monitoring disease progression in live mice. Thus, our ultrasound imaging method may likely assist in the development of novel therapies that alleviate muscle injury induced by various respiratory diseases. Particularly, in clinical diagnoses of obstructive lung diseases, ultrasound imaging has the potential to be used in conjunction with other standard tests to detect the early onset of diaphragm muscle fatigue. In the current protocol, we describe how to accurately evaluate diaphragm contractility in a mouse model using a diagnostic ultrasound imaging technique. Medicine, Issue 86, ultrasound, imaging, non-invasive, diaphragm, muscle function, mouse, diagnostic Utility of Dissociated Intrinsic Hand Muscle Atrophy in the Diagnosis of Amyotrophic Lateral Sclerosis Institutions: Westmead Hospital, University of Sydney, Australia. The split hand phenomenon refers to predominant wasting of thenar muscles and is an early and specific feature of amyotrophic lateral sclerosis (ALS). A novel split hand index (SI) was developed to quantify the split hand phenomenon, and its diagnostic utility was assessed in ALS patients. The split hand index was derived by dividing the product of the compound muscle action potential (CMAP) amplitude recorded over the abductor pollicis brevis and first dorsal interosseous muscles by the CMAP amplitude recorded over the abductor digiti minimi muscle. In order to assess the diagnostic utility of the split hand index, ALS patients were prospectively assessed and their results were compared to neuromuscular disorder patients. The split hand index was significantly reduced in ALS when compared to neuromuscular disorder patients (P<0.0001). Limb-onset ALS patients exhibited the greatest reduction in the split hand index, and a value of 5.2 or less reliably differentiated ALS from other neuromuscular disorders. Consequently, the split hand index appears to be a novel diagnostic biomarker for ALS, perhaps facilitating an earlier diagnosis. Medicine, Issue 85, Amyotrophic Lateral Sclerosis (ALS), dissociated muscle atrophy, hypothenar muscles, motor neuron disease, split-hand index, thenar muscles In vitro Coculture Assay to Assess Pathogen Induced Neutrophil Trans-epithelial Migration Institutions: Harvard Medical School, MGH for Children, Massachusetts General Hospital. Mucosal surfaces serve as protective barriers against pathogenic organisms. Innate immune responses are activated upon sensing pathogen leading to the infiltration of tissues with migrating inflammatory cells, primarily neutrophils. This process has the potential to be destructive to tissues if excessive or held in an unresolved state. Cocultured in vitro models can be utilized to study the unique molecular mechanisms involved in pathogen induced neutrophil trans-epithelial migration. 
This type of model provides versatility in experimental design with opportunity for controlled manipulation of the pathogen, epithelial barrier, or neutrophil. Pathogenic infection of the apical surface of polarized epithelial monolayers grown on permeable transwell filters instigates physiologically relevant basolateral to apical trans-epithelial migration of neutrophils applied to the basolateral surface. The in vitro model described herein demonstrates the multiple steps necessary for demonstrating neutrophil migration across a polarized lung epithelial monolayer that has been infected with pathogenic P. aeruginosa (PAO1). Seeding and culturing of permeable transwells with human derived lung epithelial cells is described, along with isolation of neutrophils from whole human blood and culturing of PAO1 and nonpathogenic K12 E. coli (MC1000). The emigrational process and quantitative analysis of successfully migrated neutrophils that have been mobilized in response to pathogenic infection is shown with representative data, including positive and negative controls. This in vitro model system can be manipulated and applied to other mucosal surfaces. Inflammatory responses that involve excessive neutrophil infiltration can be destructive to host tissues and can occur in the absence of pathogenic infections. A better understanding of the molecular mechanisms that promote neutrophil trans-epithelial migration through experimental manipulation of the in vitro coculture assay system described herein has significant potential to identify novel therapeutic targets for a range of mucosal infectious as well as inflammatory diseases. Infection, Issue 83, Cellular Biology, Epithelium, Neutrophils, Pseudomonas aeruginosa, Respiratory Tract Diseases, Neutrophils, epithelial barriers, pathogens, transmigration Measuring Frailty in HIV-infected Individuals. Identification of Frail Patients is the First Step to Amelioration and Reversal of Frailty Institutions: University of Arizona, University of Arizona. A simple, validated protocol consisting of a battery of tests is available to identify elderly patients with frailty syndrome. This syndrome of decreased reserve and resistance to stressors increases in incidence with increasing age. In the elderly, frailty may pursue a step-wise loss of function from non-frail to pre-frail to frail. We studied frailty in HIV-infected patients and found that ~20% are frail using the Fried phenotype using stringent criteria developed for the elderly1,2 . In HIV infection the syndrome occurs at a younger age. HIV patients were checked for 1) unintentional weight loss; 2) slowness as determined by walking speed; 3) weakness as measured by a grip dynamometer; 4) exhaustion by responses to a depression scale; and 5) low physical activity was determined by assessing kilocalories expended in a week's time. Pre-frailty was present with any two of five criteria and frailty was present if any three of the five criteria were abnormal. The tests take approximately 10-15 min to complete and they can be performed by medical assistants during routine clinic visits. Test results are scored by referring to standard tables. Understanding which of the five components contribute to frailty in an individual patient can allow the clinician to address relevant underlying problems, many of which are not evident in routine HIV clinic visits. 
Medicine, Issue 77, Infection, Virology, Infectious Diseases, Anatomy, Physiology, Molecular Biology, Biomedical Engineering, Retroviridae Infections, Body Weight Changes, Diagnostic Techniques and Procedures, Physical Examination, Muscle Strength, Behavior, Virus Diseases, Pathological Conditions, Signs and Symptoms, Diagnosis, Musculoskeletal and Neural Physiological Phenomena, HIV, HIV-1, AIDS, Frailty, Depression, Weight Loss, Weakness, Slowness, Exhaustion, Aging, clinical techniques Right Ventricular Systolic Pressure Measurements in Combination with Harvest of Lung and Immune Tissue Samples in Mice Institutions: New York University School of Medicine, Tuxedo, Vanderbilt University Medical Center, New York University School of Medicine. The function of the right heart is to pump blood through the lungs, thus linking right heart physiology and pulmonary vascular physiology. Inflammation is a common modifier of heart and lung function, by elaborating cellular infiltration, production of cytokines and growth factors, and by initiating remodeling processes 1 Compared to the left ventricle, the right ventricle is a low-pressure pump that operates in a relatively narrow zone of pressure changes. Increased pulmonary artery pressures are associated with increased pressure in the lung vascular bed and pulmonary hypertension 2 . Pulmonary hypertension is often associated with inflammatory lung diseases, for example chronic obstructive pulmonary disease, or autoimmune diseases 3 . Because pulmonary hypertension confers a bad prognosis for quality of life and life expectancy, much research is directed towards understanding the mechanisms that might be targets for pharmaceutical intervention 4 . The main challenge for the development of effective management tools for pulmonary hypertension remains the complexity of the simultaneous understanding of molecular and cellular changes in the right heart, the lungs and the immune system. Here, we present a procedural workflow for the rapid and precise measurement of pressure changes in the right heart of mice and the simultaneous harvest of samples from heart, lungs and immune tissues. The method is based on the direct catheterization of the right ventricle via the jugular vein in close-chested mice, first developed in the late 1990s as surrogate measure of pressures in the pulmonary artery5-13 . The organized team-approach facilitates a very rapid right heart catheterization technique. This makes it possible to perform the measurements in mice that spontaneously breathe room air. The organization of the work-flow in distinct work-areas reduces time delay and opens the possibility to simultaneously perform physiology experiments and harvest immune, heart and lung tissues. The procedural workflow outlined here can be adapted for a wide variety of laboratory settings and study designs, from small, targeted experiments, to large drug screening assays. The simultaneous acquisition of cardiac physiology data that can be expanded to include echocardiography5,14-17 and harvest of heart, lung and immune tissues reduces the number of animals needed to obtain data that move the scientific knowledge basis forward. The procedural workflow presented here also provides an ideal basis for gaining knowledge of the networks that link immune, lung and heart function. The same principles outlined here can be adapted to study other or additional organs as needed. 
Immunology, Issue 71, Medicine, Anatomy, Physiology, Cardiology, Surgery, Cardiovascular Abnormalities, Inflammation, Respiration Disorders, Immune System Diseases, Cardiac physiology, mouse, pulmonary hypertension, right heart function, lung immune response, lung inflammation, lung remodeling, catheterization, mice, tissue, animal model Experimental Manipulation of Body Size to Estimate Morphological Scaling Relationships in Drosophila Institutions: University of Houston, Michigan State University. The scaling of body parts is a central feature of animal morphology1-7 . Within species, morphological traits need to be correctly proportioned to the body for the organism to function; larger individuals typically have larger body parts and smaller individuals generally have smaller body parts, such that overall body shape is maintained across a range of adult body sizes. The requirement for correct proportions means that individuals within species usually exhibit low variation in relative trait size. In contrast, relative trait size can vary dramatically among species and is a primary mechanism by which morphological diversity is produced. Over a century of comparative work has established these intra- and interspecific patterns3,4 Perhaps the most widely used approach to describe this variation is to calculate the scaling relationship between the size of two morphological traits using the allometric equation y=bxα, where x and y are the size of the two traits, such as organ and body size8,9 . This equation describes the within-group (e.g., species, population) scaling relationship between two traits as both vary in size. Log-transformation of this equation produces a simple linear equation, log(y) = log(b) + αlog(x) and log-log plots of the size of different traits among individuals of the same species typically reveal linear scaling with an intercept of log(b) and a slope of α, called the 'allometric coefficient'9,10 . Morphological variation among groups is described by differences in scaling relationship intercepts or slopes for a given trait pair. Consequently, variation in the parameters of the allometric equation (b and α) elegantly describes the shape variation captured in the relationship between organ and body size within and among biological groups (see 11,12 Not all traits scale linearly with each other or with body size (e.g., 13,14 ) Hence, morphological scaling relationships are most informative when the data are taken from the full range of trait sizes. Here we describe how simple experimental manipulation of diet can be used to produce the full range of body size in insects. This permits an estimation of the full scaling relationship for any given pair of traits, allowing a complete description of how shape covaries with size and a robust comparison of scaling relationship parameters among biological groups. Although we focus on Drosophila , our methodology should be applicable to nearly any fully metamorphic insect. Developmental Biology, Issue 56, Drosophila, allometry, morphology, body size, scaling, insect Increasing Pulmonary Artery Pulsatile Flow Improves Hypoxic Pulmonary Hypertension in Piglets Institutions: Laval University, Institut National de la Recherche Agronomique, Sorbonne Paris Cité, Physiologie clinique Explorations Fonctionnelles, INSERM U 965, Centre Hospitalier Universitaire Tours. Pulmonary arterial hypertension (PAH) is a disease affecting distal pulmonary arteries (PA). These arteries are deformed, leading to right ventricular failure. 
Current treatments are limited. Physiologically, pulsatile blood flow is detrimental to the vasculature. In response to sustained pulsatile stress, vessels release nitric oxide (NO) to induce vasodilation for self-protection. Based on this observation, this study developed a protocol to assess whether an artificial pulmonary pulsatile blood flow could induce an NO-dependent decrease in pulmonary artery pressure. One group of piglets was exposed to chronic hypoxia for 3 weeks and compared to a control group of piglets. Once a week, the piglets underwent echocardiography to assess PAH severity. At the end of hypoxia exposure, the piglets were subjected to a pulsatile protocol using a pulsatile catheter. After being anesthetized and prepared for surgery, the jugular vein of the piglet was isolated and the catheter was introduced through the right atrium, the right ventricle and the pulmonary artery, under radioscopic control. Pulmonary artery pressure (PAP) was measured before (T0), immediately after (T1) and 30 min after (T2) the pulsatile protocol. It was demonstrated that this pulsatile protocol is a safe and efficient method of inducing a significant reduction in mean PAP via an NO-dependent mechanism. These data open up new avenues for the clinical management of PAH. Medicine, Issue 99, Piglets, pulmonary arterial hypertension, right heart catheterization, pulmonary artery pressure, vascular pulsatility, vasodilation, nitric oxide
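As a side note on the ALS abstract above: the split hand index is simply a ratio of three CMAP amplitudes, so it is straightforward to script. The snippet below is a hedged illustration of that calculation using the 5.2 cutoff quoted in the abstract; the function name and the example amplitudes are invented for demonstration, are not data from the study, and the result is of course not a diagnosis on its own.

```python
def split_hand_index(cmap_apb, cmap_fdi, cmap_adm):
    """Split hand index (SI): the product of the CMAP amplitudes recorded over
    abductor pollicis brevis (APB) and first dorsal interosseous (FDI),
    divided by the CMAP amplitude over abductor digiti minimi (ADM).
    Amplitudes are typically reported in mV; ADM must be positive."""
    if cmap_adm <= 0:
        raise ValueError("ADM CMAP amplitude must be positive")
    return (cmap_apb * cmap_fdi) / cmap_adm

# Invented example amplitudes (mV), not values from the study.
si = split_hand_index(cmap_apb=3.1, cmap_fdi=6.2, cmap_adm=9.8)
print(f"SI = {si:.2f}; at or below the 5.2 cutoff reported above: {si <= 5.2}")
```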
0.8171
FineWeb
``` { "topics": [ "Acute Lung Injury", "Pulmonary Rehabilitation", "Respiratory Diseases" ] } ```
Study site and fossil collection The Greater Yellowstone Ecosystem (GYE) is often considered one of the last intact, temperate ecosystems in the world. This ecosystem contains all native mammals and few exotics, and is thought to be functioning in a relatively natural state . The GYE is located in northwestern Wyoming, and contains portions of southern Montana and eastern Idaho (center of park: 44° 36' 53.25"N Latitude, 110° 30' 03.93" W Longitude). The core of the GYE is Yellowstone National Park (YNP), which was established as the world's first national park in 1872. The preservation of this park means that we are able to extend current ecological conditions to the recent past. The A. tigrinum fossils used in this analysis were excavated from Lamar Cave, a paleontological site in YNP. The details of the excavation and stratigraphy are described elsewhere . Isotopic analysis has shown the sampling radius of the cave to be within 8 km (with 95% confidence) . Within this radius there are at least 19 fishless, modern ponds of generally similar permanence that are potential habitat for A. tigrinum. The A. tigrinum samples are most likely from predation in these ponds and surrounding lands. The current study analyzes fossils obtained from 15 of the 16 stratigraphic levels from the excavation (level 11 did not contain any Ambystoma specimens). For the analyses the levels were pooled into five intervals, labeled A-E (youngest to oldest). This aggregation was based on 95% confidence limits around the radiocarbon dating of the intervals . Easily identified A. tigrinum fossils include femora, humeri, vertebrae, and various skull bones. We used the fossil vertebrae because of their abundance (N = 2850) and because they record metamorphic state. All vertebrae were identified, but for the purposes of this study only the first cervical and sacral vertebrae were used. Because these particular vertebrae are unique to every skeleton, they are useful in determining the minimum number of individuals from a locality . The fossils were grouped within each stratigraphic layer into four morphologically distinct classes: Young Larval, Paedomorphic, Young Terrestrial, or Old Terrestrial. The developmental stage and age of each individual was determined from diagnostic characteristics of the neural arch and centrum [32,33]. Specifically, the Young Larval had an open (unfused) neural arch and open centrum with little or no ossification; the Young Terrestrial were characterized by an open neural arch and constricted, or partially fused, centrum with little ossification; the Paedomorphic were typified by a fused neural arch and an open centrum with some ossification present; the Old Terrestrial were described by a fused neural arch and a closed, or fused, centrum with visible ossification (Figure 1). Abundance was determined by a standardized minimum number of individuals (MNI) . The MNI was taken as the larger of the two values for sacral or cervical vertebrae (axis), since the Ambystoma skeleton contains only one of each of these elements. The abundance levels were then standardized by dividing by the MNI of the wood rat, Neotoma cinerea. Unlike other common small mammals found at this site, wood rats show a constant relative abundance . This pattern is consistent with a broad habitat preference for this species, and is especially important because the wood rat is the main collection agent of the Lamar Cave fossils. 
Their relative evenness thus indicates taphonomic constancy of the cave, which is corroborated with isotopic analyses. Because plasticity in growth rate cannot be directly measured in the fossil record, it is inferred from body size in different age classes. The centrum length and anterior width of each specimen were measured with electronic calipers. A body size index (BSI) was created for each specimen by dividing the centrum length by the anterior centrum width. Percent paedomorphosis by time interval was determined by dividing the standardized MNI of paedomorphic vertebrae by the standardized MNI of all adult vertebrae, defined as the Paedomorphic, Young Terrestrial, and Old Terrestrial morphs. Thus, we calculate abundance, mean body size, and percent paedomorphosis, each as potentially independent responses of the salamander population to the abiotic environment around Lamar Cave.
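The abundance and shape metrics above reduce to a few arithmetic steps per pooled interval, sketched below. Only the formulas come from the text (MNI as the larger of the cervical and sacral counts, standardization by the Neotoma cinerea MNI, BSI as centrum length over anterior centrum width, and percent paedomorphosis as paedomorphic MNI over all adult MNI); the function names, dictionary layout, and numbers are invented for illustration and are not Lamar Cave data.

```python
def mni(cervical_count, sacral_count):
    """Minimum number of individuals: the larger of the two unique elements."""
    return max(cervical_count, sacral_count)

def standardized_mni(salamander_mni, neotoma_mni):
    """Standardize salamander abundance by the wood rat (Neotoma cinerea) MNI."""
    return salamander_mni / neotoma_mni

def body_size_index(centrum_length, anterior_width):
    """BSI: centrum length divided by anterior centrum width (same units)."""
    return centrum_length / anterior_width

def percent_paedomorphosis(paedomorph_mni, young_terr_mni, old_terr_mni):
    """Paedomorphic MNI as a percentage of all adult MNI.
    (The Neotoma standardization cancels in this ratio.)"""
    adults = paedomorph_mni + young_terr_mni + old_terr_mni
    return 100.0 * paedomorph_mni / adults

# Invented example counts for one pooled interval, not Lamar Cave data.
interval = {"cervical": 42, "sacral": 37, "neotoma_mni": 20,
            "paedomorph": 9, "young_terrestrial": 12, "old_terrestrial": 6}

abundance = standardized_mni(mni(interval["cervical"], interval["sacral"]),
                             interval["neotoma_mni"])
print("standardized abundance:", abundance)
print("example BSI:", round(body_size_index(centrum_length=3.8, anterior_width=2.1), 2))
print("percent paedomorphosis: %.1f%%" % percent_paedomorphosis(
    interval["paedomorph"], interval["young_terrestrial"], interval["old_terrestrial"]))
```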
0.7684
FineWeb
["Study site and fossil collection", "Fossil analysis and identification", "Abundance and population dynamics"]
Tomato - Egg - Lycopersicon esculentum. - Plant produces good yields of egg-shaped tomatoes. - The tomatoes are the size and shape of an egg. - A firm tomato that keeps well. Does well in poor growing conditions. Day to Maturity: 75 days. 12-Month Planting Calendar: Plant it (Sow) and Eat it (Harvest) by month. Tomato - Egg has a rating of 5.0 stars based on 3 reviews.
0.7607
FineWeb
["Tomato - Egg Description", "Planting Calendar", "Product Details"]
After you've dealt with labour and birth, your biggest concern is your newborn's health and development. You want to ensure that your bundle of joy gets all the nourishment he or she needs for proper growth. With the whole world giving you their opinions on whether you should breastfeed or bottle feed, you're going to be very confused, with good reason. Remember this, moms: breastfeeding is undoubtedly the best source of nutrition for your baby. However, there could be a situation when you have to choose to formula feed your infant. Take a look at these growth differences to understand what you can expect: Growth Difference Between Breastfed and Formula-fed Babies A few days after birth: Babies lose some weight immediately after birth. Studies have shown that when it comes to breastfeeding vs. formula feeding, infants who are fed on breast milk lose more weight than those who are fed formula in the initial weeks of birth. Though breast milk is more nutritious, supply can be low right after birth. On the other hand, there’s no dearth of formulated milk which is why babies who are formula-fed weigh more than breastfed children. First 3 months: Health experts believe that once the supply of breast milk normalises, there’s no difference between the growth of formula-fed babies and breastfed babies. Both can enjoy a good supply of nutritious milk and gain weight in a consistent manner. 6 to 12 months: Doctors recommend that babies be introduced to solid food along with continuing breast milk or formula milk once they complete the 6-month milestone. This is when many mothers begin weaning their babies off milk and adding solid food to their daily diet. However, since many infants don’t finish the food given them, the growth of breastfed babies can be slower. But what about that of formula-fed babies? For infants to grow at a consistent rate, they require a fair amount of energy and proteins. Once a mom tries to wean her baby by introducing him or her to solid food, breastfeeding gradually decreases. However, babies may not get all the required nutrition from solid food especially if they don’t eat too much or reject it completely. This means lower consumption of energy and protein-rich food. Yet babies who continue to be formula-fed get the required nutrients from formula milk. This is why a formula-fed baby growth chart shows 'better results' than that for breastfed babies. Breast milk contains plenty of antibodies that equip babies with a better immune system. In the long run, this proves to be more beneficial for their steady growth and development. There is no comparison to the goodness of mother's milk for the baby. Also, some of the constituents in formula may be hard to digest and can result in diarrhoea. Fats in formula milk are hard to break down too and could lead to excessive weight gain. Though the growth chart for breastfed babies may not seem to be as impressive as that of formula-fed infants, studies show that breastfeeding is more nutritious. If you are unable to breastfeed for some reason, please discuss this with your doctor and take a call accordingly.
0.8453
FineWeb
["Breastfeeding vs Formula Feeding", "Growth Difference in Babies", "Nutrition and Development"]
High levels of exposure to dust mites are an important factor in the development of asthma in children.
What Are Dust Mites?
Dust mites are the most common cause of allergy from house dust. Dust mites are hardy creatures that live and multiply easily in warm, humid places. They prefer temperatures at or above 70 degrees Fahrenheit with a relative humidity of 75 percent to 80 percent. They die when the humidity falls below 40 percent to 50 percent. They usually are not found in dry climates. Millions of dust mites can live in the bedding, mattresses, upholstered furniture, carpets or curtains of your home. They float into the air when anyone vacuums, walks on a carpet or disturbs bedding, but settle out of the air soon after the disturbance is over. There may be as many as 19,000 dust mites in one gram of dust, but usually between 100 and 500 mites live in each gram. (A gram is about the weight of a paper clip.) Each mite produces about 10 to 20 waste particles per day and lives for 30 days. Egg-laying females can add 25 to 30 new mites to the population during their lifetime. Mites eat particles of skin and dander, so they thrive in places where there are people and animals. Dust mites don't bite, cannot spread diseases and usually do not live on people. They are harmful only to people who become allergic to them. While usual household insecticides have no effect on dust mites, there are ways to reduce exposure to dust mites in the home.
Physical characteristics of the house dust mite
- Less than half a millimetre in length, which makes them hard to see with the naked eye
- Oval-shaped body
- Light-coloured body with fine stripes
- Life span of around two months or so, depending on the living conditions
What Causes Dust Mite Allergy?
People who are allergic to dust mites react to proteins present within the bodies and faeces of the mite. Dust mite-allergic people who inhale these particles frequently experience allergy symptoms.
What are the Symptoms of an allergic reaction to house dust mites?
- A tight feeling in the chest
- Runny nose
- Itchy nose
- Itchy eyes
- Itchy skin
- Skin rashes
Dust mite allergens persist at high levels during the month of July. The lowest allergen levels are in September and October, but cold weather doesn't necessarily mean the end of allergy. That's because the mite faecal particles remain in the home, mixed in with dead and disintegrating mite bodies, which also cause allergies.
Tips for reducing house dust allergens
- Measure the indoor humidity and keep it below 55 percent. Do not use vaporizers or humidifiers. You may need a dehumidifier. Use vent fans in bathrooms and when cooking to remove moisture. Repair all water leaks. (Dust mite, cockroach, and mould allergy.)
- Wash all bedding that is not encased in barrier covers (e.g. sheets, blankets) every week. Washing at 60 degrees centigrade or above will kill mites. House dust mite allergen dissolves in water, so washing at lower temperatures will wash the allergen away temporarily, but the mites will survive and produce more allergen after a while.
- Remove wall-to-wall carpets from the bedroom if possible. Use a central vacuum or a vacuum with a HEPA filter regularly. If you are allergic, wear a mask while dusting, sweeping or vacuuming. Remember, it takes over two hours for the dust to settle back down, so if possible clean when the allergic patient is away and don't clean the bedroom at night. (Mould, animal and house dust mite allergies.)
- Encase mattresses and pillows with "mite-proof" covers. Wash all bed linens regularly using hot water. (Dust mite allergy.)
- Replace wool or feathered bedding with synthetic materials and traditional stuffed animals with washable ones.
- Use light washable cotton curtains, and wash them frequently. Reduce unnecessary soft furnishings.
- Vacuum all surfaces of upholstered furniture at least twice a week.
- Washable stuffed toys should be washed as frequently and at the same temperature as bedding. Alternatively, if the toy cannot be washed at 60 degrees, place it in a plastic bag in the freezer for at least 12 hours once a month and then wash it at the recommended temperature.
- Use a damp mop or rag to remove dust. Never use a dry cloth, since this just stirs up mite allergens.
- Have your heating and air-conditioning units inspected and serviced every six months. (Animal, mould and house dust mite allergies.)
0.9024
FineWeb
```json [ "What Are Dust Mites?", "What Causes Dust mite Allergy?", "Tips for reducing house dust allergens" ] ```
Posted by Nick Matzke on March 20, 2006 09:54 PM Remember how, according to the ID movement, “methodological naturalism” was supposed to be a Darwinist/atheist conspiracy to arbitrarily exclude ID? Well, let’s have a look at who coined the term. Ronald Numbers, one of the leading experts on the history of creationism, writes, The phrase “methodological naturalism” seems to have been coined by the philosopher Paul de Vries, then at Wheaton College, who introduced it at a conference in 1983 in a paper subsequently published as “Naturalism in the Natural Sciences,” Christian Scholar’s Review, 15(1986), 388-396. De Vries distinguished between what he called “methodological naturalism,” a disciplinary method that says nothing about God’s existence, and “metaphysical naturalism,” which “denies the existence of a transcendent God.” (p. 320 of: Ronald L. Numbers, 2003. “Science without God: Natural Laws and Christian Beliefs.” In: When Science and Christianity Meet, edited by David C. Lindberg, Ronald L. Numbers. Chicago: University Of Chicago Press, pp. 265-285.) A few additional points worth noting here: 2. The idea of methodological naturalism is of course much older than the term, stretching back centuries to the distinction between primary and secondary causes. (Glenn Branch dug around and found some evidence that the term may be older, but perhaps like the term “intelligent design” the words are associated occasionally over the decades, but without really being codified as an Official Term.) 3. But perhaps it was Darwin and those other dogmatic Darwinists that came up with methodological naturalism in the 1800’s in order to ram evolution down everyone’s throats. Not according to Numbers: By the late Middle Ages the search for natural causes had come to typify the work of Christian natural philosophers. Although characteristically leaving the door open for the possibility of direct divine intervention, they frequently expressed contempt for soft-minded contemporaries who invoked miracles rather than searching for natural explanations. The University of Paris cleric Jean Buridan (a. 1295-ca. 1358), described as “perhaps the most brilliant arts master of the Middle Ages,” contrasted the philosopher’s search for “appropriate natural causes” with the common folk’s erroneous habit of attributing unusual astronomical phenomena to the supernatural. In the fourteenth century the natural philosopher Nicole Oresme (ca. 1320-82), who went on to become a Roman Catholic bishop, admonished that, in discussing various marvels of nature, “there is no reason to take recourse to the heavens, the last refuge of the weak, or demons, or to our glorious God as if He would produce these effects directly, more so than those effects whose causes we belive are well known to us.” Enthusiasm for the naturalistic study of nature picked up in the sixteenth and seventeenth centuries as more and more Christians turned their attention to discovering the so-called secondary causes that God employed in operating the world. The Italian Catholic Galileo Galilei (1564-1642), one of the foremost promoters of the new philosophy, insisted that nature “never violates the terms of the laws imposed upon her.” (Numbers 2003, p. 267) The next time you hear IDists ranting and raving about the evils of methodological naturalism, keep the above in mind. In fact, if the IDists don’t mention these rather important bits of history, you should ask yourself why. 4. 
So it looks like Judge Jones got it exactly right when he ruled: While supernatural explanations may be important and have merit, they are not part of science. (3:103 (Miller); 9:19-20 (Haught)). This self-imposed convention of science, which limits inquiry to testable, natural explanations about the natural world, is referred to by philosophers as “methodological naturalism” and is sometimes known as the scientific method. (5:23, 29-30 (Pennock)). Methodological naturalism is a “ground rule” of science today which requires scientists to seek explanations in the world around us based upon what we can observe, test, replicate, and verify. (1:59-64, 2:41-43 (Miller); 5:8, 23-30 (Pennock)). ID violates the centuries-old ground rules of science by invoking and permitting supernatural causation 5. So who came up with methodological naturalism – the idea, as well as the term? It turns out it was those notorious atheists, the Christians. Despite the occasional efforts of unbelievers to use scientific naturalism to construct a world without God, it has retained strong Christian support down to the present. And well it might, for, as we have seen, scientific naturalism was largely made in Christendom by pious Christians. Although it possessed the potential to corrode religious beliefs – and sometimes did so – it flourished among Christian scientists who believe that God customarily achieved his ends through natural causes. (Numbers 2003, p. 284) 6. All of this is worth pointing out because the ID Movement at large has been complaining that methodological naturalism is an unfair constraint on science, and in particular critics of the decision in Kitzmiller v. Dover, such as Alvin Plantinga and Steve Fuller, have been asserting that methodological naturalism is an arbitrary, recently invented constraint – Fuller has even gone so far as to say it was constructed as an anti-creationism tool in the 1980’s. It may be that the coining of the term “methodological naturalism” was useful in the 1980’s – especially to rebut the eternal creationist yammering about the search for natural causes being atheistic, but also to keep science separate from metaphysical conclusions like atheism – but the idea is ancient and really is at the heart of the history of what we now call “science.” Plantinga and Fuller cite Newton as a non-methodological naturalist (which itself is probably dubious although Newton is a complex guy), but regardless, Numbers makes it clear that methodological naturalism goes back to Galileo and before. If anyone ever sees an ID advocate acknowledge these sorts of points, please let me know. de Vries, Paul (1986) “Naturalism in the Natural Sciences: A Christian Perspective.” Christian Scholar’s Review, 15(4):388-396. Ronald L. Numbers (2003). “Science without God: Natural Laws and Christian Beliefs.” In: When Science and Christianity Meet, edited by David C. Lindberg, Ronald L. Numbers. Chicago: University Of Chicago Press, pp. 265-285. PS: A bit of commentary from Numbers himself on the ASA listserv. PPS: I have edited the link for the “ID” part of “ID Movement” to correct a misunderstanding pointed out here.
0.5023
FineWeb
```json [ "Methodological Naturalism", "History of Science", "Intelligent Design" ] ```
There was a single driver behind the development of the Telnet protocol – compatibility. The idea was that Telnet could work with any host or operating system without difficulty. It was also important that the protocol could work with any sort of terminal (or keyboard). The protocol was initially specified in RFC 854, which defined a lowest-common-denominator terminal called an NVT (network virtual terminal). This is an imaginary, or virtual, device which exists at both ends of the connection, i.e. client and server. Both sides map onto this NVT: the client maps its operating system and terminal type onto it, and the server does the same. Effectively both are mapping onto the NVT, which creates a bridge between the two different systems and enables the connection. An important specification to remember is NVT ASCII, which refers to the 7-bit variant of the familiar character set used by the internet protocol suite. Each 7-bit character is sent as an 8-bit byte with the high-order bit set to 0. It's important to remember this definition, as it underpins many of the commands and much of the functionality contained in Telnet. Telnet uses in-band signalling in both directions. The byte 255 decimal is sent to mean "interpret as command" (IAC), and the byte that follows it is the command byte. To transmit an actual data byte with the value 255, two consecutive 255 bytes are sent. Telnet defines a surprising number of commands, but as the protocol is rarely used in modern times, most of them are rarely seen. Although Telnet by default assumes the existence of an NVT, the initial exchange is one of option negotiation. This exchange is symmetric (requests can be sent from either side) and a request can be one of four types:
- WILL – the sender wants to enable the option itself
- DO – the sender wants the receiver to enable the option
- WONT – the sender wants to disable the option itself
- DONT – the sender wants the receiver to disable the option
The Telnet protocol requires that either side may accept or reject a request to enable an option, but must honour a request to disable one. This ensures that a demand to turn an option off is never refused. Options matter because they are usually what makes it possible to support the various terminals in use. Remember that Telnet option negotiation, like the rest of the protocol, is designed to be symmetrical: either end of the connection can initiate negotiation of any option supported by the protocol. Further Readings: http://bbciplayerabroad.co.uk/bbc-live-vpn/
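To make the byte-level mechanics described above concrete, here is a minimal illustrative sketch, in Python, of IAC escaping and the WILL/DO/WONT/DONT exchange. The command values (IAC = 255, WILL = 251, WONT = 252, DO = 253, DONT = 254) come from RFC 854; the ECHO option number (1) is just a convenient example, and the function names are invented for this sketch rather than taken from any particular library.

```python
# Minimal sketch of Telnet byte handling (command values from RFC 854).
IAC, WILL, WONT, DO, DONT = 255, 251, 252, 253, 254
ECHO = 1  # option number from RFC 857, used here purely as an example

def escape_data(payload: bytes) -> bytes:
    """Escape literal 255 bytes: a data byte of 255 is sent as IAC IAC."""
    return payload.replace(bytes([IAC]), bytes([IAC, IAC]))

def negotiate(command: int, option: int) -> bytes:
    """Build a three-byte option-negotiation sequence: IAC <command> <option>."""
    return bytes([IAC, command, option])

def respond(command: int, option: int, accept: bool) -> bytes:
    """Reply to a received negotiation request.

    A request to disable (WONT/DONT) must always be honoured; a request to
    enable (WILL/DO) may be accepted or refused.
    """
    if command == WILL:    # peer offers to enable an option on its side
        return negotiate(DO if accept else DONT, option)
    if command == DO:      # peer asks us to enable an option on our side
        return negotiate(WILL if accept else WONT, option)
    if command == WONT:    # peer is disabling the option -> acknowledge
        return negotiate(DONT, option)
    if command == DONT:    # peer tells us to disable -> acknowledge
        return negotiate(WONT, option)
    raise ValueError("not an option-negotiation command")

if __name__ == "__main__":
    # Server offers to echo; a client that accepts replies IAC DO ECHO.
    offer = negotiate(WILL, ECHO)
    reply = respond(WILL, ECHO, accept=True)
    print(offer.hex(" "), "->", reply.hex(" "))   # ff fb 01 -> ff fd 01
    # A literal 0xFF in user data is doubled on the wire.
    print(escape_data(b"\xff\x41").hex(" "))      # ff ff 41
```

This is deliberately simplified – a real implementation also has to track current option state to avoid acknowledgement loops – but it mirrors the rule above that enable requests may be refused while disable requests are always honoured.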
0.8262
FineWeb
["Telnet Protocol", "Network Virtual Terminal (NVT)", "Telnet Option Negotiation"]
You have learned how to pack mountains of gear for the long haul. Being over-encumbered no longer prevents you from using fast travel.”— in-game description - Antithesis of Strong Back, Burden to Bear, Hoarder and Pack Rat. This perk only does anything when you are over-encumbered, so it's less useful when it's harder to become over-encumbered. - It also works well with Travel Light, which, despite its name, increases movement speed even when over-encumbered. - Carrying over 10,000 pounds of gear may cause severe lag when opening containers or searching corpses.
0.9303
FineWeb
["Perk Description", "Perk Interactions", "Gameplay Limitations"]
What are Some MAOI Medications and Side Effects They Can Cause? MAOIs, or monoamine oxidase inhibitors, are prescription antidepressants that some mental health care professionals and family health care providers prescribe when a problem prevents a patient from taking other types of antidepressants. Below you will find a list of a few of the most common types of monoamine oxidase inhibitors that patients use to alleviate the symptoms most commonly associated with mental health conditions such as depression. - Tranylcypromine – also known as Parnate - Phenelzine – also known as Nardil - Isocarboxazid – also known as Marplan
0.9258
FineWeb
["MAOI Medications", "Types of Monamine Oxidase Inhibitors", "Side Effects of MAOI Medications"]
This extremely steep, mountainous ecoregion encompasses the Ogilvie and Wernecke mountains, the Backbone Ranges, the Canyon Ranges, the Selwyn mountains, and the eastern and southern Mackenzie Range (these last two are an extension of the Rockies). Alpine to subalpine northern subarctic Cordilleran describes this region’s ecoclimate. Weather patterns from the Alaskan and Arctic coasts have a significant influence on this ecoregion. Summers are warm to cool, with mean temperatures ranging from 9°C in the north to 9.5°C in the south. Winters are very long and cold, with very short daylight hours. Mean temperatures range from -19.5°C in the south to -21.5°C in the north, where temperatures of -50°C are not uncommon. Mean annual precipitation is highly variable, but generally increases along a gradient from northwest to southeast, with the highest amounts (up to 750 mm) falling at high elevation in the Selwyn Mountains. At lower elevations, anywhere from 300 mm (in the north) to 600 mm (in the south) is the average (ESWG 1995). The bedrock is largely sedimentary in origin, with minor igneous bodies, and much of this is mantled with colluvial debris and frequent bedrock exposures and minor glacial deposits. Barren talus slopes are common. Although parts of the northwest portion of this ecoregion are unglaciated, the majority has been heavily influenced by glaciers. Alpine and valley glaciers are common, especially in the southern and eastern parts of the area where the ecoregion contains broad, northwesterly trending valleys. Valleys tend to be narrower and sharper in the unglaciated northwest. Elevations in the ecoregion also tend to increase as one moves southeast. In the north, in the unglaciated portions of the Ogilvie and Wernecke mountains, elevations are mostly between 900 m and 1350 m asl, with the highest peaks reaching 1800 m. In the central part of the ecoregion, elevations can reach above 2100 m asl, and in the south (Selwyn mountains) peaks reach as high as 2950 m. Permafrost is extensive, and often continuous throughout the region (ESWG 1995). Subalpine open woodland vegetation is composed of stunted white spruce (Picea glauca), and occasional alpine fir (Abies lasiocarpa) and lodgepole pine (Pinus contorta), in a matrix of willow (Salix spp.), dwarf birch (Betula spp.) and northern Labrador tea (Ledum decumbens). These often occur in discontinuous stands. In the north, paper birch (B. papyrifera) can form extensive communities in lower elevation and mid-slope terrain, but this is less common in the south and east. Alpine tundra at higher elevations consists of lichens, mountain avens (Dryas hookeriana), intermediate to dwarf ericaceous shrubs (Ericaceae), sedge (Carex spp.), and cottongrass (Eriophorum spp.) in wetter sites (ESWG 1995). Characteristic wildlife include caribou (Rangifer tarandus), grizzly and black bear (Ursus arctos and U. americanus), Dall’s Sheep (Ovis dalli), moose (Alces alces), beaver (Castor canadensis), red fox (Vulpes vulpes), wolf (Canis lupus), hare (Lepus spp.), common raven (Corvus corax), rock and willow ptarmigan (Lagopus mutus and L. Lagopus), bald eagle (Haliaeetus leucocephalus) and golden eagle (Aquila chrysaetos). Gyrfalcon (Falco rusticolus) and some waterfowl are also to be found in some parts of the Mackenzie mountains (ESWG 1995) Outstanding features of this ecoregion include areas that may have remained ice-free during the late Pleistocene–relict species occur as a result. 
Also, the ecoregion supports a large and intact predator-prey system, one of the most intact in the Rocky Mountain ecosystem. The winter range of the Porcupine caribou herd and the full-season range of the Bonnet-Plume woodland caribou herd (5,000 animals) are found in this area. The Fishing Branch Ecological Reserve has the highest concentration of grizzly bears in North America for this northern latitude.
Habitat Loss and Degradation
It is estimated that at least 95 percent of the ecoregion is still intact. Mining, mineral, oil and gas exploration are the principal sources of habitat disturbance and loss.
Remaining Blocks of Intact Habitat
The ecoregion is principally intact.
Degree of Fragmentation
To date, the ecoregion has remained principally intact. Roads are increasingly becoming a concern, as is some of the access associated with mineral exploration.
Degree of Protection
•Tombstone Mountain Territorial Park Reserve - Yukon Territory - 800 km2
•Fishing Branch Ecological Reserve - northwestern Yukon Territory - 165.63 km2
Types and Severity of Threats
Most of the threats relate to future access into this northern and fragile ecoregion. Further road development and mineral exploration may result in increased human access. This is already occurring in the western half of the ecoregion.
Suite of Priority Activities to Enhance Biodiversity Conservation
•Enlarge Tombstone Mountain Territorial Park - Yukon Territory
•Establish protected areas in the various mountain ranges that comprise this ecoregion in both Yukon and Northwest Territories.
•Enlarge Fishing Branch Ecological Reserve - Yukon Territory
•Protect the Wind, Snake and Bonnet Plume Rivers.
•Develop protected area proposals for the Keele Peak Area and the Itsi Range - Yukon Territory
•Canadian Arctic Resources Committee
•Canadian Nature Federation
•Canadian Parks and Wilderness Society, Yukon Chapter
•Friends of Yukon Rivers
•World Wildlife Fund Canada
•Yukon Conservation Society
Relationship to other classification schemes
The North Ogilvie Mountains (TEC 168) characterize the northern part of this ecoregion, the Mackenzie Mountains (TEC 170) run east-west through Yukon Territory and the Northwest Territories, and the Selwyn Mountains (TEC 171) are located in the south section of this ecoregion, which is part of the Taiga Cordillera ecozone (Ecological Stratification Working Group 1995). Forest types here are Eastern Yukon Boreal (26c), Boreal Alpine Forest-Tundra (33) and Tundra (Rowe 1972).
Prepared by: S. Smith, J. Peepre, J. Shay, C. O'Brien, K. Kavanagh, M. Sims, G. Mann.
0.5342
FineWeb
```json [ "Ecoregion Description", "Wildlife and Vegetation", "Conservation Status" ] ```
With facilities including 4,300-ton and 8,000-ton forging presses, Pacific Steel Mfg. carries out the entire production process from steelmaking to machining, to produce distinctive steel forgings using free forging. Steel forgings play an important part in various fields, such as electric power, iron and steel, ships, paper manufacturing, and industrial machinery. 8,000-ton forging press
0.5174
FineWeb
["Steel Forgings", "Manufacturing Process", "Industrial Applications"]
Funny thing around data – when students are an intricate part of the conversation, it takes on a whole different meaning. Recently, I was facilitating one of our Joint Teams (LSS/Classroom partners and Learning Technologies) with a focus on supporting learners with written output challenges. There were several tools that provide significant success for students. These I’ll write about at a later date. What really intrigued me was the rising conversation from teachers about the connections to communicating student learning. I had shared how we could use audio and images as a enhancer for students to record their understandings and how important it was to have snapshots of these over time to reflect. As these take advantage of a digital realm, the use of a place like our “student blogfolios” rose as a consideration. This allows a student to share and hold their story in the digital world, opening the door to a range of possibilities: - access anytime anywhere by participants - collaborative feedback in an organized manner - connected communication between student, parent, teacher (of course anytime, anywhere) - historic snapshots of whole child learning processes Knowing your “why” aids in this journey. Without any advertising, our Learning Technologies team have seen this expand exponentially (yes we really only started last year with a small pilot). The implications are huge as we attempt to work through what it means to communicate deep learning in different ways, to empower students to own their voice and story, to share these breakthroughs with parents in a way that allows for dialogue. If you are interested in this, please contact any member of the Learning Technologies team or go straight to our FORMS page to apply. Recently we had session 2 of our Communicating Learning via ePortfolios Team. When I wrote up this session to post for the group, it struck me that these thoughts applied to everyone especially at a time when we’re all focused on assessment, report cards and all the things that come with the process. - Learning is doing - Learning starts with the learner’s own ideas - Learning involves getting personally involved - Questions drive learning and are also outcomes of learning - Learning involves uncovering complexity - Learning can be a group process and a group outcome - Learning and thinking can be made visible How might these ideas affect the way you approach a Fast ForWord learning environment, an assessment environment or what this means to think well (critically, creatively)? If you know your “why”, it’s easier to frame the assessment. I’m especially captivated by “questions drive learning and are also outcomes of learning”. Not the teacher’s questions but student’s questions. We’d love to have your thoughts on this through the comment feature here. In partnership with Ron Coleborn (Math consultant), we are pleased to announce two Math softwares available for teachers: - Dreambox Learning (Math) - Skoolbo (Math and Literacy basic skills) (Canadian version) DreamBox is an online Math resource (K-8) intended to support personalized instruction for students from intervention to enrichment. The ongoing formative assessments within the program can align classroom practices and lessons creating a blended model of instruction. Some of our schools have been using the program in pilot and can share their stories. A small number of purchased student licenses are available. More information can be found on our Learning Technologies site (WEB RESOURCES > DREAMBOX LEARNING). 
Teachers who are interested may apply on our FORMS page. Skoolbo (Canadian version)(K-5) is now available for those who are seeking practice in basic skills in Math or Literacy (building blocks). This is an online resource that provides a “game type” environment to hone basic skills. Create your avatar and the program can take you through the skillsets (based on a set of pre-tests). Or a teacher can assign specific content where practice is needed. This may enhance (RTI) Tier 2 and Tier 3 supports. We’ve registered all schools (elementary and secondary). More information can be found on our Learning Technologies site (WEB RESOURCES > SKOOLBO). Teachers who are interested may apply on our FORMS page for a class account. Watch for a full launch in September. Perhaps I see a coordinated song and dance in the future? February 1st is the 32nd day of the year in the Gregorian calendar, the day before Groundhog Day, the day (1920) of the newly established Royal Canadian Mounted Police. But more importantly, it is the official launch of our Digital Citizenship Initiative (K-7). While some may hone in on the word, “digital” implying that it is related to computer lab activities or technology classes, we suggest it is plain “Citizenship”. And everyone is responsible for learning and modeling citizenship (both in-person and online). At the heart of this, resides the core values and beliefs that we all hold. This in turn, drive our behaviours and actions. Citizenship is fully integrated in the new curriculum. It is embedded in the Core Competencies of Personal and Social competency. As well you will find targeted Curricular Competencies in every curriculum including the latest draft Applied Design, Skills and Technologies. The Digital Citizenship Initiative has been divided into themes: - February – Relationships and Communication - March – Internet Safety - April – Footprints and Reputation - May – Cyberbullying - June – Credit and Copyright Lesson launches are provided as starting points for teachers and students. As well, there is a FOR PARENTS area to explore at home with family. Keeping open communication is key to understanding how one lives and learns in all the environments we encounter. Schools have been invited to participate and share their class stories through school websites, principal newsletters, home communications, and PAC meetings. We are in this journey together. For classes, there is an added bonus – a monthly DC contest starting this month. Hop on over to the Digital Citizenship Contest page or click on the DC button found on your school’s website for information. We sincerely hope you’ll join us as we learn together. We’d love to hear stories that you may wish to share. Please leave a comment below.
0.5645
FineWeb
["Digital Citizenship", "Student Learning Portfolios", "Math Software Resources"]
There are two kinds of commercial marketing research. One is commonly called qualitative, the other quantitative. For most marketers, qualitative research is defined by the absence of numerical measurement and statistical analysis. Qualitative research provides an in-depth, if necessarily subjective, understanding of the consumer. In practice, qualitative research has become almost synonymous with the focus group interview. This technique involves convening a group of respondents, usually eight to 10, for a more or less open-ended discussion about a product. The discussion "moderator" makes certain that topics of marketing significance are brought up. The research report summarizes what was said, and perhaps draws inferences from what was said and left unsaid, in the discussion.

One can identify in several quarters conflicting feelings about focus groups. The results do seem useful to management. But there is concern about the subjectivity of the technique, and a feeling that any given result might have been different with different respondents, a different moderator, or even a different setting. Most commercial reports contain a cryptic statement acknowledging this conflict. The statement cautions that focus group research should be regarded as preliminary. Results should not be generalized without additional quantitative research. Most users probably have a vague sense of uneasiness with the technique. As aptly put by [Wells, W. D. 1974, p. 2-145], "How can anything so bad be good?"

In addition to the general uneasiness, numerous procedural questions surround the use of focus groups. The following are typical questions. Should focus group research ideally be generalized through additional quantitative research? When should focus group research be used? How many focus groups constitute a project? What is the role of interaction among the group members? Should focus groups be composed of homogeneous or heterogeneous people? What expertise and credentials should a moderator have? How important is the moderator's interviewing technique? Should management observe focus group sessions? What should a focus group report look like? These questions currently are debated by marketing researchers on the basis of their professional experiences. Neither the conflict between the apparent effectiveness of focus groups and the reservations expressed about them, nor the typical procedural questions, has been the subject of systematic analysis. The marketing literature has been of little aid to qualitative marketing researchers. There have been occasional descriptions of applications [e.g., Cox, K. K., J. B. Higginbotham, and J. Burton, 1976] and expositions of techniques [e.g., Wells, W. D. 1974; Goldman, A. E. 1962; Wagner, H. R. 1970], but this work has not established a common framework for thinking about focus group research.

Qualitative marketing research is considered first from a philosophy of science perspective. This perspective is not used simply to hold up the focus group technique against a list of idealized criteria for scientific methods. The author fully realizes that many practitioners are not interested in being "scientists." They are, however, interested in developing understanding from research. The philosophy of science provides a valuable perspective on knowledge: not just scientific knowledge, but the entire realm of knowledge.
The point of the philosophy of science perspective developed here is to examine the type of knowledge sought by qualitative research, be it scientific knowledge or otherwise, to determine what this implies about the use of the focus group technique. The implications of seeking either nonscientific or scientific knowledge through focus group research are not well understood. Though many practitioners might shun the "scientist" label, the distinction is not as simple as it may seem. There are actually three different approaches to focus group research in existing practice. Drawing upon the philosophy of science perspective developed, this article shows that each of these approaches reflects a different kind of knowledge being sought. Though none of the three approaches seeks scientific knowledge in its strictest form, two are meant to yield knowledge which is in some sense scientific. A PHILOSOPHY OF SCIENCE PERSPECTIVE What comes to mind when most people think of research is the image of "scientific" research. This image is somewhat fluffy, and it is not easy to articulate. Thus it may help to begin with a consensus view of what science is. Science is a particular way of trying to understand the real world. For social scientists the real world is the full physical complexity of substance and behaviors. But the real world is much too complex to be understood in and of it. At the heart of science is the process of conceptualization, which seeks to represent the real world in a simple enough way to permit understanding. Scientific constructs are abstracted forms and represent only limited aspects of real-world objects and behaviors. If scientific constructs mirrored the full complexity of the real world, one could no more understand science than one can directly understand the real world. Constructs are simplifications and idealizations of reality. They are, in short, abstractions of the real world. Some may seem more "real" than others-say, "taste buds" as opposed to "attitudes"-but they are all abstractions; they "exist" only within the realm of scientific discourse. Scientific theory consists of constructs and the inter-relationships among them [Bunge, M, 1967]. The value of this theory depends on the detail that abstract conceptualization is not a one-way process. As depicted in Figure 1, scientific conceptualization must work in reverse, too. One must be able to use constructs to interpret the real world, to determine whether real objects and behaviors possess the properties and relationships embodied in scientific theory [Zaltman, G., C. R. Pinson, and R. Angelma, 1973]. This is the business of theory testing. It is the most visible part of science, for it entails all of the methods and procedures associated with "being scientific." Basically, these methods are merely systematic procedures for determining whether a theory is consistent with the workings of the real world. If consistency is detected, the theory is retained, though it is not considered proved; otherwise the theory is modified. The uniqueness of science is in the logical rigor and documentation employed in testing scientific constructs and relationships against the real world. Let us return to the nature of scientific constructs. An important question is, how do we develop scientific constructs? Where do they come from? In all of science, the derivation of constructs is somewhat problematic [Kaplan, 1964]. Part of the answer seems to be that good theory spawns its own constructs (the best example being particle physics). 
There is also the process of modifying constructs on the basis of empirical confirmation. Still, there must be an external derivation at some point in theory development, and this origin is the world of everyday thought and experience. As shown in Figure 1, the world of everyday thought is separate from scientific discourse. It is composed of the concepts and ordinary language that people use to give meaning to the world in their everyday lives. As such, its function is analogous to that of science. It allows one to interpret the actual world by use of simplified ideas. The only difference is that scientific constructs are supposed to be more powerful and to be subject to more rigorous and critical verification than are everyday ideas. Although everyday thought may initially supply ideas for scientific constructs, scientific knowledge is subject to its own rules of proof. But this independence is not absolute. Modern philosophers of science agree that all knowledge is highly presumptive [Feyerabend, P. K. 1970; Lakatos, I. 1970; Toulmin, S. E. 1972]. No single hypothesis can be examined without at the same time assuming the truth of the mass of all other knowledge, both scientific and everyday. Neither scientific explanations of consumer behavior nor explanations based on everyday knowledge can be proved. All knowledge reduces to the choice between alternative explanations. It is thus entirely reasonable to contrast scientific and everyday explanations. The truly scientific explanation may be expected to have advantages, but it is not automatically superior. In the case of social science, these advantages are seen by many as more assumed than real. Such considerations have led [Campbell, D. 1976] to argue for the cross-validation of social science by qualitative common-sense explanation. This step rarely is taken, and is probably generally considered to be "unscientific." Nonetheless, some form of contrast between scientific and everyday explanation should be part of a sophisticated view of science, and this relationship accordingly appears in Figure 1.

Quantitative research commonly is associated, at least implicitly, with the realm of science. This connotation is not always correct, however. Actually, there are two approaches to quantitative research. What can be referred to as the descriptive approach supplies numerical information relevant to everyday, first-degree constructs. Demographic analyses, such as breakdowns of consumption figures by age, are a prime example. This research, in itself, bears more upon everyday than scientific explanation. Age, used purely descriptively, is not a scientific construct. Quantitative research which does seek scientific explanation can be referred to simply as the scientific approach. Here, quantitative means much more than merely working with numerical quantities or rating scales. It implies the use of second-degree constructs and causal hypotheses which are subjected to scientific methods. The methods in common use are the experiment, some types of cross-sectional and panel surveys, and time series analysis. Scientific quantitative marketing research, in sum, aspires to the scientific knowledge depicted in the philosophy of science perspective. Qualitative marketing research similarly cannot be restricted to a literal definition of "doing research without numbers."
Unlike the case of quantitative research, the relationship of qualitative research to the scientific and everyday knowledge dichotomy is very indistinct. THE EXPLORATORY APPROACH Qualitative marketing research regularly is under-taken with the belief that it is provisional in nature. Focus groups frequently are conducted before the fielding of a large sample survey. This exploratory approach can take one of two somewhat different forms. Researchers may be interested in simply "pilot testing" certain operational aspects of anticipated quantitative research [Bobby .J. Calder, 1977]. Their objective might be to check the wording of questions or the instructions accompany product placements. Alternatively, researchers may have the much more ambitious goal of using qualitative research to create or select theoretical ideas and hypotheses which they plan to verify with future quantitative research. For this purpose, focus groups are usually less structured; respondents are allowed to talk more freely with each other. When focus groups are conducted in anticipation of scientific quantitative research, their principle is really to stimulate the thinking of the researchers. They represent an explicit attempt to use everyday thought to generate or operationalize second-degree constructs and scientific hypotheses (cf. Fig. 1). Though the subject of exploratory qualitative research is everyday knowledge, the information desired is best described as pre-scientific. The basis of exploratory focus groups is that considering a problem in terms of everyday explanation will somehow facilitate a subsequent scientific approach. Focus groups are a way of accomplishing the construct-generation process shown in Figure 1. As was noted, however, the process of generating second-degree constructs from first-degree ones, of moving from the everyday to the scientific, is very poorly understood. The philosophy of science supplies no precise guidelines. Nor has any thought been given to this process in the marketing research literature. This is not to say that the exploratory approach is not valuable, only that it is being attempted without benefit of any well-developed ideas of how to do it [Bobby .J. Calder, 1977]. The most relevant sources to which qualitative marketing researchers might turn are sociologists concerned with the notion of "grounded theory." This term refers to theory analytically generated from qualitative as well as quantitative research as opposed to theory generated by its own inside logic. The idea is that "grounded theory is a way of arriving at theory suited to its supposed uses" [Glaser, B. G. and A. L. Strauss, 1967 p. 3]. In other words, such theory is developed within the context of its application. The aim of the exploratory approach might well be described as grounded theory. Much qualitative research follows the exploratory approach even though it never leads, to quantitative research. The presumed second-degree constructs and hypotheses developed from focus groups frequently are not subjected later to scientific methods. Most often this omission is due to the high costs of a second quantitative level. In such cases, concern commonly is expressed about the risk of generalizing from the small samples of qualitative research. But there is much more at risk than sample generalizability. 
What happens with this abridged exploratory approach is that what is still essentially everyday knowledge (that of the researchers and focus group participants) is cast in ostensibly scientific terms and treated as if it were a scientific finding, instead of being at best a pre-scientific starting point. The problem is that this knowledge has not been subjected to scientific methods for any sample; to assume that it is scientific is risky indeed. Exploratory qualitative research which is not followed by a quantitative stage is not necessarily ineffective. Taken as everyday knowledge, it may well be very useful. The mistake is to represent pre-scientific every-day explanation as fully scientific but merely lacking sample generalizability. One final spot with regard to the exploratory approach is almost never recognized in marketing research practice. The approach concentrates solely on the construct-generation relationship from the everyday to the scientific (cf. Fig. 1). Of equal importance in terms of the philosophy of science is the comparison relationship from the scientific to the everyday. It is useful to think of this relationship as cross-validating scientific explanations against everyday ones. If the two explanations are not reliable, a choice must be made. Given the current expansion of social science, this choice sometimes will favor the everyday explanation. That is, consumers' explanations will sometimes be favored over theoretical hypotheses. Thus, it is potentially misleading to assume that qualitative research must always be impermanent. It is also desirable to conduct independent exploratory qualitative research. In this way, scientific explanations can be compared with everyday ones. Contrary to current practice, it is just as appropriate to conduct focus groups after a quantitative project as before it. Scientific explanations should be treated as provisional also. The exploratory approach to qualitative research seeks pre-scientific knowledge. This knowledge is not meant to have scientific standing. It is meant to be a precursor to scientific knowledge. Its status is ultimately rooted in the creativity of the individual. The exploratory approach could be adopted to compare scientific with everyday explanations. In this case, the objective would be not pre-scientific, but everyday knowledge. Distinct of Market Research Market research information may have at least two different contributions to marketing knowledge and practice. First, insight is obtained about aspects of a market exchange process involving a product (re-search results), a producer group (researchers), and a consumer group (managers) of sole interest to the marketing profession. Second, studying elements of the profession's knowledge system may provide insights which could lead to improvements in that system. We offer just a few reasons why more attention should be devoted to knowledge system issues such as factors affecting the use of market research information. Each year substantial resources are expended in the conduct of market research. The top 10 U.S. private market research agencies alone had transactions of more than 700 million dollars in 1980 (Honomichl 1981). These monies are spent on formal, problem-oriented re-search to help determine day-after recall for an advertisement, the best location for a new retail outlet, what product line modifications are desirable, and so on. 
Formal research is undertaken because managers expect the resulting information to reduce uncertainty when they are making important decisions. The market research industry, in fact, exists largely because of this anticipation among managers. Thus, understanding what factors affect the use of research by managers is of major outcome to both the market research industry and its customers. Do managers think about research results while making product or service decisions? What factors influence and improve the consideration of research results? Additionally, if we give credibility to the frequent observation that much problem-oriented research in marketing is not used or not used for its intended purpose (Adler and Mayer 1977; Dyer and Shimp 1977; Ernst 1976; Kover 1976; Kunstler 1975), the study of these factors becomes even more imperative. The general issue of market research use has been cited as an extremely important one in need of official investigation. A special joint commission of the AMA and the Marketing Science Institute surveyed the contributions of more than 25 years of marketing's "R & D." They were "struck by the discrepancies between the volume of the new knowledge generated over [the 25 surveyed years] and a comparatively low rate of adoption at the line manager level" (Myers, Greyser, and Massy 1979, p. 25). The commission's major recommendations were to develop improved ways "to bridge the gaps between knowledge-generation and knowledge-utilization" (Myers, Greyser, and Massy 1979, p. 27). These sentiments have been echoed in a study of European managers by Permut (1977). Marketing Research Strategies Actual and Recommended Until recently, there has been a strong fondness in social science research in the direction of preserving data integrity through the use of quantitative/ deductive research methods whenever possible (e.g., Mitroff 1974). This preference also is evident in marketing. A random sample of 10 issues of the Journal of Marketing Research for the years 1977-1982, for example, shows marketing's research methods to be characterized by (1) substantial methodological attention and self-study, ordinarily advocating quantitative or "objective" methodological innovations,( 2) no qualitative studies of any sort, and (3) considerable use of indirect measures of behavior( e.g., verbal reports) rather than direct assessments of the phenomena (e.g., purchases) under consideration. In other disciplines, growing dissatisfaction with the use of quantitative research methods and strategies has emerged, particularly as they are applied to phenomena not easily operationalized or easily visible outside the natural settings in which they occur (for examples, see the special issue of the Administrative Science Quarterly 1979, or the Sage Series in Qualitative Research, e.g., van Maanen, Dabbs, and Faulkner 1982c ). Van Maanen (1982a) gives some reasons for this re emergence of qualitative research in the disciplines of sociology and psychology: "The sources of disenchantment [with quantitative/deductive tools] are many, but deserving of passing note are: the relatively trivial amount of explained variance, the abstract and remote nature of key variables, the lack of comparability across studies, the failure to achieve much analytical validity . . . and the causal complexity of multivariate analysis, which, even when understood, makes change-oriented actions difficult to contemplate" (p. 13). 
A rising number of researchers in economics (e.g., Piore 1979), medicine (e.g., Feinstein 1977), organizational behavior (e.g., Fombrun 1982; van Maanen 1979a), sociology (McGrath, Martin, and Kulka 1982; Mitroff 1974), and psychiatry have advocated and helped foster rebirth of qualitative research in the social sciences. Some of these researchers have gone so far as to say that, given the small level of theoretical knowledge about phenomena in which social science is interested, coupled with the known complexities and context-sensitivities of these same phenomena, qualitative research is the major or even the only valid knowledge-accrual device open to scientists whose interests are focused on human behavior. Though we do not go so far, it may be noted that many important marketing phenomena meet the dual conditions of little theoretical knowledge and high complexity. Such phenomena should be suited to the application of qualitative research methods. However, little trend toward qualitative research has yet been observed in marketing Because of marketing's quantitative/deductive research roots, many marketing subject areas not amenable to study by the methods oriented toward the top-left apex of Figure 1 have received little research notice of any sort. For instance, though much is written about normative pricing strategy formation, almost zero is known descriptively about how (or whether!) managers engage these strategies under real-world pressures. Indeed, little is known about what constitutes effective marketing management in practice (or whether practice is consistent with what little is known from theory, survey verbal reports, or student simulations). What is known about such questions often evolves from practical experience, undocumented analogies with other disciplines, and common-sense reasoning. The apparent researches bias toward types of investigation that protect data integrity at the expense of currency results in a methodological one-sidedness that may impair the development and testing of sound theories. In sum, there is a role and a need for a much broader set of knowledge-accrual mechanisms than those conventionally employed in marketing research In particular, methods toward the lower-right apex of Figure 1 seem especially well-suited to aspects of marketing where there is a relatively thin theoretical base or complex observational task. One such method found promising by many researchers (e.g., Duncan 1979; McClintock, Barnard, and Maynard-Moody 1979) is case research. CASES, CASE TEACHING, AND CASE RESEARCH Case studies are most familiar to marketers as a pedagogical device, or as a way of generating exploratory insights prior to more "rigorous" investigations. Here, neither of these uses of cases is viewed as case research; rather, the use of cases as research tackle is our focus. Though examples of case research qua research can be found (c.f., Bonoma, in press; Corey 1978; Corey and Star 1971), little guidance about how to conduct marketing case research is available, except in literatures not often examined by marketing researchers (e.g., Geertz 1973; van Maanen 1982a). In this section, therefore, we discuss the nature of a case, then differentiate the use of cases for teaching, prescientific, and research purposes, and set the stage for discussion of a four-stage qualitative research process intended to guide qualitative and case-based research endeavors Defined most generally, a case study is a description of a management state of affairs. 
As such, it is the marketing analogue of the physician's clinical examination (e.g., MacLeod 1979), and relies on a alike appeal to multiple data sources for reliable diagnosis (cf. Leenders and Erskine 1978). Though case studies familiar from class-room use usually spotlight on some problem of high currency to firm management and have broad pedagogical appeal, cases without any problem focus can be constructed to learn about the operation of a healthy management or marketing organization. Thus, though management "disease" often is the stimulus for case construction, a problem focus is not required. Second, case construction implicates multiple data sources. Like other qualitative methods, cases frequently rely heavily on verbal reports (personal interviews) and unobtrusive observation as primary data sources. However, case method is distinguished from other qualitative methods in that it involves numerous other data sources, some of which are quantitative. These other data sources serve as a means of "perceptual triangulation"4 and pro-vide a full picture of the business unit under study. Prime among these sources are financial data (e.g., budgets, operating statements), market performance data (e.g., share, sales by territory), and market and competitive information (e.g., product replacement rates, competitive spending levels). Additional data sources consulted include written archives (e.g., memoranda), business plans, and direct observations of management interactions. Third, cases should mirror and be sensitive to the context within which management's acts occur and to the temporal dimension through which events unfold. They go beyond providing a static snapshot of events, and cut across the temporal and contextual gestalt of situations. Finally, cases require direct observation of management behavior by a trained observer who applies his/her own construal of the ongoing events, while also trying to understand the construal's of the actors. Case method, in short, requires skilled clinical judgments about what to look at and what it means. Thus, like other qualitative methods, case method is concerned basically with the researcher's interpretation of management's signification of events, information, and reality-that is, it depends on the researcher's perceptions about management's meanings, not on some "objective reality."Unlike some other qualitative methods, case methodology draws on numerous other data sources to triangulate these perceptions and significations within a broader context. Organizational Context of Market Research Use Largely as a function of developments in its environment, marketing is asking introspective questions about its own competence. At the beginning of the 1980s we have seen the quick growth of the marketing function over the past two decades slowed under the impacts of inflation, raw material shortages, unemployment and recession. These economic changes necessitate a reconsideration of strategies that had earlier proved successful. The drive now is to become leaner, more well-organized in the use of available resources and more oriented toward the future (Wind 1980). If we are to believe that the U.S. and other post-industrial economies are moving from an "Age of Product Technology" to a "Knowledge-based Society" (Bell 1976), we should be increasingly concerned with our ability to deal with our corporate knowledge systems. The growth and even survival of today's business entities will depend on their strategies for handling and processing information. 
The more present this information, the greater the ability of managers to make policy decisions based upon it. In turn, the effectiveness of those decisions will be measured in terms of market information. The marketing purpose is somewhat unique in that the information gathering and analysis processes in firms have been institutionalized as marketing research departments or divisions. Although these specialized information processing units have existed for some time, very little examination has been given to the effectiveness of research in providing information at the right place for the right decision. Additionally, it is only very recently that any attention has been paid to the factors that have an effect on the usefulness of marketing research. The issue of examining marketing's R&D has not gone ignored. The critical costs of inadequate utilization of marketing tools and techniques have been mentioned lately by a special AMA/Marketing Science Institute joint commission (Myers, Massy and Greyser 1980). The commission's members were surprised at the relatively low rate of acceptance at the line manager level of new marketing knowledge generated over a period encompassing the past 25 years. Their major recommendation was to develop better ways "to bridge the gaps between knowledge-generation and knowledge-utilization"(Myers, Greyser and Massy 1979, p. 27). Both marketing practitioners and academics support these observations and agree that much problem oriented research is not used (Dyer and Shimp 1977, Ernst 1976, Kover 1976, Kunstler 1975). However, little formal research has been conducted in this area (Greenberg, Goldstucker and Bellenger 1977; Krum 1978; Luck and Krum 1981). Most observations about the factors affecting use of marketing research have been limited to introspective, albeit careful, analyses of personal experiences (Hardin 1973, Kunstler 1975, Newman 1962). The issue of inadequate utilization of available research information is not unique to marketing. Under use occurs in all areas of applied research activity. Most recently it has received much empirical attention in the policy sciences and has led to the creation of the area of inquiry called Knowledge deployment (Caplan, Morrison and Stambaugh 1975; Rich 1975; Weiss 1977; Weiss and Bucuvalas 1980). Developments in this area indicate that an understanding of the research use phenomenon lies in examining the organizational contexts in which policy decisions are made. The design of the decision making structures of organizations sometimes provides clues as to why some of them are more well-organized at using research than others. As Day and Wind (1980) have commented, senior management has come to believe that focusing only on a customer-oriented search for competitive advantage may be shortsighted. There is a need to widen the scope of empirical attention in marketing by looking at relationships beyond those of the company and its customers. One set of these relationships deals with managers within an organization. Unless the arrangement of work relationships in a firm has been de-signed to optimize managerial effectiveness, the company customer dealings will suffer and, in turn, negatively impact on the firm's long-term success. Yet the influence of organizational structure on the marketing function has hardly ever been studied systematically (Bonoma, Zaltman and Johnston 1977; Silk and Kalwani 1980; Spekman and Stern 1979). 
This issue is particularly important in the knowledge utilization area since parallel findings in the policy sciences, as mentioned earlier, indicate the importance of organizational design in influencing research use. In the pursuit of marketing effectiveness it may be useful to examine what forms of marketing organization appear best suited to manage the marketing research process efficiently (Wind 1980). Competitive Pressures in Environment. The timing of the special issue on competition in marketing is particularly appropriate because of the growing significance of competition in marketing activities. With a slowdown in world economic growth, firms must take business away from competitors if they are to sustain their own growth rate. Deregulation, globalization of markets, flexible manufacturing, and rapidly changing technology are producing new sources of competition and altering the nature of competition in markets. The articles in the special issue respond to the needs of marketers to develop a better understanding of the impact of competition on marketing decisions. Competition and Marketing Research. Competition is the process by which independent sellers vie with each other for customers in a market. Because substitutes exist for most products and services, firms typically meet competitors when marketing their offerings. Consequently, the effectiveness of marketing programs typically depends on the reaction of both customers and competitors. However, marketing theories and research have emphasized issues related to customer response and have directed less attention to competitive response. This lack of attention to competitive effects is surprising because it is hard to imagine a marketing decision that is not affected by competitive activity. The marketing concept, a keystone of marketing thought, stresses the importance of satisfying customer needs and considering customer responses in the development of marketing programs. Recently, marketers have called for an expansion of the marketing concept to address explicitly the role of competitive considerations in marketing decision making (Day and Wensley 1983; Oxenfeldt and Moore 1978). These scholars suggest that customers be viewed as a "prize" gained by satisfying customer needs better than competing firms. The entire range of research in this area cannot be addressed in this short note. The introduction is organized around the following five questions. 1. Who are the firm/brand's competitors? 2. How intense is the competition in a market? 3. How does competition affect market evolution and structure? 4. How do competitive actions affect the firm's marketing decisions? 5. How do firms achieve and maintain a competitive advantage? WHO IS THE FIRM/BRAND'S COMPETITOR? A market is defined as "a group of potential customers with similar needs and sellers offering goods and services to satisfy those needs" (McCarthy and Perreault 1984). The identification of market boundaries and the competing firms within those boundaries pervades all levels of marketing decisions. Market definition is crucial for assessing strategic opportunities, identifying competitive threats, developing marketing programs, and measuring market share to assess performance. What Are the Boundaries of a Market? The identification of market boundaries and competing firms is subjective. Competition among firms and brands is a matter of degree. At one extreme, all firms and products compete indirectly against each other for the limited resources of customers. 
At the other extreme, Coke and Pepsi compete against each other using similar production and marketing strategies to satisfy almost identical customer needs. Thus, the degree of similarity in needs satisfied and methods used to satisfy those needs determines the degree to which firms and brands compete against each other. The different market definitions are determined by discontinuities in supply and demand characteristics. Economists highlight supply considerations when they define an industry as a set of rival firms using similar technologies and/or manufacturing processes. Marketers have focused on demand considerations when they define markets in terms of common needs such as transportation (Levitt 1965). The nature of the marketing decision determines the appropriate boundary for defining the competitive set. The development of functional marketing mix decisions typically involves a narrow definition of the competitive set focusing on directly competing brands in a market segment. In contrast, long-term strategic marketing decisions require a broader definition of competitors and customers, such as product markets (Day 1981a) or industry segments (Porter 1985), so that unserved potential needs and competitive threats are identified. In general, marketing research on market boundaries has focused on consumer needs related to functional decisions involving brands. The rich tradition of segmentation research in marketing (Wind 1978) centers primarily on the structure of buyers in the market, ignoring the sellers participating in the market. Research on product positioning considers both customer needs and customer perceptions of market offerings. A variety of analytical approaches are available for identifying the structure of competing products from assessments of the degree to which customer-based information indicates the substitutability of products (Day, Shocker, and Srivastava 1979). HOW INTENSE IS THE COMPETITION IN A MARKET? The attractiveness of an industry, product market, or market segment as a strategic investment opportunity is related to the profit potential of the market and the firm's ability to exploit that potential. Porter (1980) indicates that the evaluation of competitive intensity is a crucial input for evaluating the profit potential of a market. Much of the research in industrial organization (IO) economics addresses issues related to assessing competitive intensity. The dominant IO paradigm, structure-conduct-performance, argues that industry structure determines the conduct within an industry. Thus, the conduct of firms, the nature of the competitive activity within an industry, determines industry performance (profitability, innovativeness, cost efficiency). Most IO research ignores conduct, focusing simply on the relationship between structure and performance (Porter 1981). Within this tradition, Porter (1980) suggests a checklist of structural variables that can be used to determine the level of competitive intensity within an industry. Marketers are more concerned with the performance of products and firms than with the performance of entire industries. 
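Concentration measures of the kind referred to just below, such as the Herfindahl index, are straightforward to compute from a share distribution. The following Python sketch is purely illustrative; the market shares are invented and the code is not drawn from any of the cited studies:

```python
def herfindahl_index(market_shares):
    """Sum of squared market shares.

    Shares may be given as fractions, percentages, or raw sales figures;
    they are normalized first. Values near 1/n (for n equal-sized firms)
    indicate a fragmented market, values near 1.0 a concentrated one.
    """
    total = sum(market_shares)
    shares = [s / total for s in market_shares]
    return sum(s ** 2 for s in shares)

# Hypothetical four-firm market with shares of 40%, 30%, 20% and 10%.
hhi = herfindahl_index([0.4, 0.3, 0.2, 0.1])
print(round(hhi, 2))  # 0.30 on a 0-1 scale (3000 on the 0-10000 scale)
```

Reaction-based measures of competitive intensity, discussed next, would instead be estimated from observed competitive responses to marketing actions rather than from a share distribution like this one.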
Because of this orientation toward the performance of products and firms rather than of entire industries, marketers have concentrated on directly assessing the conduct or behavior of competing firms rather than the structural properties that presumably affect conduct. For example, Gatignon (1984) developed a measure of competitive intensity in a market based on competitive reactions to marketing activities rather than the structural properties of the competitive environment, such as the Herfindahl index. Research related to the assessment and implications of competitive intensity is not represented in the special issue. However, the area is a promising one for future research. Research is needed to test the extent to which the structural properties postulated by Porter (1980) are related to actual conduct, competitive reaction, and performance. Is the strength of competitive reactions in a market related to the number and size distribution of competitors in the market? How does the level of fixed costs, market growth rate, product differentiation, and exit and entry barriers influence the intensity of competitive reactions? Is the level of competitive intensity in a market related to the performance of the industry and specific firms in an industry? In addition, we need to explore how the level of competitive intensity in a market influences the effectiveness of marketing activities. For example, Gatignon (1984) found that the intensity of competition moderates the effect of advertising on consumer price sensitivity. In markets with high competitive intensity, advertising increases price sensitivity, whereas in markets with low competitive intensity the effects of advertising on price sensitivity are weaker. The industrial organization and marketing strategy literature places considerable emphasis on the size of a firm, especially because of the resource advantages that it possesses and can use to compete. This factor can strongly affect a new product's performance (Day 1984; Narver and Slater 1990). The greater the resources of a firm, the more market power it has, which is a competitive advantage that translates into better performance of the new product. These advantages can be due, in part, to the capability to invest greater resources into the design of superior innovations (Capon et al. 1992), which might be more radical, have a greater relative advantage, and cost less. These effects need to be included in a model of the impact of strategic orientations. Product quality is rapidly becoming an important competitive issue. The superior reliability of many Japanese products has sparked considerable soul-searching among American managers. [W. J. Abernathy, K. B. Clark, and A. M. Kantrow, 1983] In addition, a number of surveys have voiced consumers' dissatisfaction with the existing levels of quality and service of the products they buy [Barksdale et al., 1982]. In a recent study of the business units of major North American companies, managers ranked "producing to high quality standards" as their chief current concern [G. Miller, 1983]. Despite the attention of managers, the academic literature on quality has not been reviewed extensively; the problem is one of coverage: scholars in four disciplines — philosophy, economics, marketing, and operations management — have considered the subject, but each group has viewed it from a different vantage point. 
Philosophy has focused on definitional issues; economics, on profit maximization and market equilibrium; marketing, on the determinants of buying behavior and customer satisfaction; and operations management, on engineering practices and manufacturing control. The result has been a host of competing perspectives, each based on a different analytical framework and each employing its own terminology. At the same time, a number of common themes are apparent. All of them have important management implications. On the conceptual front, each discipline has wrestled with the following questions: Is quality objective or subjective? Is it timeless or socially determined? Empirically, interest has focused on the correlates of quality. What, for example, is the connection between quality and price? Between quality and advertising? Between quality and cost? Between quality and market share? More generally, do quality improvements lead to higher or lower profits? Five Approaches to Defining Quality. Five major approaches to the definition of quality can be identified: (1) the transcendent approach of philosophy; (2) the product-based approach of economics; (3) the user-based approach of economics, marketing, and operations management; and (4) the manufacturing-based and (5) value-based approaches of operations management [Garvin, D. A. 1984]. Dimensions of Quality: Dimensions can be identified as a framework for thinking about the basic elements of product quality. Each is self-contained and distinct, for a product can be ranked high on one dimension while being low on another (Garvin, D. A. 1984). First on the list is performance, which refers to the main operating characteristics of a product. For an automobile, these would be characteristics like acceleration, handling, cruising speed, and comfort; for a television set, they would include sound and picture clarity, color, and ability to receive distant stations. This dimension of quality combines elements of both the product and user-based approaches. Measurable product attributes are involved, and brands can usually be ranked objectively on at least one dimension of performance. The connection between performance and quality, however, is more ambiguous. Whether performance differences are perceived as quality differences normally depends on individual preferences. Users typically have a wide range of interests and needs; each is likely to equate quality with high performance in his or her area of immediate interest. The connection between performance and quality is also affected by semantics; among the words that describe product performance are terms that are frequently associated with quality as well as terms that fail to carry the association. For example, a 100-watt light bulb provides greater candlepower (performance) than a 60-watt bulb, yet few consumers would regard this difference as a measure of quality. The products simply belong to different performance classes. The smoothness and quietness of an automobile's ride, however, is typically viewed as a direct reflection of its quality. Quietness is therefore a performance dimension that readily translates into quality, while candlepower is not. These differences appear to reflect the conventions of the English language as much as they do personal preferences. There is a clear analogy here to Lancaster's theory of consumer demand. [K. 
Lancaster, 1966] The theory is based on two propositions: [Lancaster, 1971] All goods possess objective characteristics relevant to the choices which people make among different collections of goods. The relationship between ... a good . . . and the characteristics which it possesses is essentially a technical relationship, depending on the objective characteristics of the good. . . . Individuals differ in their reaction to different characteristics, rather than in their assessments of the characteristics.... It is these characteristics in which consumers are interested . . . the various characteristics can be viewed ... as each helping to satisfy some kind of "want." In these terms, the performance of a product would correspond to its objective characteristics, while the relationship between performance and quality would reflect individual reactions. The same approach can be applied to product features, a second dimension of quality. Features are the "bells and whistles" of products, those secondary characteristics that complement the product's basic functioning. Examples include free drinks on a plane flight, permanent press as well as cotton cycles on a washing machine, and automatic tuners on a color television set. In many cases, the line separating primary product characteristics (performance) from secondary characteristics (features) is difficult to draw. Features, like product performance, involve objective and measurable attributes; their conversion into quality differences is equally affected by individual preferences. The distinction between the two is primarily one of centrality or degree of importance to the user. Reliability is a third dimension of quality. It reflects the probability of a product's failing within a specified period of time. Among the most common measures of reliability are the mean time to first failure (MTFF), the mean time between failures (MTBF), and the failure rate per unit time (Juran, 1974). Because these measures require a product to be in use for some period, they are more relevant to durable goods than they are to products and services that are consumed instantly. Japanese manufacturers typically pay great attention to this dimension of quality, and have used it to achieve a competitive edge in the automotive, consumer electronics, semiconductor, and copying machine industries. A related dimension of quality is conformance, or the degree to which a product's design and operating characteristics match pre-established standards. Both internal and external elements are involved. Within the factory, conformance is usually measured by the incidence of defects: the proportion of all units that fail to meet specifications, and so require rework or repair. In the field, data on conformance are often difficult to obtain, and proxies are frequently used. Two common measures are the incidence of service calls for a product and the frequency of repairs under warranty. These measures, while suggestive, disregard other deviations from standard, such as misspelled labels or shoddy construction, which do not lead to service or repair. More comprehensive measures of conformance are required if these items are to be counted [Garvin, D. A. 1984]. Both reliability and conformance are closely tied to the manufacturing-based approach to quality. Improvements in both measures are normally viewed as translating directly into quality gains because defects and field failures are regarded as undesirable by virtually all consumers. 
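The reliability measures just listed (MTFF, MTBF, failure rate) and the repair-versus-replace trade-off taken up under durability below can be made concrete with a small worked example. The failure times and costs in this Python sketch are invented for illustration only:

```python
# Hypothetical cumulative operating hours at which a repairable product failed.
failure_times = [1200, 2600, 3900, 5400]

# Mean time between failures: average gap between successive failures
# (the first gap is measured from hour zero).
gaps = [t2 - t1 for t1, t2 in zip([0] + failure_times[:-1], failure_times)]
mtbf = sum(gaps) / len(gaps)
failure_rate = 1 / mtbf          # failures per operating hour

# Crude repair-versus-replace comparison over an assumed planning horizon.
horizon_hours = 10_000
repair_cost = 150                # assumed expected cost per repair
replacement_cost = 900           # assumed price of a newer, more reliable model
expected_repair_bill = horizon_hours * failure_rate * repair_cost

print(f"MTBF: {mtbf:.0f} h, failure rate: {failure_rate:.5f} per hour")
print("Replace" if replacement_cost < expected_repair_bill else "Keep repairing")
```

A real decision would also weigh downtime, inconvenience, and relative prices, exactly the economic variables the durability discussion below emphasizes.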
Reliability and conformance are, therefore, relatively objective measures of quality, and are less likely to reflect individual preferences than are rankings based on performance or features. Durability, a measure of product life, has both economic and technical dimensions. Technically, durability can be defined as the amount of use one gets from a product before it physically deteriorates. A light bulb provides the perfect example: after so many hours of use, the filament burns out and the bulb must be replaced. Repair is impossible. Economists call such products "one-hoss shays," and have used them widely in modeling the production and consumption of capital goods. [C. J. Bliss, 1975; Garvin, D. A., 1984] Durability becomes more difficult to interpret when repair is possible. Then the concept takes on an added dimension, for product life will vary with changing economic conditions. Durability becomes the amount of use one gets from a product before it breaks down and replacement is regarded as preferable to continued repair. Consumers are faced with a series of choices: each time a product fails, they must weigh the expected cost, in both dollars and personal inconvenience, of future repairs against the investment and operating expenses of a newer, more reliable model. In these circumstances, a product's life is determined by repair costs, personal valuations of time and inconvenience, losses due to downtime, relative prices, and other economic variables, as much as it is by the quality of components or materials. This approach to durability has two important implications. First, it suggests that durability and reliability are closely associated. A product that fails frequently is likely to be scrapped earlier than one that is more reliable; repair costs will be correspondingly higher, and the purchase of a new model will look that much more desirable. Second, this approach suggests that durability figures should be interpreted with care. An increase in product life may not be due to technical improvements or to the use of longer-lived materials; the underlying economic environment may simply have changed. For example, the expected life of an automobile has risen steadily over the last decade, and now averages fourteen years. [Retiring Autos at 14, 1983] Older automobiles are held for longer periods and have become a greater percentage of all cars in use. [S. W. Burch, 1983] Among the factors thought to be responsible for these changes are rising gasoline prices and a weak economy, which have reduced the average number of miles driven per year, and federal regulations governing gas mileage, which have resulted in a reduction in the size of new models and an increase in the attractiveness to many consumers of retaining older cars. In this case, environmental changes have been responsible for much of the reported increase in durability. Product as Symbol. Products have a significance that goes beyond their functional usefulness. This significance stems from the ability of products to communicate meaning (Hirschman, 1981; McCracken, 1986). Products are symbols by which people convey something about themselves to themselves and to others (Holman, 1981; Solomon, 1983). This symbolic meaning is known to influence consumer preference. All commercial objects have a symbolic character, and making a purchase involves an assessment - implicit or explicit - of this symbolism ... (Levy, 1959, p. 119). The symbolic meaning of products has become increasingly significant. 
Nowadays, differentiating products based on their technical functions or quality is difficult (Dumaine, 1991; Veryzer, 1995). Since the wave of quality control in the 1980s, products can be expected to fulfill their functions reasonably well. Symbolic meaning provides another way to differentiate products. Due to symbolic meaning, otherwise indistinguishable products become differentiated in the eyes of the consumer. Similarly, Salzer-Mörling and Strannegård (2004) recently stated: With the abundance of products in the western world, the managerial challenge, it seems, has become that of differentiating similar products (p. 224). The relationship between physical product characteristics and consumer quality perception is at the heart of market-oriented product development: In order to design products which will be accepted by consumers, it is necessary to convert consumer demands into product specifications that are actionable from the producer's point of view. With regard to food, this relationship is particularly complicated because the way consumers perceive expected quality before a purchase is often different from the way quality is perceived after consumption, and may be related to various physical product characteristics. While this has been acknowledged repeatedly in the literature (e.g. Grunert et al., 1996; Poulsen et al., 1996; Steenkamp and van Trijp, 1996), and despite the apparent practical consequences of better knowledge on how physical product characteristics and quality perception before purchase and after consumption interact, research shedding light on this issue has been very sparse. The study by Steenkamp and van Trijp (1996) combined physical product characteristics, quality cues and quality criteria. It was done with blade steak as product category. Six physical characteristics were measured, some of them by several indicators: color, fatness, pH value, water-binding capacity, shear force and sarcomere length. Eight quality cue measures were combined into three latent constructs: freshness, visible fat and appearance, which together determined quality expectations. Likewise, seven quality criteria measures were combined into three latent constructs: tenderness, non-meat components and flavor, which together determined quality experience. The main results were as follows: * color has a significant impact on quality expectations only * fatness has a negative impact on quality expectations and a positive impact on quality experience * water-binding capacity, sarcomere length and pH value have an effect on both quality expectations and quality experience * shear force affects quality experience only * There is no significant relationship between quality expectation and quality experience. To have a clear understanding of the issues surrounding the impact of product characteristics, a discussion of product classifications is warranted. When looking at product classifications, marketers divide products and services based on the types of consumers that use them - consumer products and business to business products. This discussion will be limited to consumer products. Consumer products are those which are purchased by the final consumer for his/her consumption. These products are further classified into convenience, shopping, specialty and unsought products. Convenience goods are those that are purchased often with little planning or shopping effort. They are usually low priced and widely available. 
Shopping goods are those which are purchased less frequently, such as furniture and major appliances, and which are compared on the basis of suitability, quality, price and style. Specialty goods enjoy strong brand preference and loyalty. Consumers of these goods are willing to make a special purchase effort, make few brand comparisons and have low price sensitivity. Both producers and sellers of these products use carefully targeted promotion. Unsought products are consumer goods that the consumer either does not know about or knows about but does not normally think of buying; for example, Red Cross blood donations (Kotler et al, 1998). Peterson et al. (1997) suggest another classification system which they argue is more relevant. In this system the products and services are categorized along three dimensions: cost and frequency of purchase, value proposition and degree of differentiation. Goods in the first dimension range from low cost, frequently purchased goods to high cost, infrequently purchased goods. The usefulness of this dimension lies in the fact that it highlights the differences in operation and distribution costs depending on whether and how the Internet is used. The value proposition dimension classifies products according to their tangibility. Products are classified as tangible and physical or intangible and service related. Internet commerce is especially well-suited for goods consisting of digital assets - which are intangible - (Rayport and Sviokla, 1995), such as computer software, music and reports. The third dimension, differentiation, deals with how well the seller has been able to create a sustainable competitive advantage through differentiation. Information about product attributes plays a vital role in consumers' product evaluation process. For most product evaluations, only incomplete information is available, thus consumers often form evaluations for various products on the basis of the available information and form attribute covariance inferences about the missing information (Pechmann and Ratneshwar, 1992; Ross and Creyer, 1992).
0.8778
FineWeb
```json [ "Marketing Research", "Qualitative Research", "Product Quality" ] ```
A number of infections and diseases can contribute to an enlarged spleen. The effects on your spleen may be only temporary, depending on how well your treatment works. Contributing factors include: - Viral infections, such as mononucleosis - Bacterial infections, such as syphilis or an infection of your heart's inner lining (endocarditis) - Parasitic infections, such as malaria - Cirrhosis and other diseases affecting the liver - Various types of hemolytic anemia — a condition characterized by premature destruction of red blood cells - Blood cancers, such as leukemia, and lymphomas, such as Hodgkin's disease - Metabolic disorders, such as Gaucher's disease and Niemann-Pick disease - Pressure on the veins in the spleen or liver or a blood clot in these veins How the spleen works Your spleen is tucked under your rib cage next to your stomach on the left side of your abdomen. It's a soft, spongy organ that performs several critical jobs and can be easily damaged. Among other things, your spleen: - Filters out and destroys old and damaged blood cells - Plays a key role in preventing infection by producing white blood cells called lymphocytes and acting as a first line of defense against invading pathogens - Stores red blood cells and platelets, the cells that help your blood clot An enlarged spleen affects each of these vital functions. For instance, as your spleen grows larger, it begins to filter normal red blood cells as well as abnormal ones, reducing the number of healthy cells in your bloodstream. It also traps too many platelets. Eventually, excess red blood cells and platelets can clog your spleen, interfering with its normal functioning. An enlarged spleen may even outgrow its own blood supply, which can damage or destroy sections of the organ. July 26, 2013 - Landaw SA, et al. Approach to the adult patient with splenomegaly and other splenic disorders. http://www.uptodate.com/home. Accessed June 6, 2013. - Splenomegaly. The Merck Manuals: The Merck Manual for Healthcare Professionals. http://www.merckmanuals.com/professional/hematology_and_oncology/spleen_disorders/splenomegaly.html. Accessed June 6, 2013. - Longo DL, et al. Harrison's Online. 18th ed. New York, N.Y.: The McGraw-Hill Companies; 2012. http://www.accessmedicine.com/resourceTOC.aspx?resourceID=4. Accessed June 6, 2013. - Pozo AL, et al. Splenomegaly: Investigation, diagnosis and management. Blood Reviews. 2009;23:105. - Recommended Adult Immunization Schedule: United States — 2013. Centers for Disease Control and Prevention. http://www.cdc.gov/vaccines/schedules/hcp/adult.html. Accessed June 6, 2013.
0.8536
FineWeb
["Causes of Enlarged Spleen", "Spleen Function", "Effects of Enlarged Spleen"]
By A. J. Cropley. Read or Download Towards a System of Lifelong Education. Some Practical Considerations PDF. Similar nonfiction_12 books: The book gives a comprehensive view of the present ability to take the microstructure and texture evolution into account in advanced engineering models of the plastic behaviour of polycrystalline materials at large strains. It is designed for postgraduate students, research engineers and academics that are interested in using advanced models of the mechanical behaviour of polycrystalline materials. Recent Developments in Clustering and Data Analysis presents the results of clustering and multidimensional data analysis research conducted primarily in Japan and France. This book focuses on the significance of the data itself and on the informatics of the data. Organized into four sections encompassing 35 chapters, this book begins with an overview of the quantification of qualitative data as a method of analyzing multidimensional data statistically. This reference comprises more than six hundred cross-referenced dictionary entries on utopian thought and experimentation that span the centuries from ancient times to the present. The text not only covers utopian communities around the globe, but also utopian ideas, from the well known, such as those expounded in Thomas More's Utopia, to the ideas of philosophers and reformers from ancient times, the Middle Ages, the Renaissance, the Enlightenment, and from notable 20th-century figures. Extra resources for Towards a System of Lifelong Education. Some Practical Considerations: Karpen 32 In Part II the rights and duties of the individual, of societal groups and of the state in implementing a policy of lifelong education in the basic law democracies of (Western) constitutional states will be outlined. Finally, some remarks on the organization and the procedure for implementing such a policy will be given in Part III. Part I: Lifelong Education and the Law 1. Lifelong education. Lifelong education means that people's education is seen as a process encompassing the entire life span and all areas of life (Cropley, Chapter 1 of this book). They are democracies. Yet in detail, and above all in the interpretation and application of the Romano/Germanic and common law constitutions on the one hand, and the socialist constitutions on the other, these two types of constitution differ fundamentally, and these differences are reflected in the profound dichotomy of the developed world. Basically this difference stems from the philosophical approach, from the relationship of the concept of state, of politics, constitution and law to the "value-system". 9. Introduction of folk culture, oral and written, as an integral part of the school curriculum. 10. Abolition of any ranking between the so-called manual disciplines and the so-called intellectual disciplines. 11. Integration of general education and vocational education. 12. ). 13. ). 14. Improvement in the cultural content and methods of the mass media programmes. 30 E. Gelpi 15. Making work experience more interesting from the educational point of view. 16. Significant development of experiments in self-instruction.
0.8612
FineWeb
```json [ "Lifelong Education", "Clustering and Data Analysis", "Utopian Theory and Experimentation" ] ```
Answer: You don’t become like the teacher, you specifically develop in your own way and form. Externally, as a result of a shared connection, forms of speech and grammar or something similar like the teacher’s may appear in you, but this has no connection to spirituality. Everyone is unique in spirituality. Question: Do you suppose that not being like your teacher is the right message? Answer: It makes no difference. Even if a student thinks one way or another, the curve always leads a person to the “straight line.” Sooner or later the students will understand and everything will work out. I don’t give guidance about this. A student must gradually understand everything for himself. From the Kabbalah Lesson in Russian 8/7/16
0.8076
FineWeb
["Spirituality", "Personal Development", "Teacher-Student Relationship"]
Definition - What does Electrode Device mean? An Electrode Device is a logging tool which is based on the arrangement of simple metallic electrodes which work at a low frequency. It includes conventional micrologs, electrical logs, laterologs and various other microresistivity logs. The device is used for both measurement while drilling and wireline logs. The voltage and current in the Electrode Device can be measured on convenient electrodes or on a combination of electrodes. It is used to provide current via nonmetal objects and measure conductivity. Petropedia explains Electrode Device: The Electrode Device is basically used for the purpose of logging and is considered a logging tool which is basically an arrangement of metallic electrodes used in MWD and wireline logs. Appropriate electrodes or combinations of electrodes are required for measuring the current and voltage in the device. The electrodes are parts of electrochemical cells and act as either anodes or cathodes. The electrons leave the cell at the anode and enter the cell at the cathode, where reduction takes place. Electrode devices are also used for measuring electrode resistivity.
0.9475
FineWeb
["Electrode Device Definition", "Logging Tool Applications", "Electrode Device Measurements"]
Following the death of monster hunter Ulysses Bloodstone, the vampire Charles Barnabus became the executor of Ulysses' estate in Boston, Massachusetts. For years, he searched for a living heir to inherit Bloodstone's wealth, and eventually discovered eighteen-year-old Elsa Bloodstone. Elsa and her mother, Elise, moved into the Bloodstone mansion, but Charles was careful to keep the true knowledge of Ulysses' lifestyle away from his daughter. Elsa eventually learned the truth on her own after encountering the mansion's caretaker, Adam. As Elsa found herself drawn into the world of the supernatural, Charles was forced to reveal himself as a vampire. A powerful breed of vampire known as the Nosferati captured several vampires, including Dracula and Charles Barnabus. The Nosferati can only feed off the blood of other vampires, and intended on keeping Dracula and Charles as a permanent immortal food source. Elsa and her friends raided the Nosferati headquarters, freeing the two vampires. Following the defeat of the Nosferati, Charles returned with Elsa to Bloodstone Manor. Since then, Elise Bloodstone has converted the mansion into a curio museum. Charles Barnabus has developed a close bond with the woman and remains part of the museum's staff. Many, if not all, of Charles Barnabus' powers and abilities are common to all vampires, even if demonstration of such abilities has not been explicitly shown in a canonical resource. - Superhuman Strength: Like all vampires, Charles possesses superhuman strength, the exact limits of which have yet to be measured. - Fangs: Like all vampires, Charles has fangs and claws. He can quickly drain a victim of blood. - Hypnotism: Charles is able to hypnotize others by gazing into their eyes for a short period of time. - Shapeshifting: Charles is able to shape-shift into bats, rats, a wolf, and mist. He can also turn into human-sized or larger wolfen and bat-like forms. - Weather Manipulation: He has considerable control over the elements and weather. - Mind Control: A person bitten by Charles is able to be influenced by him through a sort of empathic link. - Accelerated Healing: Charles is capable of regenerating damaged or destroyed tissue to an extent much greater than an ordinary human. He can fully heal from multiple gunshots and severe burns within a matter of minutes; however, he cannot regenerate missing limbs or organs. - Enhanced Agility: Charles' agility, balance, and body coordination are enhanced to levels that are beyond the natural limits of the human body. - Enhanced Reflexes: Charles' natural reaction time is enhanced to levels that are beyond the natural limits of the human body. - Enhanced Stamina: Charles' body is more resistant to the fatigue toxins generated by his muscles during physical activity. He can exert himself at peak capacity for several hours before fatigue begins to affect him. - Special Limitations: Charles, like all vampires, has a number of special vulnerabilities. He is highly allergic to silver and can be severely injured, or killed, with silver weaponry. If Charles is injured by silver, his recovery time is considerably slower than normal. Charles is also unable to withstand exposure to direct sunlight. His tissue begins to instantly dry up and will crumble to powder within a matter of moments. Charles can be killed by having a wooden stake plunged into his heart, somehow interrupting the mystical energies that keep him alive. Charles can also be killed by being decapitated and being exposed to fire. 
He can also be affected by religious icons, such as the Cross of David or a crucifix for example. Charles is affected by the strength of the wielder's faith in the icon and the religion it represents, not the size of the icon itself. Charles must rest within his coffin during daylight hours. He must line his coffin with soil from his homeland in order to both sustain his power and travel more than 100 miles from his birthplace. Superhuman: Charles Barnabus is able to lift volumes of mass in excess of two tons; the exact limits of his strength are unknown. - Charles is not the first fictional vampire to carry the name "Barnabus". In 1966, the ABC daytime soap opera Dark Shadows featured a 175-year-old vampire named Barnabus Collins. - 4 Appearances of Charles Barnabus (Earth-616) - Media Charles Barnabus (Earth-616) was Mentioned in - Images featuring Charles Barnabus (Earth-616) - Quotations by or about Charles Barnabus (Earth-616) - Character Gallery: Charles Barnabus (Earth-616)
0.9561
FineWeb
["Charles Barnabus", "Vampire Powers and Abilities", "Bloodstone Family"]
Top Course Tags A Few Big Assignments Background Knowledge Expected Pretty easy, overall. Professor Jones is a very well-educated, helpful, patient teacher. He is very much interested in the education of his students and will do anything to help them succeed! Highlights are to write a book but it is divided up into smaller papers, so, the idea isn't as frightening as it sounds. I learned how to write a dissertation if I need to someday. Hours per week: Advice for students: Stay on top of your writing assignments and you'll do well.
0.9543
FineWeb
``` { "topics": [ "Course Overview", "Professor Evaluation", "Study Advice" ] } ```
MLS - Microfluidic Bubble Detector. Elveflow provides a unique microfluidic bubble detector. It can identify whether liquid is present in clear tubes. It can be plugged directly onto our OB1 flow controller, or it can be used as a standalone unit with the Sensor Reader and another instrument. The sensor is able to register the presence of fluids inside clear tubing, trigger a signal to another instrument and act accordingly – like stop, wait a certain amount of time, allow enough flow to clear the tubing, or reset the sensor. Get Quote or Technical Information (We will answer within 24 hours) Detection in the microfluidic bubble detector is based on the measurement of the optical path and the variation of this path when the flowing medium changes. The sensor comes in two different housings suited to use with 1/16″ or 1/4″ outside diameter tubes. - Reliable non-invasive technique - Cost effective compared to camera checks - Based on true/false logic - Large compatibility: wide range of tubing sizes - Software averaging functions - Use anywhere in your setup - Prevents damage in cells from bubble bursts - Setup automation (possibility to use if conditions) - Bubble detection - Liquid level sensing - Blood processing equipment - Patient-connected medical devices - Perform bilateral recirculation based on air detection One particular application worth mentioning is using the microfluidic liquid sensor as a bubble detector. Bubbles are a big challenge to address in microfluidics, as they can induce flow modifications or interact with the experiment and cause damage. One can monitor the appearance of bubbles (a change in the working fluid) at any given point of the setup and automate the experiment accordingly, for instance by switching valves to direct the bubble into another fluidic path. For further information check out the example tab on this same page. A light beam is emitted by an LED at known power. This light beam goes through the capillary and the fluid passing through. It is then collected by an NPN silicon phototransistor. This phototransistor converts the light power into an electrical power. When the fluid changes, the optical index and the light absorption coefficient change accordingly. This induces a change in the electrical power and makes it possible to detect changes in the fluid. USB flow sensor software module: Thanks to an intuitive interface, the Elveflow® Smart Interface allows the use of Elveflow® instruments from the simplest commands for beginners to the most complex manipulations for experts… read more Control your experiments through C, Python, Matlab®, Labview® or the Elveflow® Smart Interface. The Elveflow® Smart Interface is a software application offering all the functionalities that microfluidicists need. - Apply pressure on line 1. - When tube 1 is empty, bubbles will appear. - The liquid sensor 1 detects the bubbles and then pressure in line 1 is turned off. Simultaneously, a pressure is applied in line 2. - Then, when tube 2 is empty, bubbles will appear. - The liquid sensor 2 detects the bubbles and the pressure in line 2 is turned off. Simultaneously, a pressure is applied in line 1. - This can be continued as many times as you want. The microfluidic liquid sensor has many applications. This one is a smart way to perform bidirectional recirculation without using valves. Read more about bubble generation. Our applications notes
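The numbered recirculation procedure above is essentially a small control loop. The Python sketch below only illustrates that logic: read_sensor() and set_pressure() are hypothetical placeholders for whatever calls the instrument's actual SDK exposes (the page mentions control from C, Python, MATLAB or LabVIEW), and the pressure value is arbitrary:

```python
import time

def recirculate(read_sensor, set_pressure, cycles=10, poll_s=0.05):
    """Alternate pressure between line 1 and line 2 each time the active
    line's liquid sensor reports air, i.e. the pushing tube has emptied.

    read_sensor(line) -> True while liquid is present, False on air/bubble.
    set_pressure(line, mbar) -> applies the given pressure to that line.
    Both callables are placeholders for the real instrument API.
    """
    active, idle = 1, 2
    for _ in range(cycles):
        set_pressure(active, 200)    # push liquid from the active line
        set_pressure(idle, 0)        # vent the receiving line
        while read_sensor(active):   # wait until air reaches the sensor
            time.sleep(poll_s)
        active, idle = idle, active  # swap lines and push back the other way
```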
0.9134
FineWeb
["Microfluidic Bubble Detector", "Liquid Level Sensing", "Microfluidic Liquid Sensor"]
How are the `Great Flood', meteorite impacts and the extinction of the dinosaurs connected in the context of intellectual history? This thoroughly researched, historical perspective of the theory of `catastrophism' is a readable account of the nature of geological events, in the light of known evidence and accepted scientific thought. ...Here is a lucid, up-to-date survey of these heated issues [the new catastrophism of meteorite impacts and the 'resulting' mass extinctions and other catastrophe scenarios], aimed at undergraduates and the many others who are equally bewildered...Albritton finally puts the new catastrophism into proper perspective and explains recent ideas which led to it and to dinosaurmania. He takes a sensible, on the fence stance as to possible causes of mass extinctions. - Nature; His smoothly written and tellingly illustrated text is a cheerful, open account of the history of certain ideas of geological change...A fairer and simpler account you will not find. - Scientific American; The book under review makes fascinating reading. It is recommended as a remarkably good and readable summary of a vast amount of recent research on catastrophism and its role in mass extinction - Journal Geological Society of India There are currently no reviews for this product. Be the first to review this product!
0.8605
FineWeb
``` { "topics": [ "Catastrophism", "Mass Extinctions", "Geological Events" ] } ```
Massage Therapy for Multiple Sclerosis (MS) Multiple sclerosis is a condition where the myelin tissue covering the nerves becomes inflamed. This inflammation can fluctuate; it may worsen and then gradually subside. After the inflammatory response, scar tissue may form, hindering the patient’s neurological functions. The spinal cord, brain stem and cranial nerves are commonly affected. Patients suffering from MS most commonly deal with muscle and joint stiffness that limits their ability to move and function. By receiving massage therapy, blood flow is stimulated to these parts of the body, which helps facilitate the healing process and increase flexibility. Massage therapy also helps in stopping the disease from progressing so quickly, improving prognosis. Weakened muscles can cause atrophy, which leads to further deterioration in serious cases. In such cases, massage therapy helps to increase the blood flow and oxygen to affected muscles, aiding in restoration of health and energy. The intensity and type of massage therapy recommended for MS patients depends upon what stage the disease is at. Massage therapy should be avoided in acute stages as it would aggravate the pain and discomfort.
0.6763
FineWeb
["Multiple Sclerosis (MS) Symptoms", "Massage Therapy Benefits for MS", "Massage Therapy Considerations for MS Patients"]
The Krishi Vignana Kendra of the Central Marine Fisheries Research Institute (CMFRI) has developed a precision farming module for bitter gourd cultivation with a view to lowering pesticide contamination and ensuring increased availability of locally produced bitter gourd in the market. The process involves ‘fertigation,’ through which water-soluble nutrients and organic compounds are supplied to the root zone through drip irrigation. The Venturi system is connected to drip irrigation pipes to mix water-soluble nutrients and organic compounds with water. Plastic mulching is used to cover planting beds, which in turn helps conserve moisture and control weeds. The method helps prevent nutrient loss and reduce the labour required for weed management. Water-soluble nutrients are supplied through drip irrigation tubes every three days in 30-35 split doses, whereas in traditional farming it is applied in 3-4 split doses. ‘Pheromone traps’ are used for controlling fruit fly and ‘yellow sticky traps’ are used against white flies. Neem oil spray is also done as an organic pest repellent. Nutrient-rich bitter gourd is effective in preventing lifestyle diseases. Since bitter gourd is very much susceptible to diseases and pests, there is indiscriminate use of pesticides, particularly in commercial farming. CMFRI develops module involving fertigation Pesticides widely used in farming of bitter gourd
0.715
FineWeb
["Bitter Gourd Cultivation", "Precision Farming Module", "Pest Control Methods"]
ERIC Number: ED161091 Record Type: RIE Publication Date: 1978-Apr Reference Count: 0 Moves toward a "Cognitive Grammar": Some Implications of Linking Grammar with Cognitive Representation. Arundale, Robert B. Research on how communicating human beings produce and understand language has focused mostly on what language is, less on how language is processed, and little on who produces and understands language. However, the interaction between what, who, and how is very significant. The importance of who does languaging is related both to the cognitive capability of the person and to the social matrix in the communicative use of language. As the computational paradigm of communication differs from human cognition of meaning, it is restricted in its ability to explain how human beings do languaging. On the other hand, as the human behavior paradigm ignores the communication process, it is limited in its usefulness: it ignores the who of communication. Cognition and communication are jointly necessary for human languaging. The interrelationships of who, what, and how, of individual and social components, of cognitive constraints and communicative constraints, and of the computational paradigm and the human behavior paradigm indicate that research on human languaging must be an interdisciplinary venture. (TJ) Publication Type: Speeches/Meeting Papers Education Level: N/A Authoring Institution: N/A Note: Paper presented at the Annual Meeting of the International Communication Association (Chicago, Illinois, April 25-29, 1978)
0.9075
FineWeb
["Language Processing", "Cognitive Grammar", "Human Communication"]
As a workaround, follow this workflow: 1. Cap the top with a planar surface 2. Knit the 2 surface bodies to form a solid 3. Shell the solid with .5 mm You will get this warning: The Thickness value is greater than the Minimum radius of Curvature. The shell may succeed, but could cause undesirable results, such as bad geometry. To find the Minimum radius of Curvature, use Tools, Check. That being said, the end result might be close enough to what you need: BTW, there is an interesting warning on the shell feature in the tree: OMG thank you so much! Finally got it to work!
0.8947
FineWeb
["Resolving Shell Command Issues", "Troubleshooting Part Design", "Using Check Tool for Verification"]
Sister, you belong. Period. You belong because your Savior invited you to His feet to be just as much a learner as your brothers in Christ. You have just as much access to His teaching and just as much responsibility in His kingdom. There is a fine line between correction and condemnation. How do we—people of the church—correct one another without condemning one another? If God is just and longs for his Church to pursue justice, part of pursuing justice in our American context means seeking ways to end racism in all of its forms: from my individual relationships to institutional injustices.
0.6304
FineWeb
``` [ "Women in Christianity", "Correcting vs Condemning", "Pursuing Justice and Ending Racism" ] ```
- 1 spaghetti squash - a bit of olive oil - 1 onion - 1 tsp garlic powder - Vegetables + spices of your choice - 80 g tomato puree - 100 g grated mozzarella - First, wash the squash, cut off the ends and cut it in half lengthwise. Then, scoop out the seeds. - Next, rub the cut side of the squash with a bit of olive oil, place face down on your baking sheet lined with parchment paper and bake in a preheated oven at 200 ° C for 30-40 minutes (cooking time depends on the size of your squash) - In the meantime, in a frying pan fry the onions and garlic in oil and chop the vegetables. - Add vegetables then simmer. Pour in marinara and add spices to taste. - Afterwards, take out the squash and set the oven to 180 ° C. - Using a fork, scrape the inside of the squash bowls to release the spaghetti-like strands of squash. - Mix the spaghetti squash strands with your vegetables and fill into the squash bowls. - Sprinkle with mozzarella and bake for another 15-20 minutes (at 180 ° C) until the cheese is completely melted. - Serve and enjoy.
0.7344
FineWeb
{"topics": ["Ingredients", "Preparation", "Cooking"]}
Christmas Special is a special Slenderman's Shadow Map. It is based in a snowy area (believed to be Santa's workshop) and is also set during the night. The objective of this one, unlike most of the other maps, is to find 8 presents hidden throughout the map. It is quite hard because some presents are small and are sometimes on the ground. There are large buildings shaped like large wrapped Christmas presents to navigate through, often containing power generators and shelves. Instead of Slender Man being the one chasing you, it's Santa Claus. - There was a glitch where you could climb up mountains and go on Slendy Claus's head, but it was fixed later. - This is the first map with snow. - He quotes "Merry Christmas. Ho ho ho ho ho!" when he kills you. - This is the first map of Slenderman's Shadow that Slenderman isn't in. - When you get startled by Slendy Claus, instead of a low piano note playing, he quotes "Ho ho ho!". - The person you play as doesn't make any breathing sounds.
0.5124
FineWeb
["Christmas Special Map", "Gameplay Mechanics", "Slendy Claus"]
Quid allows users to derive data driven insights from extensive, unstructured data sets. Not only do we collect and analyze the world’s unstructured data, but we also present this data to customers in a way that instigates insight and inspiration, and ultimately adds value to their business. On the engineering side, we’ve been faced with the challenge of visualizing information organized into networks in a meaningful and efficient way. The networks consist of nodes, or vertices, and links between them, also called edges. The vertices stand for real-world entities, and edges represent a variety of relationships between them. This structured illumination helps us understand a given data set faster and more fully. While network generation takes place on Quid’s infrastructure, the interactive visualization runs entirely in your web browser. This means displaying and manipulating thousands of vertices and edges in an environment not known for speed. Quid engineers are always looking for ways to up the amount of data on display, while keeping a responsive frame rate; optimizing the network visualization is one of the more satisfying improvements we have made recently. Read on to find out how we did that! Providing Structure to the Unstructured Some networks are easier to draw than others. Spiders, for example, weave nets that are both […]Read More - How does Quid create reliable business intelligence? - Our First Engineering Game Day - Improving search with Word2Vec and Wikipedia - Managing text data you haven’t seen and can’t control - Using deep learning with small data - It’s NOT the Stork: Where Tests Come From - Quid Hackathon II - Here’s a suggestion - Optimizing The Rendering Engine - Quid Hackathon - Major League Data Visualization Event @ Quid! - The Ups and Downs of a Chef Shop - Reaching Equilibrium in Web Browser Network Visualizations
0.9347
FineWeb
```json [ "Data Visualization", "Network Optimization", "Business Intelligence" ] ```
If you ask most people, phones, radio signals, and televisions all seem like they've been around forever, but it all actually started with a date that changed the astronomy world forever. October 4th, 1957 was when the Russians launched their first space-borne satellite that could communicate with Earth. Sputnik was a satellite that could provide beeps and wasn't hampered by many of the problems that Earth's current radio waves had, such as a need for clear weather and a lot of power. This revolutionized communication, and when television satellites let a show produced in Canada be shown in Europe at the same time, this was a new way to get information to the people. Although the channels were used for broadcasting government news, soon entertainment television began to become popular. More and more satellites and channels were added to orbit and to the world, and information and connections began to spread. Phones were soon connected, allowing anyone to talk to anyone, and the world was more connected than it had ever been before. Now information, images, and social injustice could be shown on countless television screens across the world, bringing with it new change and a deeper understanding of the events of the time period. With satellite television becoming popular, more and more satellites were being launched into space. The way that satellite TVs work is that they are kept in orbit above the equator to keep them still, then a signal is fired at them from an antenna, is bounced off the satellite, and then is directed into a home with a satellite dish. Then the signal goes down the dish and into the television set to produce the desired channel. Even with digital television and other forms of media starting to replace it, satellite television is still used today. However, as the satellite era began to rise, many people questioned the value of the astronomy that put them there. Most people don't even know what astronomy is, choosing to simply say 'it's the study of the stars' and then leaving it at that, but it's so much more. Some people theorize that the reliance on our phones is stopping us from being naturally curious and that is putting a lack of wonder on the world. Since people are less curious they don't go into the sciences, specifically astronomy. Then people don't understand how the satellites work and why they need to be where they are. With astronomy reaching the forefront of the world again thanks to new ideas, discoveries, and plans for the future, it's important to remember that the basic ideas of astronomy, math, and science are the core ideas and methods that make our modern world go round. So try to look at astronomy through a different lens the next time you see it because we all need to understand how important it is for the modern world we've created. Without the knowledge of the past, we might have trouble protecting the future.
0.861
FineWeb
["History of Satellite Technology", "Astronomy and its Importance", "Impact of Satellite Television on Global Communication"]
You might have heard a lot of buzz about the wonders of NoSQL lately, but you might still be left wondering when to actually use a NoSQL database. This new class of technology emerged as an answer to the limitations of relational databases in handling Big Data requirements. Although NoSQL databases can vary greatly in features and benefits, most offer greater data model flexibility, horizontal scalability, and superior performance over relational databases. If you: - Need to handle large volumes of structured, semi-structured, and unstructured data - Follow modern development practices such as agile sprints, quick iterations, and frequent code pushes - Prefer object-oriented programming that is easy to use and flexible - Want to leverage efficient, scale-out architecture instead of expensive, monolithic architecture Then you should consider adopting a NoSQL database like MongoDB. Companies of all sizes, from the latest startup to well-established Fortune 100 companies, have built amazing modern applications on MongoDB. If you've never thought that a database could directly result in powerful business outcomes, then consider the following success stories with MongoDB: - One of the world's leading insurance companies unified their siloed customer service data into one application in just 3 months after failing to do so for 8 years with a legacy relational database - A leading telecommunications provider accelerated time to market by 4x, reduced engineering costs by 50%, and improved customer experience by 10x by using a NoSQL database - A Tier 1 investment bank rebuilt its globally distributed reference data platform on new NoSQL database technology, enabling it to save $40M over five years Find out more about how a database can accelerate your business by downloading our white paper today.
0.8655
FineWeb
["Introduction to NoSQL", "Use Cases for NoSQL Databases", "Success Stories with NoSQL"]
Please, don’t get me wrong. In a blog post I wrote about the trap of asking too many questions in the classroom, I make a point in avoiding the many questions we ask, mainly to our teens. However, I know that many educators are now questioning it because, in fact, inquiries in the classroom can lead to connecting ideas and elaborating more on a topic. Right. And I’m not against questions at all. In fact, I pointed out in the blog post that I ask questions, sometimes too many. My point was to focus more on students’ doing the job and not us being frustrated by the lack of elaborated, more in-depths thoughts. Questions are valuable, but with a few twists, we can make them a more powerful element in a task. Take, for example, a lesson I’ll soon be teaching. In the teacher’s guide of the book, you start the lesson by asking question about an image of Times Square to talk about advertisement. Lead-ins and wrap-ups in teachers’ guides are , in many cases, questions. So, imagine we had this photo and there were questions like, “How do you feel about a place like this? Is it similar to the place you live? Where is it?” Nothing wrong with those questions, and if you have a group of participatory adults, I’m sure you’ll have an effective, interactive start, but, again, my point here is that you are doing the job by asking the questions. Now, consider starting the same topic by asking students to COMPARE (ACTION VERB) these places, what they see, what called their attention, which one they prefer and the reason for their preferences. It isn’t a big change, right? OK, but here, students are in charge of the task, they will be activating their brains in the comparison being more active in the process, finding the language they need to communicate their thoughts. Then, you can ask them to decide which photo is more similar to the place where they live, moving to a more personal approach. Another way that we can introduce the topic in a more surprising way is to show students a text (available here) and ask them to PICTURE the scene. They can DESCRIBE or even DRAW it. Then, they COMPARE their thoughts with each other. How are they similar/different? For this activity, I chose Lebron’s Should I commercial for Nike because of the richness of the text, which makes it easier for descriptions, and also because they might not guess it is an advertisement. So, the surprise element works well for a lead-in. After eliciting their creative ideas for the text, I’ll let them know it was in fact a commercial, and I’ll ask them to PREDICT what kind of ad it is and who the advertiser is. Then, we will watch the video to check if their predictions were right. http://pt.englishcentral.com/video/11319/lebron-james-what-should-i-do (there is a whole story behind this ad because it seems it was made as to respond to his Cleveland fans’ uproar when he left Cleveland Cavaliers and moved to the Basketball team Miami Heat). One more idea to introduce the topic of ads and buying power is to tell a story. I’d do it with Coke’s commercial. Here’s the setting: THE SETTING: Every day, thousands of South Asians laborers arrive to Dubai to work for a better future, saying that, “If working here means my wife, kids and parents can be happy, then I would stay here forever” “We do this so that our children can be educated. If they can have a better life, then my life will be worthwhile” “I long to hear their voice every day, even for a couple of minutes. 
If I could do that, it would make me so happy” THE PROBLEM: With an average income of $6.00 per day, the workers have to pay up to $0.91/min to call home, making it nearly impossible to connect to their families regularly. THE TASK: You are the employer of this company in Dubai. How could you alleviate a bit of these workers’ homesickness? How could they connect more often with their family? DESIGN a viable solution to this problem. They’d work in small groups and come up with a solution through a brainstorming process with post-it notes. Next, I’d show them Coke’s advertisement: To wrap up any of these three ideas, I’d ask the students to LIST some of the advertisement strategies companies use to sell their products. They’d compare to the most popular ones available at http://smallbusiness.chron.com/5-common-advertising-techniques-15273.html and would decide which technique was used in the commercials or ads we’ve explored. For an extension of the activity, students could FIND examples of those ad strategies to share with partners in the following class. So, questions are still part of the lesson, but they are not the task itself. They are embedded within the task in which students’ active role is a prominent feature. Don’t get me wrong, then. Questions are part of the deal, but they cannot prevail in your lesson plan as the only teaching strategy. Next time, you plan a lesson, have Bloom’s verb chart to help you VISUALIZE how you can make your students more active in the learning process.
0.7435
FineWeb
["Teaching Strategies", "Advertisement Analysis", "Student-Centered Learning"]
Did you ever wonder certain things about the coffee you're drinking, such as where it came from? How did it get here? Why do I like this brand/flavor/blend so much? What about the history of coffee? Did you know that it's been around for thousands of years in different parts of the world? Today's most popular brewing methods are certainly not new, and some have been around for centuries. Just for fun, while you're drinking your favorite cup of coffee, check out a few additional fun facts about coffee. - Did you know that light roasted coffees actually have more caffeine than dark roasted coffees? That's because the darker the coffee bean, the longer it's been roasted, and the longer a coffee bean is roasted, the more caffeine is cooked out of the bean. - Did you know that the coffee bean actually comes from a cherry? That's right, your coffee comes from a bush or tree that bears a cherry-like fruit. If you plucked one of those cherries off the tree and peeled it open, you'd find a green seed or bean inside. It's only after that bean or seed has been dried and roasted that you get that wonderful cup of coffee. - Today, coffee can be made from over 50 different species, but only two of them – Robusta and Arabica – are used in mass coffee production. And when we're talking about mass production, consumers around the world drink over 500,000,000,000 (that's billion) cups of coffee every year, most of them at breakfast time. - Coffee has been banned more than once in its history. Charles II of England tried to suppress coffee houses in 1675, and Frederick the Great restricted coffee in Prussia in 1777 because he was worried about economic stability. You see, so many people were spending their money on coffee imports, and he wanted to keep that money inside the country. - Did you know that coffee is often said to be the second most traded commodity in the world today, next to oil? Coffee is a wonderful beverage, and it comes in so many different varieties, roasts, and flavors. Perhaps, someday, if you're dedicated enough, you can try all of them!
0.8979
FineWeb
```json [ "History of Coffee", "Coffee Production", "Coffee Facts and Trivia" ] ```
Kenn Heydrick, President-Elect for the Science Teachers Association of Texas. What does STAT stand for? STAT stands for Science Teachers Association of Texas This definition appears frequently and is found in the following Acronym Finder categories: - Science, medicine, engineering, etc. - Organizations, NGOs, schools, universities, etc. See other definitions of STAT We have 92 other meanings of STAT in our Acronym Attic - Staatssicherheitsdienst (State Security, former East Germany) - Software for Ambient Semantic Interoperable Services - South Tidewater Association of Ship Repairers - Short Term Air Supply System (emergency breathing system) - South Tyneside Assessment of Syntactic Structure (language assessment tool) - Standard Training Activity Support System (Navy-CNET) - Submarine Towed Array Surveillance/Sonar System - Surveillance and Target Acquisition Aircraft System - Science and Technology Assistance Team - Scientific Test and Analysis Techniques (various organizations) - Security Test and Analysis Tool - Security Threat Avoidance Technology (Harris Corporation) - Short Term Assessment and Treatment - Short Turn-Around Time - Signal Transducer and Activator of Transcription - Situation Triage and Assessment Team - Slotted Tube Atom Trap - Small Transport Aircraft Technology - Society of Teachers of the Alexander Technique Samples in periodicals archive: 10 /PRNewswire/ -- The Science Teachers Association of Texas (STAT) presented its "1995 Recognition of Service to Science Education by a Business" award to Exxon Corporation (NYSE: XON) today in honor of Exxon's million dollar Texas Exxon Energy Cube program.
0.9342
FineWeb
["Science and Education", "Organizations", "Acronym Definitions"]
The Banner/C2 Program is responsible for the financial and technical oversight, project management/development and vendor management of the following applications: - Banner Customer Information System - Certification and Compliance Diversity Management System also known as (C2). The Banner Customer Information System: - Provides comprehensive customer accounting facilities for the City’s water and sewer services. - Maintains records of customers, premises, services, accounts, meter readings, and inventory. - Provides the means to record and bill for services in a cost effective, efficient, and manageable way. - The Banner application is used primarily by the Department of Water Management and the Department of Finance. Certification & Compliance System (C2) The C2 System is a web-based tool that provides: - Enhanced Minority, Women, and Disadvantaged Business Enterprise (MWDBE) Directory with key-word search - Online tracking of MWDBE Goal Attainment - Online verification of MWDBE payments - Flexible reporting capabilities for staff - The C2 Diversity Management System is used primarily by the Department of Procurement Services and City contractors.
0.982
FineWeb
``` { "topics": [ "Banner Customer Information System", "Certification & Compliance System (C2)", "Vendor Management" ] } ```
What is it? Blue Cohosh is an herbal medicine used for missing menstrual periods, painful periods, and historically for false or early labor (birth) pains. Other names for Blue Cohosh include: Caulophylum, Papoose Root, Blue Ginseng, Yellow Ginseng, Blueberry Root, and Squaw Root. Ask your doctor, nurse, or pharmacist if you need more information about this medicine or if any information in this leaflet concerns you. Tell your doctor if you - are taking medicine or are allergic to any medicine (prescription or over-the-counter (OTC) or dietary supplement) - are pregnant or plan to become pregnant while using this medicine - are breast feeding - have any other health problems, such as high blood pressure or heart or blood vessel disease Talk with your caregiver about how much Blue Cohosh you should take. The amount depends on the strength of the medicine and the reason you are taking Blue Cohosh. If you are using this medicine without instructions from your caregiver, follow the directions on the medicine bottle. Do not take more medicine or take it more often than the directions tell you to. To store this medicine: Keep all medicine locked up and away from children. Store medicine away from heat and direct light. Do not store your medicine in the bathroom, near the kitchen sink, or in other damp places. Heat or moisture may cause the medicine to break down and not work the way it should work. Throw away medicine that is out of date or that you do not need. Never share your medicine with others. Drug and Food Interactions: Do not take Blue Cohosh without talking to your doctor first if you are taking: - Heart disease medicines (examples: Lanoxin(R) digoxin, Cardizem(R) Dilacor(R) diltiazem, Calan(R) Isoptin(R) verapamil) (2) - Before taking Blue Cohosh, tell your doctor if you are pregnant or breast feeding - Do not take Blue Cohosh if you have heart disease (7) - Children have been poisoned from the bright blue bitter tasting seeds (4) - Children should not take Blue Cohosh (4) - The berries of the plant are toxic and not used as medicine (5) Stop taking your medicine right away and talk to your doctor if you have any of the following side effects. Your medicine may be causing these symptoms which may mean you are allergic to it. - Breathing problems or tightness in your throat or chest - Chest pain - Skin hives, rash, or itchy or swollen skin Other Side Effects: You may have the following side effects, but this medicine may also cause other side effects. Tell your doctor if you have side effects that you think are caused by this medicine. - Call your doctor if your blood pressure increases (2) 1. Anon: British Herbal Pharmacopoeia. British Herbal Medicine Association, Keighley, UK; 1983. 2. Newall CA, Anderson LA, Phillipson JD: In Herbal Medicines, A Guide For Health-care professionals. Royal Pharmaceutical Society of Great Britain, UK; 1996. 3. McGuffin M, Hobbs C, Upton R et al(eds): American Herbal Products Association's Botanical Safety Handbook. CRC Press, Boca Raton, FL; 1997. 4. Duke JA. Handbook of Medicinal Herbs. CRC Press, Boca Raton, FL; 1985. 5. Tyler VE: The Honest Herbal. George Stickley Co, Philadelphia, PA; 1982. 6. Betz JM, Andrzejewski D, Troy A et al: Gas chromatographic determinations of toxic quinolizidine alkaloids in blue cohosh Caulophyllum thalictroides (L.) Michx. Phytochem Anal 1998; 9:232-236. 7. Jones TK & Lawson BM: Profound neonatal congestive heart failure caused by maternal consumption of blue cohosh herbal medication. 
J Pediatr 1998; 132(3 pt 1): 550-552. Last Updated: 1/27/2017
0.5757
FineWeb
["Blue Cohosh", "Herbal Medicine", "Drug Interactions"]
For the most part, since early childhood I suppose, most of us haven’t particularly cared for rules. Rules are confining. Those who are inclined toward creativity and proclaiming their originality are especially prone to finding ways to bend the rules to their own will. There’s an old saying: You first have to know the rules before you can break the rules. Dalai Lama XIV is quoted as saying, “Know the rules well, so you can break them effectively.” With respect to wearing pattern (and color), there are rules that neither you nor I invented that, when given due respect, will allow any one of us to appear to others in a way that is both agreeable and very individual. The fact of the matter is that the eye seeks visual harmony and is distracted or annoyed by visual dissonance or incongruity. That is one of those rules or laws of nature. When what you are wearing is harmonious in color and pattern, the people who see you better enjoy the experience and you enjoy a better reception. It is in knowing the rules, including the major-minor rules, and then effectively following, breaking, or bending them where a clear path to individual style is made visible. Not to be confused with monotony, harmony is a pleasing or congruent arrangement of parts. It’s all about how you put it together. I’ve heard it said by others and I’ve said it myself when seeing someone who is either a particularly well proportioned human being or is simply dressed in a way that is especially well done, “She (or he) is put together!” Harmony – being “put together” – can be achieved in a nearly endless variety of combinations of pattern, color, and texture. In this forum, we will deal mostly with the mixing of patterns, and that in the simplest of terms. Pattern, or form – whether stripes, checks, plaids, paisleys, or geometrics – is based on lines, both straight and curved, and how they are configured or relate to one another. The successful wearing and mixing of patterns – achieving visual compatibility instead of optical vibration – involves several factors, and especially these: - Scale (proportion) - Intensity (contrast) - Type of pattern (stripe, check, etc.) Rules to Guide You: 1. When combining two like patterns – two stripes for example – vary the scale of each. If the jacket pattern is a large plaid, then combine it with a shirt (or contrasting vest) exhibiting checks that are closer together. If your shirt has narrow stripes, then you can wear it with a jacket that has wider stripes. This rule holds true, even if another item in the total look is not solid, but of a different pattern. 2. When combining two different patterns – a stripe and a check – they will better harmonize if similar in scale. The exception to this rule – and aren’t there always exceptions to every rule? – is the combing two smaller or tight patterns. If at least one or both are muted or of subtle intensity, then you will probably not give others a headache when they look at you. Otherwise, if one pattern is small/tight, then it is likely best combined with another pattern that is larger in dimension. 3. Mixing three patterns – a herringbone jacket, check shirt, and stripe tie – is especially in harmony when all three patterns are similar in scale and intensity. Even when all of the pieces are from the same color family, the use of multiple patterns creates substantial visual interest. 
If you want to create more “pop” – a desire that I frequently hear – then consider choosing a tie of bolder intensity than the suit and shirt or varying the dimension of one garment in the ensemble. Not forgetting the simple elegance of suits, jackets, shirts, and ties of solid color made from beautiful cloths, the rules above provide a basic framework for successfully wearing patterns. The point is to use pattern to your every advantage to announce your individuality and to communicate clearly who you are and what you’re all about. Mixing it up with style and substance,
0.6248
FineWeb
```json [ "Visual Harmony", "Pattern Mixing", "Style Rules" ] ```
Although hidden and out of sight, dryer vents perform a vital role in your home. Dryer vents remove hot exhaust air from your clothes dryer to ensure effective and safe operation. However, through the process of drying your clothes and with repetitive use, dryer vents can become clogged with lint, dust and other debris. Cleaning your dryer vent on a regular basis is critical. Not only does this help maintain the efficient functioning of your dryer, cleaning ensures your entire home remains safe. Why Cleaning Dryer Vents Is Important According to the National Fire Prevention Association, 15,450 house fires in 2010 were caused by home dryer machines, and the majority of these (32%) were triggered by dryer vents that hadn’t been cleaned. These fires greatly threaten the safety of homes and families as well as cause millions of dollars of damage to property. Dryer vent fires most often occur due to lint and other debris that builds up in the vents after repeated dryer use. Lint is made up of small fabric fibers and dust particles that are released from clothes when they are washed. Lint is naturally highly flammable, and combined with the heat from a clothes dryer, can quickly kindle a fire. Clothes dryer fires can often cause significant damage to a home before they can be controlled. Not only can clogged dryer vents lead to fires, they also reduce the efficiency of your dryer, provide the potential for carbon monoxide poisoning, and promote conditions for the spread of mold and allergens. As dryer vents become clogged and the machine has to work harder to remove exhaust, the dryer efficiency drops and your energy consumption climbs. Without being fully removed by the dryer vent, carbon monoxide can leach into your home causing death or long term poisoning. The buildup of lint and dust in the warm moist environment can also lead to mold growth and excess dust mites. Mold and dust mites can make your whole family ill, while potentially triggering severe allergies or asthma in those who are susceptible. Keep Dryer Vents Clean Although uncleaned and clogged dryer vents are the leading cause of dryer machine fires, they can be easily prevented with proper dryer vent maintenance and cleaning. One of the simplest things you can do to prevent dryer vent fires is to ensure you remove lint from your dryer filter before and after each load, to eliminate lint buildup. Although this won’t reach all the potential lint buildup in your dryer, it can help to reduce the total lint burden in your dryer. Another essential component of home dryer maintenance is inspecting your outer vent flap to ensure that it is not obstructed by any debris or build up. Professional Dryer Vent Cleaning Professional dryer vent cleaning is an essential aspect of your dryer vent maintenance. While checking your dryer vent yourself at home can help to minimize build up and reduce the chances of a fire, professional dryer vent cleaning can reach areas of the vent that you can’t reach alone. No matter how good your regular dryer vent cleaning is, you will always require professional dryer vent cleaning from time to time. Authorities recommend having your dryer vent professionally inspected and cleaned at least once a year for greater dryer vent safety and performance. Dryer vent maintenance is an essential aspect of keeping your home safe and efficient. Maintain and clean your dryer vent regularly at home and book professional dryer vent cleaning services at least once a year to remove long term lint buildup.
0.7361
FineWeb
``` { "topics": [ "Dryer Vent Importance", "Dryer Vent Cleaning", "Dryer Vent Safety" ] } ```
Intel is announcing that its "3D" Tri-Gate transistor is going to enter production for the first time, in a 22nm processor codenamed "Ivy Bridge". "3D" may sound like a marketing name, but Intel takes 3D transistors very seriously. The design was first demonstrated in 2002, so the concept is not breaking news, but it is finally entering production with a 22nm processor. The key elements here are "transistor density" and "power efficiency", with an emphasis on the latter. Intel says that Tri-Gate allows the company to build processors that have more transistors while using less power. Tri-Gate lets Intel control the flow of electricity in a much more efficient manner, the company says – the idea is to turn inefficiencies into processing power that users can tap into. Tri-Gate is a fundamental improvement that should allow Intel to push the limits of Moore's law (again), which dictates that the transistor density of processors doubles every couple of years. Tri-Gate is important for reducing the cost per transistor, but this technology might also be crucial to lowering power consumption in mobile devices. This could prove to be a key element for Intel's long-term goal of getting into the handset market: Atom-based processors will also benefit from Tri-Gate. Intel also says that a lower voltage will be tremendously helpful for graphics processing. Why? Because graphics processors (GPUs) are extremely dense. And because Intel has started to integrate GPUs into its processors with Sandy Bridge, it's critically important for the company to have a transistor foundation that lets it pursue integration. That said, don't expect the current paradigm to change: Intel considers its graphics processors more of an "enabler" for its CPUs than a product class in itself. Intel believes that the current implementation of Tri-Gate can work (as is) until 14nm. Beyond that, "further innovations will be needed".
0.6505
FineWeb
```json [ "Intel Tri-Gate Transistor", "Transistor Density and Power Efficiency", "Moore's Law and Future Applications" ] ```
by Oliver Milman (TheGuardian) The largest migration on Earth is very rarely seen by human eyes, yet it happens every day. Billions of marine creatures ascend from as far as 2km below the surface of the water to the upper reaches of the ocean at night, only to then float back down once the sun rises. This huge movement of organisms – ranging from tiny cockatoo squids to microscopic crustaceans, shifting for food or favourable temperatures – was little known to science until relatively recently. In fact, almost all of the deep ocean, which represents 95% of the living space on the planet, remains inscrutable, despite the key role it plays in supporting life on Earth, such as regulating the air we breathe. Scientists are only now starting to overturn this ignorance, at a time when this unknown world is being subjected to rising temperatures, ocean acidification and the strewn waste expelled by humans. “The deeper we go, the less we know,” said Nick Schizas, a marine biologist at the University of Puerto Rico. “The majority of habitat of Earth is the deeper areas of the ocean. Yet we know so little about it.” Schizas is part of a new research mission that will, for the first time, provide a comprehensive health check of the deep oceans that future changes will be measured against. The consortium of scientists and divers, led by Nekton, is backed by XL Catlin, which has already funded a global analysis of shallow water coral reefs. The new mission is looking far deeper – onwards of 150m down, further than most research that is restricted by the limits of scuba divers. We already know of some of the creatures of the deep – such as the translucent northern comb jelly, the faintly horrifying fangtooth and the widely derided blobfish – where the pressure is up to 120 times greater than the surface. The deep sea was further illuminated during the film director James Cameron’s cramped solo “vertical torpedo” dive to the 11km deep Mariana trench in 2012. Yet only an estimated 0.0001% of the deep ocean has been explored. The Nekton researchers are discovering a whole web of life that could be unknown to science as they attempt to broaden this knowledge. The Guardian joined the mission vessel Baseline Explorer in its survey off the coast of Bermuda, where various corals, sponges and sea slugs have been hauled up from the deep. “Every time we look in the deep sea, we find a lot of new species,” said Alex Rogers, an Oxford University biologist who has previously found a new species of lobster in the deep Indian Ocean and huge hydrothermal vents off Antarctica. Courtesy of Guardian News & Media Ltd
0.6946
FineWeb
["Deep Ocean Exploration", "Marine Life Migration", "Ocean Conservation"]
Rishabh Jain, Bartlett Mel; Benefits of a Hybrid Spatial/non-Spatial Neighborhood Function in SOM-based Visual Feature Learning. Journal of Vision 2010;10(7):956. doi: 10.1167/10.7.956. Neurally-inspired self-organizing maps typically use a symmetric spatial function such as a Gaussian to scale synaptic changes within the neighborhood surrounding a maximally stimulated node (Kohonen, 1984). This type of unsupervised learning scheme can work well to capture the structure of data sets lying in low-dimensional spaces, but is poorly suited to operate in a neural system, such as the neocortex, in which the neurons representing multiple distinct feature maps must be physically intermingled in the same block of tissue. This type of "multi-map" is crucial in the visual system because it allows multiple feature types to simultaneously analyze every point in the visual field. The physical interdigitation of different feature types leads to the problem, however, that neurons can't "learn together" within neighborhoods defined by a purely spatial criterion, since neighboring neurons often represent very different image features. Co-training must therefore also depend on feature-similarity, that is, should occur in neurons that are not just close, but also like-activated. To explore these effects, we have studied SOM learning outcomes using (1) pure spatial, (2) pure featural, and (3) hybrid spatial-featural learning criteria. Preliminary results for a 2-dimensional data set (of L-junctions) embedded in a high-dimensional space of local oriented edge features show that the hybrid approach produces significantly better organized maps than do either pure spatial or non-spatial learning functions, where map quality is quantified in terms of smoothness and coverage of the original data set.
0.9535
FineWeb
["Neurally-inspired Self-organizing Maps", "Hybrid Spatial/Non-Spatial Neighborhood Function", "Visual Feature Learning"]
I don't know how true this is, but it's an interesting read. Narrator: How does the universe work? Does good and bad exist? Do we really know all of humanity's history? How did the Human Being appear? Does God exist? What is the Spirit? What is going to happen in 2012? Who are the Indigo children? Did Atlantis exist? Where do we come from? Where are we going? What is the meaning of it all? Imagine there were an explanation to all these questions, for all that happens. An explanation that unites science and faith that could explain both, physical and spiritual. Imagine that someone begins to remember that conception of the Universe and that person remembers his life and other lives before being born. He remembers people, remembers beings, remembers missions and aims, and remembers the structure of everything we know, think and feel as the Universe. This is a summary if the basic concepts of our existence, how we are made, how our context works, the truths and structures of the things we think we understand about the great importance of things we ignore in life. After Tumti: The Universal Heritage First Part: Someone Who Remembers Who Am I? My name is Matias De Stefano, I am 22 years old and I'm from Venado, Tuerto, Argentina and my purpose is to remember. Since I was 3 years old, I began to remember things before I was born to help me organize people. We are able to remember when we activate a part of our brain that unites us with all of the Cosmic memory. Everyone can do it, but some of us are specialized in it. We are allowed to remember historic events that have occurred before what humanity knows and all the Universal memory to understand, today, the processes of the planet and humanity. At the beginning, the memories were mild, which I used to show and tell stories to my friends for them to enjoy. But after, the remembering process began to be harsher because I started to have a lot of headaches up to the point that I had to bang my head for the pain to go away. One image after another would appear, such as sentimental memories and pain. I would write about this information and these images through pictures and notes the way many Beings, who I could see, told me to do. These are beings that are with all of us but some of us have the capacity to see them and they helped me to organize this information, to know how to use it. Really, I always took it as something normal. I didn't see it as anything strange until I was about 14 years old when I realized I was conscious of things that the rest of the people didn't realize and in my context, the people didn't really know it because it was something normal, until I was about 13 or 14 years old. The only person I shared this information with was my Mom. She knew everything that was happening, although she didn't understand anything I was talking about because in my family, nobody understood anything about this. She comprehended me, she accepted that this was happening to me and she was the most important support I had to be able to not suffer the remembering. Really, at the beginning, the aim wasn't very clear. I even used to doubt if I was schizophrenic because of all the things I could remember, because sometimes it didn't have a lot of meaning. There were a lot of loose memories and I thought it was to allow me to tell stories, to write, live and eat from selling books. 
I didn't understand very well what it was for, what I did know is that it used to drive me to desperation, and it had a lot of relation with something that was going to happen in my future life. Although later on, when the information started to organize itself in my mind when the information started to organize itself, I started to realize that it was going to allow me to help other people to organize their information, …Cosmic information, terrestrial information, spiritual information… everything matched together, and a lot of things could be explained through simple things I could remember. Narrator: Amongst many other things Matias can remember, Sayonic, which is a language which he has been allowed to remember to explain, and understand the cosmological history in a close and familiar way to the people who once spoke about these subjects. Its origins goes back to the year 9000 BC and was organized by priests who lived on the current Egyptian coast, with the aim to make people from different languages, beliefs and cultures able to understand each other in a nation of freedom. What I have come to say with my memories, the message for the people, is not really a message. The only thing I can tell them is to calm down because we are on the right path. What I can tell the people in the best way possible is to organize their information, without saying really anything new, although for many, it's very new, without all the complexity of the Human knowledge, in a more simple way… in an easier way of understanding, through simplicity. Explaining clearer, the historical events, destroying the myths that make humanity live in desperation at the moment. The reason why I can remember is because of my position before I was born. Actually, many of the children that are born now are able to remember although they are not able to remember most of the things… the general knowledge of things. They can remember their previous lives, where they came from, from which sun, why they came, etc… But I could remember the general knowledge of things, of history, cosmology… because I used to work in what is known on earth as the Central Akashic Records which I used to call, when I was a boy, Thamthiorgah. Narrator: Thamthiorgah, in Sayonic language, is the place that all call Central Akashic Records. The records of information is what Matias calls the "Spinal Cord of God". Akasha is a word with Sanskrit origins which is used to refer to a Cosmic Plan which works as a file where all events, situations, emotions and actions of a Being are recorded. That's where all the history of the planet is registered, as well as all the personal history of each of us. In these files, the purpose of life and the program of our destiny according to karma or learning experiences are registered. My job was to work with large amounts of information, that’s why it didn't affect me that much to remember. Many who begin to remember when young, may end up autistic, schizophrenic or may even die before 13. But because I knew how to work with this information, they allowed me to remember more and more and although I suffered a lot from the age between 13 and 17, I was able to control and organize it. The things I could remember at the beginning were my previous lives, but soon they began to expand a lot. 
And some of the things I have come to organize and speak about goes back even to the beginning of the souls, back to the creation of what we know as God, the different humanities from the stars, talking about the Confederations of the Galaxy and all that, the waves of souls that were born in the different worlds, how reincarnation works, which are the systems, the Laws of the Cosmos. Also about the history of humanity, especially the unknown history. The history about Atlantis, Lemuria, the races that influenced the creation of humanity, why did they influence all of the historical processes of the Ages, until reaching Aquarius and to understand the recent process of Aquarius, all the new souls getting born in this period and why. And how we need to learn how to live on this planet and what are the procedures to folllow from today and the next 200 years. Narrator: Our environment, including things and people, the events that happen, the culture, the races and many factors of the physical life are known on earth as that which we must know, experience and discover, all to be able to live in society. And this is the ENVIRONMENT AS A MIRROR. Everything you can find outside are reflections of what we have inside. All this information can be used by the people to organize their context, to compare the context which in they live and move around, in a moment of No-Time and No-Space, this is to understand ancient history, where we came from, where we are going and why we are here. So to understand the context, those who need to, will help them find themselves with their own mission, their own evolution path, understanding why they are here with their own mission, their own evolution path, understanding why they are here and to know what is their script in this whole process. Always from a general point of view because to find out why each of us is here is our own problem. I'm not here to tell each person their mission, only to show the general mission of the group. Narrator: We must learn how to make the difference between what belongs to us and what doesn't and what affects us, not because it's immoral, ugly or beautiful but because of our feeling within a relationship with other things. This process within may be done through meditation. This helps us, while closing our eyes in silence, to find not what is spiritual, but what is inside to discover ourselves. All this information goes beyond just informing people about things that happened, about things that existed, just to have it filed in their library. It has to be used more to make people understand what is their function today. Second Part: Another Conception of the Universe Life can be divided into two periods to understand how it all begins. It can be explained on the etheric plane or in the physical way. Of course, when the Etheric began, the Physical didn't exist and life was interpreted another way. It was interpreted as the Essence. The Essence is run at the Spiritual level. Narrator: The Spirit is the Essence, sparks of God's body, his Electrons. The Spirits don't have any form, they are made of pure light. They contain all the knowledge from the Origins. The spirits are born from what is known here as the Source, which could be described kind of like the solar plexus chakra of God. Where all things come from, the Spiritual level is a very subtle level that doesn't even have an energetic vibration. This means it doesn't have an energetic or etheric density. It's just pure light. 
This pure light expands itself throughout the Universe and when it condenses itself, it become molecules, compact energy. When it compacts itself, then matter begins to form. This matter forms itself because a chaos of densities begins to happen in the Cosmic walls that generates what's known as Chaos. Narrator: Spirits are born with 2 main functions: 1. to make all that became dense go back to the pure light…and 2. to integrate all of the experiences of the Cosmos to understand what is already known and allowing each of them to become a new God. It expands and contracts, and in this expansion and contraction the spiritual essence has to make the matter subtle again to bring it back to its Source. That subtle process is what we call reincarnation. The reincarnation process is to allow the Spirit to be in the matter and to bring the matter back into pure light. This means that we have to break the idea that matter is impure and the Spirits have to return to the source. Matter is also pure and we, as Spirits, incarnate so the matter can become light again. This process is what was called, "DHUATER TUMTI KEI DHU URNUS ATERTI, which means, "Bring the Sky to Earth and give back the Light to the Sky". That phrase involves everything we are living. The process of reincarnation demands us to evolve because this whole process reaches the integration of all things. So, all Souls and Spirits, actually Spirits, need to understand that everything happens in the Universe to be able to become another Universe. All Spirits have to incarnate. How do they do it? Through another dense energy which is known as the Soul. Narrator: The Soul is the closest dense vehicle that Spirit has. This is composed by different energies, that's why it's dense because it's not ONLY pure light. Its body is well known as the Chakras: Base, Sacral, Plexus, Heart, Throat, Third Eye and Crown. These are the energetic glands that allow the Spirit to connect with the maximum density: Matter. Each Chakra corresponds with each gland in the physical body. The Soul is created in the cells of what we know as God and they are Etheric accumulations which are capable of vibrating with the level of the matter. This allows the pure light to go into the matter to evolve and understand all the existence in the Being which we call God… to be able to be another God. And this is possible through experience, the experience of living what exists in the Universe in all its aspects, in each of its dimensions. In this moment, we are going through the 3rd dimension, so we are integrating the experience of living in the 3rd dimension. We have this experience thanks to the organization that has been created in the Universe which is through matter and the decomposition of matter. The deterioration of the matter is what allows the experience in the Universe, and we call this the deterioration of time. Time is an essence that exists to deteriorate the matter, so it only exists in the physical world. This experience can last for years or millenniums, depending on what each Soul and Spirit needs to learn as a whole. Narrator: The evolution has a wide history which can be found in the Soul, and can be used by the Spiritual Beings to practice in the physical worlds. There are various steps to become what we are today. There are different types of incarnations. We know the human one the best. But actually, the different incarnations begin in the energetic levels. First a Spirit has to practice the energy and incarnate in what's known as a Soul. 
First adapting itself to the features of a Soul, and to learn how to manipulate what a Soul Being is. After that, it starts to practice the incarnations in the molecule and gas level. Narrator: As Spirits, we must first practice the least dense to incarnate and this is the adaptation to a Soul. The densification of things allows us to understand the fluidity of the physical things inside the dense worlds. The maximum density allows us to know how matter feels and to recognize our limits in the physical world. Plants allow us to understand the channeling of divine light and how to anchor into the earth. For this to happen, we must practice the photosynthesis process for a long time and integrate it day by day. Being animals allows us to learn the movement and control of our body, decision making, instinct, interaction with the rest of Beings and communication. Being rational animals allows us to practice spirituality, meditation through recreational activities, culture and the sense of family. After it's the moment to learn, through approximately 70 lives, the unification between Earth and Sky, taking into consideration all of what we have learned before. This is the last physical level, together with the extraterrestrial. Angels & Beings from the 7th dimension: In this level, the evolution changes. Here they work in service of the physical worlds, using their experience to guide who is going through physical density. After the 7th dimension starts another type of evolution, which is more etherical. This means that the Beings who already went through the physical levels start to incarnate in etheric levels because they were able to illuminate their matter and ascend through matter. When one ascends through matter, a new evolution process begins because his body has already become light. Now, their process is to help who is lower down, to say it somehow, and to make them understand the process of illumination of matter like the ones we know as Jesus, Buddha, Muhammad and many others that were born and are not so well known in our society. But their evolution continues going up until they reach the 15th dimension where they incorporate all that essence in the different dimensions to understand the integration of that essence. All this process takes us to what I called in one of my writings as Lumina, which is the Etherical level par excellence of pure light. This process is difficult to describe with words because it does not correspond with our evolution level or our dimensional level. So, the only thing I can say about this is that it exists but we must not worry about it. The first dimension we know is the initial point. They are all of the little dots we see in the sky which we call Prana. They're dots of light that shape and create all things. The 2nd dimension is the projection of those dots of light. For example, something visible for us in the 2nd dimension are shadows. After the shadow multiplies itself to the sides, that creates the vibration of 3rd dimension with depth, and this is the plane we live in today. This is based especially in geometry. The 2nd dimension can be understood as the drawings in numerology, in the mathematical level, and the 3rd dimension can be understood as geometry. After the 4th dimension is sacred geometry. This means the application of geometry in a vibrational level. It's a moment where matter starts to understand that there is no time or space. The only thing that exists is the here and now. 
The 5th dimension goes further than sacred geometry because it understands the essence of sacred geometry. This means the understanding that each geometrical block completes another form of a Being. This means that each of us forms another. Although today in the 3rd dimension, we have the understanding or theory of that we have still not reached the assimilation of this. The 6th dimension is a projection of all the totality of existing things in the integration level. How is this understood? The 6th dimension is where everything is possible. This is the next level where the minds of the autistic live in, for example. It is a level where one creates its own reality, where one can create their own geometry. The 7th dimension is the integration of that geometry in pure light. It's a level where the Beings lose their faces, to become, simply, a guide and they go to a christic level. A christic level of vibration that goes up to the 10th dimension, more or less. In these planes is where, what's known as, our Cosmic Fathers or Pleiadian extraterrestrials live. They are beings that can go from one dimension to another easily. They are beings that can go from one dimension to another easily. In those dimensions one can do this, being visible in the 3rd or 9th dimension. The rest of the dimensions are difficult for the human mind to understand. For example, the 11th dimension is where everything moves around like placentas. All the vibration moves like a swell of energies that make up everything. Narrator: In the universe, the orders, judgments, archangels, seraphs, guides and many other Beings from high dimensions in a subtle way, carry out the function of politicians. They are the ones that control the order of the people, of the worlds and help their social welfare to direct their economy, allow them to freely learn, they move around the worlds to help the community and its evolution. Beyond those hierarchies, this does not necessarily mean that they are superior. To be in higher dimensions does not mean to be superior nor above anyone or anything like that. What it means is different types of vibration, different vibrational levels. There are many Beings from the 15th dimension that need to learn things from the 3rd dimension. This means they are not integrated yet. This does not mean they are superior, it just means they are different. In the Universe, there is a dimensional hierarchy that moves this way but this is just to organize the patterns of the universal function. Actually, all of them exist in the same level. This means that here where my hands are, all the dimensions are happening at the same time. But my conscience only allows me to see the 3rd dimension. GOOD AND BAD This does not mean better or worse. Narrator: In the sky, the economy is energy-given and simple. This is understood as the fluidity of energy and information. This is the exchange of essences, Karmic agreements, borrowing of historys and the exchange of energy all in such a subtle level that is practically imperceptible. What we know as evil, which is not really evil, can even reach higher dimensions like the 18th, for example, to keep the balance between Giving and Receiving. Every energetic system works by not leaving a single empty space. If someone gives, he has to receive immediately and is now seen as something negative and dark by humanity. 
This takes us into the next theme: GOOD AND BAD The Light Beings established the economy in the physical worlds so the Souls could move around and survive with the exchange of needs. It does not mean evil, it's just another was of working, another way of evolving. Light evolves through freedom, self-control, free will and support, and allows a lot of time for the process. On the other hand, "evil", or darkness, is another way of evolving by establishing deadlines. Quick deadlines: You have to learn this in a year. If you don't, something bad will happen. So, it's just another process to evolve and because humans understand things through morality, it's not a very understood level on Earth, but it's another way of evolving quicker and many people decide to choose it. Third Part: THE UNKNOWN HISTORY Narrator: In school, we are taught that history started in 3000 BC. The way we know about history is through sources that have survived through time. But even things we thought we knew 40 years ago, now are found out to be lies. How do we know that the historians knew what happened in 3000 BC? Inside this whole dimensional process of evolution, we can find the historical process inside the 3rd dimension. This process starts about 6000 million years ago, but for humans, it's more recent…about 30 million years ago. The process of the creation of humanity was programmed first through genetics by the beings who transmute the genetic information and evolution. These are the Beings we call Eternal Beings or Nature Spirits. These Beings project all the forms that emanate from God in the physical worlds. So, really, Darwin's theory is true, although there is a detail that is missing. There are many races on this planet like vegetable, animals and also humans which are not originally from this planet, but are mutations or historical additions on this planet. All these Beings were brought here for an evolutionary need. Human's history begins on this planet around 24,000 BC., when the 1st prototypes of humanity were created, who are known as Adam & Eve, although Eve was not the 1st woman, it was Lilith. But there were already humans on Earth. They were not actually the first. They were the 1st prototype of the humans we know, the ones that were most similar to us. They were quickly created because there was a cultural addition, to say it somehow, from other planets that helped the human race to become what it is. Why did they do it? It was not a random event. It's not that humans were made by extraterrestrials and not from God, like some people say but the Divine Plan from the Angelic Level was in contact with the Beings from the 9th dimension, who are also extraterrestrial, and they passed over the Angelic Plans to the physical worlds through Beings who are connected through the spiritual world. And they followed the Plan according to the needs of the Cosmology. Narrator: Behind the history of humanity, a story of conspiracy started to unfold, both terrestrial and extraterrestrial, about the control of this widely rich and varied world, known today as the stories of the Reptilians, Rigelians and Illuminati, amongst many others. Planet Earth was going through important changes with invasions of other races which were not positive for the planet like the renowned Reptilians. 
These Reptilians were negative for the Earth's evolution, so the Galactic Federations, who are the Beings who have a close relationship with the Beings from Angelic levels, projected over humanity, which was growing a new humanity, creating what we know as the human prototype who we are today. This human prototype is a copy of the Angels idea from the etheric emanation, on the Earth through the extraterrestrial. This is where our history begins. Narrator: The problem is that history and time are circular. This means that events repeat themselves differently but with the same patterns. History should not be taken as a list to know what happened until our days, it should be understood as a complex camouflaged order that shows us the mistakes that can repeat themselves. Obviously, it's very different than the way it's taught in class, although it has already started to come out. The problem of this history is that it can be read over and over and it's very sensationalistic and rigid in some points. This was like this, or like this, and there is NO other opinion. Or there is a conspiracy or a plot behind humanity. In reality, it is not exactly like this. We should not create a schizoid persecutory delirium with humanity's history. Humanity's history happened this way because it had to happen this way and all the problems that have been created happened to allow the change of humanity and the evolution of consciousness in humanity…and created specific work in an area. That’s why humanity is governed because of the constellations, which began from the Ages, which lasts for about 2,160 years. Narrator: Earth spins around the sun in a process which lasts about 365 days, but at the same time, our Sun spins around another sun which is a lot bigger, called Syria, about every 26,000 years. As a year on Earth, the Sun's year has its seasons, equinoxes, solstices and ages, too. This has an influence on the historic events on Earth. The Solar year we are going through began approximately 21,210 BC, with the Age of Capricornia. The 1st prototypes that were created were the ones we know as Lemurians, which I call Lomiom. Lomiom belonged to the whole Pacific. It's a race that created lemuria, how we know it today… well, how some know it. After many historical problems, Atlantis started to develop in the Atlantic, which, since I was small, I called Gefislion. This country extended itself through the whole Atlantic Ocean and created, on the warm part of the planet, loads of civilizations and colonies which helped to organize the planet and kept it in the direction of the Cosmic Plan. Narrator: How did some civilizations begin to write with complexity overnight? How, after only 1,000 years of human civilization, the Egyptians began to build monuments so spectacularly calculated and designed from mathematics? In only 10 years, how were so many Gods created to worship? Why has the Sphinx shown degradation for over 9,000 years, when the Egyptologists say it was built only 5,000 years ago? All these civilizations that went from 13,000 to 6,000 BC, tried to apply a system of terrestrial balance and human information. This is how the whole plan of building the pyramids and the old temples, began, and today, there are only a few remainders left. The pyramids are thought to be tombs but were never tombs. Narrator: The first civilizations, such as Lemuria, Atlantis, Mayans and Doors of the Sun, are the ones from which many other civilizations were created in 10,000 BC. 
Later on, humanity had to go through another type of evolution which was not so much stellar as about working on the Earth and on humanity itself, the cultures and the rest. That's why, since Taurus, around 3000 BC, the history that we are allowed to remember began: humanity's history, the beginning of "Civilization" for many teachers. This began in Egypt, where humans started to practice spirituality through society. Keeping clear that the history we know goes through Taurus, Aries and Pisces, the Age we are going through now, we can understand that history develops according to the energies that flow from the Cosmos to the planets, in our case Earth. This energetic pressure that comes from the stars guides the events that happen on Earth, mainly because everything is interconnected. Narrator: The energies of the Cosmic environments are factors that mold the energy of the Soul so that it holds on to the physical body with one intention: to learn and fulfill its mission in this specific moment. This is why the Cosmic Order determines the steps we take, our history, our map and route, our feelings, relationships, gifts and so many other aspects, to create the mechanisms necessary to allow us to learn and carry out what we had agreed to do before we were born. Once we arrive at the dawn of the Sun, what we know as 2012, the intention of the Ages in the cycle changes, so the vibration of all the worlds, that is, of those stars, also changes. What does this mean? It's like when spring begins. Everything that has been worked through the "known" history starts to open toward a type of history which is totally unknown. Not because the world is going to go through a horrible change, but because there is an increase in the vibration and the energy coming from the Sun. And this makes the Earth transform its energetic vibration. When this energy level changes, it vibrates in a different way. When it vibrates in a different way, it changes color. The vibration level of a planet creates a different color due to the emanation of heat. And all the Souls have to adapt to that color. And the color that is vibrating today on the Earth is what is known as Indigo. Today, people talk a lot about the Indigo children. It's not a group of Souls that come with an indigo suit or some sort of indigo spiritual level. They are just new souls that are coming to work on Earth during this period, and to be able to be born on the Earth they need this indigo color so they can work within the vibration of the Earth. This demystifies the story that many books have created about "special" kids, "indigo" kids, as if nobody were indigo except that percentage of indigo children. It's not really this way; it's the Earth that is vibrating in indigo. What does this indigo color mean? Indigo is the color of the 3rd eye, which we know as the eye of visions. Narrator: This color is transmutation, and the Souls have come to create in the way that adapts best to each of them. If their context is aggressive or very passive, they will create changes through aggressiveness and by breaking ideas at the family level. They will do it through sexuality, politics, vandalism, art, indifference and tribalism, and even through pure and possessive love. But they will change everything, because it's their mission: to change things to create the context that is needed. They also transmute the vision of what is, transform the vision, create ideas, creativity.
So they work on these levels, creativity and idealism, but changing this creativity and idealism, because they transmute. So everything that is born on planet Earth starts to transmute. This transmutation happens in different ways… through aggressiveness, conflicts between societies, transformation through kicking or through tranquility. Through action and non-action; both are very useful for their universe. So this vibration makes everything new that comes onto the planet vibrate with it. Any soul being that comes to Earth once the curtains start to shift in the 1980s becomes indigo. This means every tree, stone, animal and human born from the 80s onward is already indigo. It's not a special group. The "group" that is seen as special depends on the vibratory level one has when coming to Earth, not because it is indigo, but because of the specialty that each one has on that level; this is what makes each human different. This is why some of the indigos are warriors, others creative, some totally peaceful, and others just ignore everything. For example, they don't necessarily have to believe in God or talk about the Universe just because they are indigo. Indigo is a vibration that transmutes; an indigo could do it through the economy or politics without believing in God. Narrator: It has nothing to do with spirituality; it has to do with vibration. The classification of Souls has to do with the vibration, and the amount depends on their types of vision. All of them have agreed on what they have to achieve in this world, but the only thing they depend on to achieve their mission is for the adults to stop worrying about their well-being and education. The best way adults can help them is by forgetting about them and starting to listen to themselves. The Crystal, in comparison to the Indigo, is a being that comes from the Christic levels… they are the hundreds of thousands of little Christs that have come to fulfill the mission of unconditional love. And they are the generations that started to arrive from 2000 onward. The Souls that practice the most spirituality are the Crystal children; they have also come to this Age as a group of Avatars, as they are called today. They are a Soul group that has come here to work spirituality through harmony. This means a spirituality that does not have to relate to God or Angels, but to the harmony of societies and of the person within. Fourth Part: THE NEW TIME Narrator: Education, no matter how many changes it has gone through in history, has always had the essence of teaching how to learn on Earth: PHYSICAL EDUCATION: to adapt to the physical world; PHYSICS AND CHEMISTRY: to learn about the formation of things; ECONOMY AND NATURAL SCIENCES: to manage resources to survive; MATHEMATICS AND TECHNOLOGY: about the logic of God's body; LANGUAGES: about the communication between Beings; PSYCHOLOGY AND ETHICS: about the relationship with Beings. Education should help us to adapt to the world, to consciously learn how to live with the world, to learn about ourselves and the rest as a whole. The problem society is having now is, "What must I do with these indigo children?" Should we shut them up in school, or let them destroy the world? What must adults do with the indigo children, and what must the indigo children do with the adults? The indigo children have come to transmute whatever they can, so at the beginning they are going to do it through what we think is non-action.
What non-action really does is stop the social flow, stop the movement; it's like a counter-action to what one expects of an indigo revolution. The idea of having to create something new is not the first reaction of an indigo. The first reaction is to sit down and do nothing that promotes a system. So creating an education and a society for an indigo child is very difficult. This difficulty is something the adults are going to have to face for at least the next two decades. People will start to see that any type of teaching system they try is going to fail somehow, because the indigo have not come to stay on the planet; the indigo vibration changes the context for what is going to stay afterward. So what has to be done is to allow them to express their creativity of change in the best ways possible. It has to be totally flexible for their generation, for their creation and work on planet Earth. Narrator: For this to happen it is necessary to break the education systems we are used to, which are based on memorizing, competition, abuse of authority, lack of creativity and imagination, and the importance of the mental over what comes from the emotions. A new system should base itself on the emotions, on learning through experimentation and discovery that encourages integration. This is what can adapt best to an indigo's vibration, because it hasn't got a system that directly conducts the learning of the person. It's a wide system that accepts any type of system, ancient, modern and even future. It allows an extension of the vision of learning that goes further than just educating. It goes toward the integration of the Human Being, from when one is born until one dies. It allows every pedagogical system to unite and debate, not to create a new school, but to create a new way of learning. It goes further than what any of us have come to work for in education. This is actually what we indigos and crystals have come here for: to learn and to help learn. To learn and to help learn, and nothing else. Where did we learn about ecosystems? Visiting a forest, or with a photograph in a book? Where did we learn how to use our bodies? Dancing and playing, or in a notebook with complex words? How do we learn a language? Writing and reading, or communicating with a group? How do we learn the Theorem of Pythagoras? Memorizing the formula, or discovering it the same way Pythagoras did? We know that the Earth spins around the Sun, but do we ever look up at the sky trying to understand why? So learning happens through practice and experience, not theorizing. Theory is only useful for understanding a part of what's important. Some tools with which one can work with an indigo child (well, more than work, help guide them so they can fulfill their mission) are these: everything they are taught that is useful, for example things they have to do at home, at school or in society, must be applicable inside the school, the house and society. They mustn't be taught useless things like learning every bone in the body, every element in the cell or complex mathematics. We are Beings (indigo/crystal) that come from the 6th to the 13th dimension to try to promote the 4th and 5th dimensions inside the 3rd. If they shut us in a class to teach us mathematics of the 2nd dimension when we are promoting the 5th, it's a bit difficult. So everything that is taught should be applicable to everyday life, because that is really where our job is, where we must all work.
Now the world can be seen in the 4th dimension; it's wide and circular in all directions. So using a blackboard to give the information is not a positive thing, but using the walls would be; they're a tool, and all the kids now write on the walls. What they use the least is the classroom blackboard. Also, they must be taught not to feel afraid of nature, the dark, thunder, wind… this would help them a lot to move in the integration of things and not in the polarity of things. But the most important thing for managing this is obviously the parents and teachers. They are the first ones that have to change their vision, and the first ones that have to break with their concepts about sociability. This means, for us, that the parents should stop being parents. Parents really should start to be guides. Guides of life, not there to tell us what must be done and how. They should give advice about what is best to do, so as to be a guide or companion. The same with the teacher: he shouldn't be someone who tells you what you have to learn. The teacher should be someone who learns with the student, a learning companion. Someone with whom you can have a discussion in order to learn, because no matter how much one thinks one knows about something, there is always something new to understand. And it must be learned together. So take into consideration that the parents must become guides, and the teachers must become companions. Narrator: According to the Mayan calendar, time divides itself into cycles which have a beginning and an end. Other cultures have also calculated the cycles within which humanity moves. All of them have calculated the exact dates when an Age would change. The Age we are in now, according to the calendars, finishes on the 21st of December, 2012. 2012 is a year that people are speaking about so much, and December 21, 2012, especially, is an equinox. It's the Sun entering its Spring. Earth enters Spring on September 21st in the southern hemisphere. On December 21st, 2012, it's the Sun that goes into its Spring, so it starts to blossom. My advice to everyone for 2012 is to stay totally calm, because there is no need to fear what may happen. There is no need to be catastrophist or sensationalist. It's just an equinox. The changes can be felt days before and days after. Spring begins days before "Spring"; flowers start to bloom weeks before the spring equinox, and there are still cold days after it. So we must not think that 2012 is an absolute "day to night" change. It's just an equinox, where many things can happen. Things that so many prophets and people have been talking about, but it depends on each of us to live that reality. He who wants to live the apocalypse that so many people are talking about will see it. He will live it and feel it. He who is balanced in his conscience, heart and especially in his stomach, which is the "I am", won't live that apocalypse; not because the extraterrestrials will take him from the planet, but because he won't see the apocalypse as something destructive, but as a possibility of change. It's just a change. Even the word apocalypse means this: it's something that comes after… It's a change, so don't feel scared. Don't be afraid. Don't believe in the natural disasters that people say are going to happen, because you are already living them; it has already begun. And nobody says it's the end. The process is already happening. It's just a change. It's a moment where a period of time comes to an end.
The Sun's night ends and it begins its Spring; dawn starts to break, the day begins. This involves a huge change, because the vibration changes along with an electromagnetic change. Our brains, bodies, planet Earth and all the systems we use, like computers and video… work electromagnetically. The Sun also works through electromagnetism, so the light of the Sun may alter during the dawn of the Sun. Why? To be concise: when one is awake at night and sees the dawn, you can feel a breeze and the birds start singing. It's a totally different atmosphere. The temperature goes down and then it suddenly goes up. What is happening? During the dawn, everyone starts to go through a change, a regeneration. The Sun is going to go through the same process; that same breeze begins, but that breeze is electromagnetic because of the fire that the Sun creates. So it's not humanity's fault that all this is happening; it's just a cycle, a natural process of the Earth where we have all chosen to be born, to experience it and to help everyone who doesn't understand this change. Don't be afraid of any electromagnetic changes inside your body, or if you feel dizzy. Don't worry, you are not going to explode. Maybe you will perceive things beyond normal vision. Maybe you will feel a lot of sudden changes in your physical and emotional structure. Some of the changes can even result in schizophrenic deliriums, but it's nothing that you should worry about. Also, about these 3 days of darkness that people are talking about: if we have an electromagnetic overload, then of course the electromagnetic systems are going to fail. So yes, maybe the lights go off and there will be darkness for a period of time. But what is the problem? Is it the end of the world? The problem is the mass hysteria. That's why we must stay calm and tranquil. That's why my advice is not to stay in big cities with a lot of people. It's not that there are going to be more disasters where there are more people because God says so… but because the people, even if nothing happens, are going to get nervous just because it's 2012. So my advice is to stay calm, and during this period, don't try to do anything other than staying calm and being present on the Earth. Stay completely anchored on the earth, with your feet as though they were deep in the ground, because really, it's the only thing that is expected of us in this process. If the wind comes, we must stand very straight so it can't push us around. It's the only thing we should do for now: stay calm. THE DAY AFTER After 2012 and for the next 50 years, important changes are expected to happen in the economy, politics and society, where the intention behind creating things in the world is going to have to change. There is going to be a lot of migration due to changes in the economy and food supply. People are going to have to move to new cities. The Poles are going to end up melting, as many people are saying, so the water levels of the oceans are going to rise. Not only the coasts, but also the inland lakes are going to raise their water levels. Politically, there may be a bit of social disorder, where we are all going to have to manage politics… and not just one politician on his own. And maybe the political and climatic changes are not going to allow society to restructure itself for at least 30 years. The next 50 years are going to be the transition process. And for those who say that in the years 2012-2014 an era of love and peace is going to begin, this is not true.
And for those who say that a time of chaos and total destruction is going to unfold, this is not true either. Simply, there is going to be a change, and each person will see what they want to see. But we must take into consideration that society is going to change a lot, not because the Universe has said so or an Angel has come to say it will be this way, but because there may be a shortage of food and water, and this will make things change. The factors that will provoke the change during this transition will be very human. Don't despair if you can't see the light at the end of the tunnel, because all of us came here to live the transition; we decided to come to experience this transition. The light is for our children and grandchildren, for the indigo generation… not the adults' generation. Later on, things will start to organize themselves, and the crystal, golden and platinum children will start to organize the new structure from 2040-2050 and will have it in place by 2080-2090, and those are the years when we will fully be in the new Age we call Aquarius. Aquarius has already begun vibrationally, but it hasn't begun geologically or astrologically yet. We shouldn't expect a sudden change from the Age of Pisces to the Age of Aquarius. Another thing we have to take into consideration for the future is that no one is going to want to form groups; everything is going to be very individual. So don't get frustrated if you want to create a new society in the future, because everyone is going to go their own way. This is how the Age of Aquarius works. The evolution is internal; it's not external anymore. In the next 100 years, the transition process is going to demand an internal and individual process from each person. What one sees outside will just be the reflection of what's inside. Narrator: In the Bible, Luke 22:10, Jesus says to his disciples, "As you enter the city, a man carrying a jar of water will meet you. Follow him to the house that he enters…" This passage clearly foreshadows the transition from the Age of the fishermen (the Age of Pisces) to the Age of the water carrier, the Age of Aquarius. The Age of Pisces was characterized by the need for groups guided by a person who knew or understood the path. Today, during the transition, many are expecting a Messiah to come, someone to guide them. But during the Age of Aquarius, one is guided by oneself. Each person will find all their answers inside themselves. There are no masters to follow. Nobody should tell us where to go. Now we have to guide ourselves, but how? Who are the best "teachers" to show us how to do this? They are the masters that have never spoken, and that's why they are the best masters: the TREES. The trees are the ones that can show us the best way to live on this planet: with deep roots in the ground, a straight trunk to channel all the light from the Sun toward the Earth… and providing oxygen for everything around them so that all Beings can live with it. Just by meditating, without closing your eyes, while looking at a tree, you will understand what I am talking about. They were the first ones that came to anchor the light, and that light has to consciously come back to the planet. So be like the TREES and bring the Sky back to Earth. Source: Sound of Heart
0.5277
FineWeb
```json [ "The Universe and Spirituality", "The Evolution of Humanity", "The New Time and Aquarius" ] ```
Why Choose Us? - We limit the scope of each workshop to specific Common Core State Standards and Principles and Standards for School Mathematics to ensure an in-depth understanding of concepts and transfer of knowledge to the classroom. - We guarantee the outcomes identified in each workshop description through research-validated methods and materials. - Materials are field tested and mapped to Principles and Standards for School Mathematics, Common Core State Standards and expectations. - We increase student achievement, generate teacher enthusiasm, and change attitudes towards the teaching and learning of mathematics. - We give participants professionally prepared materials, which are listed in the workshop descriptions. - We offer graduate credit and CEUs. Welcome to TSMM: Thinking Strategies for Mastering Math® was established in 1991 to significantly enhance the teaching and learning of mathematics for kindergarten through fourth grade educators and their students. The goal of Thinking Strategies for Mastering Math® is to improve the quality of math instruction in the following four components of mathematics: - Number Sense, Operation Sense, Algebra, Measurement, Data - Addition and Subtraction, Place Value and Trade, Algorithms - Multiplication and Division, Place Value and Trade, Algorithms - Geometry for Third and Fourth Grade Students The change we offer educators in these components is created through research on how students learn mathematical concepts. Our mission is to provide elementary educators with the methodology for teaching fundamental math concepts deemed important by the Common Core State Standards, Principles and Standards for School Mathematics (2000), and research. Our vision is to be recognized by elementary educators as their first choice for professional development workshops and classroom-ready materials for teaching mathematics.
0.9982
FineWeb
```json [ "Mathematics Workshops", "Teacher Professional Development", "Elementary Math Education" ] ```
Marketing Manager at Plantriskassessment Member Since April 2016 Plant Risk Assessment: We provide services for Plant Risk Assessment, Plant Safety Assessment, Plant Auditing, Plant Regulation The first step to establishing a systematic, pro-active approach to managing health and safety in the workplace is a comprehensive plant risk assessment. The legislation covering OHS requirements in the workplace varies across Australia. However, moves have been made towards a nationalised system with the introduction of the new Work Health and Safety Act 2011.
0.8681
FineWeb
["Plant Risk Assessment", "Work Health and Safety Act", "Plant Safety Assessment"]
Erasmus and Christian Humanism Humanism is a philosophical belief that the human race can survive without paying attention to existing superstitious or religious beliefs, and its approach is based on reason and humanity. Experience and human nature are mainly recognised by humanists as the only founding principles of moral values… It is a logical philosophy which is based on human beings' belief in dignity, derives its information from scientific principles and gains its motivation from human compassion and hope (Fowler 139). Most humanists share a common belief based on individual freedoms and rights, but also hold that social cooperation, mutual respect and individual responsibility are equally important. In addition, they believe that the problems bedevilling society can only be solved by the people themselves, which can improve the overall quality of life for everyone. In this way, humanists maintain their positivity through the inspiration they acquire from their daily activities, the natural world, culture and various forms of art. They also believe that every individual has only one life to live and that it is his or her personal responsibility to shape it in the right way and enjoy it fully. Humanists encourage positive relationships, human dignity and moral excellence while enhancing cooperation and compassion within the community. They also see the natural world as the only place where they show love and work, thus setting a good example to the rest. They accept total responsibility in the course of their daily actions as they struggle to survive and enjoy the diversity around them. Humanism strives to move away from religious or secular institutions through a philosophy that shuns existing traditional dogmatic authority. Characteristics of humanism include being democratic and ethical, making creative use of science, insisting that social responsibility and liberty go hand in hand, and cultivating creative and ethical living. Humanist commitment is enshrined in responsible behaviour and rational thought, which facilitate quality of life in society. Humanists also believe that human beings and nature are inseparable, though the latter is indifferent to human existence. They likewise believe that living is the most significant part of life, that it overshadows dying and heavily contributes to life's overall purpose and meaning. On moral values, they believe that these are not products of divine revelation or the property of religious tradition and therefore must be developed by human beings through natural reasoning (Fowler 183). Understanding of nature should thus be the guiding principle in determining and reflecting on wrong as well as right behaviour. Furthermore, they hold that human beings have the capacity to differentiate and choose between good and evil without the existence of any potential incentive of reward. Humanism is based on a rational philosophy which gets its inspiration from art, its information from science and its motivation from compassion. It tries to support the affirmation of human dignity while maximising individual liberty and opportunity consonant with social and planetary responsibility. It heavily advocates for extensive societal democracy and the expansion of society as well as social justice and human rights. Humanism is devoid of supernaturalism since it recognises humans as part of nature while laying emphasis on ethical, religious, political and social values.
Therefore, humanism tends to derive its life goals from human interests and needs rather than from ideological and theological abstractions, and it further asserts that human destiny lies in humanity's own responsibility (Fowler 219). Humanism provides a way of living and thinking that tries …
0.5869
FineWeb
```json [ "Humanism", "Erasmus", "Renaissance" ] ```
Friday Fun Facts About The Eiffel Tower The Eiffel Tower is probably the best known landmark outside the United States. It's been featured in books, magazines, and movies and has held its place in history ever since it was constructed. It's a truly amazing structure. In 1889 the World's Fair in Paris celebrated the 100th anniversary of the French Revolution. There was a contest to construct a monument at the entrance of the World's Fair. A company called Eiffel et Compagnie, owned by Alexandre-Gustave Eiffel, got the contract. Eiffel himself didn't actually do the design. That honor goes to one of his employees, Maurice Koechlin. These two men also designed the interior framework of the Statue of Liberty years earlier. Here are some more fun things you might not have known about that big, iron thing we call the Eiffel Tower. Fun Facts About The Eiffel Tower - The four feet at the base of the tower correspond to the four points of the compass. - The final design called for 18,000 pieces of "puddle iron," a type of wrought iron used in construction in those days. - You'll find the most well known structure in Paris on the Champ de Mars. - The Eiffel Tower is 1,050 feet high and was the tallest structure in the world for 41 years before being passed by the Chrysler Building in New York City. - The tower is made of iron, weighs about 10,000 tons, and has 2.5 million rivets holding it all together. - The tower was originally created as a temporary exhibit for the World's Fair and was almost torn down in 1909. - Saving the tower paid off: during World War I it was used to intercept enemy radiotelegraph signals. - Hitler also gave orders to destroy the tower during World War II, but the order was never carried out. - A tower of this size is bound to have some movement. Metal expands with heat: sunlight causes the top of the tower to lean away from the sun by about 7 inches, and the tower also grows about six inches taller in the heat. It also sways slightly in high winds. - The Eiffel Tower is lit up at night by about 20,000 light bulbs. - The tower has 108 stories and 1,710 steps, but there are two elevators. During WW II the French cut the elevator cables so the Nazis would have to take the stairs to the top. - About 60 tons of paint is applied every seven years to help prevent rust. - It's one of the most copied structures in the world, with at least 30 replicas in various sizes around the world. Some Final Thoughts The Eiffel Tower had an extreme makeover in 1986. It's the most visited paid monument in the world, welcoming 7 million people each year. Five hundred employees maintain the restaurants, elevators, and security for the international landmark. Have you been there? Tell us about your visit!
0.9814
FineWeb
["History of the Eiffel Tower", "Design and Construction", "Interesting Facts"]
- To develop regulations and standards in system integration (e.g. EEDI definitions and implementation rules). The Working Group will ideally consist of engine builders, engineering companies, research organisations, system component suppliers (such as electrical equipment and automation, controls, batteries, gears, propulsors and thrusters, heat exchangers, and steam turbines), and system integrators, including engine users, shipyards, and Classification Societies. - To engage and gather input from the component industry for shipping and land-based applications - To develop hybrid system design principles - To provide input on the development of internal combustion engines in diesel-electric installations - To discuss development drivers for energy efficiency optimisation concepts in ships and land-based applications - To contribute to the development and promulgation of multi-source energy system design optimisations for ships and land-based power plants - To contribute, together with Classification Societies, to the development of regulations, adjusting the existing rules to reflect the state of the art in system integration design principles - Regular meetings are held once every six months and are usually hosted by one of our members.
0.9373
FineWeb
["System Integration", "Hybrid System Design", "Energy Efficiency Optimisation"]
History of Broadband Impedance Matching This history of broadband impedance matching is organized chronologically by the birth date of each major design technique. Conceptual descriptions are for readers at the BSEE level, and mathematical symbolism and equations are minimal. The bits and pieces of matching technology are scattered over the past 70 years. There are some substantially different developments that nevertheless fit together in important ways. Some separate techniques are also crucial to several others, especially optimization or nonlinear programming. Three books and an article have been made available on this ETHW as downloadable PDF files to simplify reference retrieval (click on citations in blue type). More than 60 references are cited. The Broadband Matching ProblemA crucial task in transmitter, amplifier, receiver, antenna and other RF applications is design of an impedance matching equalizer network as shown in Figure 1. The goal is to transfer power from source to load by transforming complex load impedance ZL=RL+jXL to match a resistive or complex source impedance ZS=RS+jXS over a wide frequency band. These impedances are usually measured at a finite number of radio frequencies. Sinusoidal source voltage E at any particular frequency is applied to lossless equalizer input port 1 through ZS, which can provide the maximum-available source power PaS to the load ZL when input impedance Z1 = R1+jX1 = ZS* = RS−jXS (conjugate of ZS) according to equations (1) and (2). Otherwise, there is some power mismatch MM², which is the per-unit power reflected by the equalizer. Power mismatch is also expressed as Return Loss: RL = −20Log(MM). The goal is to find an equalizer network that minimizes the mismatch, thus maximizing the transducer gain GT in (1). Conjugate matching is not physically possible over a finite frequency band (Carlin and Civalleri, 1998:180). Note that reference page numbers may follow the citation year. As in Figure 1, real power absorbed by load impedance ZL is the same power entering a lossless passive network, namely |a1|² − |b1|² = |b2|² − |a2|², which is the difference between PaS and reflected power. At a given frequency the generalized reflection coefficients in (2) and (3) are also equal in magnitude to the hyperbolic distance metric in (4) associated with impedances Z1 and Z2 through ordinary reflection coefficients S1 and S2 in (5). Hyperbolic distance between points on reflection charts is described in Section 3. According to good practice, impedances are generally normalized to 1 ohm and frequency to 1 radian/second (r/s). Reflection coefficients S1 and S2 in (5) differ from (2) and (3) only in that the latter have reactance XS or XL added to X1 or X2, respectively. According to a 1947 book sponsored by the US National Defense Research Committee: ”The techniques for matching a microwave device over a broad band are not well defined, and no practical general procedure has been developed for ‘broadbanding’ a piece of microwave equipment.” (Montgomery et. al., 1947:203). That changed the following year (Fano, 1948) and has been evolving ever since. Most matching network research is based on lossless lumped L and C components, but there is a well-known frequency transformation that adapts those results to commensurate microwave transmission-line components (Richards, 1948). 
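As a concrete illustration of equations (1) through (3), the short Python sketch below computes the generalized reflection coefficient, power mismatch, return loss, and lossless-equalizer transducer gain from tabulated source and input impedances. It is only a minimal numerical aid to the definitions above; the impedance samples are hypothetical, and only NumPy is assumed.

```python
import numpy as np

def match_metrics(ZS, Z1):
    """Per-frequency match metrics at the input port of a lossless equalizer.

    ZS, Z1 : arrays of complex source impedance and equalizer input impedance
    (ohms), sampled at the same frequencies.
    Returns (|MM|, return loss in dB, transducer gain GT)."""
    ZS = np.asarray(ZS, dtype=complex)
    Z1 = np.asarray(Z1, dtype=complex)
    # Generalized (complex-normalized) reflection coefficient of equation (2):
    # the conjugate of ZS appears in the numerator, unlike the ordinary S1 of (5).
    MM = (Z1 - np.conj(ZS)) / (Z1 + ZS)
    mm_mag = np.abs(MM)
    rl_db = -20.0 * np.log10(mm_mag)   # Return Loss = -20*log10(|MM|)
    gt = 1.0 - mm_mag**2               # lossless equalizer: GT = 1 - |MM|^2
    return mm_mag, rl_db, gt

# Hypothetical samples, normalized to 1 ohm and 1 rad/s as recommended above.
ZS = [1.0 + 0.0j, 1.0 + 0.2j, 1.0 + 0.5j]
Z1 = [0.9 - 0.1j, 1.1 + 0.3j, 0.6 + 0.8j]
print(match_metrics(ZS, Z1))
```

The same routine applies at the load port by substituting ZL and Z2 for ZS and Z1, in accordance with equation (3).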
It is crucial to recognize four different network termination arrangements: The preceding outlines the most general broadband double match case, as in Figure 1, where both load and source impedances are complex. The simpler broadband single match case involves a complex load and a resistive source, i.e. XS=0. Filter networks are either doubly terminated with only resistances RL and RS terminating or singly terminated with ZL=RL and ZS=0. In the latter case, either an ideal voltage or current source may provide the excitation. Filter design research flourished in the 1930s and was well understood by 1950 (Green, 1954). Forerunner Technology 1939 - Two tools crucial to broadband matching were described in 1939 and have been relevant ever since. Darlington’s Theorem says that an impedance function of an arbitrary assemblage of reactive and resistive elements can be represented by a reactive (lossless L and C) network terminated in a 1-ohm resistance (Darlington, 1939). Applied to Figure 1, for ZL=1+j0 there is always an LC network that can produce any impedance function Z1(p) versus complex frequency p = σ+jω that is rational positive real. A positive real impedance function Z1 has R1>0 when σ>0 and X1=0 when ω=0. Positive real impedance functions occur as the ratio of specific polynomials in the complex frequency variable p. Darlington’s Theorem is evidently false if the 1-ohm termination is replaced by any other impedance, and that poses the compatible impedances problem (Youla et. al., 1997). The Smith chart, initially conceived in 1939 for transmission-line analysis, is a transformation of all impedances in the right-half Argand plane (RHP) into a unit circle (Smith, 1939). That bilinear transformation was originally just as in equation (5) with the Smith chart center Zi=1+j0 ohms, but is equally applicable to (2) or (3) where the chart center corresponds to ZS* or ZL*, respectively. The Smith chart is utilized in Graphical Matching Methods in Section 7, and the related hyperbolic distance concept is crucial to the H-infinity technique in Section 9. The hyperbolic distance between Smith chart reflection point S1* and S2 according to equation (4) was described in 1956: “The transformation through a lossless junction [two-port network] ... leaves invariant the hyperbolic distance ... . The hyperbolic distance to the origin of the [Smith] chart is the mismatch, i.e., the standing-wave ratio expressed in decibels: it may be evaluated by means of the proper graduation on the radial arm of the Smith chart. For two arbitrary points, W1, W2, the hyperbolic distance between them may be interpreted as the mismatch that results from the load W2 seen through a lossless network that matches W1 to the input waveguide.” (Westman, 1956:652,1050). That curved distance metric on a Smith chart (geodesic) also can be expressed by the voltage standing-wave ratio (VSWR), which is a scaled version of the hyperbolic distance or mismatch as opposed to the original transmission-line voltage interpretation (Allen and Schwartz, 2001). Analytic broadband matching theory was born in 1945 with a gain-bandwidth restriction on any single-match lossless infinite-element equalizer having a parallel RC load (Bode, 1945). This general limitation is a simple bound on the integral over all frequencies of mismatch (return) loss in decibels. 
For lowpass equalizers having an RC load, infinitely many elements, and a constant low reflection magnitude for frequencies below 1 r/s and unity above, the magnitude of S1 in (5) can be no less than e^(-Pi/RC), where ^ is the exponentiation operator. For corresponding bandpass equalizers, the minimum of the maximum constant mismatch is e^(-PiD) where the essential parameter is the decrement D=QBW/QL. QBW is the passband geometric-center frequency divided by the passband width, and QL is loaded QL=XL/RL at band center frequency (Cuthbert, 1983:193). This ideal result highlights the tradeoff between a good match over a narrow band and a poorer match over a wide band, Finally, the broadband design techniques described in Sections 4, 5, 8, and 9 require complicated polynomial synthesis procedures. Regarding both theory and computation, a practicing synthesis expert noted: "... the modern (insertion-loss) method of filter synthesis and design involves a very large amount of numerical computations, as well as, in most cases, the need to make choices that are anything but clear or simple. Furthermore, the numerical computations are nearly always very illconditioned, necessitating the use of either a large number of decimal places or esoteric procedures to overcome." (Szentirmai, 1997). Analytic Gain Bandwidth Theory 1948 - Analytic theory is required to understand gain bandwidth limitations; however, it can solve only simple RC or RLC single-match problems and requires those precise load models. Robert Fano extended Bode’s 1945 gain bandwidth theory for broadbanding the RC single match case by utilizing established doubly-terminated filter theory (Fano, 1950). The Fano technique replaces a qualified load impedance ZL (Figure 1) with Darlington’s resistively-terminated LC two-port network, so that what results is a doubly-terminated filter with resistors on both ends and two LC cascaded two-ports as the equalizer. The overall problem is to design a Chebyshev or elliptic equal-ripple doubly-terminated filter having a specified number of elements. The poles and zeros of the given ZL impedance function rigidly fixed one of those two-port networks constituting the matching equalizer. Therefore, there was less flexibility to choose passband width, response shape, and tolerance (flat loss) while maintaining physical realizability of the LC two-port matching elements. This last requirement was satisfied by a set of Cauchy integral constraints similar to Bode’s primary result cited above, except that the upper bounds involve the LC elements in the load-equivalent Darlington network and the right-half p plane zeros of the input reflection coefficient S1. Fano solved only a few special cases with performance tradeoffs for certain types of RLC loads, but more general results were “hampered in most cases by mathematical difficulties which lead to laborious numerical and graphical computations.” (Fano, 1948:34) Fano’s approach was soon made more simple and applicable, first by discovery of Chebyshev network element equations for the single- and double-match cases (Green, 1954) as later cited by others (Matthaei et. al., 1964:131). Less convenient continued-fraction expansion of Chebyshev polynomials for elements in single-match equalizers and singly- and doubly terminated filters also were reported (Matthaei, 1956), and extended to exclude transformers (Plotkin and Nahi, 1962). 
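To make the bandpass limit quoted at the start of this section concrete, here is a minimal numerical sketch of the ideal (infinite-element) single-match bound based on the decrement D = QBW/QL. The load and passband numbers are hypothetical and serve only to show how the bound tightens as bandwidth grows relative to load Q.

```python
import math

def fano_bandpass_bound(f1, f2, R_load, X_load_at_f0):
    """Ideal (infinite-element) single-match bound for a bandpass equalizer.

    f1, f2        : passband edge frequencies (same units)
    R_load        : load resistance at band center
    X_load_at_f0  : magnitude of the load reactance at the geometric band center
    Returns (decrement D, minimum achievable |S1|, corresponding return loss dB)."""
    f0 = math.sqrt(f1 * f2)          # geometric center frequency
    Q_bw = f0 / (f2 - f1)            # band Q = center frequency / bandwidth
    Q_load = X_load_at_f0 / R_load   # loaded Q of the load at band center
    D = Q_bw / Q_load                # decrement
    s1_min = math.exp(-math.pi * D)  # best possible constant passband reflectance
    rl_db = -20.0 * math.log10(s1_min)
    return D, s1_min, rl_db

# Hypothetical load: 50 ohms of resistance with 100 ohms of reactance at the
# center of an 800-1000 MHz passband.
print(fano_bandpass_bound(800e6, 1000e6, 50.0, 100.0))
```

Any finite-element equalizer can only do worse than the return loss this bound reports, which is why the decrement is such a convenient first screening number.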
The Fano integral constraints were more simply tabulated and sloped passband responses for interstage networks were included to offset high-frequency gain rolloff in amplifiers (Mellor, 1975). Explicit design formulas for simple single-match broadband networks were published (Chen, 1976) Fano’s analytic gain bandwidth theory was extended to include the double match case by adding a second Darlington network to represent a qualified source impedance ZS (Fielder, 1961), and optimal matching limits for that case were calculated by a simple iterative algorithm (Levy, 1964). Chen assembled design formulas for double-match broadband networks (Chen, 1988); for a list of articles in which the formulas first appeared, see (Gudipati and Chen, 1995:1647). A new theory employed scattering parameters with complex normalization, making Fano’s representation of the load impedance (by a Darlington equivalent LC network terminated in a one-ohm resistor) unnecessary. Equations (2) and (3) in Figure 1 were defined in that different approach, which formulated the matching constraints with Laurent series expansions (Youla, 1964:32). That complex scattering approach was applied to both single- and double-match equalizer design, and the concept of compatible impedances further extended that technique (Wohlers, 1965). Applying gain bandwidth limits to simple lumped terminations motivated load and source modeling, which often required de-embedding diode or FET device circuit models from sampled measured data (Bauer and Penfield, 1974), (Medley and Allen, 1979). Late in the history of analytic gain bandwidth theory, it was shown that designing a Chebyshev equal-ripple passband for single matching is achievable but not optimal (Carlin and Amstutz, 1981). Furthermore, for the double-match case, selective flat gain to an arbitrary tolerance is never physically realizable (Carlin and Civalleri, 1985); the Real Frequency Techniques in Section 8 overcome that limitation. Dissipative Equalizers 1953 – Although lossy matching is not a popular technique, it is informative to know that a resistive, matched x-dB attenuator (pad) placed between a source and load reduces maximum available power by x dB, of course, but also reduces reflectance return loss by 2x dB (Westman, 1956:570). Darlington briefly considered semi-uniform dissipation (all Ls have one unloaded Q value, Cs another) to synthesize lossy filters (Darlington, 1939), and subsequent consideration of broadband matching using those lossy equalizer elements extended Fano gain-bandwidth and Youla scattering parameter theories (LaRosa, 1953). LaRosa gave three reasons why lossy matching networks might be better than lossless ones: First, a lossless network might not be able to provide the desired low return loss over the passband: second, input return loss and power delivered to the load impedance are not independently controllable with a lossless matching network; and third, a dissipative network might have a simpler form than a lossless one. Lossy equalizers without transformers were later considered (Gilbert, 1975), and selected lossy lumped networks were optimized to include sloped-gain passbands (Liu and Ku, 1984). Bode later elaborated on Darlington’s general theory, stating that any lossy or lossless network could be transformed to a lossless network if two element impedances are proportional, so that a rational impedance function exists (Zhu et. al., 1988). 
However undesirable deliberate power loss may be, otherwise unmatchable narrowband antennas may force lossy matching to obtain a tradeoff between bandwidth and power efficiency (Allen and Arceo, 2006). A Pareto front is generated by an adaptive weighted-sum approach to multi-objective optimization problems with bounded random variable sets (Chu and Allstot, 2005). Using sets of random element values in a lossy matching network, one may plot a pattern of dots representing optimal results on a graph of input reflection coefficient magnitude (reflectance) versus system power loss. The boundary of that cluster nearest the origin is the two-dimensional Pareto front; it shows the best tradeoff between equalizer power reflected and power dissipated. Developers of the H-infinity global optimal matching theory (Section 9) have also shown that a globally-optimal lossless matching network preceded by a resistive pad produces a Pareto front that is simply a negatively-sloped straight line segment in the linear graph of reflection coefficient versus equalizer (reflection plus dissipation) power loss. Therefore, such a resistive pad sweeps out the best gain-reflectance tradeoff (Allen et. al., 2008b). Engineers may prefer an equivalent nonlinearly-scaled plot of VSWR versus insertion loss in dB for various optimal LC network degrees and topologies. Also, the resistive pad may be replaced by dissipative network elements (Gilbert, 1975), and their sensitivities could be included to create a 3-D gain/reflectance/sensitivity Pareto front. Numerical Optimization 1956 - Optimization (nonlinear programming) is the technique for minimizing a nonlinear scalar function of many variables that may be constrained in various ways. In simple notation, the objective to be minimized is some scalar function f(x) subject to constraints c(x)≤0, where x is the vector of scalar variables and c(x) is a vector of scalar constraint functions. Optimization by varying parameters of candidate broadband matching networks has been a step in many major design techniques. Unfortunately, determining starting variable values and uncertainty of the search finding a global (deepest) minimum are two major weaknesses inherent in numerical optimization. Joseph Louis Lagrange defined the calculus of mathematical optimization involving only equality constraints in 1804, but numerical applications depend on digital computers and a scientific programming language. That started with the IBM Model 650 computer in 1954 and the programming FORTRAN II formula translation language accommodating complex data in 1958. The easily treated least squares objective function was an early choice for circuit design (Aaron, 1956). The first truly effective unconstrained numerical optimization algorithm (Fletcher and Powell, 1963) was also applied sequentially in a straightforward algorithm for enforcing nonlinear constraints by the Lagrange multiplier method (Powell, 1969). The PET and Apple personal computers with the BASICA language made numerical optimization available to every engineer in 1977. First partial derivatives of the optimization objective function with respect to each of the NV variables are required for rapidly convergent descent algorithms, e.g., Fletcher-Powell. Finite-difference perturbation of each variable for approximating first partial derivatives lacks accuracy and requires NV+1 simulations of network response at each of NS frequency samples, usually NS>2×NV, so finite-differencing increases computing time on the order of NV². 
An amazing result in 1969 based on adjoint networks and Tellegen’s Theorem showed that all NV exact first partial derivatives could be obtained with only two network simulations per frequency (Director and Rohrer, 1969). This was a crucial development for optimization search schemes, which are highly iterative. Even so, exact first-partial derivatives for ladder network topologies can be obtained with even less computation (Orchard, 1985:1092). Also, the matrix of second partial derivatives can be estimated from the vector of first partial derivatives by using Gauss-Newton searches that save significant computing time (Nocedal and Wright, 1999:259). Network optimization soon followed availability of digital computing and Fortran programming (Calahan, 1968:181-244). Choice of optimization variables distinguished the several approaches, including matching desired coefficients in rational polynomials in complex frequency p = σ+jω, varying polynomial pole and zero locations in the p plane, and varying the L and C values in candidate matching networks. The last technique was later found to be less illconditioned by many orders of magnitude and thus more accurate (Orchard, 1985:1089). It is advantageous to transform the element values to logarithmic space to concentrate variables about unity (Iobst and Zaki, 1982:2168), and also derivatives of the objective function are thus normalized by their respective variable values, i.e. Bode sensitivities (Cuthbert, 1987:381). For good conditioning of the objective function, it has long been known that the input reflection coefficient in equation (2) is a bilinear function of each L and C in a network (Penfield et. al., 1970:99), so varying any single network element traces an image circle tangent to the interior of the input unit reflection Smith chart. The corresponding mismatch over the passband, which is the distance from chart center to points on the input image circle, is a well behaved, unimodal curve between zero and unity and is ideal for numerical optimization for both single- and double-match problems (Cuthbert, 1999:137). Statistical optimization methods increased in popularity with computer speed and are most relevant for multi-objective optimization problems. Genetic search and simulated annealing search methods are computationally expensive and not very effective for weighted-sum cost functions. However, an effective comprehensive statistical optimizer has been implemented to display tradeoff and design alternatives along a Pareto front followed by a Monte Carlo sensitivity analysis (Chu and Allstot, 2005). Graphical Methods 1961 - The bilinear function in equation (5) maps the impedance RHP into a unit circle, the Smith chart, which originally displayed lines of constant R and X (Smith, 1939). Besides widespread applications for designing impedance matching at a single frequency, many engineers became adept at recognizing chart impedance loci over frequency bands to design elementary broadband matching networks (Jasik, 1961). The Carter chart is the same bilinear mapping but show lines of constant impedance magnitude and Q (Q=X/R). The Q parameter indicates the relative energy stored in the tuning-element impedance; thus, greater Q implies less passband width. An unsophisticated broadband impedance matching design is based on Carter charts with impedance element transitions selected to keep each element loaded Q minimized (Glover, 2005). 
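Returning to the Numerical Optimization theme above, the following sketch shows the element-value approach in its simplest form: a fixed two-element lowpass ladder whose L and C are varied (in logarithmic space, as recommended above) to minimize the summed squared mismatch against a resistive source over a few tabulated load samples. The topology, data, and use of SciPy's general-purpose BFGS minimizer are illustrative assumptions, not a recommendation from the cited literature.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical single-match problem: resistive 1-ohm source, a tabulated
# complex load, and a lowpass L-section (series L toward the source, shunt C
# across the load).  Frequencies are normalized to rad/s, impedances to ohms.
w  = np.array([0.6, 0.8, 1.0, 1.2])
ZL = np.array([0.5 - 0.3j, 0.5 - 0.5j, 0.5 - 0.7j, 0.5 - 0.9j])
RS = 1.0

def input_impedance(x_log, w, ZL):
    L, C = np.exp(x_log)                      # log-space variables keep L, C > 0
    Zshunt = 1.0 / (1.0 / ZL + 1j * w * C)    # shunt C in parallel with the load
    return 1j * w * L + Zshunt                # series L completes the ladder

def objective(x_log):
    Z1 = input_impedance(x_log, w, ZL)
    mm = np.abs((Z1 - RS) / (Z1 + RS))        # mismatch at each sample, eq. (5)
    return np.sum(mm ** 2)                    # least-squares objective

x0 = np.log([1.0, 1.0])                       # neutral starting values
result = minimize(objective, x0, method="BFGS")
L_opt, C_opt = np.exp(result.x)
print(L_opt, C_opt, np.sqrt(result.fun / len(w)))
```

For larger networks one would supply exact first derivatives, for example by the adjoint-network method mentioned above, rather than rely on the minimizer's internal finite differences.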
Real Frequency Techniques 1977 - The analytic gain bandwidth technique in Section 4 is based on a load model that characterizes the equalizer termination(s) by a prescribed rational transfer function with pole and zero singularities in the complex frequency p = σ+jω plane. The real frequency technique (RFT) was a new and different approach based on load characterization by samples in real frequency only on the p = jω axis so that no load model was required (Carlin, 1977). Load impedance ZL (Figure 1) is sampled at no less than 2N frequencies, where N is the assumed degree of the equalizer. Back impedance Z2= R2+jX2 is determined by employing an approximately optimal R2(ω) function over all real frequencies in the Hilbert transform integral to determine X2 and thus Z2. Darlington’s theorem then assures a single-match equalizer. The first step in Carlin’s RFT is a piecewise linear functional guess of R2 over the entire ω axis with the variables being the increase in R2 over each linear segment, i.e. excursions (Cuthbert, 1983:219). Then the least-squared mismatch MM in Figure 1 equation (3) is minimized over those variable excursions using sampled ZL impedances, piecewise R2 estimates, and X2 related by the Hilbert integral. With that optimal sampled R2, a nonnegative, even rational function of R2(ω) is obtained by a second optimization that varies both numerator and denominator coefficients. Then a standard Gewertz procedure (Cuthbert, 1983:58) converts the rational R2(ω) resistance function to a rational Z2(ω) LC impedance function, from which a Darlington synthesis final step realizes equalizer element values. A concise algorithm for both the single- and double-match cases was published later (Carlin and Yarman, 1983:20-23). The MATCHNET PC DOS program for single- and double-match RFT synthesis of both lumped and commensurate transmission-line equalizers was published still later (Sussman-Fort, 1994). A different RFT approach obtains the Z2(ω) function in Figure 1 directly in a form guaranteed to represent a physical lowpass or bandpass double-match equalizer (Yarman and Fettweis, 1990). The parametric representation of Z2(ω) by Brune functions is a form of partial fraction expansion with numerator residues that are functions of the (LHP) pole frequencies in their respective denominators. The transducer power gain (1) is maximized by varying the positive-real and imaginary parts of the N complex pole frequencies to calculate Z2 for use in (3). Thus, the laborious Gewertz procedure is not required and numerical stability is improved. Synthesis of the equalizer element values is still required, but the Brune functional form allows an efficient zero-shifting long-division algorithm. An intelligent guess of the initial variables is required, and the Brune parametric method does not provide partial derivatives for the optimization algorithm. A third RFT approach for obtaining a lowpass Z2(ω) function in Figure 1 in the single-match case (XS=0) uses a bilinear transformation to map the entire real frequency axis onto a unit circle (a Wiener-Lee transform). Expanding Z2(ω) in a Laurent series then models R2(ω) and X2(ω) as Fourier series with cosine and sine basis functions, respectively. Starting with an initial guess of R2(ω) values, transducer power gain (1) is maximized over a passband using cosine coefficients as variables constrained to keep R2(ω)>0 (Carlin and Civalleri, 1992). First and second partial derivatives are available for the optimization. 
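Before moving on, here is a small numerical illustration of the resistance-to-reactance (Hilbert transform) relation on which the first RFT step leans. It is not Carlin's closed-form piecewise-linear excursion formulation, just a brute-force quadrature of the minimum-reactance relation that is handy for checking an implementation against a load whose answer is known; the grid limits and the parallel-RC test load are assumptions.

```python
import numpy as np

def reactance_from_resistance(w_grid, R_grid, wc):
    """Numerically approximate the resistance-to-reactance relation for a
    minimum-reactance impedance,
        X(wc) = (2*wc/pi) * integral_0^inf [R(w) - R(wc)] / (w^2 - wc^2) dw,
    using trapezoidal integration on a finite grid.  R_grid should decay
    toward zero well before the end of w_grid."""
    Rc = np.interp(wc, w_grid, R_grid)
    den = w_grid**2 - wc**2
    integrand = np.zeros_like(w_grid)
    ok = np.abs(den) > 1e-9                   # skip the (removable) singular point
    integrand[ok] = (R_grid[ok] - Rc) / den[ok]
    dx = np.diff(w_grid)
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * dx)
    return (2.0 * wc / np.pi) * integral

# Sanity check against a 1-ohm, 1-farad parallel RC load, whose exact
# reactance is X(w) = -w / (1 + w^2).
w = np.linspace(1e-4, 200.0, 400001)
R = 1.0 / (1.0 + w**2)
for wc in (0.5, 1.0, 2.0):
    print(wc, reactance_from_resistance(w, R, wc), -wc / (1.0 + wc**2))
```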
H-Infinity and Hyperbolic Geometry 1981 -
An entirely different approach to both theoretical and numerical techniques of broadband single-matching was defined as a minimum-distance problem in the space of bounded, analytic functions, particularly passive networks characterized by scattering parameters (Helton, 1981). A set of load reflectance values measured at discrete real frequencies is converted to respective center and radius values of eccentric constant-gain circles, as commonly plotted on a Smith chart in the design of amplifiers. A spline extends these data to functions over the entire unit frequency circle, and approximate Fourier coefficients are obtained by the Fast Fourier Transform (FFT) to create a trigonometric series on the unit circle. Truncated Toeplitz and Hankel infinite matrices are constructed from those Fourier coefficients to form a simple matrix equation. Its smallest eigenvalue is calculated as the trial mismatch parameter MM (reflectance) is varied to find the transition from positive to nonpositive definite, the result being the minimum possible mismatch for a physically realizable equalizer (Allen and Healy, 2003; Schwartz et al., 2003). An algorithm for determining the optimal equalizer back reflectance (scattering parameter s22) for use in Darlington synthesis was also described; however, its validity has been questioned (Carlin and Civalleri, 1992:497). The first implementation of Helton's H-infinity approach came many years later (Schwartz and Allen, 2004); it provided a gain-bandwidth bound and the Darlington-equivalent s22 scattering parameter, but not the matching equalizer.

H-infinity is the Hardy space of matrix-valued functions that are bounded in the RHP; passive network scattering parameters are contained in its unit ball. Nehari's Theorem is the computational workhorse of H-infinity theory; it explains how to find an analytic function in the Hardy space that is the best approximation of a given complex function defined on the unit circle (Allen and Healy, 2003:28). The Helton method defined the given function of frequency by constant-gain circles on a Smith chart, and the metric for proximity to a physical network's scattering parameters is the hyperbolic distance, or mismatch, equation (4) in Figure 1. The result is the best possible performance bound, obtained by optimizing over all physical broadband-matching equalizers. Mathematician and Helton advocate Allen published a comprehensive book that applied H-infinity theory to optimizing broadband matching, amplifier gain, noise figure, and stability, all of which are circle functions on a Smith chart (Allen, 2004).
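A small, self-contained sketch of the Nehari computation at the heart of this approach appears below. The sampled symbol on the unit circle is a made-up stand-in for the spline-extended, reflectance-derived data described above, so the numbers mean nothing physically; the sketch only shows the mechanics: FFT the samples into Fourier coefficients, build a Hankel matrix from the negative-index coefficients, and read the best possible sup-norm error over all bounded analytic (H-infinity) approximants from its largest singular value.

```python
import numpy as np

# Made-up bounded "symbol" sampled on the unit circle; in Helton's method this
# would come from spline-extended, reflectance-derived circle data.
N = 1024
theta = 2 * np.pi * np.arange(N) / N
z = np.exp(1j * theta)
phi = 0.6 / z + 0.3 / z**2          # only negative-index Fourier content (assumed)

# Fourier coefficients via the FFT; index -k wraps to N - k.
c = np.fft.fft(phi) / N

# Hankel matrix built from the negative-index coefficients c_{-1}, c_{-2}, ...
M = 64
neg = np.array([c[(-k) % N] for k in range(1, 2 * M)])
H = np.array([[neg[i + j] for j in range(M)] for i in range(M)])

# Nehari's theorem: the distance from phi to H-infinity equals the largest
# singular value of the Hankel operator, i.e. the best achievable sup-norm error.
print("best possible sup-norm error:", round(float(np.linalg.svd(H, compute_uv=False)[0]), 4))
```

In Helton's formulation the analogous quantity bounds the achievable broadband mismatch; this sketch demonstrates only the Nehari computation itself, not the conversion of reflectance data into a suitable symbol.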
He states on page 201: “Anytime we have presented the H-infinity results to any electrical engineers, their immediate question is always – Where is the matching circuit? The most conservative answer is that H-infinity theory does not supply a matching circuit – the H-infinity theory computes the best possible performance over all the lossless matching circuits. It is very rare in numerical optimization to know the global minimum. The H-infinity theory equips the amplifier designer with the best possible performance to benchmark to assess candidate matching circuits.”

“This leads to the solid engineering question: How complex should the 2-port be to get an acceptable match? One approach plots the mismatch as a function of degree d. As the degree d increases the mismatch approaches the upper bound computed by Nehari’s Theorem. ... Thus Nehari’s bound provides one benchmark for the matching 2-ports.” (Allen and Schwartz, 2001:31).

“In practice, often the circuit designer throws circuit after circuit at the problem and hopes for a lucky hit.” (Allen and Healy, 2003:4).

“The H-infinity solutions compute the best possible performance bounds by optimizing over all possible matching circuits. The engineering approaches typically specify a matching circuit topology and optimize over the component values. The State-Space (SSIM) Method stands between these two extremes by optimizing over all possible matching circuits of a specified degree.” (Allen, 2004:xii).

“The main (SSIM) program sets up the parameters for MATLAB’s minimizer ‘fmincon.’ Each search starts by initializing the minimizer ... by uniformly randomly selecting Nrep elements ... and starting the search at that random point.” (Allen and Schwartz, 2008a).

Systematic Search 1985 –
A crude systematic grid search originated from sampling one variable along a line interval at equal subintervals, or two variables on the sides of squares, or three variables on the sides of cubes, etc., with the “curse of dimensionality” exponentially increasing computing time. One approach was based on the design of ladder impedance-matching networks by a grid search in the bounded space of loaded-Q parameters involved in transforming impedance to admittance at a single frequency (Abrie, 1985:215-231). A frequency in the upper passband was selected for this “1 plus Q squared” algorithm to impedance match with less than some specified maximum mismatch (2). The set of Q values for each series or parallel ladder-network element ranged from about –4.6 to +4.6 with subintervals of 0.5 to 0.8. A least-squares optimizer then improved one or more equalizers from a small set of acceptable network topologies.

Another systematic approach was based on recursive least-squares (regression) identification from control system technology (Dedieu et al., 1994). Ladder network equalizers found by a recursive stochastic identification equalization (RSE) algorithm were refined by a random search in that region. The topology of candidate ladder networks was assumed, and sensitivities (normalized first partial derivatives) of power gain with respect to each network element were computed by the adjoint network (Tellegen) method for use in a Gauss-Newton minimization algorithm. The flat loss in the power gain was adjusted manually. Many equalizers designed by this method resulted in a few elements converging to extreme values, so that their removal did not affect power gain performance.
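A toy version of the grid-plus-refinement idea is sketched next, ahead of the grid-based method described in the following paragraph. The load samples, frequencies, nominal element values, and the fixed two-element ell topology are all hypothetical; a practical tool evaluates many candidate topologies and refines the coarse-grid winner with a constrained minimax step instead of stopping at the grid.

```python
import numpy as np
from itertools import product

Z0 = 50.0
# Hypothetical tabulated load data at three passband frequencies
freqs = 2 * np.pi * np.array([80e6, 100e6, 120e6])     # rad/s
loads = np.array([12 - 40j, 10 - 30j, 9 - 22j])        # ohms

def worst_mismatch(L, C):
    """Maximum |Gamma| over the band for a series-L, shunt-C ell network."""
    worst = 0.0
    for w, ZL in zip(freqs, loads):
        Z = ZL + 1j * w * L                  # series inductor next to the load
        Zin = 1.0 / (1.0 / Z + 1j * w * C)   # shunt capacitor at the input
        worst = max(worst, abs((Zin - Z0) / (Zin + Z0)))
    return worst

# Coarse grid in logarithmic element space, as in the grid-search approaches above
L_vals = 50e-9 * np.logspace(-1, 1, 21)
C_vals = 30e-12 * np.logspace(-1, 1, 21)
best = min((worst_mismatch(L, C), L, C) for L, C in product(L_vals, C_vals))
print(f"min-max |Gamma| = {best[0]:.3f} at L = {best[1]*1e9:.1f} nH, C = {best[2]*1e12:.1f} pF")
```

Keeping the worst-case (minimax) mismatch as the objective, rather than a weighted sum, tends toward the equal-ripple behavior expected of good broadband matches.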
The GRid Approach to Broadband Impedance Matching (GRABIM) can accommodate a mix of lumped and distributed network elements and includes features of the preceding algorithms, with added advantage taken of bilinear element transformations and minimax optimization (Cuthbert, 2000). Twelve all-pole ladder network topologies composed of 2-10 Ls and Cs are candidates to single- or double-match tabulated impedance data at discrete passband frequencies. Because the reflection coefficient in (2) is a unique bilinear function of each network element (Penfield et al., 1970:99), the mismatch loss at each passband frequency sample is a smooth unimodal curve versus the element value, localized in element logarithmic space (0.1 to 10.0). This set of superposed curves presents a nonsmooth unimodal optimization objective. A grid search in element variable space approximately locates the potentially global minimum of the maximum mismatch at any passband frequency while avoiding transmission-line element periodicity anomalies. Then Powell's Lagrange multiplier algorithm, in conjunction with a Gauss-Newton minimizer, precisely locates a minimax solution in this neighborhood, eliminating any superfluous candidate-network elements. No initial element values are required.

References
Aaron, M. R. (1956). The use of least squares in system design. IRE Trans. Circuit Theory, CT-3, N4. Dec.: 224-231.
Abrie, P. L. D. (1985). The Design of Impedance-Matching Networks for Radio-Frequency and Microwave Amplifiers. Boston, MA: Artech House.
Allen, J. C. (2004). H-Infinity Engineering and Amplifier Optimization. Boston: Birkhauser.
Allen, J. C. and D. Arceo (2006). A Pareto approach to lossy matching. SPAWAR Systems Center San Diego TR-1942.
Allen, J. C. and D. F. Schwartz (2001). Best Wideband Impedance Matching Bounds for Lossless 2-ports. SSC San Diego TR-1859 (65 pages).
Allen, J. C. and D. F. Schwartz (2008a). Wideband matching circuit and method of effectuating same. US Patent 7376535.
Allen, J. C. and D. Healy (2003). Hyperbolic geometry, Nehari's theorem, electric circuits, and analog signal processing. Modern Signal Processing, V46: 1-62.
Allen, J. C., D. Arceo and P. Hansen (2008b). Optimal lossy matching by Pareto fronts. IEEE Trans. Circuits Sys. II, Express Briefs, V55, N6.
Bauer, R. F. and P. Penfield (1974). De-embedding and unterminating. IEEE Trans. Microwave Theory Tech. March: 282-288.
Bode, H. W. (1945). Network Analysis and Feedback Amplifier Design. New York: Van Nostrand; Ch. 16.
Calahan, D. A. (1968). Computer-Aided Network Design, Preliminary Edition. NY: McGraw-Hill.
Carlin, H. J. (1977). A new approach to gain-bandwidth problems. IEEE Trans. Circuits Sys. April: 170-175.
Carlin, H. J. and B. S. Yarman (1983). The double matching problem: analytic and real frequency solutions. IEEE Trans. Circuits Sys., V30, N1.
Carlin, H. J. and P. Amstutz (1981). On optimum broadband matching. IEEE Trans. Circuits Sys., V28. May: 401-405.
Carlin, H. J. and P. P. Civalleri (1985). On flat gain with frequency-dependent terminations. IEEE Trans. Circuits Sys., V32, N8. Aug.: 827-839.
Carlin, H. J. and P. P. Civalleri (1992). An algorithm for wideband matching using Wiener-Lee transforms. IEEE Trans. Circuits Sys., CAS-39, N7. July: 497-505.
Carlin, H. J. and P. P. Civalleri (1998). Wideband Circuit Design. NY: CRC Press.
Chen, W. K. (1976). Theory and Design of Broadband Matching Networks. NY: Pergamon Press.
Chen, W. K. (1988). Broadband Matching: Theory and Implementations, 2nd Ed. NJ: World Scientific Publishing Co.
Chu, M. and D. J. Allstot (2005). Elitist nondominated sorting genetic algorithm based RF IC optimizer. IEEE Trans. Circuits Sys. I, Reg. Papers, V52, N3. Mar.: 535-545.
Cuthbert, T. R. (1983). Circuit Design Using Personal Computers. NY: John Wiley. Also, Melbourne, FL: Krieger Publishing Co. (1994).
Cuthbert, T. R. (1987). Optimization Using Personal Computers with Applications to Electrical Networks. NY: John Wiley.
Cuthbert, T. R. (1999). Broadband Direct-Coupled and Matching RF Networks. Greenwood, AR: TRCPEP Publications.
Cuthbert, T. R. (2000). A real frequency technique optimizing broadband equalizer elements. ISCAS 2000 - IEEE Intnl. Symp. Circuits Sys., May 28-31, 2000, Geneva, Switzerland: V-401 to V-404.
Darlington, S. (1939). Synthesis of reactance 4-poles. Jour. Math. Phys., V18. Sept.: 275-353.
Dedieu, H., et al. (1994). A new method for solving broadband matching problems. IEEE Trans. Circuits Sys., V41, N9. Sept.: 561-571.
Director, S. W. and R. A. Rohrer (1969). The generalized adjoint network and network sensitivities. IEEE Trans. Circuit Theory, CT-16: 318-323.
Fano, R. M. (1948). Theoretical limitations on the broadband matching of arbitrary impedances. M.I.T. Res. Lab. Electron., Tech. Report 41. January.
Fano, R. M. (1950). Theoretical limitations on the broadband matching of arbitrary impedances. J. Franklin Inst. Feb.: 139-154.
Fielder, D. C. (1961). Broadband matching between load and source systems. IEEE Trans. Circuit Theory. June: 148-131.
Fletcher, R. and M. J. D. Powell (1963). A rapidly convergent descent method for minimization. Computer Journal, V6: 163-168.
Gilbert, E. N. (1975). Impedance matching with lossy components. IEEE Trans. Circuits Sys., CAS-22, N2. Feb.: 96-100.
Glover, I., et al. (2005). Microwave Devices, Circuits, and Subsystems for Communications Engineering. NY: Wiley, Section 3.9.3.
Green, E. (1954). Amplitude-Frequency Characteristics of Ladder Networks. Chelmsford, Essex, England: Marconi House.
Gudipati, R. and W. K. Chen (1995). Explicit formulas for the design of broadband matching bandpass equalizers with Chebyshev response. IEEE Intnl. Symp. Circuits Sys. 1995, V3. May: 1644-1647.
Helton, J. W. (1981). Broadbanding: gain equalization directly from data. IEEE Trans. Circuits Sys., CAS-28, N12: 1125-1137.
Iobst, K. W. and K. A. Zaki (1982). An optimization technique for lumped-distributed two ports. IEEE Trans. Microwave Theory Tech., V30, N12.
Jasik, H. (1961). Antenna Engineering Handbook, 1st Ed. NY: McGraw-Hill, Section 31.7.
LaRosa, R. and H. J. Carlin (1953). A general theory of wideband matching with dissipative 4-poles. Polytechnic Inst. Brooklyn. NY: Defense Tech. Info. Cntr., AD0002980.
Levy, R. (1964). Explicit formulas for Chebyshev impedance-matching networks. Proceedings IEE. June: 1099-1106.
Liu, L. C. T. and W. H. Ku (1984). Computer-aided synthesis of lumped lossy matching networks for monolithic microwave integrated circuits (MMIC's). IEEE Trans. Microwave Theory Tech., MTT-32, N3. March: 282-290.
Matthaei, G. L. (1956). Synthesis of Tchebycheff impedance-matching networks, filters, and interstages. IRE Trans. Circuit Theory, CT-3.
Matthaei, G. L., L. Young, and E. M. T. Jones (1964). Microwave Filters, Impedance Matching Networks, and Coupling Structures. NY: McGraw-Hill. Also, Boston: Artech House (1980).
Medley, M. W. and J. L. Allen (1979). Broad-band GaAs FET amplifier design using negative-image device models. IEEE Trans. Microwave Theory Tech., V27, N9. Sept.: 784-788.
Mellor, D. J. (1975). Computer-Aided Synthesis of Matching Networks for Microwave Amplifiers. Stanford, CA: Stanford University.
Montgomery, C. G., R. H. Dicke, and E. M. Purcell (1947). Principles of Microwave Circuits, M.I.T. Rad. Lab., Vol. 9. Republished in 1964 by Boston Technical Publishers, Lexington, MA.
Nocedal, J. and S. J. Wright (1999). Numerical Optimization. NY: Springer-Verlag.
Orchard, H. J. (1985). Filter design by iterated analysis. IEEE Trans. Circuits Sys., V32, N11. Nov.: 1089-1096.
Penfield, P., R. Spence, and S. Duinker (1970). Tellegen's Theorem and Electrical Networks. Cambridge, MA: M.I.T. Press, Research Monograph No. 58.
Plotkin, S. and N. E. Nahi (1962). On limitations of broadband impedance matching without transformers. IEEE Trans. Circuit Theory.
Powell, M. J. D. (1969). A method for nonlinear constraints in optimization problems. Optimization (R. Fletcher, ed.). London: 283-297.
Richards, P. I. (1948). Resistor transmission-line circuits. Proc. IRE, V36: 217-220.
Schwartz, D. F. and J. C. Allen (2004). Wideband impedance matching: H∞ performance bounds. IEEE Trans. Circuits Sys. II: Express Briefs, V51, N7: 364-368.
Schwartz, D. F., J. W. Helton, and J. C. Allen (2003). Predictor for optimal broadband impedance matching. U. S. Patent 6622092.
Smith, P. H. (1939). Transmission line calculator. Electronics, V12, N1. Jan.: 29-31.
Sussman-Fort, S. E. (1994). Matchnet: Microwave Matching Network Synthesis Software and User's Manual; Automated Synthesis of Low-Pass, High-Pass, and Bandpass Lumped and Distributed Matching Networks. Boston, MA: Artech House.
Szentirmai, G. (1997). Computer-aided design methods in filter design: S/FILSYN and other packages. Chap. 3 in CRC Handbook of Electrical Filters, J. T. Taylor and Q. Huang, Eds. NY: CRC Press.
Westman, H. P., Ed. (1956). Reference Data for Radio Engineers, Fourth Ed. NY: International Telephone and Telegraph Corp.
Wohlers, M. R. (1965). Complex normalization of scattering matrices and the problem of compatible impedances. IEEE Trans. Circuit Theory.
Yarman, B. S. and A. Fettweis (1990). Computer-aided double matching via parametric representation of Brune functions. IEEE Trans. Circuits Sys., V37, N2. Feb.: 212-222.
Youla, D. C. (1964). A new theory of broadband matching. IEEE Trans. Circuit Theory, CT-11: 30-50.
Youla, D. C., F. Winter, and S. U. Pallai (1997). A new study of the problem of incompatible impedances. Intnl. Jour. Circuit Thy. & Applis.
Zhu, L., et al. (1988). Real frequency technique applied to synthesis of lumped broad-band matching networks with arbitrary nonuniform losses for MMIC's. IEEE Trans. Microwave Theory Tech., MTT-36, N12. Dec.: 1614-1619.
0.7844
FineWeb
```json [ "The Broadband Matching Problem", "Analytic Gain Bandwidth Theory", "Real Frequency Techniques" ] ```
Spray Dryer Technology at Elan Technology
Spray dryer technology allows various materials to be combined and then processed into a homogeneous, free-flowing powder. Many materials are spray dried simply to produce a dust-free powder. However, in many cases, such as catalysts, close control of the particle size distribution is required in order to ensure proper performance of the final material.

Spray drying is a process used to produce dry granular powders from a slurry, which is a mixture of liquid and solid materials. Production of powders is accomplished by rapidly drying the slurry using heated air. The slurry is introduced into the hot air stream via a rotating atomizer wheel or nozzles. While both methods produce granular product, the atomizer wheel provides a broader particle size distribution. Each method allows for very consistent control of the particle size distribution.

During Elan's spray drying process, operating conditions such as temperature, wheel speed (if applicable) and pump pressure are closely monitored to ensure consistent product quality. In-process quality control can be tailored to the requirements of each specific customer. However, we typically monitor particle size distribution, moisture content and bulk density.

Although a variety of liquids can be used to form the slurry, Elan Technology primarily processes aqueous-based mixtures. We have the capability to prepare slurries onsite or we can accept bulk tankers of pre-mixed material. Specific batching instructions for toll spray drying are developed for each customer's materials based on their individual requirements. Post-spray drying, materials can be screened to further refine the particle size distribution and then packaged in customer-specified containers.
0.9694
FineWeb
["Spray Dryer Technology", "Spray Drying Process", "Post-Spray Drying Operations"]
Influence of Microbial Bodies on Soil Humification Processes
I am interested in the ways in which microbial bodies are broken down and metabolized by other microbes and transformed into soil organic matter. My research uses 13C labeled microbial cells (gram positive, gram negative, fungi & actinomycetes) to follow the uptake and respiration of microbial carbon by living microbial biomass in two soil types. Microbial cell uptake is being monitored in a tropical rainforest (Puerto Rico) and temperate forest (Blodgett, CA).

Carbon Cycling in Temperate and Tropical Forests
My project is a comparison of carbon cycling and sequestration processes occurring in temperate and tropical forests. I am following the uptake of 13C labeled microbial carbon by living biomass and subsequent stabilization or respiration out of the soil system as CO2. I am sampling soils along a time series both in situ and under controlled laboratory conditions.
0.7465
FineWeb
```json [ "Influence of Microbial Bodies on Soil Humification Processes", "Carbon Cycling in Temperate and Tropical Forests", "Soil Organic Matter Formation" ] ```
- Government is an Elephant (Public Strategist) — if Government is to be a platform, it will end up competing with the members of its ecosystems (the same way Apple’s Dashboard competed with Konfabulator, and Google’s MyMaps competed with Platial). If you think people squawk when a company competes, just wait until the competition is taxpayer-funded ….
- Recordings from NoSQL Live Boston — also available as podcasts.
- Modeling Scale Usage Heterogeneity the Bayesian Way — people use 1-5 scales in different ways (some cluster around the middle, some choose extremes, etc.). This shows how to identify the types of users, compensate for their interpretation of the scale, and how it leads to more accurate results.
- Building a Better Teacher — fascinating discussion about classroom management that applies to parenting, training, leading a meeting, and many other activities that take place outside of the school classroom. (via Mind Hacks)

Change tactics or give up: It's a crossroads many teachers face when students don't understand the code. I can never forget an evening late into a semester of my Introduction to Python course, during which I asked my students a question about user-defined classes. Here’s the code I had put on the board:

```python
class MyClass:
    var = 0
    def __init__(self):                  # called implicitly
        MyClass.var = MyClass.var + 1

x = MyClass()   # new instance created
y = MyClass()   # new instance created
```

As new information for this particular lesson, I informed them that every time a new MyClass instance is created, the __init__() method is called implicitly. In other words, the code above calls __init__() twice, and in executing the code in __init__(), the variable MyClass.var is being incremented — so this is also happening twice. So, I asked them: after the above code is executed, what is the value of MyClass.var?

The hand of this class’ most enthusiastic student shot into the air. “One!” he answered proudly. And for a moment my mouth stood open.

Use teaching stacks to drive growth. Elliott Hauser is CEO of Trinket, a startup focused on creating open sourced teaching materials. He is also a Python instructor at UNC Chapel Hill. Well-developed tools for teaching are crucial to the spread of open source software and programming languages. Stacks like those used by the Young Coders Tutorial and Mozilla Software Carpentry are having national and international impact by enabling more people to teach more often.

The spread of tech depends on teaching
Software won’t replace teachers. But teachers need great software for teaching. The success and growth of technical communities are largely dependent on the availability of teaching stacks appropriate to teaching their technologies. Resources like try git or interactivepython.org not only help students on their own but also equip instructors to teach these topics without also having to discover the best tools for doing so. In that way, they play the same function as open source Web stacks: getting us up and running quickly with time-tested and community-backed tools. Thank goodness I don’t need to write a database just to write a website; I can use open source software instead. As an instructor teaching others to code websites, what’s the equivalent tool set? That’s what I mean by Teaching Stack: a collection of open tools that help individual instructors teach technology at scale.

Elements of a great teaching stack
Here are some of the major components of a teaching stack for a hands-on technology course:

The challenge of translating the educational benefits of making.
Making and education clearly go hand in hand, but how do we quantify and share the results of authentic learning without losing its essence? That's the issue educators are currently facing.

Teaching to the txt (Green Onion News Network)
The Harper Valley School Board recently adopted a policy that allows students to use their cell phones to search for answers on state-mandated standardized tests.

Tablets can help students and track teachers, but not everyone is on board.
Tablet computing can help reverse the decline of U.S. education, but there's a side effect. Because tablets are digital, we can analyze how students learn and examine teachers' competence. It opens the question: What happens when the digital classroom challenges powerful teachers' unions?
0.866
FineWeb
["Education", "Technology", "Teaching Methods"]
It consists of four large motif boards; each board has four wooden tiles with motifs to be matched to its theme. A playful way to develop concentration and language skills. The included game instructions ensure smooth play.
- Motif boards made of cardboard
- 4 large printed cardboard sheets (L x W x H): approx. 18 x 18 x 0.2 cm
- 16 small wooden tiles (L x W x H): approx. 5 x 5 x 0.5 cm
- Weight: approx. 123 g
0.6751
FineWeb
["Game Components", "Product Specifications", "Educational Benefits"]