Information technology has traditionally been a male-dominated field with an insular culture, but it is also a global business in which skilled workers are at a premium. One of the most influential companies in IT history, IBM, has a long record of leadership on diversity issues--not just in IT, but in the corporate world. Its legacy goes back to the punch-card days, when the young enterprise included women and blacks in its workforce at the start of the 20th century, and continued through the decades with ahead-of-the-curve policies toward disabled and gay workers. In time, diversity and opportunity "became a recruitment tool" for IBM, says Ted Childs, the company's former head of global workforce diversity.

But that type of legacy is not widespread in business. "IBM is the exception rather than the rule," says Karen Sumberg, assistant vice president at the Center for Work-Life Policy, a not-for-profit organization that studies women and work. Researchers at the University of North Carolina's Institute on Aging gave a presentation on perceptions of older workers in the tech workplace in which they noted, "IT has an image of being youthful, male and white." Older workers, one of the groups recognized under the diversity umbrella, are under-represented in IT and are more likely to lose their jobs than their younger colleagues.

Still, the nature of technology work seems to attract people who make decisions based on rational inputs rather than emotion, says Samir Luther, workplace project manager for the Human Rights Campaign Foundation, a lesbian, gay, bisexual and transgender rights group. Of course, IT people are not immune from prejudices, but they may be open to the logical case made for diversity. Whatever the reasons, Luther says anecdotal evidence suggests that as gender identity becomes a hot topic in diversity discussions, IT seems to attract a relatively large number of transgender workers.
While most industries have changed significantly over the years, higher education has remained relatively the same. Students listen to professors lecture in century-old universities and tackle tough philosophical questions the way their ancestors did. But higher education is at a breaking point. Tuition is skyrocketing. State funding is dropping. And online course providers are on the rise. Cost is a major barrier for accessing higher education. A 2011 Pew Research Center survey on the cost and value of higher education found that 75 percent of respondents said college is too expensive for most Americans to afford. And 57 percent said the U.S. higher education system does not provide students a good return on their investment. “Technology has to be a big part of the solution to access and affordability,” said Ben Wildavsky, senior scholar at the Kauffman Foundation, guest scholar at the Brookings Institution and co-editor of Reinventing Higher Education: The Promise of Innovation. “The key is to do it in a smart way.” Futurists surveyed for The Future of Higher Education report by the Pew Internet and American Life Project pontificated on what higher education would look like in 2020. Thirty-nine percent said higher education wouldn’t look much different than it does today. But 60 percent said higher education would be different, complete with mass adoption of teleconferencing and distance learning. In their written responses, however, many of them painted scenarios that incorporated elements of both. The stage is set for a shift in how higher education operates — the question is, how exactly will it evolve? Futurists view the coming decades as an opportunity for teacher/student relationships to occur almost purely through technology — an approach known as technology-mediated education. But faculty members look to maintain the university model that’s been in place for centuries, with a sprinkle of technology integration. These mindsets offer somewhat competing visions for what higher education could look like in the coming years, with each claiming to make college education better, more accessible and more affordable for students. Lillian Taiz — a history professor at California State University, Los Angeles, and president of the California Faculty Association, which launched the Campaign for the Future of Higher Education — said eliminating the traditional university experience would be a mistake. To Taiz, technology-mediated education means no student engagement, no physical campus and no credibility. Universities will be on par with 19th-century correspondence schools, which had little standing because they accepted student work by mail. Integrating technology into the existing higher education model is a better option, she said. Technology will become a tool in professors’ toolboxes. Universities will still exist and do much of the same things they do today. “I love technology, but it isn’t a replacement for the kind of learning that goes on where you’re interacting,” Taiz said. “It’s an enhancement.” Most of the disruptive ideas that could reshape college education over the next 25 years are in the early research stage now or only being used in a few segments of the population, said Cameron Evans, CTO of U.S. education at Microsoft. But over the next five to 25 years, machine learning will have to increase to keep up with the large amounts of data that people produce, Evans said. Machines will learn about students’ behavior, actions, preferences and associations. 
Then they will figure out how to use this knowledge to create a richer and more dynamic learning context. Learning also will have to adapt more to students’ needs and preferences, he added. While growth in personalized learning is a given, it needs to step up to the next level so that data is fashioned for individual students and the faculty members who prepare courses for those individuals.

One danger of the pure technology model, Taiz said, is that students who don’t have much money will attend technology-mediated schools, while students with more resources will go to prestigious university campuses such as Harvard, Yale and Stanford. But others argue that the divide has little to do with technology. “We have big socioeconomic gaps in who goes to what kind of college,” said Kauffman’s Wildavsky. “So it’s not that this advent of technology is going to create something that didn’t exist already.” Nor are all technology-mediated models necessarily bad. Older working students especially benefit from the opportunities of online classes. And some students may choose a technology-mediated education because the experience is good enough, Wildavsky said. For example, former Stanford professor Sebastian Thrun taught an Introduction to Artificial Intelligence course on campus in 2011 with Peter Norvig, Google’s director of research. But they also opened up the course online at no charge to anyone in the world who wanted to participate. As a result, many of the students from the face-to-face class opted to participate online.

As more and more students apply, top universities are becoming more selective, adds DeMillo. They’re selecting students by the quality of their high school education, which means they’re selecting by ZIP code and economic status. “We’re going through that now, and it has nothing to do with online education,” DeMillo said.

Massive open online courses have been around in some form for at least four years. But their popularity exploded in 2012 after Stanford’s experiments — and these efforts will continue to reshape higher education. Thrun left Stanford to co-found Udacity, which launched to offer high-quality, low-cost classes. More than 160,000 students from more than 190 countries signed up for Udacity’s first artificial intelligence course. Two other Stanford professors, Andrew Ng and Daphne Koller, spun off a company called Coursera. And, in 2012, Harvard University and the Massachusetts Institute of Technology teamed up to start the not-for-profit edX. These organizations — along with Udemy and other academics — all offer massive open online courses that are available to anyone, with unlimited space and no charge.

“I think not only are they sustainable, as you look at the economics of the cloud,” Evans said, “[but also] they’ve become the norm.” The question isn’t so much whether they can be sustained technologically or economically, he said, but whether people can stay engaged in the course. And that’s one of the challenges these course providers will have to face. Currently the courses are not as engaging because students don’t build an affinity for the university or make friendships like they do on campus, Evans said. As 3-D technology and 4K-resolution displays and video improve, they will help students make deeper emotional and social connections. However, these courses are only for certain types of students; they won’t meet everyone’s needs, Taiz said. “I worry if we think that this is the way of the future.” They also have a high dropout rate, she said. 
Before MIT joined its online course efforts with Harvard in edX, it offered “Circuits and Electronics” under the name MITx. Nearly 155,000 people signed up, according to MIT. Of these students, less than 15 percent tried the first problem set — and fewer than 5 percent passed the course. The dropout rate is really not exceptionally high in context, DeMillo said. A 20 percent retention rate in these courses is good. In other businesses, an online conversion rate of 1 to 2 percent is considered a win. Since January, top research universities have banded together to offer courses featuring their rock star professors. Georgia Tech started offering classes through Coursera in July and had 90,000 students registered in two months. “The high-quality portion of this story is really important,” DeMillo said. “The reason people are flocking to these courses is that the quality of the courses is so high, and it’s such a compelling experience for students that they’re drawn to it.” Online classes like these will be just one of the alternative paths that students can take down the road, Wildavsky said. Students will choose from multiple options, including online classes, traditional course credits and competency-based learning. Traditional course credits measure time spent learning, while competency-based learning measures mastery of skills and knowledge. Western Governors University — an accredited online university founded by 19 state governors — follows the competency-based learning path. A start-up called StraighterLine offers online classes a la carte for $99 a month, which is part of a trend called unbundling, Wildavsky said. Unbundling disassembles higher education into pieces and parcels them off to whoever can provide them at the highest quality for the lowest price. Think of it as contracting out teaching, curriculum, advising and other services. Once companies like StraighterLine can get universities to recognize their classes for credit, this will be yet another option for students to access higher education. “We’re going to move to a world where academic results matter much more than how you get there,” he added. No matter how students get there, they need to earn a recognized credential that gets them into the workplace in larger numbers, Evans said. According to a 2011 Pathways to Prosperity project from the Harvard Graduate School of Education, 56 percent of students at four-year colleges earn a bachelor’s degree within six years. And less than 30 percent earn an associate’s degree in three years. Students will not complete all of their learning at one institution. But students who currently transfer to multiple institutions end up with more credits than they need to finish a degree. States will need to think about ways to have credits and academic experience transfer to any public institution across their state system. That way, students can finish their degrees without worrying about credits transferring or retaking courses elsewhere. “As students become far more mobile, their academic experience has to be as portable as the mobility they represent in their own lives,” Evans said. “And that’s where technology can enable that portability to happen in a far greater way than what we have today.” Because academic results will matter more than how students get there, accreditors will change the way they evaluate institutions. Currently institutions are evaluated by inputs like the size of the university library or the amount universities spend. 
In the future, accreditors will evaluate universities by outputs, which include student learning, student success in the labor market and graduation rates. Along with multiple pathways and different accreditation measurements, credentials will change. Over the next five to 10 years, people will get a job solely by earning micro-credentials, demonstrating competency and showcasing their knowledge and skills on the Internet, Staton said. By placing more value on what people can do, everyone will focus on the actual work of potential employees rather than being hung up on credentials, he said. But that doesn’t mean that a bachelor’s degree has no place. Society may decide that a degree is important because of other signals it conveys about the individual, such as being highly socialized, capable of doing long-term projects or having a supportive family. Either way, this focus on the work rather than the diploma will undercut the skyrocketing prices of undergraduate education and potentially some types of graduate education.

Depending on who casts the vision, higher education could be headed down a road that leads to technology-mediated or technology-integrated learning. Students could travel multiple paths to get to academic results. And technology could play an increasing role in making higher education accessible and affordable. “It shouldn’t be [about] funding monolithic technology platforms; there will be no monolithic technology platforms,” Staton said. “It will be about interoperability, not about one solution for the entire system.”
Charge-coupled devices (CCDs) are the sensors that record images in digital cameras. A CCD consists of an integrated circuit containing an array of linked, or coupled, capacitors that act as many small pixels. Light falling on a pixel is converted into electric charge, which is then measured by the CCD electronics and represented by a number, usually ranging from 0 (no light) to 65,535 (very intense light). Each CCD chip is composed of an array of metal-oxide-semiconductor (MOS) capacitors, and each capacitor is a pixel. When voltages are applied to the electrodes on top of the chip, charge can be stored within its structure; sequences of digital pulses applied to those electrodes shift the charge from pixel to pixel so it can be read out, producing the image.

These photoelectronic image sensors have made digital photography possible and revolutionized astronomy, space science, and consumer electronics. The CCD is a crucial component of fax machines, digital cameras, and scanners. CCDs are used as light-sensing devices in digital cameras, optical scanners, and video cameras. CCD cameras are widely used in astrophotography and are typically sensitive to infrared light, which enables infrared photography, night-vision devices, and zero-lux (or near-zero-lux) video recording and photography. CCDs are used to record exposures of galaxies and nebulae. They commonly respond to about 70% of the incident light (a quantum efficiency of about 70%), making them far more efficient than photographic film, which captures only about 2% of the incident light. As a result, CCDs were rapidly adopted by astronomers.

This report covers the entire spectrum of CCDs, which are used in applications such as consumer electronics, automotive, medical, industrial, and security and surveillance. The market is segmented into four major geographic regions: the Americas, Europe, Asia-Pacific, and the Rest of the World (RoW). The current and future trends of each region are analyzed in this report. The market share of the major players and the competitive landscape are also included, and the report highlights drivers, restraints and opportunities for the global market. Further, the report covers all the major companies involved in this segment, including their product offerings, financial details, strategies, and recent developments. Along with the market data, you can also customize MMM assessments to meet your company's specific needs.
Customize to get comprehensive industry standards and deep-dive analysis of the following parameters:

Raw Material/Component Analysis
- In-depth trend analysis of raw materials in a competitive scenario
- Raw material/component matrix giving a detailed comparison of the raw material/component portfolio of each company, mapped at the country level
- Comprehensive coverage of regulations followed in North America (U.S., Canada, Mexico)
- Fast turn-around analysis of manufacturing firms in response to market events and trends
- Opinions from different firms about various components, and standards from different companies
- Qualitative inputs on macro-economic indicators, mergers and acquisitions
- Tracking of the values of raw materials/components shipped annually in each country

Table of Contents
1.1 Analyst Insights
1.2 Market Definitions
1.3 Market Segmentation & Aspects Covered
1.4 Research Methodology
2 Executive Summary
3 Market Overview
4 CCD Image Sensor by Submarkets
The advent and development of fiber optic communication technology has brought revolutionary change to the communications industry. Today, roughly 85% of communication services worldwide are carried over optical fiber, and both long-haul and local relay networks make extensive use of fiber optics. The development and maturation of Dense Wavelength Division Multiplexing (DWDM) technology has opened up a vast space for exploiting the bandwidth and capacity of optical fiber transmission. With its clear advantages in speed and bandwidth, DWDM has become the direction in which optical communication networks are developing.

In recent years especially, the explosive growth of IP-based Internet traffic has not only changed the relationship between the IP network layer and the underlying transport network, but has also imposed new requirements on the topology of the entire network, node design, management and control. An intelligent network architecture, the Automatically Switched Optical Network (ASON), has therefore become a research hotspot. Its core node is the optical cross-connect (OXC): dynamic wavelength routing and flexible, effective management of the optical network can be realized with OXC equipment. OXC is one of the key technologies in increasingly complex DWDM networks, and the optical switch, the functional device that switches the optical path, is a key part of the OXC. The optical switch matrix is the core of the OXC; it enables dynamic optical-path management, fault protection in the optical network and dynamic wavelength allocation, and it is of great significance for resolving wavelength contention in today's complex networks, improving wavelength reuse and allowing flexible network configuration.

The optical switch is not only the core device in the OXC; it is also widely used in the following areas:
(1) Optical network protection switching. Practical optical transmission systems keep spare fibers; when the working channel is interrupted or its performance degrades beyond a certain point, the optical switch automatically moves the main signal onto the standby fiber, so the receiving end continues to receive a normal signal and does not perceive the fault. Connecting the network nodes in a ring further improves the survivability of the network.
(2) Real-time network performance monitoring. At remote fiber test points, a 1×N multi-channel optical switch connects a number of optical fibers to an Optical Time Domain Reflectometer (OTDR). A computer controls the switching sequence and timing so that every fiber is tested in turn, and the results are returned to the network control center; once a problem is found on a link, it can be handled directly from the network management center (see the illustrative sketch at the end of this article).
(3) Optical switches are also used in test systems for fiber optic communication devices, in metropolitan area networks, and in access-network multiplexing and switching equipment.

The introduction of optical switches will make future all-optical networks more flexible, intelligent and survivable. Optical switching has become key to future optical networking and plays an increasingly important role in communications and automatic control. Among the many types of optical switches, MEMS optical switches are considered the most likely to become mainstream.

This article gives an overview of the operating principles and characteristics of the main types of optical switches, focusing on several major MEMS optical switches and outlining their structure and performance characteristics.
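As a concrete, purely illustrative sketch of the monitoring arrangement described in application (2), the Java fragment below cycles a 1×N optical switch across its ports and triggers an OTDR trace on each fiber. The OpticalSwitch and Otdr interfaces are hypothetical stand-ins invented for this sketch; they do not correspond to any real instrument API.

// Illustrative only: hypothetical interfaces for a 1xN optical switch and an OTDR.
interface OpticalSwitch {
    int portCount();
    void selectPort(int port);   // route the OTDR onto the fiber attached to this port
}

interface Otdr {
    double[] acquireTrace();     // returns backscatter trace samples for the selected fiber
}

class FiberMonitor {
    private final OpticalSwitch sw;
    private final Otdr otdr;

    FiberMonitor(OpticalSwitch sw, Otdr otdr) {
        this.sw = sw;
        this.otdr = otdr;
    }

    // Poll every fiber in sequence and report the results to the control center.
    void scanOnce() {
        for (int port = 0; port < sw.portCount(); port++) {
            sw.selectPort(port);
            double[] trace = otdr.acquireTrace();
            report(port, trace);              // e.g., push to the management system for fault analysis
        }
    }

    private void report(int port, double[] trace) {
        System.out.printf("port %d: %d trace samples%n", port, trace.length);
    }
}

In a real deployment the scan would run on a schedule and the trace analysis (locating breaks or excess loss) would happen in the network management center, as the article describes.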
IP Telephony: Private Branch Exchange (PBX) vs. VoIP
Contributed by Billy Short
Internet Protocol telephony (frequently referred to as IP telephony) is an important concept in enterprise communications technology. IP telephony refers to all real-time applications over IP, which include many different instant messaging programs, video conferencing, fax services, and Voice over Internet Protocol (VoIP). This document focuses mostly on the VoIP aspect of IP telephony.
Why is the industry buzzing? Because the technology could revolutionize networking. Typical networking intelligence is distributed across physical switches and routers, each with its own configuration. Even routing protocols designed to move packets across the globe rely on neighbors' advertisements to build their own view of the world. Traditionally, there hasn't been a central network map or a single point of management. SDN promises to end the need for network command-line interface (CLI) jockeys, while providing a more robust, programmable network. SDN offers flexibility, performance and agility, as well as security, according to Chris Hoff.

The two main concepts generally accepted as defining SDN are the separation of the control and data planes, and programmability. Separating the control plane from the data plane means that command and control is removed from the switching/routing devices. Instead, control-plane operations are handled centrally and distributed to data-plane elements (think switches/routers). This allows top-level decisions to be made from a management device with knowledge of the network as a whole, rather than through device-centric configurations. Programmability allows features to be added or expanded, flows to be changed dynamically, and management to be passed up to higher-level orchestration tools. A great example is QoS controls. As outlined by Mike Fratto, software-defined networking architectures would allow separate flows to be programmed for different data types.

These features are very applicable to both private- and public-cloud architectures. For evidence, see Google's announcement that OpenFlow is being used in a big way within its network. Management of network flows can be designed on a case-by-case basis, while still running on the same physical topology. Separate customers (internal or external) can be defined with separate routing based on need, budget or otherwise. Additionally, flow changes could be made based on congestion, as Fratto suggests, or security, as suggested by Hoff. The hardware-independent flexibility should prove a key enabler for public and private clouds. These architectures will provide a set of pipes that can be set to adapt, without the need for physical changes or multiple CLIs. Additionally, SDN can be used to enhance security and visibility into network traffic. Overall, the feature set and thinking behind SDN (that is, how should a modern network look?) will be extremely beneficial to cloud architectures. For more detailed information on SDN, see "SDN – Centralized Network Command and Control." Disclaimer: This post is not intended as an endorsement for any vendors or products mentioned.
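To make the centralized match-action idea above concrete, here is a minimal, vendor-neutral Java sketch of a controller pushing per-traffic-type QoS flow rules to its switches. It is purely illustrative: the FlowRule and SdnController classes and their methods are invented for this example and do not correspond to OpenFlow or any real controller API.

// Illustrative sketch: a centralized controller installing match-action flow rules.
import java.util.ArrayList;
import java.util.List;

class FlowRule {
    final String trafficClass;  // match criterion, e.g. a traffic type or DSCP class
    final int outputQueue;      // action: direct matching traffic to a QoS queue
    FlowRule(String trafficClass, int outputQueue) {
        this.trafficClass = trafficClass;
        this.outputQueue = outputQueue;
    }
}

class SdnController {
    private final List<FlowRule> flowTable = new ArrayList<>();

    // Control-plane decision made centrally, then distributed to data-plane switches.
    void installQosPolicy() {
        flowTable.add(new FlowRule("voice", 0));  // highest-priority queue
        flowTable.add(new FlowRule("video", 1));
        flowTable.add(new FlowRule("bulk", 2));   // best-effort queue
        pushToSwitches();
    }

    private void pushToSwitches() {
        flowTable.forEach(r ->
            System.out.printf("install: match=%s -> queue %d%n", r.trafficClass, r.outputQueue));
    }
}

The point of the sketch is the separation the article describes: the decision (which traffic gets which queue) lives in one central program with a whole-network view, while the switches simply apply the rules they are given.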
BGPv4 is an Exterior Gateway Protocol (EGP). It was introduced in 1995 in RFC 1771 and is now defined in RFC 4271. The major difference between earlier versions of BGP and v4 is that BGPv4 is classless and supports CIDR. BGP is primarily used to propagate and advertise public networks across the Internet; a large majority of Internet communications is made possible by BGP. Autonomous System (AS) numbers are assigned to companies wanting to advertise their networks/IP ranges to the Internet. AS numbers are controlled by the Internet Assigned Numbers Authority (IANA), which delegates them to Regional Internet Registries (RIRs), which in turn assign specific AS numbers to ISPs or companies requesting one.

Unlike IGPs, BGP is connection based and uses TCP port 179 to communicate with peers. Since TCP is used, routing via an IGP or static routes must be in place before BGP peering can be established. Because each BGP node relies on downstream neighbors to pass along routes, BGP is considered a distance vector protocol: each node makes route calculations based on the routes advertised by its BGP peering neighbors. Unlike other distance vector protocols, BGP uses a route's AS_PATH to determine best-path selection, and for this reason BGP is commonly called a path vector protocol.

Packet Types/Neighbor States

Open Message - Sent after the TCP connection is established. This message is used to identify the sending router and to specify operational parameters.
- Open message includes:
  - BGP version number
  - AS number
  - Hold time
  - BGP ID (highest loopback IP, or highest physical IP if no loopback exists)
  - Optional parameters

Keepalive Message - Sent once a router accepts the parameters in the neighbor's open message. Keepalives are then sent periodically.

Update Message - Sent when route changes are made, including new routes, withdrawn routes or both.
- Update message includes:
  - Network Layer Reachability Information (NLRI) - used to advertise new routes
  - Path attributes
  - Withdrawn routes
- Note: each update message describes only a single BGP route. A new update message must be sent for each route being added.

Notification Message - Sent whenever an error is detected between peers. Notification messages always cause the BGP connection to close.

BGP Neighbor States
- Idle
- Connect
- Active
- OpenSent
- OpenConfirm
- Established

Idle State - BGP always begins in the idle state, in which it refuses all incoming connections. When a start event occurs, the BGP process initializes and starts establishing a BGP connection with its neighbor.
- An error causes BGP to transition back to the idle state. The router can then automatically issue another start event. Too many start-event attempts can cause flapping, so limits should be set on the number of retries.

Connect State - BGP is waiting for the TCP connection to be completed. If the connection is successful, an Open message is sent and the router transitions to the OpenSent state.
- If the TCP connection is unsuccessful, BGP continues to listen for TCP connection attempts from the neighbor, resets its ConnectRetry timer and transitions to the Active state.

Active State - BGP is trying to initiate a TCP connection with a neighbor.

OpenSent State - An Open message has been sent and BGP is waiting to receive an Open message from its neighbor.
- If there are errors in the Open message (incorrect AS number or version, etc.), an error notification is sent and BGP transitions back to the idle state. If no errors are seen, a keepalive message is sent.
OpenConfirm State - The BGP process is waiting for a keepalive or notification from a neighbor.
- If a notification or a TCP disconnect is received, the state transitions to idle. If the hold timer expires, an error is detected, or a stop event occurs, a notification is sent and the BGP connection is closed, changing the state to idle.

Established State - The BGP connection is fully established with a neighbor and update messages are exchanged with the new neighbor.
- If any errors are found or the keepalive timer expires, a notification message is sent and BGP transitions back to idle.

Path Attributes

Path attributes are what allow BGP administrators to control and manipulate routing updates among peers. BGP path attributes allow you to control which routes are preferred, which routes are advertised to peers and which routes are added to the local routing table. Path attributes fall into one of four categories:
- Well-known Mandatory - must be included in all updates
- Well-known Discretionary - must be supported but may or may not be included in updates
- Optional Transitive - not required, but a peer must accept the attribute
- Optional Nontransitive - not required and can be ignored

The attributes described below fall into these categories as follows: ORIGIN, AS_PATH and NEXT_HOP are well-known mandatory; LOCAL_PREF and ATOMIC_AGGREGATE are well-known discretionary; AGGREGATOR and COMMUNITY are optional transitive; and MULTI_EXIT_DISC (MED), ORIGINATOR_ID and CLUSTER_LIST are optional nontransitive.

- ORIGIN - Specifies the origin of the routing update.
  - IGP, EGP, Incomplete (preferred in this order)
  - Routes learned from redistribution carry the Incomplete origin because BGP cannot tell where the route originated.
- AS_PATH - Uses a sequence of AS numbers to describe the AS path to the destination.
  - When a BGP speaker advertises a route to an eBGP peer, it prepends its AS number to the AS_PATH. When advertising to iBGP peers, the AS is not added.
- NEXT_HOP - Describes the next-hop router on the path to the advertised destination. The NEXT_HOP attribute is not always the address of the neighboring router. The following rules apply:
  - If the advertising and receiving routers are in different ASs (external peers), the NEXT_HOP is the IP of the advertising router's interface.
  - If the advertising and receiving routers are in the same AS (internal peers), and the route refers to an internal destination, the NEXT_HOP is the IP of the neighbor that advertised the route.
  - If the advertising and receiving routers are in the same AS (internal peers), and the route refers to a route in a different AS, the NEXT_HOP is the IP of the external peer from which the route was learned.
- LOCAL_PREF - Used only in updates between iBGP peers. It communicates a BGP router's degree of preference for an advertised route.
  - When multiple routes to the same destination are received from different iBGP peers, the LOCAL_PREF is used to determine the best path.
  - Highest value takes preference.
  - Default value is 100.
- ATOMIC_AGGREGATE - Used to alert downstream routers that a loss of path information has occurred due to summarization of subnets.
  - If an update is received with the ATOMIC_AGGREGATE attribute set, that BGP speaker cannot update the route with more specific information, and the attribute must be set when passing the route to other peers.
- AGGREGATOR - Provides information about where the aggregation was performed by including the AS and router ID of the originating aggregating router.
- COMMUNITY - Used to simplify policy enforcement by setting a community value.
  - Four octets are used (AA:NN), where AA represents the AS and NN is an administratively set value. An example would be 65001:70.
  - Cisco uses NN:AA instead, and "ip bgp-community new-format" must be set to use AA:NN.
  - Reserved COMMUNITY values used for policy enforcement:
    - INTERNET - all routes belong to this community by default and are advertised freely.
    - NO_EXPORT - routes cannot be advertised to eBGP peers or advertised outside the confederation.
    - NO_ADVERTISE - routes cannot be advertised to any peer (eBGP or iBGP).
    - LOCAL_AS - (aka NO_EXPORT_SUBCONFED per RFC 1997) routes cannot be advertised to eBGP peers, including peers in other ASs within the same confederation.
- MULTI_EXIT_DISC (MED) - Used to influence routes entering the local AS.
  - Carried in eBGP updates, this attribute allows an AS to inform a directly connected AS of its preferred ingress points.
  - Lowest value is preferred.
  - Default value is 0.
  - MED cannot be passed beyond the directly connected AS. For that, the AS_PATH must be manipulated.
  - By default, MEDs are not compared if two routes to the same destination are received from two different ASs.
- ORIGINATOR_ID - A 32-bit value created by route reflectors to prevent routing loops.
  - The value is the router ID (RID) of the router that originated the route in the local AS. If a BGP speaker sees its own RID in the ORIGINATOR_ID attribute of a received update, it knows a loop has occurred and ignores the update.
- CLUSTER_LIST - A sequence of route reflection cluster IDs used by route reflectors to prevent routing loops.
  - The CLUSTER_LIST consists of all cluster IDs a specific route has passed through. If a route reflector sees its own cluster ID in this attribute, it knows a loop has occurred and ignores the update.
- Administrative Weight - A Cisco-specific BGP parameter assigned to help prioritize outbound routes.
  - Local to the router only and not communicated to peers.
  - Weight ranges from 0 to 65,535; the higher the weight, the more preferable the route.
  - Weight is considered before all other characteristics.
  - Routes generated by the local router = 32,768; routes learned from a peer = 0.
- AS_SET - Used to prevent loops (just like AS_PATH) by listing all ASs traversed (not listed in order) in the route. Used when an aggregate summarizes a route and starts the AS_PATH over. AS_SET is included (with all original ASs) so routers can determine whether a loop has occurred.
  - When AS_SET is included, an ATOMIC_AGGREGATE does not have to be included with the aggregate.
  - Updates are sent when ASs change within an aggregate and AS_SET is included. Without the AS_SET, no update would be sent, since it is an aggregate.

Attribute Order of Preference
- Administrative Weight (Cisco only) - highest wins
- LOCAL_PREF - highest wins
- Locally originated routes are preferred
- AS_PATH - shortest path wins
- Origin code - lowest wins (IGP preferred over EGP, EGP over Incomplete)
- MED - lowest wins
- eBGP > confederation eBGP > iBGP routes
- BGP next hop - lowest IGP metric to the next hop wins
- BGP router ID - lowest wins

eBGP and iBGP

Exterior BGP (eBGP) is used to set up BGP peering among peers in different autonomous systems. eBGP peering is most common between ISPs and their customers; ISPs also establish peering points with other service providers via eBGP. When an eBGP peer advertises routes to its neighboring peer, the AS number is prepended to the AS_PATH. If a router receives the same route from multiple BGP peers, the route with the shortest AS_PATH is chosen and added to the routing table. Routers then advertise only the best route to other BGP peers. An example AS_PATH would be 65001 65010 65111.
Using this AS_PATH we can see the route originated in AS 65111, was then advertised to 65010, and was then advertised again to 65001. To avoid loops, if a BGP peer sees its own AS number in the AS_PATH, it knows a loop would occur and discards the route.

Internal BGP (iBGP) is used to set up BGP peering among peers in the same AS. Usually iBGP peers fall inside the same company or organization. iBGP is typically seen in multihomed scenarios and in transit ASs, which are used to pass BGP routes from one AS to another. When routes are advertised between iBGP peers, the AS_PATH is not changed, since the routes stay within the same AS. The AS number is not prepended to the AS_PATH until a route is advertised to an eBGP peer. Because the AS_PATH is what BGP uses to protect against routing loops, iBGP peers are unable to tell whether a route advertised by another iBGP peer would cause a loop. To solve this issue, iBGP peers do not advertise routes learned from one iBGP peer to other iBGP peers, thus providing loop avoidance within an AS.

The problem with the iBGP loop-avoidance rule is that BGP routes learned on one end of an AS are not fully propagated to routers on the other end of the AS. One of three solutions must be used to fully propagate BGP routes across the AS:
- All iBGP peers are fully meshed and peer with all other iBGP routers within the AS.
  - By fully meshing all iBGP peers, each BGP router receives updates from all routers in the AS.
  - Unfortunately this is not always possible and does not scale with a growing network.
- Synchronization is used, and BGP routes are redistributed into the IGP so the routes can be advertised across the AS.
  - Most IGPs are unable to handle large BGP tables, much less the full Internet BGP table.
- Route reflectors are established.

Route Reflectors

Route reflectors are defined in RFC 4456 and are used primarily in large autonomous systems to propagate routes to all BGP peers without a fully meshed AS. In larger networks it is impractical to set up full-mesh peering between all peers within the AS. Route reflection provides a means to centralize iBGP peering on a single router or group of routers known as route reflectors. All routers (known as clients) within the AS peer with a centralized router (the route reflector, or server). Normally iBGP routers do not advertise routes learned from iBGP peers internally, but route reflectors are the exception to this rule: route reflectors advertise routes to both iBGP and eBGP peers, thus allowing iBGP-learned routes to propagate to all peers within an AS.

A group of route reflector(s) and clients is known as a cluster. If multiple route reflectors exist within a single cluster, the cluster ID must be defined on each route reflector. The key benefit of route reflection over other techniques, such as confederations, is that route reflection does not need to be supported by all routers in the cluster or AS. It needs to be supported only on the route reflectors (servers). Clients do not need any additional configuration to join a cluster; clients are specified on each route reflector servicing the cluster.
- Client peers - routers that are members of the cluster.
- Non-client peers - routers that are not members of the cluster.

Route reflectors treat each route differently depending on how it was received. There are three rules followed by route reflectors:
- Locally originated routes and routes received from eBGP neighbors are propagated to all BGP peers (internal and external).
- Routes received from a client are propagated to all BGP peers (internal and external).
- Routes received from an iBGP non-client peer are propagated to all eBGP peers and all iBGP client peers.

Route Reflection Design

When designing your BGP network for route reflection, you need to consider the location of the route reflectors relative to all client peers. Generally, routers that are central to the network and able to peer with all neighbors should be used as route reflectors. For example, in a star topology the hub router would be used as the route reflector. If it is not possible to have a single centralized route reflector, then multiple route reflectors should be used. Multiple route reflectors should also be considered for redundancy in the event of a router failure.

For large networks, you might also consider breaking the network down into multiple clusters. Route reflectors can be clients of other route reflectors, which allows you to set up a hierarchical network of clusters. A good example would be to create a separate cluster for each city or geographical area, with each route reflector acting as a client of the backbone cluster. Route reflection can also be used alongside confederations to improve control over routing updates across the network.

Reference: Halabi, Sam and McPherson, Danny (2000). Internet Routing Architectures, 2nd Edition. Cisco Press. ISBN 1-57870-233-X.
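As a closing illustration of the attribute order of preference listed above, here is a minimal Java sketch of a best-path comparator. It is not router code: only the first few tie-breakers are modeled, and the Route class and its fields are invented for this illustration.

// Minimal sketch of the best-path order of preference described above.
import java.util.Comparator;

class Route {
    int weight;                 // Cisco administrative weight (higher preferred)
    int localPref;              // LOCAL_PREF (higher preferred)
    boolean locallyOriginated;  // locally originated routes preferred
    int asPathLength;           // shorter AS_PATH preferred
    int origin;                 // 0 = IGP, 1 = EGP, 2 = Incomplete (lower preferred)
    int med;                    // lower MED preferred
}

class BestPath {
    // Orders candidate routes from most preferred to least preferred.
    static Comparator<Route> preference() {
        return Comparator
            .comparingInt((Route r) -> -r.weight)              // 1. highest weight
            .thenComparingInt(r -> -r.localPref)               // 2. highest LOCAL_PREF
            .thenComparing((Route r) -> !r.locallyOriginated)  // 3. locally originated first
            .thenComparingInt(r -> r.asPathLength)             // 4. shortest AS_PATH
            .thenComparingInt(r -> r.origin)                   // 5. lowest origin code
            .thenComparingInt(r -> r.med);                     // 6. lowest MED
        // A real implementation continues with eBGP over iBGP, lowest IGP metric
        // to the next hop, and lowest router ID, plus many policy controls.
    }
}

Sorting a list of candidate routes with BestPath.preference() puts the most preferred path first; actual routers, of course, apply the remaining tie-breakers and policy mechanisms omitted here.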
Black Box Explains...FDDI
Fiber Distributed Data Interface (FDDI) is a networking standard for fiber-optic networks operating at speeds of up to 100 Mbps. The standard FDDI network is set up in a ring topology, with two rings that transmit signals in opposite directions to a series of nodes. FDDI accommodates up to 500 nodes per dual-ring network, with spacing of up to 2 kilometers between adjacent nodes. FDDI uses the same token-passing scheme as the IEEE 802.5 Token Ring network to control transmission around the loop.
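To make the token-passing idea concrete, here is a toy Java sketch: a single token circulates around a ring of stations, and only the station currently holding the token may transmit. It is a teaching illustration, not an FDDI or 802.5 implementation.

// Toy illustration of a token circulating a ring; only the token holder may transmit.
public class TokenRingDemo {
    public static void main(String[] args) {
        int nodes = 5;   // stations on the ring
        int holder = 0;  // index of the station currently holding the token

        // Circulate the token twice around the ring.
        for (int step = 0; step < nodes * 2; step++) {
            System.out.printf("node %d holds the token and may transmit%n", holder);
            holder = (holder + 1) % nodes; // pass the token to the next station
        }
    }
}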
Storage Growth in the Enterprise

Between 5 and 7 exabytes (an exabyte is 10 to the 18th power bytes) of new information are created every year. If this somewhat conservative estimate is correct, then that means on average, more than 800 megabytes of information per year are generated for every person living on earth. This makes William Shakespeare, whose “Complete Works” amount to approximately 5 megabytes, seem like a slacker. Of course, these numbers have been twisted like so much political polling data (Shakespeare was a very prolific writer—as someone who’s slept through several of his plays, I should know) to illustrate a point: Information is exploding.

Just ask Rick Bauer, technology director of the Storage Networking Industry Association (SNIA). “Data is just exploding, and there’s no diminution of that growth pattern in sight,” he said. “I think we’ve had a real roller-coaster ride in terms of growth in the past decade. We’ve seen Moore’s Law in spades with both tape capacity and storage capacity on discs growing by logarithmic metrics. During the pre-Internet-bubble days, there was an almost breathless speculation about storage approaching infinity, bandwidth costs approaching zero and profitability approaching billions weekly, or something like that.”

In the mid- to late-1990s, information storage really came into its own in organizational IT with upsurges in Fibre Channel-based storage, storage area networking (SAN) and network-attached storage (NAS) deployments. “There was a real exuberance as network storage became more and more ready for primetime in the data center,” Bauer said. “In terms of adoption in the enterprise, it really began there: solving problems that the data center was having with the amount of storage proliferating and not being able to track that with direct attached (storage).”

Although IT has slowed down—even after an economic recovery of sorts—storage keeps on keeping on. Two areas are particularly strong in the storage space, Bauer said. The first is Internet Small Computer System Interface (iSCSI), a method of connecting storage facilities that is increasingly used by both big corporations and small- to medium-sized businesses. The other, storage virtualization, has helped organizations condense heterogeneous data into clear and easy-to-understand implements. Virtualization engines allow storage professionals to view data pools granularly, while managing and backing up aggregate information.

However impressive these new technologies are, though, they have to serve enterprises and their objectives. “Storage has got to be aligned to the business units, and the business units of the enterprise are the biggest drivers for the growth of storage,” Bauer said. “They’re the ones who are deploying the large databases, whether it be e-commerce, analytics or any of the other things fueling that growth. While I don’t have the exact figures, I would pretty much bet the mortgage payment that the growth of database or just the size of the databases themselves are really major pulls into the storage side of things. That’s how the storage gets justified: When you’re purchasing these large systems and arrays and, concomitantly, the kind of security and data protection systems you have to have as well, all of that is being driven by some business driver. 
It’s usually the manager of customer, business or critical infrastructure data.” A key driver of growth in the storage industry has been regulations like Sarbanes-Oxley and the Health Insurance Portability and Accountability Act (HIPAA). Projected spending in 2005 on regulatory compliance in the United States by Fortune 500 companies is more than $16 billion, Bauer said. “As storage has become more centralized, it’s a bigger and bigger target, not just for recreational hacking, but from people who are really part of criminal enterprises and attacking that data for economic advantage. We’re also seeing that in the security space, with some of the more publicized mishandling and exposure of private customer data. That seems to be a real problem right now: tapes being lost, tapes being exposed, things not being encrypted. “On the part of the government, in some ways the United States is playing catch-up here to very stringent and customer-focused regulations in Japan and the European Union,” he added. “I think we’re seeing a sea change on the part of Congress to take some of that legislation. Once corporate America has a financial penalty for accidents or really caring that much about the data, then I think we’ll have a lot more board members taking steps to make sure that data is secure.” Yet storage professionals shouldn’t wait for the government to get its act together to secure data. They ought to be working together and promoting best practices, Bauer said. “People will ask the question, ‘How do we do it?’ Hopefully, certified storage professionals are going to be able to give a variety of different ways, from physical to digital. This is where storage professionals who are trained will help get professionals who are not as aware of all the ways to secure data in flight.” Storage growth in enterprises will likely continue unabated in the future, due in large part to a couple of focal factors. “I think ubiquitous data—secure and whenever you want it—will be a driver for the next big things in our industry,” Bauer said. “We’re surrounded by mountains of data. Tools for finding what we need are becoming more significant as we multiply the data out, from the ability to intelligently sift through everything from the data warehouse to the individual desktop search for that CD I burned for my brother-in-law last weekend.” Another significant source of storage growth will be data management on an increasingly global basis, he said. “By managing, I mean securing, mining, protecting and keeping data legal. You’re going to find companies really looking for good solutions to manage a global information store. In the next five years, there are going some exciting things around being able to move data easily in an optimized and secure fashion from point to point.” Additionally, some of the most interesting developments in the storage sector within the next few years will take place not in the workplace, but rather the homes of individual consumers, Bauer predicted. “The resurgence of Apple is due to the ability to manage data in musical form. With video and music on-demand, figuring out how to manage security and digital rights, and yet be able to push data from the living room to the car, is going to mean software, hardware and storage companies working together to put out exciting products. I think we’re also going to see storage professionals managing those.” –Brian Summerfield, [email protected]
Learn about another web services framework from the Apache Software Foundation.

CXF is another web services stack from the Apache Software Foundation, the same group behind the Axis2 stack. Even though they come from the same organization, Axis2 and CXF take very different approaches to how web services are configured and delivered. In this article, you'll learn the basics of using JAXB 2.x and JAX-WS 2.x for web services work with CXF, and you'll see how CXF compares to the other JAXB/JAX-WS stacks — Axis2 and Metro — discussed in prior articles.

CXF basics compared

In terms of the user interface, CXF has a lot in common with the Axis2 and Metro web service stacks. All three stacks allow you either to start from existing Java™ code and build a web service, or start from a WSDL web service description and generate Java code to use or implement the service. And like the other stacks, CXF models service operations as method calls and service port types as interfaces.

Like Axis2, but unlike Metro, CXF allows you to choose between different data-binding technologies. CXF support for JAXB 2.x data binding is on par with Metro and superior to Axis2 because it allows you to use JAXB customizations when generating code from WSDL (Axis2 does not). CXF also lets you use other data-binding approaches, though the support for them is not as well developed as in Axis2 — in particular, you can generate code from WSDL with CXF only if you're using JAXB or XMLBeans data binding.

The preferred service-configuration technique (or frontend, in CXF terminology) used with CXF is JAX-WS 2.x annotations, generally supplemented by XML configuration files. Support for JAX-WS annotations in CXF is on par with Metro, making it much better suited for JAX-WS use than Axis2 (which has some major limitations in using JAX-WS, as discussed in "JAXB and JAX-WS in Axis2"). As with other JAX-WS implementations, CXF requires the service WSDL to be available to the client at run time.

Like the other stacks, CXF uses request and response processing flows composed of configurable components. CXF calls the components interceptors, rather than handlers, but terminology aside these are equivalent components. Like Metro, CXF comes complete with support for WS-Security and other extension technologies as part of the basic download. Unlike Metro, the CXF JARs are modular — meaning you can pick and choose the JARs to include as part of your application depending on the technologies being used (the /lib/WHICH_JARS file in the CXF installation tells you the particular JARs needed for various common use cases). The downside of this modularity is that you can end up with a long list of specific JARs needed for your application; on the plus side, it allows you to keep down the size of your deployment.

Again like Metro, CXF normally requires you to build a WAR file for your web service rather than deploying potentially many services to a single server installation (the approach used with Axis2). CXF also provides an integrated HTTP server suitable for production use in the form of the Jetty server. This gives you a more flexible and powerful alternative than the simple HTTP server support integrated in Axis2 and Metro.
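Before turning to the sample application, here is a minimal standalone sketch of what publishing a service on CXF's embedded HTTP server can look like. It uses only the standard JAX-WS Endpoint API; when the CXF runtime JARs (including the Jetty HTTP transport) are on the classpath, CXF handles the publish call. The LibraryImpl class and its single operation are invented placeholders, not the article's actual sample code.

// Minimal standalone JAX-WS service; with the CXF JARs on the classpath,
// Endpoint.publish() is served by CXF and its embedded Jetty-based HTTP server.
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

@WebService
public class LibraryImpl {

    // Placeholder operation; the real sample service defines getBook, getBooksByType,
    // getTypes and addBook, as described below.
    public String getTypes() {
        return "history,scifi";
    }

    public static void main(String[] args) {
        Endpoint.publish("http://localhost:9000/library", new LibraryImpl());
        System.out.println("Service published at http://localhost:9000/library?wsdl");
    }
}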
The code download provides a version of the simple library-management service used in previous articles of this series; this one is modified to demonstrate CXF usage. As with the earlier versions, the WSDL service definition defines four operations:
- getBook retrieves the details for a particular book identified by International Standard Book Number (ISBN).
- getBooksByType retrieves the details for all books of a particular type.
- getTypes finds the types of books available.
- addBook adds a new book to the library.

In "JAXB and JAX-WS in Axis2," you saw how this application worked in Axis2, then in "Introducing Metro," you saw it in Metro. Most of the discussion in the earlier articles also applies to using CXF. The WSDL is identical except for the service name and endpoint address; the generated JAXB data model is the same, and even the generated service classes are identical except for the Java package and the service name used in the JAX-WS annotations.

Client-side usage

Client-side code for the sample application on CXF is identical to using JAX-WS with Axis2 or Metro, and the build steps are very similar: just use the CXF wsdl2java tool in place of the JAX-WS reference implementation wsimport tool. See "JAXB and JAX-WS in Axis2" for details of the code and handling.

Although the client code is the same, there's one significant difference in the client behavior with CXF. By default, CXF prints out an obnoxious amount of logging detail to the console. CXF uses Java Logging, so to avoid this output, you need to set a system property to point to a logging-properties file with settings changed to output only SEVERE information. The Ant build.xml for the sample application does this by passing the appropriate JVM parameter when it runs the client.

Server-side usage

The server-side code for the sample application on CXF is also identical to using JAX-WS with Axis2 or Metro, and the build process is very similar to Metro. With Axis2, you prepare the service for deployment by creating a JAR file containing the service and data model classes, then deploy the service by dropping that JAR into the WEB-INF/servicejars directory in an Axis2 server installation. With Metro and CXF, you instead need to create a WAR file containing the service and data model classes, the Metro or CXF library JARs, and a pair of configuration files (one of which is named differently in the two stacks).

The WEB-INF/web.xml file configures the actual servlet handling. The version used for the sample application is shown in Listing 1:

Listing 1. Sample application web.xml

<web-app version="2.4" xmlns="http://java.sun.com/xml/ns/j2ee">
  <display-name>CXFLibrary</display-name>
  <description>CXF Library Service</description>
  <listener>
    <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
  </listener>
  <context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>
      classpath:META-INF/cxf/cxf.xml
      classpath:META-INF/cxf/cxf-extension-soap.xml
      classpath:META-INF/cxf/cxf-servlet.xml
    </param-value>
  </context-param>
  <servlet>
    <servlet-name>CXFServlet</servlet-name>
    <servlet-class>org.apache.cxf.transport.servlet.CXFServlet</servlet-class>
    <load-on-startup>1</load-on-startup>
  </servlet>
  <servlet-mapping>
    <servlet-name>CXFServlet</servlet-name>
    <url-pattern>/*</url-pattern>
  </servlet-mapping>
</web-app>

The Listing 1 WEB-INF/web.xml file is just a standard servlet configuration file, telling the web application server (such as Tomcat) how to interface to the servlet application.
The details are similar to those in the Metro example, though for CXF the <servlet-class> is part of the CXF code and the <listener-class> references a Spring Framework class (see Related topics). As with the Metro example, the servlet is configured to process all requests coming to this web application (by the /* url-pattern in the <servlet-mapping> element of Listing 1). A separate file, WEB-INF/cxf-servlet.xml, is used to configure CXF to route requests received by the servlet to the service-implementation code and to supply the service WSDL on demand. This file is shown in Listing 2:
Listing 2. Sample application cxf-servlet.xml
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:jaxws="http://cxf.apache.org/jaxws"
    xmlns:soap="http://cxf.apache.org/bindings/soap"
    xsi:schemaLocation="
      http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
      http://cxf.apache.org/jaxws http://cxf.apache.org/schemas/jaxws.xsd">
  <jaxws:endpoint id="Processor"
      implementor="com.sosnoski.ws.library.cxf.CXFLibraryImpl"
      wsdlLocation="WEB-INF/wsdl/library.wsdl"
      address="/">
  </jaxws:endpoint>
</beans>
The Listing 2 WEB-INF/cxf-servlet.xml file defines a single endpoint with an implementation class, the pattern to be matched for requests, and a WSDL document location. The WSDL document location is the only optional part of this endpoint definition. If you don't specify a WSDL for a service endpoint in the cxf-servlet.xml file, CXF automatically generates one at run time based on the JAX-WS annotations.
Building and running the sample code
Before you can try out the sample code, you need to download and install a current version of CXF on your system (see Related topics). The sample code was tested with the 2.2.5 release. You also need to edit the build.properties file in the root directory of the unzipped sample-code download to change the value of the cxf-home property to the path to your CXF installation. If you're going to be testing with a server on a different system or port, you may need to change the host and port settings defined there as well.
To build the sample application using the supplied Ant build.xml, open a console to the root directory of the download code and type ant. This will first invoke the CXF wsdl2java tool (included in the CXF distribution), then compile the client and server, and finally package the server code as a WAR. You can then deploy the generated cxf-library.war file to your test server and type ant run on the console to try running the sample client. The sample client runs through a sequence of several requests to the server, printing brief results for each request. As mentioned in Client-side usage, the build configures CXF logging to avoid printing configuration details when running the sample client.
Spring in CXF
Notice the use of Spring Framework bean configurations in the Listing 2 cxf-servlet.xml configuration file. Spring, as you may know, is an open source application framework that includes many component libraries you can use to assemble your applications. The Inversion of Control (IoC) container is the original basis of the Spring Framework. It allows you to link and configure JavaBean-style software components, using Java reflection to access properties of the bean objects at run time. The Spring IoC container normally uses XML files for the dependency information, and the cxf-servlet.xml file in Listing 2 is an example of such a Spring configuration. The <beans> element is just a wrapper around individual bean configurations.
The <jaxws:endpoint> element is such a bean, one that CXF associates with a particular type of object (a JAX-WS service endpoint, in this case). You can specify many other options in the cxf-servlet.xml file beyond those used in this simple example, including message-flow configuration for a service. See the JAX-WS configuration information in the CXF documentation for full details (under Frontends/JAX-WS). Aside from JAX-WS annotations, Spring is used for all configuration of the CXF stack, including the organization of message flows internal to CXF. Most of the time these configuration details are handled automatically, using XML configuration files included directly in the CXF JARs (see the contextConfigLocation parameter value in the Listing 1 web.xml file to see how these are referenced), but you can override or add to the common flows using your own configuration files. That's not going to be covered directly in this series of articles; you can see the CXF documentation for details.
More CXF ahead
In this article you've seen the basics of using JAXB 2.x data binding and JAX-WS 2.x annotation-based configuration with the CXF web services stack. The same JAXB/JAX-WS code used in earlier articles with the Axis2 and Metro stacks also works in CXF, after only minor changes to the build and using a different deployment-configuration file. This cross-stack compatibility is the main benefit of using JAXB and JAX-WS, since it makes it easy for you to switch between stacks. There's a lot more to CXF than this simple example shows, and in future articles you'll find out about some of the other features. The next article will look at using WS-Security, so you can see how the CXF implementation compares with Axis2 and Metro.
Related topics
- Apache CXF: Visit the site for CXF, an open source web services stack from the Apache Software Foundation.
- Spring Framework: Spring is the starting point for many kinds of development.
- "Design and implement POJO Web services using Spring and Apache CXF, Part 1: Introduction to Web services creation using CXF and Spring" (Rajeev Hathi and Naveen Balani, developerWorks, July 2008): This article shows you how to use CXF for Java-first web service development with JAXB and JAX-WS.
- "Design and develop JAX-WS 2.0 Web services" (Rajeev Hathi and Naveen Balani, developerWorks, September 2007): This tutorial shows how to develop Java-first web services using the JAXB and JAX-WS support bundled in Java 6 SE.
- JAX-WS Reference Implementation: Here's the home page for the JAX-WS reference implementation.
- CXF: Download CXF.
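As a concrete illustration of the client-side usage described earlier, here is a minimal sketch of what the JAX-WS client code looks like; it is the same on CXF as on Axis2 or Metro. The WSDL URL, namespace, service name, and the LibraryPort and Book class names below are placeholders for whatever the CXF wsdl2java tool generates from your own WSDL; only the getBook operation and its ISBN argument come from the article.

import java.net.URL;
import javax.xml.namespace.QName;
import javax.xml.ws.Service;

public class LibraryClient {
    public static void main(String[] args) throws Exception {
        // Placeholder values: substitute the endpoint address, target namespace,
        // and service name defined in your own WSDL.
        URL wsdlUrl = new URL("http://localhost:8080/cxf-library/services?wsdl");
        QName serviceName = new QName("http://example.org/library", "CXFLibrary");
        Service service = Service.create(wsdlUrl, serviceName);

        // LibraryPort and Book stand in for the interface and data class generated
        // by wsdl2java; getBook is one of the four operations listed earlier.
        LibraryPort port = service.getPort(LibraryPort.class);
        Book book = port.getBook("0061020052");
        System.out.println("Retrieved: " + book.getTitle());
    }
}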
<urn:uuid:b9d87570-c64c-4211-984d-655f5e79bc6c>
CC-MAIN-2017-09
http://www.ibm.com/developerworks/java/library/j-jws12/index.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170651.78/warc/CC-MAIN-20170219104610-00368-ip-10-171-10-108.ec2.internal.warc.gz
en
0.837213
3,240
2.53125
3
Skip The Words - Give Me Videos or Become a Research Partner
Do you know that smoking causes cancer? Specifically, smoking behavior is found to be highly correlated with cancer. Experts over the years have gained a good understanding of the various mechanisms that initiate cancerous growth. However, what you need to know is, "Don't Smoke". Take the same approach to computer security. Give straightforward warnings. Tell simple stories. Use context to give people warnings when they need them. More details on how to instrument this approach are available at UsableSecurity.net.
Here is the common approach to warnings:
Here is the computer science approach to smoking warnings.
Communication should focus on the ability to mitigate or avoid a risk, not the details of the risk. You are welcome to use these videos and images for your own warnings and training. Please do let us know when you do so, and if you have feedback we would love to hear it.
Short Form Videos Using Mental Models
Current projects include vishing, spear phishing, whaling, smishing (call the IT hotline!), USBs (don't touch that port!), and various forms of malware including fake apps (verify!) and malicious scripts. We also have a demonstration project on creating stronger passwords and encouraging their use in specific contexts. Our video for that will follow our experimental evaluation. Password reuse is modeled as unhealthy germ-accepting toothbrush reuse. Using an unknown USB is modeled as eating unclean food. Anti-smishing models smishing compliance as insane compliance. Each of these has been tested in pilots and shown to increase understanding of how to mitigate online risks.
Long Form Videos
Access control - http://www.youtube.com/watch?v=F9m6A4gWKX8 Here we use the metaphor of inviting an unexpected visitor into your home, as opposed to keeping them on the porch, to encourage individuals not to download potentially dangerous malware.
Keylogger - http://www.youtube.com/watch?v=6zHJoZqrCB0 This video makes phishing something that hits home. This video is unique in that it was used in a wide-scale study and illustrated a significant increase in risk awareness in a population representative of elders in the US.
Phishing - http://www.youtube.com/watch?v=4ZQ9pFTCdy4 We are currently updating this video for the workplace.
Patching is a more complicated challenge. Here is our long-form video for patching. If you choose to shorten it, under the Creative Commons license, please send the shortened version back to be hosted here. This is a link to the MP4 of the larger patching video, as the uncompressed version is quite large indeed. We have a long-term agenda for creating and testing videos for risk awareness. Please contact Jean Camp.
We Are Seeking Partners!
We are implementing a human-centered extension based on the concept of contexts: work, banking, play, etc. This extension provides warnings only when individuals are actually taking the risk. We combine user click traces, group histories, and cryptographic analysis of certificates to identify risky situations. Our team is looking for a partner with whom we will customize and then test our DHS-funded open-source software. In addition, we have a prototype for encouraging the use of stronger passwords and making it more difficult to comply with various social engineering attacks. If you would like a preview of this work, please contact [email protected]. The videos require only a reference for use under the Creative Commons license.
To use these materials, please reference one of the following: the first for basic theory, the second for the construction of a mental-models system, and the third for proof of the efficacy of the videos.
V. Garg and L. Jean Camp, Heuristics and Biases: Implications for Security Design, IEEE Technology & Society, Mar. 2013.
J. Blythe and L. Jean Camp, Implementing Mental Models, Semantic Computing and Security, an IEEE Symposium on Security and Privacy (SP) Workshop (San Francisco, CA), 24 May 2012.
Vaibhav Garg, L. Jean Camp and Kay Connelly, Risk Communication Design: Video vs. Text, PETS (Vigo, Spain), 11-13 July 2012.
<urn:uuid:fb64a7ec-1659-4a2f-86a6-e252db0c9367>
CC-MAIN-2017-09
http://ljean.com/awareness/awareness.php
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170925.44/warc/CC-MAIN-20170219104610-00544-ip-10-171-10-108.ec2.internal.warc.gz
en
0.896051
915
2.71875
3
Data centers require huge amounts of energy, and as the demand for new and updated data centers continues to grow, energy requirements could increase exponentially. The Green Grid (TGG) is trying to counteract the growing energy demands by enabling IT leaders to employ proven sustainable solutions as they capitalize on the new digital economy. TGG is the global authority on resource-efficient data centers. It is influencing data center design and maintenance by developing and providing metrics, models and educational resources that demonstrate best practices in sustainability. And its influence continues to grow as more and more IT and data center managers use TGG tools and metrics. Such tools include the Data Center Maturity Model (DCMM) and the Power Usage Effectiveness (PUE) metric, which has been formally adopted by the U.S., European Union and Japanese governments as a baseline method for quantifying energy use. In February 2012, TGG created a case study highlighting eBay's implementation of DCMM, a benchmarking model that touches on major components of the data center, including power, cooling, compute, network and storage. The company used the model to consider all areas of operation in its new data center facility in Phoenix, enabling it to significantly lower its PUE and create one of the most efficient data centers in the world. DCMM offers a 360-degree analysis of how well a current data center is performing in terms of meeting efficiency standards and how well positioned it is for future growth. For eBay, the tool helped identify opportunities to become more efficient in its operations and track progress toward that goal. TGG's case study of eBay's implementation of the DCMM has provided an ideal proof point, offering further industry validation and influencing other companies to follow suit. Similarly, Facebook's adoption of TGG's Water Usage Effectiveness metric, which the company publicly announced in August 2012, illustrates how TGG's measurements and metrics are impacting some of the world's largest companies and leading others to follow in their eco-conscious footsteps. More than 200 data centers globally now use the online DCMM tool and offer their performance publicly. Moreover, over the past five years, the growing use of TGG tools and metrics has helped reduce data center power and cooling energy overhead by 20%. Read more about the 2013 Computerworld Honors Laureates.
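For reference, since the article relies on the metric without defining it, PUE is a simple ratio:
PUE = total energy consumed by the facility / energy delivered to the IT equipment
For example, a data center that draws 2.0 MW in total while its IT equipment draws 1.25 MW has a PUE of 1.6 (the figures here are purely illustrative); values closer to the ideal of 1.0 indicate less power and cooling overhead.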
<urn:uuid:56d411fa-23ab-4f8f-b6cc-51682c367991>
CC-MAIN-2017-09
http://www.computerworld.com/article/2497329/data-center/computerworld-honors-2013--measuring-the-sustainability-of-data-centers-worldwide.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171834.68/warc/CC-MAIN-20170219104611-00420-ip-10-171-10-108.ec2.internal.warc.gz
en
0.940319
465
2.546875
3
Composition and use of milk products for young children: Updated recommendations of the Nutrition Committee of the German Society of Pediatric and Adolescent Medicine (DGKJ) [Zusammensetzung und Gebrauch von Milchgetränken für Kleinkinder: Aktualisierte Empfehlungen der Ernährungskommission der Deutschen Gesellschaft für Kinder- und Jugendmedizin (DGKJ)] Bohles H.J.,Chausseestrasse 128 129 | Fusch C.,Chausseestrasse 128 129 | Genzel-Boroviczeny O.,Chausseestrasse 128 129 | Jochum F.,Chausseestrasse 128 129 | And 7 more authors. Monatsschrift fur Kinderheilkunde | Year: 2011 In recent years several special milk products intended for young children have been marketed, named "children's milk" or "young children's milk" similar to "milk for kids" "growing-up milk" or"toddler milk". These products are claimed to be advantageous as compared to cow's milk. The Nutrition Committee of the German Society of Pediatric and Adolescent Medicine (DGKJ) reconfirms its position from the year 2001 that special milk drinks for young children including follow-on formulae are in principle not needed. If toddler milks are used instead of cows' milk the nutritional composition of these products should be similar to whole cows milk regarding nutrients such as calcium, vitamins A and B2 and similar to low-fat cows milk regarding energy content. The content of critical nutrients such as iodine and vitamin D should be in line with the European directive for follow-on formulae. Flavoring and sweetening agents should not be added. Toddler milk should be drunk from a cup or mug but not from a feeding bottle. © 2011 Springer-Verlag. Source
<urn:uuid:fcd92d27-fe61-4526-a7fc-211b49dc6347>
CC-MAIN-2017-09
https://www.linknovate.com/affiliation/chausseestrasse-128-129-1598016/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174135.70/warc/CC-MAIN-20170219104614-00120-ip-10-171-10-108.ec2.internal.warc.gz
en
0.737727
417
3.078125
3
Today, many embedded devices rely heavily on SSL encryption through the use of hard-coded keys located within the device's firmware. In this scenario, all devices running a given firmware version are using the same private SSL key, resulting in a potential security vulnerability that could put data at risk. As recently described on the Embedded Device Hacking blog: That means that if Alice and Bob are both using the same router with the same firmware version, then both of their routers have the same SSL keys. All Eve needs to do in order to decrypt their traffic is to download the firmware from the vendor's Web site and extract the SSL private key from the firmware image. The difficulty in determining precisely which firmware version a device is using makes this attack impractical to execute. However, as reported by Embedded Device Hacking, a project known as "LittleBlackBox" --a growing database of known SSL private keys that have been correlated to their corresponding public certificates as well as the firmware known to use them--is proving that this vulnerability could become significantly more exploitable over time.
<urn:uuid:19e543ec-8b1d-4f20-bd42-751b99b7ca28>
CC-MAIN-2017-09
https://www.mocana.com/blog/2010/12/20/potential-vulnerability-of-ssl-on-devices
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174135.70/warc/CC-MAIN-20170219104614-00120-ip-10-171-10-108.ec2.internal.warc.gz
en
0.967496
216
2.6875
3
Apollo is the god of the sun, so why Cable and Wireless has chosen to name its new submarine transatlantic cable network after him is a mystery. Cable and Wireless is working with Alcatel to build the new transatlantic system to meet increasing IP and data demands. When complete, the network will have four fiber pairs in each of two submarine legs, capable of transmitting 3.2 terabits per second of traffic on each leg. The system will run for approximately 8,000 miles under the Atlantic Ocean, linking Long Island and New Jersey with Cornwall in the UK and Brittany in France. According to Cable and Wireless, the cable system will be the first 80-wavelength transatlantic system, offering greater resilience to damage and allowing customers to choose their own level of protection for voice, data, and IP transfers. The system is expected to be operating by summer 2002. Cable and Wireless' London office was unavailable for comment.
<urn:uuid:445cb016-5cf7-4c9d-9d25-219859540b46>
CC-MAIN-2017-09
https://www.cedmagazine.com/print/news/2001/01/new-transatlantic-cable-network-no-myth
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170404.1/warc/CC-MAIN-20170219104610-00540-ip-10-171-10-108.ec2.internal.warc.gz
en
0.932173
192
2.5625
3
In preparation for your CCNA exam, we want to make sure we cover the various concepts that we could see on your Cisco CCNA exam. So to assist you, below we will discuss one of the more difficult CCNA concepts: RIPv1 and IGRP. As you progress through your CCNA exam studies, I am sure with repetition you will find this topic becomes easier. So even though it may be a difficult concept and confusing at first, keep at it, as no one said getting your Cisco certification would be easy! A discontiguous network consists of a major net separated by another major net. In the figure below, network 220.127.116.11 is separated by a subnet of network 18.104.22.168; 22.214.171.124 is a discontiguous network. This document will describe why RIPv1 and IGRP do not support discontiguous networks and how to work around it. Readers of this document should have knowledge of these topics:
- Configuring RIPv1 and IGRP
- IP Addressing and Subnetting
This document is not restricted to specific software and hardware versions. The information in this document was created from the devices in a specific lab environment, set up in a topology very similar to the one shown above. All of the devices used in this document started with a cleared (default) configuration. If your network is live, make sure that you understand the potential impact of any command. RIP and IGRP are classful protocols. Whenever RIP advertises a network across a different major net boundary, RIP summarizes the advertised network at the major net boundary. In the figure above, when Router 1 sends an update containing 126.96.36.199 to Router 2 across 188.8.131.52, it converts 184.108.40.206/24 into 220.127.116.11/16. This process is called summarization.
Sending Routing Updates
Using the topology shown above, let's follow the steps and see what questions need to be answered when Router 1 prepares to send an update to Router 2. More detailed information about this decision-making is given in Behavior of RIP and IGRP When Sending and Receiving Updates. Keep in mind that the network we are interested in at this point is the advertisement of network 18.104.22.168/24.
- Is 22.214.171.124/24 part of the same major net as 126.96.36.199/24, which is the network assigned to the interface that's sourcing the update?
- No: Router 1 summarizes 188.8.131.52/24 and advertises the route 184.108.40.206/16. The summarization is done to the major classful boundary. In this case, since it's a class B address, the summary is 16 bits.
- Yes: Although this is not the case in our example, if the answer to the above question were yes, then Router 1 would not summarize the network and would advertise the network with subnet information intact.
Using the debug ip rip command on Router 1, we can see the update sent by Router 1 in the output below:
RIP: sending v1 update to 255.255.255.255 via Serial3/0 (220.127.116.11)
RIP: build update entries network 18.104.22.168 metric 1
Receiving Routing Updates
Now let's see what questions need to be answered when Router 2 prepares to receive an update from Router 1. Again, keep in mind that the network we are interested in at this point is the reception of network 22.214.171.124/24. However, remember that when Router 1 sent the update, the network was summarized to 126.96.36.199/16.
- Is the network being received (188.8.131.52/16) part of the same major network as 184.108.40.206, which is the address assigned to the interface that received the update?
- No: Do any subnets of this major network already exist in the routing table, known from interfaces other than the one that received the update?
- Yes: Ignore the update.
Again, using the debug ip rip command on Router 2, we can see the update received from Router 1:
RIP: received v1 update from 220.127.116.11 on Serial2/0
18.104.22.168 in 1 hops
However, displaying the routing table of Router 2, we see that the update was ignored. The only entry for any subnetwork or network on 22.214.171.124 is the one directly connected to Ethernet0. The output of the show ip route command on Router 2 shows:
126.96.36.199/24 is subnetted, 1 subnets
C 188.8.131.52 is directly connected, Serial2/0
184.108.40.206/24 is subnetted, 1 subnets
C 220.127.116.11 is directly connected, Ethernet0/0
The behavior of RIPv1 and IGRP when sending and receiving routing updates results in Router 1 and Router 2 not learning about the attached subnetworks of 18.104.22.168/24 and 22.214.171.124/24. Because of this, devices on these two subnetworks would not be able to communicate with each other. There may be some situations where discontiguous networks are unavoidable. In these situations, it is recommended that RIPv1 or IGRP not be used. Routing protocols such as EIGRP or OSPF are better suited for this situation. In the event that RIPv1 or IGRP is used with discontiguous networks, static routes must be used to establish connectivity between the discontiguous subnetworks. In this example, the following static routes would establish this connectivity:
For Router 1: ip route 126.96.36.199 255.255.255.0 188.8.131.52
For Router 2: ip route 184.108.40.206 255.255.255.0 220.127.116.11
I hope you found this article to be of use and that it helps you prepare for your Cisco CCNA certification. I am sure you will quickly find out that hands-on, real-world experience is the best way to cement the CCNA concepts in your head and help you pass your CCNA exam!
<urn:uuid:e71cf4ec-d19b-4236-bff0-0f0e14c4538f>
CC-MAIN-2017-09
https://www.certificationkits.com/rip1-igrp-ccna/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174154.34/warc/CC-MAIN-20170219104614-00468-ip-10-171-10-108.ec2.internal.warc.gz
en
0.916472
1,339
2.703125
3
Jurisdiction: Livermore, Calif.; Monrovia, Calif. Vendors: Amerigon, Allied Signal, Lawrence Livermore National Laboratory. Contact: Stephen Wampler, senior public information officer, Lawrence Livermore National Laboratory, 510/423-3107. LIVERMORE, Calif. - Radar, a bane to many highway speeders, may soon become a driver's friend because of a new technology soon to hit the market. An engineer at the Lawrence Livermore National Laboratory has developed a short-range radar that is commanding the attention of companies who see numerous inexpensive applications, including a device that can promote safer driving. The radar system can be mounted on a circuit board one-inch square, which is small enough to install in front and rear car bumpers. The devices, which detect motion and allow range settings, could give drivers an audible warning when changing lanes if another vehicle is in the blind spot and a collision would occur. "If you set the range between 10 to 15 feet in a car, it will warn you when another car is within that range and an alarm will go off," said Tom McEwan, a Lawrence Livermore engineer who developed the technology. "This has the potential to warn people about objects in blind spots and help them avoid collisions while backing up." The technology has been licensed to Monrovia, Calif.-based Amerigon Inc., which intends to build and market the devices by 1997. The devices are expected to cost about $10 apiece. McEwan said other uses of the technology could include airbag activation just before collisions. This would be an improvement over today's airbags, which deploy on collision. Another possible use of the technology could be to activate airbags when the vehicle is hit in the side. The key to this use of radar technology is a receiver that times the return signal from an ultra wideband transmission of short electronic pulses lasting less than 50 trillionths of a second. This differs from conventional radar, which sends continuous microwaves and has a range of up to hundreds of miles. And because there is no carrier signal from the breakthrough technology, FCC frequency allocation is unnecessary. The radar technology was originally used to detect particles generated through the fusion process. It uses low power and has low emission levels. The device can run for two years off a single AA battery, inventor McEwan said. Because the technology doesn't need frequency conversion or frequency domain signal processing, the cost of products created from it is expected to be low. The pulses can penetrate walls, and devices can be set to detect movement up to a 20-foot sphere. "Aside from metal, the radar can pass through most other types of materials," said McEwan. "That means the radar can find gas lines under rubble in earthquakes. It can also pick up breathing, which means it could also be used to find people." Because the technology can detect movement, products eventually developed could include devices which turn on lights when a person enters a room or burglar alarms. Because the devices can be as small as a square-inch, such a burglar alarm could be easily hidden behind wall hangings or put inside a vase. New construction tools are also being developed, such as a stud detector, which could help both professionals and weekend handymen. Campbell, Calif.- based Zircon Corp. is licensing the technology with such applications in mind. Using one of these products, a person could ascertain what is below the surface of a wall or floor before tearing into it. 
The product is expected to cost about $100. Devices may also be developed to find land mines or buried ordnance, which would be useful in countries recovering from wars, such as El Salvador or Cambodia. And, of course, armies could use it during conflicts to detect land mines.
Creating the Products
Lawrence Livermore, which has primarily been a nuclear weapons laboratory, is licensing the technology to companies that in turn create and market products. Amerigon, which will market products for vehicles, signed a licensing agreement earlier this year and will pay royalties to the laboratory. Amerigon later signed an agreement with automotive equipment manufacturer Allied Signal Inc. to jointly create products using the technology. Negotiations have begun with some automobile manufacturers to include the anti-collision devices in new cars. The company reports that five firms, including domestic, European and Japanese automakers, have ordered prototypes of the devices. Lawrence Livermore has sent licensing materials to 11 other companies, and all that remains to be done is finalizing the financial end, said Steve Wampler, the lab's spokesman. "We got about 2,000 phone calls in the past year from businesses," he said. "It's my belief that this is one of the hottest technologies on the market from federal labs." "This technology is very definitely closer to market than our other research applications," said Bill Grant, a licensing specialist in the lab's technology transfer office. "It's essentially off-the-shelf technology."
<urn:uuid:2dadfa8c-7981-4d0e-b832-3bc49d78dbcb>
CC-MAIN-2017-09
http://www.govtech.com/magazines/gt/Tiny-Radar-Could-Save-Lives.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170708.51/warc/CC-MAIN-20170219104610-00412-ip-10-171-10-108.ec2.internal.warc.gz
en
0.960445
1,024
2.75
3
Computer viruses and other malware such as worms, trojans and spyware, are rife and can cause tremendous damage to systems that become infected. Because of this, anti-virus (AV) technology is one of the most commonly deployed security controls used by the vast majority of computer users, from individuals to large organisations. According to the 2009 CSI Computer Crime and Security Survey, more than 99% of respondents have AV technology deployed. Having been on the market for some years, there are a wide variety of choices of AV technology, from standalone tools to AV bundled into security suites that integrate a variety of other security controls. Many standalone tools are offered for free and provide just basic protection. According to OPSWAT Inc, in its Worldwide AV Market Share Report of June 2010, free AV tools account for 42% of the total market share. Even with the use of AV technology being so widespread, malware infections were cited as the worst security incident faced by respondents to the CSI survey and are growing in number and complexity. This is echoed in the Information Security Breaches Survey 2010 commissioned, by Infosecurity Europe, which found that 62% of large organisations surveyed had been infected with malware in the previous year, up from 21% three years previously, and 43% of small organisations, up three-fold over three years. Overall, malware infections were the cause of the worst security incident faced by organisations of all sizes over the previous year. Such malware attacks are growing fast in sophistication and complexity, often using variants of known exploits that aim to get around defences that have been put in place. In mid-2010, technology vendor McAfee released research showing that 10 million malware samples had been entered into its database during the first half of 2010 alone, the majority of which are variants of known families of malware. For example, it states that it is not uncommon to see more than 10,000 variants of the Koobface worm, which looks to harvest information from users of social networking sites, in a single month. The complexity of new malware can be seen in the case of the Conficker worm, which combines the use of a number of advanced malware techniques to make it harder to eradicate it. Often introduced into computer networks via infected removable media, the worm blocks access to anti-malware websites, disables automatic updates that could include a patch against it and kills any anti-malware protection installed on the device. Its authors are also known to test Conficker against anti-malware defences commercially available to ensure that it can defeat them. Factors such as these mean that traditional AV protection, based on signatures identifying and patching known threats, provide little defence. This leaves users in an endless cycle of updating their AV software with patches as they are released and cleaning up infections that have occurred, which often requires support from the AV technology vendor. And here is the rub. Very few free AV products include any kind of support from the vendor and the cost of support can add a hefty price tag. Plus, only some products provide protection based on detecting patterns of behaviour that can be used to identify unknown threats, leaving users with huge gaps in protection. 
Many traditional standalone AV products--both free and paid-for versions--are also ineffective against new sophisticated threats that are often highly targeted and use a range of blended mechanisms to make their payload more successful. For example, a user may be sent a personalised phishing email that urges them to click on a link that takes them to a website infected with malware. Many standalone AV products provide no defence against such attacks as they do not include controls for protecting users from websites infected with malware or provide proactive protection against phishing attacks. Anyone relying on legacy, standalone, signature-based AV controls is putting themselves at risk of being the victim of an attack that could cost them dearly. This goes beyond the costs of clearing up after an attack and the time and cost involved with patching devices or purchasing updated versions of the software. Javelin Strategy & Research estimates that more than nine million Americans have had their identities stolen through their personal details being harvested from internet applications or other means. According to the UK Home Office, identity theft costs the UK economy £1.2 billion per year. That does not mean to say that computer users should not deploy AV controls. Rather, AV and other anti-malware technologies should be one component of a layered security defence, along with a host of other tools and services. These include a firewall and intrusion prevention capabilities, web filtering and blocking, email, phishing and spam protection, and, for consumers, parental control functionality. These security controls should be integrated and should be managed through one central console or interface, in the case where the products are administered and managed for the user by a hosted service provider. For true, proactive protection against all threats affecting computer users, the provider should offer proactive threat intelligence services to identify previously unknown threats as they are encountered. For any computer user--home users, small businesses or large organisations--the cost of the technology is a prime concern, especially as budgets are under pressure. But those costs need to be weighed against both the burden of maintaining legacy AV controls, including upgrading and vendor support costs, and the dangers of not having their systems adequately protected. The costs of remediating a security incident can far outstrip those of upgrading to better protection. For many small businesses and consumers, a cultural change is required. The survey referenced above from Infosecurity shows that 83% of small organisations with less than 50 employees had experienced a security incident during 2009—up from 45% the year before. And the average cost of clearing up after an incident for such organisations ranged from £27,500 to £55,000. Clearly it is not just large organisations that are being victimised. The key to lowering such costs is to purchase multi-tier protection. Rather than thinking that it is sufficient to place security controls to guard the perimeter of the organisation, the cultural change that is needed is to start thinking of security in terms of the assets that need to be protected--sensitive personal information and intellectual property and the like that can be used for financial gain. Organisations of any size, and consumers alike, should look to gain an understanding of what impact the loss or compromise of such assets would be on their business or their personal life. 
Then they will be in a position to decide what controls need to be put in place to protect those assets from the whole gamut of threats facing computer users today. There are many hidden costs in anything that appears to be free or low cost and, in business, a bargain is rarely as good as it sounds.
<urn:uuid:9e09705b-bd79-4074-84d7-96b67b57e0d3>
CC-MAIN-2017-09
https://www.bloorresearch.com/blog/security-blog/anti-virus-alone-is-a-poor-strategy/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172649.58/warc/CC-MAIN-20170219104612-00640-ip-10-171-10-108.ec2.internal.warc.gz
en
0.963342
1,348
3.21875
3
Tambling C.J.,Nelson Mandela Metropolitan University | Minnie L.,Nelson Mandela Metropolitan University | Meyer J.,George Mason University | Meyer J.,Davee Center for Epidemiology and Endocrinology | And 4 more authors. Behavioral Ecology and Sociobiology | Year: 2015 The response of prey to predation risk varies through time and space. These responses relate to trade-offs between foraging and predator avoidance. Following the extirpation of predators from many landscapes, the responses related to predator avoidance may have been lost or diluted. Investigating the activity pattern of prey species on comparable landscapes with and without large predators provides an opportunity to understand how predators may shape prey activity and behaviour. Using camera trap data from neighbouring fenced sections of the Addo Elephant National Park (Eastern Cape, South Africa), we investigated the activity patterns of species exposed to large predators, where the predators were only present in one of the sections. Our results suggest that prey species at risk of predation (e.g., buffalo, kudu and warthog) are more likely to be active diurnally when co-existing with nocturnally active predators, thereby reducing the activity overlap with these predators. In the absence of predators, kudu and buffalo were more active at night resulting in a low overlap in activity between sections. Warthog activity was predominantly diurnal in both sections, resulting in a high overlap in activity between sections. The presence of predators reduced the nocturnal activity of warthogs from 6 to 0.6 % of all warthog captures in each section. Elephants, which are above the preferred prey weight range of the predators and therefore have a low risk of predation, showed higher overlap in activity periodicity between predator-present and predator-absent areas. Our findings suggest that maintaining prey with their predators has the added benefit of conserving the full spectrum of prey adaptive behaviours. © 2015, Springer-Verlag Berlin Heidelberg. Source Crawford R.J.M.,Branch Oceans and Coasts | Crawford R.J.M.,University of Cape Town | Randall R.M.,South African National Parks | Whittington P.A.,East London Museum | And 18 more authors. African Journal of Marine Science | Year: 2013 White-breasted cormorants Phalacrocorax [carbo] lucidus breed around South Africa's coast and at inland localities. Along the coasts of the Northern, Western and Eastern Cape provinces, numbers breeding were similar during the periods 1977-1981 (1 116 pairs at 41 localities) and 2008-2012 (1 280 pairs at 41 localities). Along the coast of KwaZulu-Natal (not counted in 1977-1981), 197 pairs bred at nine localities in 2008-2012, when the overall number breeding around South Africa's coastline was about 1 477 pairs. Between the two study periods, numbers decreased in the Northern and Western Cape provinces following the loss of several breeding localities, but they increased in the Eastern Cape. In the Western Cape, however, numbers were stable east of Cape Agulhas and at nine well-monitored West Coast localities that were surveyed from 1978 to 2012. White-breasted cormorants breed throughout the year, with breeding at some localities more seasonal than at others and the timing of peaks in breeding varying at and between localities. 
In the vicinity of Saldanha Bay/Langebaan Lagoon (Western Cape), in Algoa Bay (Eastern Cape) and in northern KwaZulu-Natal, it is likely that birds moved between breeding localities in different years, although breeding often occurred at the same locality over several years. Human disturbance, presence of predators, competition for breeding space and occurrence of breeding by other waterbirds may influence movements between colonies. Securing sufficient good habitat at which white-breasted cormorants may breed will be important for conservation of the species. The species may breed at an age of 4 years, possibly younger. The bulk of their diet around South Africa's coast consists of inshore marine and estuarine fish species that are not intensively exploited by humans. © 2013 NISC (Pty) Ltd. Source
<urn:uuid:a20c4262-4e3c-4950-a9f2-cba6473c525a>
CC-MAIN-2017-09
https://www.linknovate.com/affiliation/addo-elephant-national-park-1220941/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170521.30/warc/CC-MAIN-20170219104610-00584-ip-10-171-10-108.ec2.internal.warc.gz
en
0.943191
892
2.65625
3
This white paper provides insight into voice quality and the different methods to measure voice quality. Voice over IP (VoIP) has passed its infancy stage and is a mature technology that has been widely adopted by customers hoping to take advantage of the cost savings offered by VoIP in addition to a range of advanced features that improve efficiency. Voice quality is the qualitative and quantitative measure of sound and conversation quality on an IP phone call. Voice quality measurement describes and evaluates the clarity of voice conversation. The shift from the traditional time-division multiplexing (TDM) world to a packet-based IP telephony solution poses challenges for voice quality. Unlike data, which is bursty in nature and tolerant to delay and packet loss, voice and video are extremely sensitive to jitter, packet loss, and delay. In a converged network with voice, video, and data residing on the same network, there is a huge demand for the network infrastructure to be reliable and scalable and to offer different levels of service for advanced technologies such as voice, video, wireless, and data. Voice Impairment Parameters The real-time nature of voice drives strict service-level agreements (SLAs) to be implemented in the network. The primary voice impairment parameters are jitter, packet loss, and delay. In data networks, even if a few packets are lost during transmission, TCP ensures the retransmission and assembly of the packets, and the user will not notice any difference. But in the transmission of voice packets across the IP backbone, the missing packets cause distortion in voice quality on the receiving end, and retransmission of missing voice packets is useless. It is tolerable to have occasional packet loss, but consecutive loss of voice packets can affect the overall quality of the transmitted voice. Delay variation, or jitter, occurs when voice packets arrive at the destination at different time intervals. This can happen because of the connectionless nature of IP. Depending on the congestion and load on the network, the arrival rate of these packets at the destination may vary. The devices on the receiving end should be capable of buffering these packets and playing them back to the user at a consistent interframe interval. These types of devices are called dejitter buffers. A dejitter buffer usually adds a forced delay (default 60 milliseconds [ms]) to every VoIP packet received. Typically, this delay is in the 20 to 60 ms range. This delay is commonly called the playout delay. Delay is the finite amount of time it takes a packet to reach the receiving endpoint after being transmitted from the sending endpoint. In the case of voice, this is the amount of time it takes for a sound to travel from the speaker's mouth to the listener's ear. Delay (or latency) does not affect voice fidelity. Extended network delay is perceived as echo in the conversation. Even though network delay is not a direct cause of echo, it does amplify the perception of any echo present in the media path. Extremely long delays can lead to "collisions" in the conversation, when both parties seem to be speaking simultaneously. The network infrastructure must meet the following requirements: • Packet loss must not be more than 1 percent. • Average one-way jitter must not be more than 30 ms. • One-way delay must be under 150 ms. To help ensure good voice quality, it is imperative to keep jitter and packet loss under control by paying close attention to voice impairment factors. 
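As a simple illustration of how these targets can be checked in practice, here is a small sketch that tests a set of measured values against the three thresholds quoted above; the class and method names are invented for the example.

public class VoiceSlaCheck {
    // Thresholds taken from the requirements listed above.
    private static final double MAX_PACKET_LOSS_PERCENT = 1.0;
    private static final double MAX_ONE_WAY_JITTER_MS = 30.0;
    private static final double MAX_ONE_WAY_DELAY_MS = 150.0;

    // Returns true when all three voice impairment parameters are within the targets.
    static boolean withinSla(double packetLossPercent, double avgJitterMs, double oneWayDelayMs) {
        return packetLossPercent <= MAX_PACKET_LOSS_PERCENT
            && avgJitterMs <= MAX_ONE_WAY_JITTER_MS
            && oneWayDelayMs <= MAX_ONE_WAY_DELAY_MS;
    }

    public static void main(String[] args) {
        // Example measurement: 0.5% loss, 25 ms jitter, 120 ms delay, which meets all three targets.
        System.out.println(withinSla(0.5, 25.0, 120.0));
    }
}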
Requirements of a Voice Quality Measurement Product
By now, you have a basic understanding of the voice impairment parameters and their importance in voice quality measurement. Some of the key requirements for a voice quality measurement product follow. It must:
• Be able to calculate voice quality for actual calls. Although it is possible to generate synthetic voice traffic and calculate voice quality for these calls, the voice quality generated from synthetic calls does not represent end-user experience. The capability to simulate voice traffic should instead be used to verify voice quality when a real-time alert is received.
• Provide details of voice impairment parameters such as jitter, packet loss, and so on. It is important to understand the cause of voice quality degradation; for example, knowing whether jitter or packet loss is causing the problem will help in fine-tuning the network, if required.
• Report voice quality in real time. For example, in a call that lasts for 5 minutes, if the user encounters voice quality problems in the second minute, it is important for the administrator to get an alert at the second minute rather than at the end of the conversation.
• Provide details about the endpoints involved in a conversation, the type of codec used, and the IP addresses and phone numbers of the endpoints. Such detailed information is important in troubleshooting a voice quality problem.
• Be scalable and easily deployable.
Components of Cisco Unified Communications
The Cisco® Unified Communications Family of products can be divided into two parts:
• Cisco Unified Communications infrastructure
• Application infrastructure
The Cisco Unified Communications infrastructure consists primarily of call-processing devices such as Cisco Unified Communications Manager, Cisco Unified Communications Manager Express, Cisco Unity® software, and Cisco Unity Express. The Cisco Unified Communications infrastructure layer is the brain of the Cisco Unified Communications Family of products, and it performs the critical functions of call setup, call routing, and call teardown. The application infrastructure consists of software applications such as Cisco Emergency Responder, Cisco Unified MeetingPlace®, MeetingPlace Express, IP Contact Center, IP Contact Center Express, and Personal Assistant. These applications provide added functionality to the Cisco Unified Communications Family of products.
Cisco Prime Unified Service Monitor
Given the dynamic nature of IP networks and the strong dependency of the Cisco Unified Communications solution on network infrastructure, it is imperative for network administrators to have voice quality information on real calls (not simulated calls) at their fingertips to help enable them to resolve voice quality problems. Cisco Prime Unified Service Monitor meets user requirements in reporting voice quality issues. There are two ways to measure call quality using Service Monitor:
• Real-time call quality measurement and reporting as the call progresses: Use the Cisco 1040 or Cisco Network Analysis Module (NAM) with version 4.x or later hardware.
• Call quality measurement and reporting at the end of the call: Use the Cisco Unified Communications Manager cluster (4.2.x and 5.x/6.x/7.x/8.x provide Cisco Voice Transmission Quality (VTQ)-based Mean Opinion Scores [MOSs]; for earlier call manager versions, Service Monitor reports jitter and packet loss for the call).
Service Monitor analyzes the data that it receives from the Cisco 1040, Cisco NAM, or Cisco Unified Communications Manager and sends Simple Network Management Protocol (SNMP) traps when a MOS falls below a threshold. Service Monitor provides a set of default global thresholds, one per supported codec. Service Monitor also allows users to change the default global thresholds and to override them by creating Cisco 1040 threshold groups and cluster threshold groups.
New Features in Cisco Prime Unified Service Monitor
The major new enhancements in Service Monitor are in Device Credentials Repository (DCR) integration, call grading, support for a large call detail record (CDR) billing frequency, and configuration. DCR integration is designed for data and service sharing between Unified Operations Manager and Service Monitor. DCR integration will be active only in deployments in which Operations Manager and Service Monitor are coresident and coexistent. Service Monitor standalone deployments will remain the same as in previous releases. However, in the case of a coexistent deployment, the Service Monitor DCR needs to act as a slave to the Operations Manager master DCR. Remember to always configure the Operations Manager server DCR as the master. The master server maintains the master list of device credentials, and the slaves are the instances of the DCR on other servers. Note: Clusters in Operations Manager may not initially appear on Service Monitor when set up in a master/slave configuration where Operations Manager is the master and Service Monitor is the slave. You will need to restart the SMDBMonitor process on the Service Monitor server for the clusters to appear in Service Monitor. This is a one-time setup that must be performed after the initial setup of Operations Manager as master and Service Monitor as slave. Please refer to the release notes for details.
A new column called Grade has been added to the 1040 Sensor, Cisco VTQ, and call detail record reports. The grade indicates whether the MOS value was acceptable, poor, good, or unknown based on the global threshold settings. Following are the criteria for reporting various grades in Service Monitor:
• If the MOS value falls below the MOS poor value in the global thresholds screen, the grade reported will be poor.
• If the MOS value is between the MOS poor value and the acceptable value, the grade reported will be acceptable.
• If the MOS value is above the MOS acceptable value, the grade reported will be good.
Service Monitor also allows you to change the global threshold values.
Support for Large CDR Billing Frequency
Prior to Cisco Unified Service Monitor Release 8.5, there was a limit, expressed in CDRs per minute, to how much data Service Monitor could process in real time. Any data that was above the limit was discarded by Service Monitor. However, with 8.5, Service Monitor will not discard data that is over the limit and can handle larger CDR billing frequencies (ranging from a few minutes up to a few hours). The corresponding entries in the qovr.properties file ($NMSROOT\qovr\qovr.properties) need to be changed, and then the QOVR process needs to be restarted.
CDR-Based Trunk Utilization
Service Monitor can now provide trunk utilization statistics based on CDRs. Make sure you configure the maximum capacity under Administration > Trunk Configuration. Select the cluster and then choose the appropriate protocol. For Media Gateway Control Protocol (MGCP), Service Monitor provides you with a default, and you can stay with the default.
But with the H.323 gateway, this will be dependent on the number of channels communicating with the communications manager, so let's assume that if there are two T1 lines, the maximum call capacity will be 2 * 23 = 46. Data Collection and Analysis Service Monitor receives and analyzes MOSs from the following sources when they are installed and configured properly in your voice network: • Cisco 1040s: Cisco 1040s compute MOSs for each Real-Time Transport Protocol (RTP) stream and send syslog messages to Service Monitor every 60 seconds. • Cisco Network Analysis Module 4.0 or later: NAM (hardware form factors) computes MOSs for each RTP stream, and Service Monitor pulls the data from each NAM. This integration requires Service Monitor to have user credentials with collect view permissions on the NAM. • Cisco Voice Transmission Quality: Cisco Unified Communications Manager collects data from Cisco MGCP Voice Gateways (VGs) and Cisco IP phones; MOSs are calculated on MGCP gateways and Cisco IP Phones using the Cisco Voice Transmission Quality algorithm. At the termination of a call, Cisco Unified Communications Manager stores the data in call management records (CMRs). Call Classification and Call Detail Record-Based Reports The following describes how call classification works for Cisco Unified Service Monitor: • Call Classification: Cisco Unified Service Monitor classifies various call types based on Cisco Unified Communications Manager configuration and call detail records. The Cisco Unified Service Monitor Call Classification feature includes the ability to classify calls into the categories listed in Table 1. Table 1. Categories for the Cisco Unified Service Monitor Call Classification Feature • MGCP Gateway Outgoing • H.323 Gateway Outgoing • H.323 Trunk Outgoing • SIP Trunk Outgoing • MGCP Gateway Incoming • H.323 Gateway Incoming • H.323 Trunk Incoming • SIP Trunk Incoming Intercluster Trunk (ICT) • ICT GateKeeper Controlled • ICT Non-GateKeeper Controlled • OnNet H.323 Trunk • OnNet SIP Trunk Any custom-created category, such as: • Conference calls to Cisco WebEx • Calls to Legacy Voice mail • Long Distance to East Coast Cisco Unified Service Monitor supports two category types: System categories and user categories. Service Monitor can classify calls under system categories without any user input. All calls are classified based on Cisco Unified Communications Manager device configuration (for example device type, OnNet/OffNet configuration). Apart from classifying calls under these categories, Service Monitor also classifies calls as OffNet or OnNet. For a call to be classified as OffNet, one of the endpoints has to be a gateway or trunk, and the: – Gateway or trunk is configured as OffNet in Cisco Unified Communications Manager – Gateway or trunk is configured to use the system default (the system default is OffNet, configurable from Cisco Unified Communications Manager > Service Parameters) – Gateway is analog • Call Detail Record-Based Reports: Service Monitor performs call classification based on Cisco Unified Communications Manager call detail records. it has the ability to provide on-demand call reports based on various call types and to filter calls by call category, device type, and the success or failure of calls where call termination cause codes are determined and grouped. 
For previous Service Monitor releases, data from these new call classification and call detail record features will be retrieved and stored by Cisco Unified Service Statistics Manager for long-term reporting and trend analysis that displays call types, for example, call volume reports and call analysis reports. Note: The call classification user interface and analytics have been moved from Service Statistics Manager's logic to Service Monitor's. The Service Monitor software runs on an Intel-based machine, with a server running Windows Server 2003 (Standard/Enterprise) with Service Pack 1 or 2 or Windows Server 2008 Standard or Enterprise Edition (32-bit x86) with Service Pack 2. The software license must be procured by the customers. The ITU E model, as defined in G.107 (03/2003), predicts the subjective quality that is experienced by an average listener by combining the impairment caused by transmission parameters (such as loss and delay) into a single rating: the transmission rating factor R (the R-factor). This rating, expressed on a scale of 0 (worst) to 100 (best), can be used to predict subjective user reactions, such as the MOS. The MOS can be obtained from the R-factor with a converting formula. Thus the R value is an estimate of the quality that can be expected if the network is realized the way it is planned. Cisco 1040/NAM Deployment Locations The Cisco 1040 is the hardware component that will be deployed on the Switched Port Analyzer (SPAN) port of an access switch, as close to the IP phones and other problem areas (gateways, and so on) as possible. NAM could be a hardware module or an appliance. The module will be deployed on an integrated services router or Cisco 65xx/76xx. The number of concurrent RTP streams supported by Cisco 1040 and NAM are different. This plays a critical role in defining the deployment location of Cisco 1040 or NAM components. Table 2 outlines the number of concurrent RTP streams monitored by various components. Table 2. Number of Concurrent RTP Streams Measured Cisco 1040 Sensor 100 RTP streams/minute 100 RTP streams/minute 400 RTP streams/minute 1500 RTP streams/minute 4000 RTP streams/minute Cisco 1040 Hardware The Cisco 1040 Sensor has two Fast Ethernet interfaces: • Management port • SPAN port The Cisco 1040 uses IEEE 802.3af standard Power over Ethernet (PoE) from the switch to which it connects. When the Cisco 1040 boots up, it uses Dynamic Host Configuration Protocol (DHCP) option 150 to retrieve its configuration and image files on a Trivial File Transfer Protocol (TFTP) server. Similar to the way that an IP phone registers with Cisco Unified Communications Manager, a Cisco 1040 registers (using Skinny Client Control Protocol [SCCP]) to the Service Monitor application. On the TFTP server, the Cisco 1040 first looks for its configuration file, named QOV[Cisco 1040 MAC address].cnf. If that file does not exist, the Cisco 1040 looks for a file named QOVDefault.cnf. These CNF files provide the image (SvcMonAB2_102.img) filename for the Cisco 1040 to download in addition to the Service Monitor IP addresses. The Cisco 1040 then downloads this image and registers to the Service Monitor, just like a phone registers to Cisco Unified Communications Manager, using SCCP. Then the Cisco 1040 utilizes the SPAN port on a switch to monitor the actual voice calls. It collects voice impairment parameters such as jitter and packet loss from the RTP stream and computes MOS values. The Cisco 1040 works in passive mode to collect voice impairment statistics. 
The Cisco 1040 reports voice quality details and MOS values every 60 seconds, providing near real-time voice quality measurement. Each Cisco 1040 can monitor 100 RTP streams with optimal SPAN port configuration. NAM can monitor 100 to 5000 RTP streams based on the type of NAM or appliance. As shown in Figure 1, multiple Cisco 1040s/NAMs can be deployed in the network and configured to register to the Service Monitor software component. Each instance of a Service Monitor software component can report call quality for 45,000 IP phones and can support up to a maximum of 5000 RTP streams/minute. Figure 1. Multiple Cisco 1040s Register to Cisco Unified Service Monitor Note: In Figure 1, the Service Monitor and Operations Manager software instances coreside on a single machine. This type of deployment is supported for medium-size networks (up to 10,000 phones). For networks with more than 10,000 phones, the Service Monitor and Operations Manager software should be deployed to run on separate machines. Strategic Versus Tactical Service Monitor satisfies most quality monitoring requirements for enterprise IP telephony. Deployment strategies include: • Strategic monitoring: The Cisco 1040 and NAM are installed to continuously monitor RTP streams to the IP phones at some or all locations in the managed environment. Depending on monitoring goals, significant coverage of all or most sites could be included, or by using sampling techniques, representative sites would be selected for monitoring and would determine the location of the Cisco 1040s. Service Monitor can scale to support up to 5000 RTP streams per minute total from multiple Cisco 1040s/NAMs and can provide real-time alerting on call quality. • Tactical monitoring: Cisco 1040s/NAM can be inexpensively shipped overnight and installed at the site (such as a branch office) that has voice quality concerns or problems. Once it is installed and configured, it can immediately begin monitoring and assessing the quality of IP-based calls without complex setup or installation. The Cisco 1040 is FCC Class B compliant and can easily be installed in any office environment. Figure 2 shows centralized Cisco Unified Communications Manager deployment with one remote branch office connected through a WAN circuit. To monitor voice quality for calls between headquarters and branch across the WAN circuit, two Cisco 1040s can be deployed close to the phone and two NAMs can be deployed at the edge of the network as shown. The key recommendation is to deploy the Cisco 1040s as close to the IP phone as possible and the NAMs in the network exchange points such as WAN in/out or core or distribution layers. In most cases, the Cisco 1040s will sit on the access layer switch in the campus. • For each phone, there are transmit and receive RTP streams. • For the RTP stream originating from Phone A (TX RTP stream), the segment between 1 and 2 in Figure 2 experiences the least impairment, and the probability of voice quality degrading in this segment is slim to none. The RTP stream between segments 2 and 3 traverses several network devices in the WAN and is prone to network conditions. • The previous statement is also true for RTP streams originating from Phone B. • For the Cisco 1040/NAM on the left in Figure 2, MOS will be calculated based on the RTP stream between Phone A/B covering headquarters LAN segments 1 and 2. • For the NAM/NAM on the middle in Figure 2, MOS will be calculated based on the RTP stream coming between Phone A/B in the WAN covering segments 2 and 3. 
• For the NAM/Cisco 1040 on the right in Figure 2, MOS will be calculated based on the RTP stream between Phone A/B in the branch LAN covering segment 3, and 4 is of importance; the switch on the left must be configured to span the incoming RTP stream, to span the destination port seen by the left Cisco 1040. Service Monitor collects and correlates the collected MOS data from different segments, which will clearly identify the segment with poor call quality during the voice call. With optimal SPAN port configuration, each Cisco 1040 can monitor up to 100 RTP streams. Deciding on the Number of Cisco 1040s and NAMs for Call Quality Monitoring The number of Cisco 1040s/NAMs required depends on the busy hour call completion (BHCC) handled by the switch. For a 10,000-phone network, for example, the cluster could handle 1000 to 4000 calls simultaneously. As the size of the network increases, it becomes more appropriate to take samples of the calls generated from the cluster and measure voice quality for a subset of these calls. On the other hand, if the network consists of only about 1000 phones, it is easier to monitor voice quality on all of the calls; however, the same sampling technique can also be applied to a 1000-phone network. The Unified Communications deployment follows one of the following call processing models: • Single site with centralized call processing • Multiple-site WAN with centralized call processing • Multiple-site WAN with distributed call processing For a single site with centralized call processing, most often a Catalyst ® 6500 is used in the access layer/wiring closet to connect the IP phones. It is common to expect about 200 IP phones (4 blades with 48 ports = approximately 200) on a single Catalyst 6500. If 4 out of 10 phones are active at any point, a NAM line card can be placed on the Catalyst 6500 to monitor the active calls. It is also possible to deploy multiple Cisco 1040s on the same switch to address situations in which the switch is handling high call volume. It is not necessary to measure voice quality for every call. The general practice is to measure voice quality for a subset of the calls on a continuous basis and use a tactical approach for troubleshooting voice quality problems. Based on this analysis, for a 1000-phone deployment, if you were to sample 30 percent of the active calls and measure voice quality on a continuous basis, you would need three Cisco 1040s (30 percent of 1000 is 300, and each Cisco 1040 can monitor 100 RTP streams; hence, you would need three Cisco 1040s). Usually, the Cisco 1040s are deployed in pairs (one each at the origination and termination endpoints). This sampling could be quite aggressive for most common deployments. Based on the simultaneous calls that are active on a switch, the number of Cisco 1040s required for voice quality measurement varies. As the network size increases, the sampling policy can be reduced. The previous scenario had a dense population of phones on a single switch; another scenario could have phones scattered among multiple smaller switches. Typically, the number of phones in a branch is less, and you will see smaller-density switches used in the branch. For this scenario, it is excessive to have one Cisco 1040 per switch, especially if the number of RTP streams on these switches is small (20-40 RTP streams). To address this issue, you can use Remote SPAN (RSPAN) and combine RTP streams on multiple switches. 
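To make the SPAN and RSPAN options above concrete, the Cisco IOS-style snippets below show one possible way to mirror phone traffic toward a sensor port. The session numbers, interface ranges, and RSPAN VLAN ID are placeholders, and the exact syntax varies by Catalyst platform and IOS release, so treat this as a sketch rather than a drop-in configuration.

! Local SPAN: mirror the phone access ports to the port where the Cisco 1040 or NAM listens.
monitor session 1 source interface GigabitEthernet1/0/1 - 24
monitor session 1 destination interface GigabitEthernet1/0/48

! RSPAN: carry mirrored traffic from several small branch switches to a single sensor.
! Define the RSPAN VLAN on every switch in the path:
vlan 900
 remote-span
! On each source switch:
monitor session 1 source interface GigabitEthernet1/0/1 - 24
monitor session 1 destination remote vlan 900
! On the switch where the Cisco 1040 connects:
monitor session 2 source remote vlan 900
monitor session 2 destination interface GigabitEthernet1/0/48

Keep the 100-RTP-stream limit per Cisco 1040 in mind when deciding how many source ports to mirror toward one sensor.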
Another alternative is to use an active hub and connect the SPAN destination port from multiple switches to the same active hub. With this alternative, there is potential for Layer 2 loops, and you must evaluate the best option before embarking on any approach. In summary, Service Monitor can monitor actual voice calls in real time and provide details of the parameters that cause voice quality degradation. The combination of Service Monitor and Operations Manager provides a powerful product for monitoring and troubleshooting voice quality problems. Placement of Cisco 1040s The Cisco 1040 is FCC Class B certified and can be deployed in a wiring closet or on a desk. The Cisco 1040 uses the SPAN port on the switch to monitor the RTP stream. As shown in Figure 3, the Cisco 1040s are deployed as close to the IP phone as possible so that the voice quality measurement will be close to what the user experiences. The Cisco 1040 reports voice quality measurements every 60 seconds. For each conversation, there are four RTP streams: two each from originating and terminating phones. In the default SPAN port configuration, Service Monitor receives four MOS values every 5 seconds for each conversation. As discussed in the previous section, of the four RTP streams, two provide meaningful statistics; hence, the MOS value calculated for the interesting RTP stream is of importance and should be considered for further analysis. Figure 3. Cisco 1040 Placements The number of Cisco 1040s per switch depends on the following: • Type of switch • Type of customer • Number of simultaneous calls The type of switch determines the number of SPAN destination ports that can be configured on the switch. Modular switches such as the 65xx support two SPAN destination ports with different source ports. Most of the fixed configuration switches support a single destination port. On the modular configuration switches, you can have two sensors deployed on the same switch, and on the fixed configuration switch, you can have a single Cisco 1040 deployed. A typical enterprise customer has various organizations, such as engineering, human resources, marketing, sales, and support. The traffic patterns are different in these organizations. You can expect more calls for support, sales, and marketing groups, and the switches that house these users will have a higher busy hour call attempt (BHCA) value. In a call center environment, you can expect high call volume and, therefore, high BHCA, which will be the deciding factor for the number of Cisco 1040s required. The number of simultaneous calls is tied to the previous argument about the BHCA value. It is important, therefore, to understand the BHCA value and average call hold (ACH) time to determine the number of Cisco 1040s required. Keep in mind the limit on the number of RTP streams supported by each sensor (100) and the number of simultaneous calls generated by the phones connected to the switch where the Cisco 1040 is deployed. You could have a situation in which the number of RTP streams exceeds 100, in which case you can add Cisco 1040s or configure the SPAN source port in such a way that you monitor only selected phones on the switch. If the SPAN destination port sees more than 100 RTP streams, the Cisco 1040 goes into sampling mode, and the MOS value reported is diluted. Bandwidth Used by Cisco 1040 Cisco 1040 bandwidth requirements are very little. Each syslog message takes up around 60 bytes per stream. 
So a total of 6000 bytes will be used up reporting call quality for the 100 streams each minute. Cisco 1040 in Sampling Mode A Cisco 1040 can monitor 100 RTP streams. If it is deployed on a switch that has more than 100 RTP streams, it performs sampling, in which case some of the RTP streams are not considered for MOS value generation. You must avoid this situation at all times. In sampling mode, the MOS value reported is diluted because some of the RTP streams are not considered. The Cisco 1040 monitors the RTP stream and collects the information necessary to compute the MOS value. This information is stored in a buffer, from where the computation process picks data to compute the MOS value. If the packets arrive at a faster rate than the rate at which the buffer is emptied, part of the RTP stream is dropped before the Cisco 1040 starts the collection process. Keep in mind that CPU resources are constantly utilized; hence, it is not just the buffer that becomes a bottleneck when the Cisco 1040 is overwhelmed with more RTP streams: the CPU also falls short in serving the different processes. The MOS value reported by the Cisco 1040 gets worse as the number of simultaneous RTP streams increases beyond 100. It is important to plan ahead and optimize the SPAN port configuration in these scenarios. Cisco 1040s in the Branch In a branch office, the density of IP phones is less when compared with the density seen in a campus. Typically, the branch office contains fixed-configuration switches, and the number of simultaneous calls is lower. In a fairly large branch, it is common to see multiple fixed-configuration switches stacked to provide more density and avoid the need to run gigabit uplink to aggregate switches and routers. The Cisco 1040 fits into this model the same as with any other switch. It utilizes the SPAN port to monitor the RTP stream. In a scenario in which the switches are not stacked but have gigabit uplink to an aggregate switch, if the number of RTP streams is below 100, then one Cisco 1040 per switch is excessive. In this situation, RSPAN is useful. The configuration done on the switch with SPAN, RSPAN, or Enhanced SPAN (ESPAN) is transparent to the Cisco 1040; the Cisco 1040 functions normally as long as it sees the RTP stream. For cases in which RSPAN is not a desirable configuration or not an approved configuration, a simple active hub can be used to connect the individual SPAN port from the various switches, and the Cisco 1040 can be deployed on the hub. It is very important to keep spanning tree loops in mind when such a configuration is attempted. The use of a hub must be selected as a last resort. SPAN Port Limitation SPAN ports are widely used to connect packet sniffers for troubleshooting. In the contact center world, the SPAN port is used to record the voice conversation. In the service monitor world, the SPAN port is used to monitor voice quality. If the SPAN port is required for a packet sniffer, contact center, and Service Monitor at the same time, the SPAN port does not allow configuration of the same source port tied to multiple SPAN destination ports in fixed configuration switches. This is a limitation of SPAN port configuration on fixed configuration switches. The only alternative is to use an active splitter that offers one-to-many streams. The simplest splitter must be an active hub that offers a one-to-many stream. 
In this model, the packet sniffer, contact center application, and Cisco 1040 connect to the hub, and the hub connects to the SPAN destination port on the switch. It is also recommended that all fax devices be aggregated in one voice gateway and that the 1040s not span the switch to which the gateway is connected. Currently the 1040 is known not to correctly handle certain cases of calls going to fax machines. Placement of Inline NAM and NAM Appliance The Cisco NAM can be deployed in a wiring closet/core or edge of the network (WAN entry/exit). The NAM uses the Cisco 1040 code to monitor the RTP streams and it provides higher scalability to monitor up to 4000 streams per second. As shown in Figure 4, the NAMs are deployed as close to the WAN entry and exit points as possible so that the voice quality measurement will provide the segment if the voice quality is dropping the WAN. The Cisco 1040 reports voice quality measurements every 3 seconds and MOS every 60 seconds. As discussed in the previous section, of the four RTP streams, two provide meaningful statistics; hence, the MOS value calculated for the interesting RTP stream is of importance and should be considered for further analysis. Figure 4. Cisco NAM Placements The number of Cisco 1040s per switch depends on the following: • Type of router/switch • Type of customer • Number of simultaneous calls The type of switch/router determines the NAM or line card support. Modular switches such as the 6500/7000 series support the NAM card. Most of the fixed configuration switches won't support NAMs or NAM line cards. Integrated services routers such as the 28xx/38xx series support NAM. Different configurations of NAM support different numbers of concurrent RTP streams. Refer to Table 2 for concurrent RTP stream support. K-factor (Klirrfaktor) is a mean opinion score (MOS) estimator of the endpoint type defined in ITU standard P.564. This standard relates to the testing and performance requirements of such a device. K-factor predates the standard. A P.564-compliant version will follow. K-factor is trained using thousands of speech samples and impairment scenarios, along with target P.862.1 MOS values for each scenario. The trained K-factor device in the IP phone or gateway can then recognize the current impairment and produce a running MOS value prediction. R-factor (see the section "Cisco 1040 or NAM-Based Call Quality: R-Factor") is based on three dimensions: loss, delay, and echo. K-factor and other P.564 MOS estimators measure only packet loss, which is a network effect. They are packet loss metrics projected onto a psychological scale. In general, primary statistics (packet loss, jitter, and concealment ratio) show visible degradation well before MOSs start to degrade. Hence, MOSs are a secondary indication of network problems, because MOS is essentially a packet loss metric. Packet loss counts, jitter, concealment ratio, and concealment second counters are primary statistics, based on direct observation. MOSs are a secondary statistic. Hence, you should use MOSs as a flag, but then use primary statistics to investigate or qualify the alarm. Use primary metrics in SLAs rather than MOSs. Cisco Voice Transmission Quality-based call quality reports can be obtained using Service Monitor in conjunction with Cisco Unified Communications Manager and the latest Cisco Unified IP Phones and Cisco Gateways. 
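One way to apply the guidance above — treat MOS as a flag and the primary statistics as the qualifiers — is sketched below in Python. The threshold values and field names are arbitrary examples chosen for illustration, not Cisco recommendations or Service Monitor defaults.

def qualify_mos_alarm(sample):
    # sample holds per-stream statistics exported from Service Monitor reports.
    findings = []
    # Secondary statistic: a low MOS only raises the flag.
    if sample["mos"] >= 3.5:
        return findings
    # Primary statistics: direct observations that explain the degradation.
    if sample["packet_loss_pct"] > 1.0:
        findings.append("packet loss - check congestion, errors, or policing")
    if sample["jitter_ms"] > 30:
        findings.append("jitter - check queuing and path changes")
    if sample["concealment_ratio"] > 0.02:
        findings.append("codec concealment - audio is actively being repaired")
    return findings

# Example record: low MOS driven mainly by jitter.
print(qualify_mos_alarm({"mos": 3.1, "packet_loss_pct": 0.2,
                         "jitter_ms": 45, "concealment_ratio": 0.01}))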
When to Use Cisco Voice Transmission Quality-Based Call Quality Reporting Some key points to keep in mind when choosing to go with Cisco Voice Transmission Quality-based call quality reporting are: • Cisco Voice Transmission Quality is supported from Cisco Unified Communications Manager 4.2 or later versions. • Cisco 7906, 7911, 7931, 7921, 7962-G, 7962-G/GE, 7942-G, 7942-G/GE, 7972-G/GE, 7940, 7960, 7941, 7961, 7970, and 7971 IP Phones support Cisco Voice Transmission Quality in SCCP and Session Initiation Protocol (SIP) mode. (You must have new firmware; the firmware can be downloaded from Cisco Unified Communications Manager 4.2 or 5.x or 6.x, 7.x or 8.x.) For an updated list of supported devices, please refer to the release notes for Cisco Unified Service Monitor on Cisco.com at http://www.cisco.com/en/US/products/ps6536/prod_release_notes_list.html. • Sampling rate is every 8 seconds. • Score is sent at the end of the call by using CMRs. Unlike Cisco 1040-based call quality reporting, Cisco Voice Transmission Quality-based call quality is reported at the end of the call. You can use Cisco Voice Transmission Quality-based reporting if you prefer not to have call quality reporting as the call progresses. Also, the Cisco Voice Transmission Quality feature is inherent to Cisco Unified Communications Manager 4.2 and later. Therefore, if you do not want to invest in a Cisco 1040, Cisco Voice Transmission Quality-based call quality still provides MOSs to estimate user experience. Service Monitor scalability is dependent on the estimated call rates generated in the network. The call volume supported by a fully loaded Service Monitor system with various scenarios is given in Table 3. Table 3. Call Volume Supported by a Fully Loaded Service Monitor System Cisco Voice Transmission Quality CDRs/Minute Cisco 1040/NAM Segments/Minute Cisco Voice Transmission Quality and Cisco 1040/NAM Cisco Voice Transmission Quality Only Cisco 1040/NAM Only Preparing the Server for Service Monitor This section describes how to prepare your server for Service Monitor installation. Operating System and Server Service Monitor is supported on Windows 2003 Server Standard/Enterprise Edition with Service Pack 1/Service Pack 2, Windows Server 2008 Standard or Enterprise Edition (32-bit x86) with Service Pack 2, and Windows Server 2003 R2 Enterprise Edition with Service Pack 2. No other operating systems are supported. It is recommended that software other than the operating system and antivirus software not be installed on this computer system. • For a small network (fewer than 5000 phones), Serial ATA (SATA) disks are required. • For a medium network (5000 to 15,000 phones), SCSI disks are required (suggestion: MCS 7845-I1 or MCS 7845-H1 comes with SCSI). • For networks with more than 15,000 phones, Serial Attached SCSI (SAS) disks are required (suggestion: MCS 7845-I2 or MCS 7845-H2 comes with SAS). It is recommended that you configure the hostname for the Service Monitor server before you start installing Service Monitor. Specify the hostname when you are installing the operating system or subsequently; select My Computer, right-click, and select Properties > Computer Name. Once Service Monitor is installed, changing the hostname is a very laborious process involving file manipulation and the execution of scripts. The User Guide for Cisco Unified Service Monitor documents all the steps involved in changing the hostname of the Service Monitor server. Verify Locale Settings Service Monitor supports only the U.S. 
English and Japanese locales. Using other locales means that you are running on an unsupported configuration. Further, Service Monitor may display erratic behavior, such as JRunProxyServer services not starting automatically. However, non-U.S. English keyboard layouts should work. Verify ODBC Driver Manager Some components of Service Monitor require the presence of the correct version of Open Database Connectivity (ODBC) on the Service Monitor server. To verify the ODBC Driver Manager version, do the following: Step 1. On the Service Monitor server, select Start > Settings > Control Panel > Administrative Tools > Data Sources (ODBC). Step 2. Click the About tab. Step 3. Make sure that all ODBC core components have the same version number (3.5xx or later). ODBC is not available from Microsoft as a standalone installation but is packaged along with Microsoft Data Access Component (MDAC). Note: If the necessary OBDC is not listed, install MDAC 2.5 or later by referring to the Microsoft website. Browser Version and Flash Plug-in The recommended browser is Microsoft Internet Explorer 6.0/7.0. When using Service Monitor, disable any software on your desktop that you use to prevent popup windows from opening. Service Monitor must be able to open multiple windows to display information. Network Time Protocol The clocks on Service Monitor, Cisco Unified Communications Manager servers, NAMs, and Cisco 1040s must be synchronized for Service Monitor reports to include complete and up-to-date information and accurately reflect activity during a given time period. CDR/CMR stream correlation won't work if the clock is not in sync on all of the Cisco Unified Service Monitor components (Cisco Unified Communications Manager/NAM/Cisco 1040). These notes offer a starting point and do not provide complete instructions for configuring Network Time Protocol (NTP). 2. Use your system documentation to configure NTP on the Windows Server 2003 system where Service Monitor will be installed. Configure NTP with the time server being used by the Cisco Unified Communications Manager in your network. You might find "How to configure an authoritative time server in Windows Server 2003" at Make sure that the Service Monitor server can reach the TFTP server and the phones or devices in the IP address range where the Cisco 1040 would be deployed. Terminal Server Services Remote Desktop Service and Virtual Network Computing (VNC) Services are recommended to remotely manage the Service Monitor server. VNC Services and Remote Desktop can be used to remotely install the Operations Manager and Service Monitor software. Antivirus and Platform Agents You should enable virus protection on the Service Monitor server, using antivirus software. Active scanning of drives and memory should be performed during off-peak hours. Please exclude from scanning the "CSCOpx" folder. You may experience delays, and performance may be degraded, when the virus protection software is scanning all files. Service Monitor has undergone interoperability testing with the following: • Third-party virus protection software: – Symantec Antivirus Corporate Edition Version 9.0 – McAfee VirusScan Enterprise 8.0 • Platform agents: – (Optional) Cisco Security Agent 5.2.0 Check Routing and Firewalls Make sure that any firewalls between the Service Monitor server and the Cisco Unified Communications Manager, TFTP server, and Cisco 1040s are configured to allow management traffic through. 
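Before moving on to port details, here is a minimal starting point for the NTP step described above (the text deliberately defers full instructions to the Windows documentation). The built-in w32time service on Windows Server 2003 can usually be pointed at the same time server the Cisco Unified Communications Manager cluster uses; the server name below is a placeholder, and your environment may require different w32time settings.

w32tm /config /manualpeerlist:ntp.example.com /syncfromflags:manual /update
net stop w32time
net start w32time
rem Verify the offset against the configured peer:
w32tm /stripchart /computer:ntp.example.com /samples:3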
See the "Port Availability" section below for information on which ports should be opened. Also make sure that there is connectivity between devices and the Service Monitor server. Even if a route exists to a network behind a device, it does not mean that one exists to (and from) the device itself. Table 4 lists the ports used by Service Monitor. These ports should not be scanned. Table 4. Service Monitor Port Usage Secure Shell (SSH) Protocol User Datagram Protocol (UDP) Domain Name System (DNS) 67 and 68 Syslog: Service Monitor receives syslog messages from Cisco 1040 SCCP: Service Monitor uses SCCP to communicate with Cisco 1040s Interprocess communication between the user interface and back-end processes Cisco Unified Communications Manager 5.x or 4.2 can be used as the TFTP server. If you use Cisco Unified Communications Manager as a TFTP server, Service Monitor cannot copy configuration files to Cisco Unified Communications Manager due to security settings on the latter. You will need to manually upload the configuration file. After uploading the configuration file, reset the TFTP server on Cisco Unified Communications Manager. For more information, see Cisco Unified Communications Manager documentation. Note: Due to known security limitations of TFTP, it is recommended to disable the TFTP service in Service Monitor if Cisco 1040 Sensors are not in use. The CiscoWorks Common Services TFTP Service could be disabled in Windows Control Panel > Services. These are some important points that you should be aware of when you are configuring Cisco Unified Communications Manager: • If you don't see call records in the Cisco Voice Transmission Quality reports, make sure CDR and CMR are enabled on all Cisco Unified Communications Manager nodes of a cluster. • Make sure that the cluster ID is unique in the system; Service Monitor will generate an error when a duplicate cluster is entered. • Make sure that on the Cisco Unified Communications Manager Enterprise Parameter Configuration page, the CDR File Time Interval is set to 1 (minute). This determines how frequently the Cisco Unified Communications Manager generates CDRs. (Typically this is not changed.) • Check the owner of the sm_record_create_table in the CDR database after adding the Cisco Unified Communications Manager to the Service Monitor server. Make sure the table owner is dbo. Caution: The Cisco Unified Communications Manager cannot write CDRs to the database if the sm_record_create_table owner is not dbo. • Cisco Unified Service Monitor IP should be added to the Unified Communications Manager server 5.x and later as a billing server. Depending on the Unified Communications Manager version that you use, you need to perform some subset of the tasks listed in the above section. Where tasks themselves differ slightly from one Unified Communications Manager version to another, version-specific steps are noted in the procedures. Table 5 lists the configuration tasks you must complete for each version of Unified Communications Manager that you want Service Monitor to obtain Cisco Voice Transmission Quality data from. Table 5. Configuration Tasks for Unified Communications Manager Perform Task for These Unified Communications Manager Versions 5.x and later Setting Unified Communications Manager service parameters This section describes the tasks that should be performed during Service Monitor installation. 
Perform the following checks before installing Service Monitor: • Dual homing (dual network interface cards [NICs]), using two different IP addresses, is not supported on Service Monitor. If, during installation, you receive a warning message to edit a file named gatekeeper.cfg, then your server is dual homed, and you must disable one of the NIC interfaces before adding any devices to Service Monitor. Using two NICs with a single IP address (a failover configuration, in case one of the NICs fails) is supported. • Make sure that you change the default Cisco Unified Communications Manager cluster ID setting (located at Cisco Unified Communications Manager Administration > Enterprise Parameters). The default setting is Standalone Cluster. Unless you change this entry, all of the clusters will have the same cluster ID. This causes problems in Service Monitor. Changing the cluster ID requires a restart of RIS Collector service, Windows SNMP service, and the CCMadmin service. Perform these restarts on the publisher and then on the subscribers. Note: If Service Monitor is already using Cisco Unified Communications Manager and you are changing the cluster name, then you have to delete and readd the Cisco Unified Communications Managers in Service Monitor for it to reflect the new cluster name. If Operations Manager is already managing Cisco Unified Communications Manager and you are changing the cluster name, then the cluster names in the service-level view will not reflect the new cluster name. You have to delete and readd the Cisco Unified Communications Manager in Operations Manager for it reflect the new cluster name. • Make sure that the Service Monitor server's hostname is resolvable using DNS. If DNS is not being used, edit the Windows hosts file and enter the Service Monitor hostname and IP address. The hosts file is located at C:\Windows\system32\drivers\etc. If you do not have a license key, then during the installation, select the evaluation version. The evaluation version can manage up to 1000 phones for up to 90 days. When the Service Monitor license has been acquired, simply upload the license into the Service Monitor server by clicking http://hostname:1741/cwhp/maas.licenseInfo.do. Operations Manager and Service Monitor require separate licenses. Licensing and Registering the Software Licensing grants you permission to manage a certain number of phones. You can enter licenses for Service Monitor during installation or add them later. There is a separate license for Cisco Unified Service Monitor and Cisco Unified Operations Manager. The uninstallation process may cause a warning message similar to the following to appear: The uninstallation is waiting for a process to stop, do you wish to continue to wait? If you see this message, click Yes and continue to wait. It is a good practice to delete the C:\Program Files\CSCOpx folder and then reboot the server after the Service Monitor application has been uninstalled from any server. Remember to save any Cisco 1040-related call metrics, performance, or node-to-node archived files that you might want to keep from the C:\Program Files\CSCOpx\data folder. Failover and Redundancy Service Monitor supports failover for only Cisco 1040 functionality. A primary and secondary Service Monitor server can be configured to provide redundancy and failover support to Cisco 1040. In case of a primary Service Monitor server going down, the probe would automatically switch over to the secondary Service Monitor server. 
However there is no synchronization between the primary Service Monitor and secondary Service Monitor servers. Cisco 1040 Failover Mechanism The 1040 establishes a connection with Service Monitor and periodically sends an SCCP Keep Alive message. Service Monitor acknowledges the Keep Alive message to maintain the connection. The following scenario describes when a 1040 will fail over to the secondary server: 1. The Cisco 1040 stops receiving Keep Alive acknowledgement messages from the primary Service Monitor server. 2. After sending three Keep Alive messages without any acknowledgement, the Cisco 1040 sends a Keep Alive message to the secondary Service Monitor server. 3. The secondary Service Monitor server sends a Keep Alive acknowledgement message. 4. The Cisco 1040 sends a StationRegister message with the station user ID set to the Cisco 1040's ID. 5. Secondary Service Monitor goes to the TFTP server to get the latest configuration file for this Cisco 1040. 6. Secondary Service Monitor sends a StationRegister acknowledgement message. 7. Now the Cisco 1040 will start sending syslog messages to the secondary Service Monitor server while still sending Keep Alive messages to the primary Service Monitor server to see whether it's back up again. Note: Users cannot set the time of a failover Cisco 1040. The only way to make any configuration changes to a failover Cisco 1040 is to first make this Service Monitor server its primary Service Monitor server. The following scenario describes how the Cisco 1040 will revert back to the primary server: 1. Cisco 1040 begins to receive a Keep Alive acknowledgement from its primary Service Monitor server once it comes back up again. 2. Cisco 1040 sends a StationUnregister message to the secondary Service Monitor server. 3. Secondary Service Monitor server sends a StationUnregister acknowledgement message to the Cisco 1040. 4. Cisco 1040 sends a StationRegister message with the station user ID set to the Cisco 1040's ID to the primary Service Monitor server. 5. Primary Service Monitor server sends back a StationRegister acknowledgement message. 6. Cisco 1040 starts to send syslog messages to this Service Monitor server now. See Figure 5. Figure 5. Cisco Failover Scenario Cisco 1040 Failover Deployment Preparing for Failover The first step is to have two Service Monitor servers available for configuration. One server acts as the primary server and the second as a secondary server. Please refer to the Service Monitor installation guide for the hardware specification of these servers. It is recommended that these servers connect to the network through redundant paths. This helps ensure that a failure in one part of the network that affects the primary server does not also affect the connectivity of the secondary server. Setting Up Failover Failover can be set up globally or for specific Cisco 1040s. If a Service Monitor server acts as the primary or secondary Service Monitor server for any Cisco 1040s across all locations, failover can be set up globally. A Service Monitor server can act as the primary Service Monitor server for one domain or location and at the same time as the secondary Service Monitor server for another domain or location. In this case, failover needs to be set up for specific Cisco 1040s. Setting Up Failover in Default Configuration Go to the primary Service Monitor server, select the Configuration tab, the Cisco 1040 option, and Setup from the TOC. The Setup dialog box is displayed (Figure 6). 
Enter the IP address or DNS name of the primary Service Monitor server and the IP address or DNS name of the secondary Service Monitor server to the default configuration for failover operations of any Cisco 1040s. Figure 6. The Setup Dialog Box Setting Up Failover for a Specific Cisco 1040 Sensor Go to the primary Service Monitor server, select the Configuration tab, the Cisco 1040 option, and Management from the TOC. The Cisco 1040 Details dialog box opens showing a list of any previously defined or registered Cisco 1040s. Select Add to create a specific configuration for a Cisco 1040. The Add a Cisco 1040 Sensor dialog box opens (Figure 7). Enter the IP address or DNS name of the primary Service Monitor server and the IP address or DNS name of the secondary Service Monitor server to the configuration for failover operations of a specific Cisco 1040. Figure 7. Dialog Box for Adding a Cisco 1040 Viewing Failover Status Enter http://<IP address> in your browser where IP address is the address of your Cisco 1040. The Current Service Monitor field will show the Service Monitor to which the Cisco 1040 is sending data; this could be a primary or secondary Service Monitor server. Go to the primary Service Monitor server or secondary Service Monitor server if failover took place, select the Configuration tab, the Cisco 1040 option, and Management from the TOC. The Cisco 1040 details page will show the primary Service Monitor server, secondary Service Monitor server, and the Service Monitor server to which the Cisco 1040 is registering (Figure 8). Figure 8. The Service Monitor Cisco 1040 Details Page Cisco Voice Transmission Quality Redundancy There is no failover capability for the Cisco Voice Transmission Quality capability. However the Cisco Unified Communications Manager can be added to multiple Service Monitor servers. Up to three Service Monitor servers can be configured as billing application servers in Cisco Unified Communications Manager 5.x or later to receive Cisco Voice Transmission Quality data. So when one Service Monitor server is down, the other Service Monitor server will still be able to obtain the Cisco Voice Transmission Quality data from Cisco Unified Communications Manager. Note: Cisco Unified Communications Manager publisher server is responsible for transferring Cisco Voice Transmission Quality data to the Service Monitor server. If the publisher server is unavailable, there is no mechanism for Cisco Unified Service Manager to obtain the Cisco Voice Transmission Quality data in the cluster. Backup and Restore The Service Monitor database normally is very big. Backup can take hours, so the backup process is not included when the Service Monitor software is reinstalled or upgraded. On servers on which Operations Manager and Service Monitor are coresident, the database is not very big so it is OK to use the Common Services backup and restore process. The backup UI might time out, but the backup will be complete after some time (this might take a few hours for a big database). Restore can be done only after backup has completed successfully. In Service Monitor standalone servers, it is recommended to use the Common Services backup and restore process only when the database is less than 6 GB. For large Service Monitor databases, it is recommend to do backup manually (saving database password and copy database files) as documented in the user guide. Configuring Low-Volume Schedule and Database Purging Service Monitor needs 8 hours of low-volume time during a day. 
During a low-volume schedule, Service Monitor handles roughly 20 percent of the number records that are processed during a peak period and performs database maintenance. For Cisco 1040/NAM data, during regular call volume the maximum segment rate allowed is 5000 per minute. Anything over this rate will be discarded. During low call volume, the maximum segment rate allowed is 25 percent of the maximum regular call volume. Throttling of Cisco 1040 data is based on the total amount of data received in any 5-minute interval; it is not per minute. This allows accommodation of temporary spikes in traffic while blocking continuous high-rate traffic that is over the supported limit. There is no throttling of Cisco Voice Transmission Quality data. Service Monitor standalone server has been tested to support a maximum rate of 1500 Cisco Voice Transmission Quality calls per minute. The default low-volume schedule is 10 p.m. through 6 a.m. To change the schedule, on the Service Monitor server, change the values of these properties in the NMSROOT\qovr\qovrconfig.properties file: You can configure more than one low-volume period as long as the total time adds up to 8 hours and it covers midnight to 1 a.m. Here are some examples: To put changes into effect after you edit qovrconfig.properties, you must stop and start the QOVR process. While logged on to the server where Service Monitor is installed, from the command line, enter these commands: Service Monitor needs 4 hours data purge time. Data purging must occur during the low-volume schedule and must not run from midnight to 2 a.m. The default schedule is 2 a.m. to 6 a.m. To change the schedule on the Service Monitor server, change the values of these properties in the NMSROOT\qovr\qovrconfig.properties file: Data purge need not run continuously for 4 hours. You can configure more than one data purge period as long as: • The total time adds up to 4 hours • Data purging occurs during low-volume schedule • No data purging occurs from midnight through 2 a.m. Here are some examples: Note: Do not edit the properties files using Wordpad as it introduces a carriage return. Use Notepad to edit the file instead. The data retention period determines the number of days that data is retained in the Service Monitor database before being purged. The default value depends on the deployment scenario: • Service Monitor alone on a server: 7 days • Any coresident server: 3 days This section provides a few troubleshooting tips. Q. I don't see any service quality alerts in the Operations Manager dashboard. What could be the problem? A. Before debugging this problem, do the following: • Go to the Service Monitor application and verify that the probes are registered and visible in the Service Monitor GUI. • Go to the Service Monitor setup page and verify that Operations Manager is entered as a trap recipient, even if Service Monitor is on the same machine. • In Operations Manager, go to Administration > Service Quality and add Service Monitor, even if it is on the same machine. • If all of the previous are correct, then only calls that fall below the Service Monitor threshold will be shown in the Alerts and Activities page. When all of the previous have been done, debug the problem by doing the following: • Check syslog.log under NMSROOT\CSCOpx\log\qovr\syslog.log in Cisco Unified Operations Manager and Cisco Unified Service Monitor. Check whether you see recent syslogs. Check the D=<value>. This is the MOS value multiplied by a factor of 10. 
Check to see whether this value is below the threshold multiplied by 10. • If this is true, check NMSROOT\log\qovr\trapgen.log. This file should contain the traps that are being generated by the system. If traps are available in this file, then it means that the Service Monitor portion is functional and ready. If not, then check for exceptions in probemanager.log and datahandler.log. Q. A few rows in the Cisco 1040 diagnostic reports do not show directory numbers. What could be the issue? A. Make sure that the call is over. CDRs and CMRs are generated at the end of the call only, whereas Cisco 1040s send data even while the call is in progress. Make sure that the corresponding call in the Cisco Voice Transmission Quality diagnostic report shows the directory numbers for both the caller and callee. Some gateway ports do not have directory numbers; Cisco 1040 sensor rows for those calls will not have directory number information. Q. Why is the directory number missing in sensor reports? A. Please check the following: • The Cisco 1040s monitor RTP traffic; they do not report directory numbers. • Directory numbers can be seen in the diagnostic Cisco 1040 reports only if the call whose streams are being reported by Cisco 1040s to Service Monitor is also reported by the Cisco Unified Communications Manager to Service Monitor, meaning that the Communications manager server must be added to the Service Monitor server. • Service Monitor and Communications manager have to be time synched in order to display the directory numbers. Q. Can you give me some general Cisco 1040 troubleshooting tips? A. Please check the following: • If the Cisco 1040 is not receiving the IP address, check the DHCP configuration on the DHCP server. • Start with http://<ip-addr>/Communication and see what the Cisco 1040 communication debug page says. If there are any startup issues on the Cisco 1040 (such as the TFTP server not being reachable or an inability to download the image file), this page should tell you. • If you are suspecting a Cisco 1040/Service Monitor issue, then use the sniffer to check the communication from the Cisco 1040 to the rest of the world. You should use the sniffer on the management port of the Cisco 1040 to begin with. • If Cisco Unified Communications Manager 5.x is used for the TFTP server, please restart the TFTP service after the configuration files and image files are manually copied over for changes to take effect. • The spanning port doesn't play a role in booting the Cisco 1040. But do make sure SPAN is configured on the switch port instead of the routed port. • Service Monitor installs the TFTP server by default; if the customer installs another TFTP server, make sure the TFTP service in Common Services is shut down to avoid any potential issue. • Do not edit configuration files using Wordpad (where the TFTP server is not editable), as it introduces a carriage return. Q. We do not have a PoE switch to power our Cisco 1040 sensor. Cisco discontinued selling the SM-1040-PWR power adaptor. What are our options to power the sensors? A. Procure an industry-standard DC 5-volt, 2.6-amp power adapter. Q. Why are Cisco Voice Transmission Quality MOS values constantly very low, at the 2.0 range (Figure 9), and why are the concealment seconds values too high? A. 
Please add the following configuration under the dial-peers in the voice gateway:
• rtp payload-type comfort-noise 13
• Or, if there is no need to conserve bandwidth, turn off Voice Activity Detection (VAD) with the no vad command.
• Examples follow:
dial-peer voice 101 voip
 ip qos dscp cs3 signaling
 session target ipv4:10.10.1.1
Figure 9. Low MOS Values and High Concealment Seconds
Q. Why is the MOS value unavailable from reports?
A. The following scenarios could lead to the MOS value being unavailable:
• The phone device does not support Cisco Voice Transmission Quality. Refer to your Cisco IP Phone documentation for Cisco Voice Transmission Quality support; Cisco Unified Personal Communicator, voicemail, and H.323 calls do not support Cisco Voice Transmission Quality.
• The call duration is less than 8 seconds.
• By default, voice quality statistics reporting is turned off in MGCP gateways. Make sure to enable Cisco Voice Transmission Quality in gateways by adding this configuration in global mode: mgcp voice-quality-stats all.
• A minimum of Cisco IOS® Software Release 12.4(4)T and 5510 DSP hardware is required; use the Cisco IOS Software show version and show voice dsp detailed commands to verify.
• If using a Session Initiation Protocol phone and Unified Communications Manager version 7.1(2) or earlier, enable the Call Stats checkbox in the phone's SIP profile.
Q. How do I force Service Monitor to store and report zero-duration calls, which are often failed calls?
A. Make sure that the CDR Log Calls With Zero Duration Flag in Cisco Unified Communications Manager > Service Parameters is off for all nodes.
Q. What UDP and TCP ports are in use by Service Monitor and must be opened in the firewall?
A. The following ports must be allocated for Service Monitor and exempted from firewall inspection:
• UDP: 53 (DNS), 67 and 68 (DHCP), and 5666 (syslog)
2. Select the Collateral & Subscription Store link.
3. Read the notice to Cisco employees and click Continue.
4. From the navigation menu at the top-left corner of the page (above the Subscriptions link), select the Marketing Collateral link. From the Marketing Collateral navigation menu, select Network Management Evaluation Kits, and then select the desired evaluation kit.
5. Use Add to cart and Checkout to place the order for the desired kit, using your Access Visa or personal credit card.
For further questions on Cisco Unified Operations Manager or Cisco Unified Service Monitor, or for any other Cisco Unified Management-related questions, send an email to
<urn:uuid:6087f25f-8b77-4667-9e2f-29baf792b62e>
CC-MAIN-2017-09
http://www.cisco.com/c/en/us/products/collateral/cloud-systems-management/prime-unified-service-monitor/deployment_guide_c07-705322.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170794.46/warc/CC-MAIN-20170219104610-00108-ip-10-171-10-108.ec2.internal.warc.gz
en
0.876852
13,677
3.03125
3
NASA seeks new projects for malfunctioning Kepler - By Frank Konkel - Aug 13, 2013 There were promising signs when NASA first tried to resurrect its crippled Kepler telescope in late July, but the space agency is seeking alternative plans in case a fix isn't possible. NASA engineers have put out a request for new mission proposals in the event that Kepler's reaction wheels – two of which failed between July 2012 and May 2013 – never again allow the telescope to maneuver precisely enough to hunt for planets outside the solar system. "If one of the two reaction wheels cannot be returned to operation, it is unlikely that the spacecraft will resume the nominal Kepler exoplanet and astrophysics mission," NASA engineers said in a request for white papers. "The purpose of this call for white papers is to solicit community input for alternate science investigations that may be performed using Kepler and are consistent with its probable two-wheel performance." NASA fired up Kepler's failed reaction wheels beginning July 18 after a temporary shutdown. One responded normally to commands. The other was able to spin properly in only one direction. Both wheels experienced significant friction, which Kepler Mission Manager Roger Hunter said will be "critical in future considerations" as engineers decide how to proceed. Kepler, launched in 2009 with a price tag of about $550 million, is already considered a NASA success. It has confirmed 134 extra-solar planets and detected another 3,200 possible planets that scientists are attempting to confirm. Based on Kepler's prior success, scientists expect 90 percent of those planets to be confirmed. But without precise movement led by fully functioning reaction wheels, Kepler's repurposed mission scope could shift more toward data collection. It contains just one instrument, a large focal plane array, which further limits the scope of future missions. "Therefore, in addition to proposals for science investigations, we are soliciting proposals for new and innovative techniques for instrument operation, data collection, instrument calibration, and data analysis that can improve photometric precision under conditions of degraded pointing stability, possibly including significant linear image motion," the request states. Repurposed mission white papers are due by Sept. 3. NASA will review them through November. Should efforts to revive Kepler fail, NASA wants to begin a repurposed mission by summer 2014. Frank Konkel is a former staff writer for FCW.
<urn:uuid:65091238-2d56-4d81-beb0-588b8b909e2a>
CC-MAIN-2017-09
https://fcw.com/articles/2013/08/13/kepler.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171043.28/warc/CC-MAIN-20170219104611-00284-ip-10-171-10-108.ec2.internal.warc.gz
en
0.946282
474
2.515625
3
Using WDM (Wavelength Division Multiplexing) to expand the capacity of a fiber so it can carry multiple client interfaces is highly advisable, because physical fiber optic cabling is not cheap. As WDM is widely used, you are probably familiar with it: it is a technology that combines several streams of data/storage/video or voice protocols on the same physical fiber-optic cable by using several wavelengths (frequencies) of light, with each frequency carrying a different type of data. Two types of WDM architecture are available: Coarse Wavelength Division Multiplexing (CWDM) and Dense Wavelength Division Multiplexing (DWDM). CWDM/DWDM multiplexers and demultiplexers and OADMs (Optical Add-Drop Multiplexers) are commonly passive devices. With the use of optical amplifiers and the development of the OTN (Optical Transport Network) layer equipped with FEC (Forward Error Correction), the distance of fiber optical communication can reach thousands of kilometers without the need for regeneration sites. With CWDM, each wavelength typically supports up to 2.5Gbps and can be expanded to 10Gbps support. CWDM is limited to 16 wavelengths and is typically deployed in networks up to 80km, since optical amplifiers cannot be used due to the large spacing between channels. CWDM uses a wide spectrum and accommodates eight channels. This wide spacing of channels allows for the use of moderately priced optics, but limits capacity. CWDM is typically used for lower-cost, lower-capacity, shorter-distance applications where cost is the paramount decision criterion. The CWDM Mux/Demux (or CWDM multiplexer/demultiplexer) is a flexible plug-and-play network solution that helps service providers and enterprise companies affordably implement point-to-point or ring-based WDM optical networks. A CWDM Mux/Demux is well suited for transporting PDH, SDH/SONET, and Ethernet services over WDM, CWDM, and DWDM in optical metro edge and access networks. CWDM Multiplexer Modules are available in 4, 8 and 16 channel configurations. These modules passively multiplex the optical signal outputs from four or more electronic devices, send them over a single optical fiber, and then de-multiplex the signals into separate, distinct signals for input into devices at the opposite end of the fiber optic link. Typically, CWDM solutions provide 8 wavelengths, enabling the transport of 8 client interfaces over the same fiber. However, the relatively large separation between the CWDM wavelengths allows expansion of the CWDM network with an additional 44 wavelengths with 100GHz spacing utilizing DWDM technology, thus expanding the existing infrastructure capability and utilizing the same equipment as part of the integrated solution. DWDM is a technology allowing high throughput capacity over longer distances, commonly ranging between 44-88 channels/wavelengths and transferring data rates from 100Mbps up to 100Gbps per wavelength. DWDM systems pack 16 or more channels into a narrow spectrum window very near the 1550nm local attenuation minimum. Decreasing channel spacing requires the use of more precise and costly optics, but allows for significantly more scalability. Typical DWDM systems provide 1-44 channels of capacity, with some new systems offering up to 80-160 channels. DWDM is typically used where high capacity is needed over a limited fiber resource or where it is cost prohibitive to deploy more fiber. DWDM multiplexer/demultiplexer modules are made to multiplex multiple DWDM channels onto one or two fibers. 
Depending on the type of CWDM Mux/Demux unit, and with optional expansion, it can transmit and receive as many as 4, 8, 16 or 32 connections of various standards, data rates or protocols over a single fiber optic link without the connections disturbing one another. Ultimately, the choice between CWDM and DWDM is a difficult decision, so first we should understand the difference between them clearly.
CWDM vs DWDM
CWDM scales to 18 distinct channels, while DWDM scales up to 80 channels (or more), allowing vastly more expansion. The main advantage of CWDM is the cost of the optics, which is typically one third of the cost of the equivalent DWDM optic. CWDM products are popular because they use less precise, lower-cost optics and un-cooled lasers with lower power consumption and lower maintenance requirements. This difference in economic scale, the limited budget that many customers face, and typical initial requirements not to exceed 8 wavelengths mean that CWDM is a more popular entry point for many customers. Buying CWDM or DWDM is driven by the number of wavelengths needed and future growth projections. If you only need a handful of waves and use 1Gbps optics, CWDM is the way to go. If you need dozens of waves or 10Gbps speeds, DWDM is the only option.
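To make the channel-plan difference concrete, the short Python sketch below generates the nominal 20nm-spaced CWDM wavelength grid and a 100GHz-spaced DWDM grid around the 193.1THz anchor defined in the ITU-T recommendations. It is a generic illustration of the grids themselves, not a description of any particular mux/demux product.

# CWDM grid (ITU-T G.694.2): 18 nominal center wavelengths from 1271 nm to 1611 nm, 20 nm apart.
cwdm_nm = [1271 + 20 * i for i in range(18)]

# DWDM grid (ITU-T G.694.1): frequencies at 193.1 THz +/- n x 100 GHz.
# Wavelength in nm = c / f, with c expressed as 299792.458 nm*THz.
C_NM_THZ = 299792.458
dwdm_thz = [round(193.1 + 0.1 * n, 1) for n in range(-20, 24)]   # 44 channels, as in the text
dwdm_nm = [round(C_NM_THZ / f, 2) for f in dwdm_thz]

print(cwdm_nm[:4])              # [1271, 1291, 1311, 1331]
print(dwdm_nm[0], dwdm_nm[-1])  # longest and shortest wavelengths in this example band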
<urn:uuid:88094cfd-e33c-4663-a903-9cb42db8fe6f>
CC-MAIN-2017-09
http://www.fs.com/blog/multiplex-your-fiber-by-using-cwdm-or-dwdm.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172783.57/warc/CC-MAIN-20170219104612-00336-ip-10-171-10-108.ec2.internal.warc.gz
en
0.922388
1,049
3.046875
3
The automotive industry is on the cusp of a driverless revolution, with actual driverless vehicles being tested on the road. Now the aviation industry is debating whether pilotless planes make sense. While the notion of fully automated commercial planes no doubt has been previously kicked around, last month's Germanwings crash -- caused by a co-pilot struggling with mental health who steered a plane carrying 150 passengers straight into a French mountainside -- has caused aviation experts to seriously rethink ways to increase commercial flight security. The New York Times reports that "government agencies are experimenting with replacing the co-pilot, perhaps even both pilots on cargo planes, with robots or remote operators." (Related article: Flying cars: What could go wrong?) Commercial flights today are almost flown exclusively on auto-pilot. The Times notes that in a recent survey of commercial pilots, "those operating Boeing 777s reported that they spent just seven minutes manually piloting their planes during the typical flight. Pilots operating Airbus planes spent half that time." But they're still on the plane, and still able to take control. The idea of boarding a plane that's going to fly hundreds or thousands of miles without a human pilot even on the craft is going to be a tough sell for much of the population. Even within the aviation industry, there's great skepticism. Here's Mary Cummings, director of the Humans and Autonomy Laboratory at Duke University, being quoted by The Times: “You need humans where you have humans. If you have a bunch of humans on an aircraft, you’re going to need a Captain Kirk on the plane. I don’t ever see commercial transportation going over to drones.” We agree with Dr. Cummings. This story, "Would you fly in a plane with no human pilots?" was originally published by Fritterati.
<urn:uuid:475e0ec5-7eb8-4587-9757-247caa3a2c24>
CC-MAIN-2017-09
http://www.itnews.com/article/2906766/would-you-fly-in-a-plane-with-no-human-pilots.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174163.72/warc/CC-MAIN-20170219104614-00512-ip-10-171-10-108.ec2.internal.warc.gz
en
0.955462
381
2.640625
3
SNMP for Everybody
Managing a small network isn't all that difficult. When something doesn't work, it's usually obvious where the fault lies. But even for a small network, having some kind of management and monitoring in place is helpful. As your network grows, it becomes even more important to have some tireless automated watchdogs keeping an eye on things. The computer world is cram-full of monitoring, alerting, and management tools of all kinds, from nice free Open Source applications like MRTG, Cacti, Nagios, OpenNMS, and Mon, to expensive commercial suites that do everything but cook breakfast like ZenWorks, HP OpenView, and Tivoli. These perform a range of different duties, such as network discovery and mapping, providing real-time status indicators, reporting outages, tracking processes, disk usage, restarting downed servers and shutting down devices in trouble. The one thing these all have in common is they are SNMP-aware.
SNMP (Simple Network Management Protocol) has been around since the '80s. The idea was to create a standard protocol for managing IP (Internet Protocol) devices, rather than having a hodge-podge of different applications and suites that use differing, incompatible client agents. When you start reading about SNMP, you'll encounter all kinds of new terminology and abbreviations. There are three main pieces in an SNMP-managed network: network management systems (NMS), sometimes called managers; agents; and your managed devices. Agents are software modules in managed devices; think of these as the go-betweens that handle communications between devices and managers. Because SNMP provides a common standard, theoretically all devices with SNMP agents can be managed by any SNMP-aware management applications. Messages originate from both ends: managers can query agents, and agents can volunteer information to managers. Some devices have agents built in, like managed switches, routers, printers, power supplies and network access points. You can also install SNMP agents on servers or workstations to monitor just about anything that is monitor-able: CPU temperature, services, database performance, disk space, network card performance — you name it.
There are three versions of SNMP: SNMPv1, SNMPv2, and (guess what!) SNMPv3. SNMPv1 is the most widespread, and probably will be for some time to come. The main objection to v1 is the lack of security; all messages are sent in cleartext. v2 was developed to add security, but it seems that development got a bit out of hand and we ended up with four versions:
- SNMPv2p, the original "party-based" version
- SNMPv2c, the community-based version
- SNMPv2u, the user-based version
- SNMPv2 "star", or SNMPv2*
SNMPv3 is supposed to restore order and sanity, and it is a nice implementation that is easier to use and has real security, so over time it should replace v1 and v2. There is a common set of SNMP commands across all versions: read, write, and trap. You've probably heard of "SNMP traps". Your manager uses the read and write commands: it polls for device information, which is stored by the agents as variables, and writes commands to devices, which means altering the variables. Managed devices emit traps asynchronously, which means when they have something urgent to say they don't wait for the manager to ask them what's up. For example, a router will report that it has lost Internet connectivity, or a server that it is overheating and melting down. Your manager will capture the trap and ideally do something sensible in response.
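To see what a read (GetRequest) looks like programmatically rather than as a packet-level description, here is a minimal sketch using the third-party pysnmp library's high-level API. The 'public' community string, the localhost target and the sysDescr OID are just common defaults, and the exact import path can differ between pysnmp releases, so treat this as an illustration rather than a drop-in script.

# Minimal SNMPv1 GetRequest using pysnmp's high-level API (assumes the
# third-party pysnmp package is installed and an agent is listening locally).
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData('public', mpModel=0),        # mpModel=0 selects SNMPv1
        UdpTransportTarget(('localhost', 161)),    # standard SNMP agent port
        ContextData(),
        ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0)),
    )
)

if error_indication:                               # e.g., timeout because no agent answered
    print(error_indication)
elif error_status:                                 # the agent returned an SNMP error
    print(error_status.prettyPrint())
else:
    for var_bind in var_binds:                     # print "OID = value" pairs
        print(' = '.join(x.prettyPrint() for x in var_bind))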
The ingenuity of SNMP is that it doesn't require the managed devices to do anything other than report state, which places a trivial burden on them, and uses the manager to do all the heavy lifting, like evaluating the information it collects and deciding what to do with it. The NMS doesn't issue commands, but re-writes variables. This can be a bit weird to wrap your mind around, but the result is a very flexible, low-overhead system that is easy to implement across all kinds of devices by different vendors, and on different platforms.
SMI and MIB
No, not Stylish Mullets Irresistible and Men In Black, but Structure of Management Information and Management Information Base. This is fancy talk for all those device variables and how they are stored. Agents each keep a list of objects that they are tracking; then your manager collects and uses this information in hopefully useful ways. SMI is the syntax or framework used to define objects; MIB is the definitions for specific objects. Every object gets a unique Object Identifier (OID). These are managed in the same way as MAC addresses, with a central registry and unique allocations to hardware vendors.
All versions of SNMP use these five messages: GetRequest, GetNextRequest, SetRequest, GetResponse and Trap, which I believe explain themselves. SNMPv2 uses different message formats and protocol operations than v1, which pretty much renders it non-interoperable with v1. However, there are workarounds. Some managers support both v1 and v2, or your v2 agents can act as v1 proxies. This means they translate messages between the manager and agent so that v1 devices can understand them.
RMON, or Remote Monitoring, is part of SNMP. It is an MIB module that defines a set of MIB objects for use by network monitoring probes. The SNMP framework is made up of dozens of MIB modules. Some are freely available, some are deep dark proprietary secrets, and of course you can always write your own. (See Resources for the online MIB validator, and to download MIBs.)
SNMP In Action
In future articles we'll dig into how to use SNMP with Linux-based network management applications. Until then, you can play around with SNMP to see what it looks like. On Debian, install the snmp and snmpd packages. On Fedora, net-snmp-utils and net-snmp. The installers should start up the snmp daemon automatically. Then run this command:
# snmpwalk -v 1 -c public localhost system
This should spit out a bunch of output that looks something like this:
SNMPv2-MIB::sysORLastChange.0 = Timeticks: (32) 0:00:00.32
SNMPv2-MIB::sysORID.1 = OID: IF-MIB::ifMIB
SNMPv2-MIB::sysORID.2 = OID: SNMPv2-MIB::snmpMIB
SNMPv2-MIB::sysORID.3 = OID: TCP-MIB::tcpMIB
Of course the man pages will tell you how to do more fun things, and the excellent O'Reilly book "Essential SNMP, 2nd Edition" by Douglas Mauro and Kevin Schmidt is a great practical guide to understanding and using SNMP. Want to read the SNMP RFCs? Really? Well, allrighty then. Start here at this handy partial table of the relevant RFCs:
<urn:uuid:dffc95ad-5e7b-4bee-bde5-213fd069e028>
CC-MAIN-2017-09
http://www.enterprisenetworkingplanet.com/print/netsysm/article.php/3660916/SNMP-for-Everybody.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169769.33/warc/CC-MAIN-20170219104609-00105-ip-10-171-10-108.ec2.internal.warc.gz
en
0.921698
1,494
2.578125
3
The fastest computer in the world today can deliver about 125 petaflops of performance, but that could quadruple in the coming years. Cray's XC50, announced Monday, can deliver a petaflop of performance in a single box, and up to 500 petaflops for an entire supercomputer. A supercomputer is typically multiple servers -- also called nodes -- strung together, combining to provide multiple petaflops of horsepower. So the XC50 needs to be specifically configured to hit 500-petaflop performance, an effort that could take a few years. There are multiple technologies that boost the performance of the XC50. It is compatible with Nvidia's Tesla P100 GPU and Intel's Xeon and Xeon Phi processors; the P100 and Xeon Phi are accelerators that speed up scientific computing tasks. A supercomputer in Switzerland called Piz Daint, which uses the older Cray XC30 design, has been upgraded to XC50. It is the world's eighth-fastest supercomputer, according to a new list of the world's fastest systems released by Top500 on Monday. In-system upgrades to Piz Daint are ongoing, and the supercomputer will be merged with another, called Piz Dora. Once the supercomputers are combined and the new components are in place, Piz Daint will be one of the fastest supercomputers in the world, Cray claims. However, its performance numbers weren't provided by Cray. Cray's XC50 is one step ahead in a race to release supercomputers that can deliver an exaflop (a million trillion calculations per second) of performance. The world's fastest supercomputer is China's TaihuLight, which can deliver about 125 petaflops of performance, according to Top500. Piz Daint today has Intel's older Xeon E5-v3 processors, but will support Intel Xeon E5-v5 processors based on Skylake, coming out in the middle of next year. The new chips will bring tremendous performance increases. Processors alone can't speed up a computer's performance; speedy throughput, storage and networking are also important. The XC50 has an upgraded Aries interconnect, which is used in some of the world's fastest computers. The Aries interconnect topology enables multipoint communication among computing nodes in a supercomputer. The XC50 will also support SSD storage, which is now replacing hard drives. The supercomputer has smaller individual chassis, so cooling costs and power consumption for the supercomputer will be lower. Liquid cooling won't be required for XC50 supercomputers.
<urn:uuid:de21c0d5-4c35-4265-a88c-8d1d075f0562>
CC-MAIN-2017-09
http://www.itnews.com/article/3141447/high-performance-computing/cray-aims-for-the-500-petaflop-mark-with-xc50-supercomputer.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170823.55/warc/CC-MAIN-20170219104610-00457-ip-10-171-10-108.ec2.internal.warc.gz
en
0.934713
546
2.703125
3
Software-defined storage (SDS) abstracts storage services from the physical storage hardware and allows them to run on a dedicated appliance or as a virtual machine. It essentially replaces the features that were already there -- like LUN masking, snapshots, and replication -- and lets those features be applied commonly across a variety of storage hardware. But the layers of the datacenter -- the compute, the network, and the storage -- were all still very present.
Converged storage architectures
Converged storage begins the attempt to collapse these layers by providing dedicated hardware nodes that can perform compute, networking, and storage all on a single layer. The storage component of this single layer is driven by software and leverages the internal storage of each node to create a shared pool of storage that all the virtual machines in the cluster can access. This allows functions like virtual machine migration to continue to operate as they would in a traditional shared environment. [But does it have flash? Read Flash Storage: What's Your Best Option?] The goal of converged storage architectures is to provide a turnkey and consolidated infrastructure that combines the three layers. Vendors essentially provide appliances to offer that turnkey experience. These appliances allow for rapid deployment but at the potential loss of flexibility. That is not necessarily bad; it just depends on the priorities of the organization. If speed of deployment is the most critical aspect of a project, then converged architectures can be very appealing.
Hyper-converged storage architectures
Hyper-converged architectures take the converged concept to the next level in that they are provided as software and they can run on any vendor's server hardware. This will appeal to organizations that have a long-lasting relationship with a particular server vendor or to an organization that is looking to drive as much potential cost as possible out of their storage infrastructure.
How converged is converged?
While converged architectures claim to collapse the three layers of the datacenter, they don't make them vanish. There is still plenty of networking and storage in these environments. You have to interconnect the nodes and you have to put storage into those nodes. The components that make up that storage and networking are another thing that differentiates the converged storage architecture from the hyper-converged. In a converged architecture the components that make up these layers are selected for you. In hyper-converged you get to -- or have to -- choose. While choice can be a good thing, you have to make sure you have the time to go through a selection and qualification process. All storage and networking hardware is not created equal. There is also a group of storage offerings that claim to be converged. But in my opinion they are not. In these systems there are still three discrete layers: a compute layer where the hypervisor and VMs reside, a networking layer that carries storage and messaging traffic, and a storage layer that houses the media and manages data services. The key is that the compute layer is not leveraged to support any of these other functions, while in both converged and hyper-converged it is. There is nothing wrong with having three separate layers; we've done that for years and many datacenters manage these infrastructures quite well. But these are not converged architectures. At best they are pre-integrated solutions.
As I stated above, even converged and hyper-converged architectures still have networks and storage. In my next column we will talk about how to select the right components for those.
<urn:uuid:7f415751-449d-4c16-986b-42514cf2f2af>
CC-MAIN-2017-09
http://www.networkcomputing.com/storage/what-hyper-converged-storage/727878319?piddl_msgorder=thrd
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171053.19/warc/CC-MAIN-20170219104611-00633-ip-10-171-10-108.ec2.internal.warc.gz
en
0.952287
768
2.6875
3
Many different security technologies are built upon the premise of recognizing potentially malicious activity and stopping it before it can do harm. How they recognize these activities, though, is what separates detection from false alarms and security from catastrophe. The most common form of recognition is signature or "fingerprint" matching. Whether we are talking about anti-virus software or intrusion detection, the idea of matching packets to a database of known malicious attacks is the backbone of many security products and services. Properly managed, signature-based intrusion detection continues to be an excellent approach to detecting threats so you can respond to them quickly, and to gaining insight into the real security challenges impacting your infrastructure.
Of course signature-based matching has its drawbacks. First of all, capturing traffic, analyzing it, matching it to a signature and then trying to block or stop it can introduce latency into the equation. Perhaps even more important is that in order to match malicious activity, you first have to have a signature of it to match against. This means that you have to have already seen this type of attack or virus before and classified it as malicious. In an age where new attacks are unleashed by the hundreds every day, signature-based detection alone has become increasingly less effective. Zero-day attacks by definition are new attacks for which signatures don't exist.
Luckily, the security industry has developed other means of recognizing malicious activity besides signature or fingerprint matching. One promising technology is anomaly detection, which has made terrific strides over the past few years as a way to recognize that something unusual is taking place in your environment. If something doesn't fit the normal patterns, it is an anomaly and it needs to be investigated. That sounds easy, but in fact it is not. A tremendous level of expertise, experience and intelligence goes into building a system that will recognize an anomaly as such. There are patterns to identify a normal baseline and detect deviations in complex network traffic – a statistician's dream. And often what you don't see is as significant as what you do.
Some security experts think that today signature-based detection by itself is virtually useless. You just can't identify and update your signatures often enough. This is a simplistic view; signature-based detection is an efficient way to identify malicious activity, especially with the addition of sophisticated multi-factor correlation to screen out false positives. However, all technologies have inherent limitations, and anomaly detection has the promise of adding another method of detection that doesn't depend on signature updates. Early on, anomaly detection was subject to lots of false positives and, even worse, false negatives. However, over time, refinement has made anomaly detection more and more efficient at spotting malicious activities. Today's advanced anomaly detection is a very effective tool for identifying malware and malicious activity. However, anomaly detection isn't a silver bullet or a "set and forget" solution! It also needs to be managed and updated to continue understanding what normal is, as "normal" changes over time. At Alert Logic we don't think there's any single technology to meet all security needs, or that jumping to the latest interesting new approach is the right answer.
We've taken a multilayered approach: signature-based intrusion detection with sophisticated correlation based on a global view of threat data with Threat Manager, positive security for web applications that learns to identify proper user behavior with the Web Security Manager WAF, vulnerability scanning services, log management to identify suspicious behavior throughout your network, and now, managed anomaly detection in our ActiveWatch Premier service. And we are always looking at new technologies to add to the mix.
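As a toy illustration of the baseline-and-deviation idea (this is not Alert Logic's implementation; the metric, window size and threshold are invented for the example), a detector can learn the typical level of something like requests per minute and flag values that fall too many standard deviations away from that baseline:

# Toy anomaly detector: learn a baseline, flag large deviations from it.
# The metric, window size and threshold below are illustrative assumptions.
import statistics

WINDOW = 60        # keep roughly the last 60 "normal" samples
THRESHOLD = 3.0    # flag anything more than 3 standard deviations from the mean
baseline = []

def observe(value):
    """Return True if `value` looks anomalous relative to the learned baseline."""
    anomalous = False
    if len(baseline) >= 10:                             # wait for some history first
        mean = statistics.mean(baseline)
        spread = statistics.pstdev(baseline) or 1e-9    # avoid division by zero
        anomalous = abs(value - mean) / spread > THRESHOLD
    if not anomalous:                                   # only learn from normal-looking traffic
        baseline.append(value)
        if len(baseline) > WINDOW:
            baseline.pop(0)
    return anomalous

# Steady traffic around 100 requests/minute, then a sudden spike.
for sample in [98, 102, 100, 97, 103, 99, 101, 100, 98, 102, 500]:
    if observe(sample):
        print(f"Anomaly: {sample} requests/minute is far outside the baseline")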
<urn:uuid:daf966fc-94b2-462e-be57-0d6f4c4cb1e8>
CC-MAIN-2017-09
https://www.alertlogic.com/blog/anomaly-detection-emerges-as-a-new-approach-to-threat-management/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174215.11/warc/CC-MAIN-20170219104614-00209-ip-10-171-10-108.ec2.internal.warc.gz
en
0.949689
732
2.640625
3
You know a technology is going mainstream when Microsoft, Cisco, IBM, VMware and Red Hat all make big announcements about embracing it. Such is the case with containers. But as containers have grown in popularity among developers during the past year, I still get asked by people, "What are containers?" There still seems to be some education needed about exactly what containers are and how they're used. So, Cloud Chronicles and Network World give you Frequently Asked Questions (FAQ): Containers.
What are containers?
Containers can be thought of as a type of virtualization for the operating system. Typically virtualization refers to hardware, using a software hypervisor to slice up a server into multiple virtual machines. Container technology virtualizes the operating system, abstracting applications from their underlying OS. Blogger Greg Ferro has a good summary on his blog: "Containers virtualize at the operating system level, Hypervisors virtualize at the hardware level. Hypervisors abstract the operating system from hardware, containers abstract the application from the operation system. Hypervisors consumes storage space for each instance. Containers use a single storage space plus smaller deltas for each layer and thus are much more efficient. Containers can boot and be application-ready in less than 500ms and creates new designs opportunities for rapid scaling. Hypervisors boot according to the OS typically 20 seconds, depending on storage speed."
What's the advantage of containers?
Containers have a couple of appealing qualities, most notably speed and portability. Containers are often described as "lightweight" because they don't have to boot up an operating system like a virtual machine does - so, containers can be spun up very quickly. The other common advantage associated with containers is their portability; containers can run on top of a virtual machine, on physical or bare metal servers, in a public cloud or on-premises - it doesn't matter.
Are containers new?
No, not at all. The current hype is around Linux Containers, which have been around for more than 10 years. Before Linux containers, Unix had container technology. Even earlier systems from Oracle Solaris had the concept of Zones, which are basically an equivalent of containers.
Why all the hype now about containers?
As more new social, mobile and web-scale applications are being built, containers are seen as an emerging tool for developers to use in these types of applications because of the advantages outlined above. Concurrently, much of the hype about containers has been galvanized by the rise of a company named Docker, which is attempting to commercialize an open source project of the same name that automates the deployment of an application as a container. Basically, as interest in containers grows, companies like Docker and others are making containers easier to use.
What does Docker do?
Docker is an open source tool for packaging applications inside containers; Docker is basically used to make containers. Docker also has what's called the Docker Hub, which is a registry of containers that have been developed to be used with specific programs, such as MongoDB, Redis, Node.js and others.
Are containers a replacement for virtual machines?
This one depends on who you ask. Some believe that containers offer a better way to run certain applications compared to just running them on a virtual machine.
Generally, the theory is that in an environment with multiple operating systems (Windows and Linux, for example), virtual machines are helpful. In a homogeneous OS environment (all Linux), containers could be more helpful. It also depends on the application. In some circumstances a developer may want a dedicated virtual machine, or perhaps even a whole physical server, for running an application. In other situations, a VM can be a good platform for running containers, and in yet other scenarios containers could be best run on bare metal servers.
What is Kubernetes?
Kubernetes is an open source project created by Google that specializes in cluster management. Part of its functionality includes being able to manage Docker, which creates containers. So, think of Docker as an engine for creating containers, and Kubernetes as a tool for managing the scheduling of containers or groups of containers.
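To make the "Docker creates containers" point concrete, the short sketch below uses Docker's Python SDK (the docker package -- an assumption on my part, since the article doesn't prescribe any particular client) to do what the "docker run" command does: pull an image, start a container from it, and run a command inside. It needs a local Docker daemon to be running.

# Run a one-off command in a new container via Docker's Python SDK.
# Assumes the `docker` package is installed and a local Docker daemon is running.
import docker

client = docker.from_env()                       # connect to the local Docker daemon

# Equivalent to: docker run --rm alpine echo "hello from a container"
output = client.containers.run(
    "alpine:latest",
    ["echo", "hello from a container"],
    remove=True,                                 # delete the container when it exits
)
print(output.decode().strip())

# Containers share the host kernel, so starting and listing them is fast.
for container in client.containers.list(all=True):
    print(container.short_id, container.image.tags, container.status)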
<urn:uuid:90fd58b7-063d-4250-870b-a8e614666740>
CC-MAIN-2017-09
http://www.networkworld.com/article/2601434/cloud-computing/faq-containers.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170864.16/warc/CC-MAIN-20170219104610-00153-ip-10-171-10-108.ec2.internal.warc.gz
en
0.944474
863
2.75
3
As the apparel industry is aware, the length of the supply chain (from natural fiber, polymer resin or other material, all the way to finished clothing) is quite long. Because of the numerous processing steps involved in garment production, often conducted by different suppliers, the major environmental impacts of production usually occur before the Tier 1 (cut and sew) suppliers of brands and retailers. Those in the industry working on environmental sustainability, such as the Sustainable Apparel Coalition (SAC), the Outdoor Industry Association (OIA) and The Sustainability Consortium (TSC) have concluded that additional focus needs to be placed on lower-tier suppliers to best improve the environmental performance of the textiles and apparel industry. One important tool employed by members of these organizations is the assessment of supplier environmental performance through indices. Requesting environmental information from suppliers initiates the engagement process, raises awareness of important issues and signals that customers are concerned about the environmental impact of suppliers. The use of indices allows facilities to identify the areas for greatest improvement, and the scoring system provides a benchmark of sustainability performance to track progress. Much of the initial supplier improvement work centers on individual facilities completing self-assessments such as the Facility Module of the Higg Index. The Higg Index is a larger sustainability assessment tool organized by the SAC. In addition to supplier facilities, it also evaluates brands and apparel products (footwear to come in Version 2). Currently, the results from the Facility Module assessments are used internally by suppliers and with direct customers, but are not yet intended for external communication. The Facility Module was closely based on the criteria of the Global Social Compliance Program (GSCP), a program previously developed by leading retailers to improve environmental and social responsibility within their shared global supply chains. It was designed to assess and drive improvement in suppliers to many different industries, and the scope of the program covered 11 different environmental areas of focus. The Facility Module tailored the questions and criteria of the GSCP to be more specific to the apparel industry, and it focuses on seven environmental areas, which are a subset of the eleven included in GSCP. The Facility Module environmental areas address: environmental management systems, energy use and greenhouse gas emissions, water use, wastewater, emissions to air, waste management, and pollution prevention/hazardous substances. Some of the general questions textile suppliers will need to answer as part of the Facility Module include: Do you measure your usage (or emissions) associated with each of the environmental areas? Do you regularly set and review improvement targets in these areas? Can you substantiate improvements in these areas? By going through this process, suppliers and their customers can get a snapshot of where they stand on environmental performance. However, because these questions are being answered by the suppliers themselves, the SAC and others are looking into having the results verified in some way. Verification serves not only to encourage honest responses and identify false ones, but to ensure the accuracy of information and clarify instances where improper scores are simply a result of a misunderstanding by the supplier about the criteria. 
As an organization that has conducted audits based on the GSCP program for textile and other facilities, SGS can confirm that major opportunities for improvement indeed exist within most facilities. When suppliers take the next step and initiate plans of improvement in areas where gaps were identified in the self assessment, then both environmental improvement and operational cost savings can be significant. Indeed, to achieve the highest scores in the Facility Module suppliers need to go beyond minimum regulatory compliance, and actually plan for and work on reducing their impact. This calls for more specific steps and may require a detailed onsite assessment of what improvements should be made and how. There are a number of approaches textile facilities can take to work on these issues. One way is to work internally, using index results to guide their efforts. The SAC has at least one supplier member that has communicated openly about its internal efforts and success, reporting hundreds of thousands of dollars of annual savings in electricity and water consumption. Alternatively, companies may choose to seek external support, such as an energy, water and waste audit to identify specific problems, for example water/steam leaks, sub-optimal equipment settings, or improper storage of waste. A third option is one Nike has taken, and that is to work closely with an organization such as bluesign that specializes in textile chemistry and production processes. This approach focuses more on the selection/sourcing of the best chemicals, materials and processes. These efforts in combination with efficiency improvements also greatly reduce environmental impact and cost. Regardless of the approach, it is in the economic interest of the suppliers, as well as the brands and retailers, to implement these improvements, as cost savings may be shared. Sharing of savings can be incentivized by programs where the brand or retailer provide some or all of the funds for the auditing and/or training that will guide the suppliers in being more cost effective. Additionally, the environmental benefits from these efforts will reach even further, being felt directly in the countries where these facilities operate, and indirectly by the consumers around the world who are demanding clothing produced in a more sustainable fashion. Michael Richardson, P.E., LCA & sustainable design sr. project manager, SGS.
<urn:uuid:fe91b17e-e08a-4ece-851c-de796717a780>
CC-MAIN-2017-09
http://apparel.edgl.com/news/sustainable-textiles-begin-with-a-sustainable-supply-chain89270
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171758.35/warc/CC-MAIN-20170219104611-00029-ip-10-171-10-108.ec2.internal.warc.gz
en
0.953902
1,060
2.625
3
The Problem With Common Two-Factor Authentication Solutions
More websites and online businesses today are beginning to rely on smartphones as a second factor of authentication. Some online banks have been using SMS-based authentication for transaction verification, but recently, major websites and businesses not in regulated industries are recognizing the need for stronger online authentication. Earlier this year Google made two-factor authentication available to all users, and in the past few days Facebook also rolled out two-factor authentication. It's great news that more websites are strengthening online authentication. When one considers how much sensitive, personal information people share on the Web, relying on a single layer of password protection simply is not enough. However, sending a one-time password or authentication code by SMS text message is also not very secure, because the code is often sent in clear text. Mobile phones are easily lost and stolen, and if another person has possession of the user's phone, they could read the text message and fraudulently authenticate. SMS text messages can also be intercepted and forwarded to another phone number, allowing a cybercriminal to receive the authentication code. With more businesses relying on mobile phones for out-of-band authentication, cybercriminals will increasingly target this channel for attack -- meaning that businesses should use a more secure approach than a simple SMS text message. However, the challenge for consumer-facing websites is to balance strong security with usability. Complicated security schemes will not achieve widespread adoption among Internet users. A more secure and easier-to-use approach is to display a type of image-based authentication challenge on the user's smartphone to create a one-time password (OTP). Here's one example of how it can be done: During the user's first-time registration or enrollment with the website they choose a few categories of things they can easily remember - such as cars, food and flowers. When out-of-band authentication is needed, the business can trigger an application on the user's smartphone to display a randomly-generated grid of pictures. The user authenticates by tapping the pictures that fit their secret, pre-chosen categories. The specific pictures that appear on the grid are different each time, but the user will always look for their same categories. In this way, the authentication challenge forms a unique, image-based "password" that is different every time - a true OTP. Yet the user only needs to remember their three categories (in this case cars, food and flowers). Delivering a type of knowledge-based authentication challenge to the user's smartphone rather than an SMS message with the code displayed in clear text is more secure because the interaction takes place entirely out-of-band using the mobile channel. Because the mobile application communicates directly with the business' server to verify that the user authenticated correctly, it is much more secure than having the user receive a code on their phone but then type it into the web page to authenticate. Additionally, even if another person has possession of the user's phone, they would not be able to correctly authenticate because they do not know the user's secret categories. This secure two-factor, two-channel authentication process will help mitigate more sophisticated malicious attacks such as man-in-the-browser (MITB) and man-in-the-middle (MITM). Perhaps as important as security is ease of use.
Most Internet users won't adopt security processes that are too cumbersome, and most online businesses don't want to burden their users. Image-based authentication is much easier on users because they only need to remember a few categories of their favorite things and tap the appropriate images on the phone's screen, which is much easier than typing long passwords on a tiny phone keyboard or correctly copying an alphanumeric code from one's text message inbox on the phone to the web page on the PC. In fact, a survey conducted by Javelin Strategy and Research group confirmed that 6 out of 10 consumers prefer easy-to-use authentication methods such as image identification/recognition. More websites and online businesses should follow the example set by Google and Facebook by deploying two-factor authentication for users. However, as criminals increasingly target mobile authentication methods and intercept SMS text messages, it will be critical for businesses to use a type of knowledge-based authentication challenge rather than sending an authentication code as a plain SMS text message.
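To sketch how the category-based challenge described above could be verified on the server side (an illustration only -- the vendors' actual grid sizes, image catalogs and session handling are not public, and every name below is invented), the server builds a random grid, notes which cells belong to the user's secret categories, and checks the taps it receives:

# Simplified server-side sketch of category-based image authentication.
# The image catalog, grid size and category names are illustrative only.
import secrets

IMAGE_CATALOG = {
    "cars":    ["sedan", "coupe", "pickup"],
    "food":    ["pizza", "sushi", "taco"],
    "flowers": ["rose", "tulip", "daisy"],
    "tools":   ["hammer", "wrench", "saw"],
    "sports":  ["soccer", "tennis", "golf"],
}

def build_challenge(grid_size=9):
    """Return a random grid of (category, image) cells for one login attempt."""
    cells = [(cat, img) for cat, imgs in IMAGE_CATALOG.items() for img in imgs]
    return [cells[secrets.randbelow(len(cells))] for _ in range(grid_size)]

def verify(challenge, tapped_indexes, secret_categories):
    """Pass only if the user tapped every cell from their secret categories and nothing else."""
    expected = {i for i, (cat, _) in enumerate(challenge) if cat in secret_categories}
    return set(tapped_indexes) == expected

# Example round: the user enrolled with the categories cars, food and flowers.
user_secret = {"cars", "food", "flowers"}
challenge = build_challenge()
taps = [i for i, (cat, _) in enumerate(challenge) if cat in user_secret]  # a correct response
print(verify(challenge, taps, user_secret))   # True for the correct taps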
<urn:uuid:35129b31-245a-4d89-8dee-3cefe134fd62>
CC-MAIN-2017-09
http://www.infosecisland.com/blogview/13734-The-Problem-with-Two-Factor-Authentication-Solutions.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171758.35/warc/CC-MAIN-20170219104611-00029-ip-10-171-10-108.ec2.internal.warc.gz
en
0.928719
887
2.59375
3
Sustainability consultants develop and implement energy-efficient strategies to help reduce a company’s overall carbon footprint. But designing environmentally sustainable solutions isn’t their only purpose. Their implemented policies can improve a company’s performance while drastically reducing costs on utilities and additional resources. With the increase of our dependency on technology, the data center industry – the backbone of the Digital Age – has seen substantial growth. As a result, data centers have become one of the fastest-growing consumers of electricity in the United States, creating a need for intelligent, sustainable architecture within the industry. The Natural Resources Defense Council (NRDC) projects that by 2020, the energy consumed by data centers will cost American businesses $13 billion in electric bills and will emit 150 million metric tons of carbon pollution on an annual basis. These numbers seem staggering, especially when you take into account that the data centers run by large cloud providers aren’t the culprit of this massive energy consumption. In fact, they account for only about 5% of the energy consumed by the industry. The other 95% of energy usage comes from corporate and multi-tenant data centers that lack a sustainable design. Another alarming stat presented by the NRDC is that the US data center industry is consuming enough electricity to power all the households in New York City for two years. This massive amount of energy and pollution output is equal to that of 34 coal-fired power plants. And by 2020, the output is projected to be equivalent to 50 power plants. These statistics highlight that in the data center industry alone, sustainability consultants are a necessity in helping reduce the industry’s energy consumption by designing green data centers. In the video below, Randy Ortiz, VP of Data Center Design and Engineering at Internap, and Dan Prows, Sustainability Consultant at Morrison Hershfield, discuss how Internap’s design team works with sustainability consultants to construct a highly energy-efficient and sustainable data center. As the expected need for data center growth continues, it is economically and fiscally responsible for data center providers to design their facilities to be as energy efficient and sustainable as possible. And in working with sustainability consultants, data center engineers can ensure that their facility is designed to drive performance while reducing operational costs and environmental impact. Learn more about Internap’s energy-efficient data centers.
<urn:uuid:05fafd1c-4ded-4d8f-9276-d47447b90ee9>
CC-MAIN-2017-09
http://www.internap.com/2015/11/10/sustainability-consultant/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172017.60/warc/CC-MAIN-20170219104612-00205-ip-10-171-10-108.ec2.internal.warc.gz
en
0.918729
479
3.046875
3
“Digital citizenship is a concept that helps teachers, technology leaders and parents to understand what students, children and technology users should know to use technology appropriately. Digital citizenship is more than just a teaching tool; it is a way to prepare students and technology users for a society full of technology. Digital citizenship is the norm of appropriate, responsible technology use.” – Mike Ribble, Digital Citizenship Institute It’s clear teaching students the basics of digital citizenship is a priority in schools today. But in order to teach students how to act as digital citizens, a school needs to create a digital environment conducive to safe and responsible technology use. That safe and productive digital environment is at the heart of Impero Software’s philosophy. Here’s an infographic (above) and explanation (below) of how our real-time monitoring features help schools foster a comprehensive approach to teaching good digital citizenship: The first step to building a safe environment for teaching good digital citizenship is doing the homework on the appropriate (and inappropriate) content for students. Of course, school IT managers block obviously unsafe websites. But how do you keep students safe without knowing every single term and phrase that could be potentially harmful? How do IT managers allow students to utilize resources such as social media with the peace of mind that they won’t access something restricted? Enter Impero. We’ve done all the research to create a revolutionary library of keyword policies that detect thousands of terms and phrases related to bullying, self-harm, weapons and violence, suicide, adult content, sexting, eating disorders and more. By working alongside schools, experts and leading nonprofit organizations, we continuously update the libraries with new phrases and provide relevant, up-to-date resources for real-time monitoring. To create a healthy environment for teaching digital citizenship, you need tools and resources. Teachers and administrators need ways to detect when students have made poor choices so they can help students navigate online in a more effective way. Through monitoring, capturing and logging online activity, Impero software works with schools to keep everyone on track. Monitor: Keyword detection with real-time monitoring – Our keyword libraries have algorithms running in the background on student computers and portable devices. These algorithms detect key terms and phrases on HTML, email, social media and applications. When it detects a phrase, it alerts an educator. The alert contains a definition and severity level, which allows the teacher to escalate next steps as determined by the district. Capture: Photo and video capture provides staff with context – To help educators understand specific situations and content, Education Pro provides photo and video capture features. Depending on the severity level of a keyword or phrase, a screenshot or short video of a student’s screen is automatically captured and sent to an educator. This provides adults with proof and context of an issue, which can provide the opportunity to teach better choices. Schools can also set their own policies surrounding photo and video capture, if needed. Log: Log the captures for detailed incident reporting – All of the captured keyword detections, screenshots and videos are logged within Education Pro. This allows the educator or administrator to have a detailed report of any serious issues or patterns of behavior. 
For example, a large number of logged bullying conversations can indicate that a teacher or administrator should take action (schedule a workshop or assembly). Real-time monitoring is not about policing kids. Rather, it’s about providing opportunities for mentorship, teaching and learning. Keyword detection, photo and video capture and logged incident reports provide educators and administrators with tools to mentor good digital citizenship. Discussing issues, offering counter narratives and intervening before things escalate are all ways of mentoring students. This provides the roadmap to better digital behavior. To echo Ribble’s notion, digital citizenship is the norm of appropriate, responsible technology use. But to learn responsible Internet usage, students have to navigate through a murky body of digital water. Safeguarding students is the responsibility of everyone involved in their education journeys. By providing all of the tools previously mentioned – researched keywords, monitoring, screen capture, event logs, mentoring – educators, administrators and counselors can adequately safeguard students while teaching them to think for themselves online. This allows students to be responsible, safe and good digital citizens – both in school and out in the world. With technology surrounding everything in the lives of young people, this is the ultimate goal for all. To learn more about how Impero Education Pro can help provide your school with a safe environment for producing good digital citizens, get in touch today.
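As a rough sketch of the detect-alert-log flow described above (this is not Impero's engine; the phrase list, severity scale and alert format are invented for illustration), a monitor can scan captured text against a keyword library and hand an alert record to an educator for review:

# Illustrative keyword-detection sketch; the phrases and severities are made up.
KEYWORD_LIBRARY = {
    "kill myself":        ("self-harm", 5),
    "buy a gun":          ("weapons", 4),
    "everyone hates you": ("bullying", 3),
}

def scan(captured_text, source="web"):
    """Return alert records for any library phrases found in the captured text."""
    alerts = []
    lowered = captured_text.lower()
    for phrase, (category, severity) in KEYWORD_LIBRARY.items():
        if phrase in lowered:
            alerts.append({
                "phrase": phrase,
                "category": category,
                "severity": severity,            # a high severity could trigger screen capture
                "source": source,
                "excerpt": captured_text[:80],   # context for the reviewing educator
            })
    return alerts

# Example: a captured chat message that matches the bullying policy.
for alert in scan("chat log: everyone hates you, just leave", source="chat"):
    print(alert)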
<urn:uuid:3b130986-77cd-461b-9661-a9bf2ed3dc98>
CC-MAIN-2017-09
https://www.imperosoftware.com/creating-good-digital-citizenship-in-and-out-of-school-with-real-time-monitoring/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172017.60/warc/CC-MAIN-20170219104612-00205-ip-10-171-10-108.ec2.internal.warc.gz
en
0.928382
930
3.609375
4
Schools and libraries are hurting students by setting up heavy-handed Web filtering software that blocks access to potentially educational sites. Instead, educators should trust teachers and librarians to oversee schools' Internet access, says Craig Cunningham, a professor at National-Louis University. Web filtering software should be configured so that, when a student stumbles across a site that's blocked, the teacher or librarian can make a judgment whether the content is appropriate for study, and if it is, the teacher or librarian can let the site through. (Disclaimer.) "If a student tries to show something that's part of a presentation and it's blocked, the teacher types a password and everyone sees it," Cunningham said. "Why should teachers not be in charge of what to teach?" For examples of how heavy-handed Web filtering software harms education, see my post earlier today, "How Internet censorship harms schools." Also, read the post I wrote that started the discussion: "Internet filtering as a form of soft censorship." Ultimately, the purpose of schools should be to teach students to live in a democratic society, and that means teaching critical thinking and showing students controversial Web sites, Cunningham said. That includes sites that Web filters might classify as hate speech, or sites discussing same-sex marriage -- both for and against. Students need to access this information under the guidance of teachers and librarians, in the process of learning how to think about these issues. The alternative is using schools as a means of indoctrinating students with social norms as defined by parents and the local community. "Should schools prepare students to think like their parents do?" Cunningham said. "My response is no. The purpose of K-12 education should be to open up kids' minds to other ways of thinking. One of the hallmarks of democracy is opening people up to other points of view and not denouncing them as evil. The idea that two people can have two different conclusions from observing the same phenomenon, without any of them being more right than the other, that's a difficult thing for a lot of people to handle." I asked Cunningham: If parents don't decide how their own children should be educated, then who does? He responded, "That education that the best and wisest parents want for their children should be the education available to all." But who decides what the best and wisest parents want for their children? That's something society needs to work out through discussion and debate, Cunningham said. He said he favors a national educational curriculum as practiced by other countries, including England, France, and Korea. Cunningham cited the example of a small town in Kansas that's all white, and all Christian, and all Republican, where students need to be exposed to alternate viewpoints. "In that small town, there's going to be one kid who's different. And who does that kid turn to if not teachers? If they can't turn to the teachers, they turn to the Internet," he said. He added, "Everybody should be thinking about the balance between exploration and safety, but they shouldn't always fall down on the side of safety. Because there is a trade-off." I work as Internet Marketing Director for Palisade Systems, which provides a network security appliance and service that includes Web filtering. Mitch Wagner on Internet censorship in schools
<urn:uuid:2e6fd519-43a0-43ff-b86e-a4f2585ae502>
CC-MAIN-2017-09
http://www.computerworld.com/article/2468124/endpoint-security/a-simple-fix-for-internet-censorship-in-schools.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170875.20/warc/CC-MAIN-20170219104610-00501-ip-10-171-10-108.ec2.internal.warc.gz
en
0.964654
681
2.75
3
WolframAlpha has been generating a low-frequency buzz lately. A lot of people might have heard it’s coming but still have no idea what it is. A lot of people hear that it’s a new search engine, but it’s more than that. Still others have not heard about it at all. What is WolframAlpha and why is it worth getting excited about? According to Wolfram’s blog, WolframAlpha is a computational knowledge engine. What does that mean? First, WolframAlpha is built off of Mathematica, so it can compute the normal results that you’d get back into meaningful data or the comparisons that you are requesting. You can read more about how WolframAlpha takes advantage of the power of Mathematica with this WolframAlpha blog article. Secondly, WolframAlpha appears to have very high-level language or natural language abilities. That means you can talk to it just like you’d talk to another human. Instead of having to format your query, use special commands, or even code, you can just state your inquiry and WolframAlpha will parse it down to get your answer. The biggest difference (and welcome change) for WolframAlpha is that it provides answers versus pages. With WA, you’ll get facts and calculations versus pages with related content that might answer your questions. “You said something about mad scientists. What about us, I mean, them?” WolframAlpha pulls a lot of information from a lot of places and puts it at your fingertips. Research on the Internet is easy and authoritative again. The resource is indicated at the bottom of every result. It’s not just for mad scientists, but for everybody that does any sort of research. Many search queries in Google and Yahoo! will count as research perfectly suited (if not more suited) for WolframAlpha. Librarians, journalists, scientists, mathematicians, students, and many more groups of people should add WolframAlpha to their repertoire of tools at their disposal. This is a search engine uncommercialized and tailor
See it in Action
Watch a very interesting screencast with Stephen Wolfram, to see all of the types of information WolframAlpha can pull up and make useful. This thing does it all! From worldwide facts to chemistry to genetics to even a crossword solver! Of course, we’ll have to see how it handles traditional “problematic” math numbers. WolframAlpha will be made available publicly in a matter of minutes and will undergo infrastructure testing throughout the weekend. On Monday, if everything goes well, it is expected to officially launch. There are rumors about a paid, professional version going around, but I haven’t found anything to base these rumors on. This is an unbiased article. I do not work for Wolfram Research, Inc. (although I wouldn’t mind doing so, they are based in the same town I live in… [email protected], just in case). Check out WolframAlpha as it launches tonight and see the live webcast of the launch itself. Update: Check it out before the launch.
<urn:uuid:d7a61c46-bed0-40e0-a2fd-d2c82fc0e400>
CC-MAIN-2017-09
https://www.404techsupport.com/2009/05/wolframalpha-finally-a-search-engine-for-the-mad-scientist/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170875.20/warc/CC-MAIN-20170219104610-00501-ip-10-171-10-108.ec2.internal.warc.gz
en
0.929874
667
2.765625
3
As with any innovation or trend, as the Internet of Things (IoT) matures, we will see some ingenious applications for IoT devices and data. But we also will see some ignoble ones as well. For example, IoT projects on Kickstarter include a smart trash can that can alert you when to take out the garbage, a smart jump rope that counts your jumps, and a smart desk that learns your habits and can even order food, make appointments, and set reminders. Perhaps slightly more useful is the smart wallet, which has lights and alarms that go off when it’s stolen, as well as a GPS tracker to tell you when and where you lost it. This is a handy solution for a vexing inconvenience, but hardly an attempt to take on the greatest challenges facing mankind. In other words, not every IoT-based innovation sets out to change the world. But this doesn’t mean that these efforts are without value. Although these innovations might seem insignificant or even silly, they share a common feature with IoT-based innovations whose value to mankind is immediately evident. Could IoT Devices Cure Cancer? Unlike the examples cited above, the contribution of some IoT-based innovations to the greater good is clear. Take IoT fitness trackers, for example. One of the first examples of mass-produced wearable technology, they are, for many people, their first foray into the world of smart, connected devices (beyond their smart phones, of course). Wearable fitness trackers have changed the types and amount of data we can collect about people’s health. It used to be that if your doctor wanted you to wear a heart-rate monitor, the device was bulky and expensive. Now, it’s an affordable plastic bracelet. But the possibilities go way beyond counting your steps or calculating your resting heart rate. As Apple debuted its ResearchKit app, researchers already were imagining innovative ways to use it. And researchers already are using it to help patients with asthma, Parkinson’s disease, diabetes, breast cancer, and cardiovascular disease. This is a whole new kind of data for the field of medical research, not based on patients’ self-reporting or on data that are gathered in a controlled lab setting, but real-world data. The usefulness of that kind of information can’t be ignored. OK, so maybe the data itself won’t cure cancer or any of these other diseases, but it will advance the research and delivery of care faster and more reliably than any other innovation in recent memory. Tracking Other Data Of course, it’s unrealistic to think that the IoT is going to be focused only on curing cancer or improving how people with chronic diseases manage their conditions. Other new sources of data are appearing every day with new IoT devices, with their own applications to improve the human condition. Test projects abound, including some proving that sensors can help grow healthier vegetables and protecting bee colonies with automatic heaters. They have even got Internet-connected cows (seriously) to help farmers catch disease among herds sooner and produce higher quality milk. Smart electrical grids and smart homes have the potential to revolutionize the way we consume and distribute power. Internet-connected appliances will help manufacturers with research and development of new products and will help retailers predict consumer demand. 
But what these “smart” innovations, with their obvious and immediate applications to the biggest challenges facing mankind, have in common with seemingly silly and frivolous applications of IoT technology is that they all are generating whole new kinds of data. And with the variety of IoT devices and all the many and varied new forms of data they produce, anything is possible. Even apparently inconsequential IoT devices, for example, like smart frying pans, yoga mats, or the aforementioned smart desk could turn out to produce invaluable data about cooking or exercise habits. And the applications for this data could reach well beyond furniture that can order your dinner. Bernard Marr is a bestselling author, keynote speaker, strategic performance consultant, and analytics, KPI, and big data guru. In addition, he is a member of the Data Informed Board of Advisers. He helps companies to better manage, measure, report, and analyze performance. His leading-edge work with major companies, organizations, and governments across the globe makes him an acclaimed and award-winning keynote speaker, researcher, consultant, and teacher. Subscribe to Data Informed for the latest information and news on big data and analytics for the enterprise.
<urn:uuid:192b6900-f8a6-4f5b-aa8a-819091d565fe>
CC-MAIN-2017-09
http://data-informed.com/silly-iot-projects-and-growing-heterogeneous-data/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171775.73/warc/CC-MAIN-20170219104611-00377-ip-10-171-10-108.ec2.internal.warc.gz
en
0.943925
922
2.8125
3
Using a hierarchical structure, treemaps provide meaningfully organized displays of high-volume information. Alert naturalists scan the forests and the trees, taking in the overview and noticing seasonal changes while being on the lookout for fires. They also are watching for insect invasions that could damage trees, and they consider sites for controlled burns to reinvigorate the forest. Alert naturalists know what to look for and are quick to spot surprises. In a similar manner, alert managers do more than recognize expected patterns in sales cycles, movements in supply chains, or yields in manufacturing. Successful managers are skillful at spotting exceptional events, identifying emerging fashions, and noticing what is missing. When problems develop, they take prompt action to keep plans on schedule. When novel trends emerge, they change plans to take advantage of opportunities. Making confident, correct, and bold decisions is a challenge, especially when the volume and pace of activity are high. Experience and intuition remain important, but generating reasonable alternatives based on accurate information is necessary. The 30-year emergence of computerized databases and data warehouses from internal and external sources provides the business intelligence that is needed to spot trends, notice outliers, and identify gaps. Early relational database systems with SQL queries were a great step forward, and then business intelligence tools provided still easier access for some kinds of data exploration; but these point-and-click interfaces still produced tabular results or static graphics. Software that produces visual displays of search results with interactive exploration has only recently become widely available. One family of new tools are the organization-wide and manager-specific information dashboards that help ensure daily or even minute-by-minute situation awareness by presenting current status and alerts. These dashboards employ spatial presentations, color-coded meters, and iconic markers to provide at-a-glance information that indicates all is well or that action is needed. A second family of new tools is the more powerful information visualization and visual analytic software that supports ambitious exploration of mission-critical data resources. Well-designed visual presentations make behavioral patterns, temporal trends, correlations, clusters, gaps, and outliers visible in seconds. Since scanning tabular data is time-consuming and difficult, effective visual presentations are becoming highly valued. Training and experience in using these new tools are important to derive the maximum benefit. Organizations are learning how a few statistical or data analysis professionals can develop displays that hundreds of managers can use effectively. This strategy is supported by commercial software developers who provide powerful studio toolkits for designers to make simplified displays that serve the needs of specific managers. The good news is that appropriate user interface designs can integrate data mining with information visualization so users can make influential discoveries and bold decisions. Treemaps are a space-filling approach to showing hierarchies in which the rectangular screen space is divided into regions, and then each region is divided again for each level in the hierarchy. The original motivation for treemaps was to visualize the contents of hard disks with tens of thousands of files in 5-15 levels of directories. 
Many treemap implementations have been produced, but you might want to start with the free version called SequoiaView (Figure 1), which lets you browse your hard drive. In Figure 1, the area indicates file size and color shows file type. An early popular application on SmartMoney Magazine's Web site shows 600 stocks organized by industry and by sub-industry in a 3-level hierarchy (Figure 2). The area encodes market capitalization and color shows rising or falling prices. Users become familiar with industry groups and specific stocks so when one group (such as energy stocks) is down, they notice immediately. Treemaps for stocks are especially interesting on days when an industry group is largely falling (shown as red), but one company is rising (green). Figure 2 shows that on a particular day, there is a mostly green communications sector with one bright red problem and an interesting bright green stock in utilities.
Treemaps for Sales Monitoring
Let's take a look at a simple example of sales force management that is available for your interactive exploration. The basic display shows 200 sales representatives in six sales regions, with size indicating total sales for the fourth quarter (Figure 3). Green regions indicate above quota and red below quota. This example reveals a typical mixed picture with some high- and some low-performing sales representatives. The main good news is from the Northeast and the Mountain West regions where many green regions indicate above quota performance. There is some cautionary news about the Southwest; but even there, one of the salespeople has delivered well above quota. A simple movement of the cursor over any region or group heading generates a pop-up box with detailed information.
To get an understanding of the best sales representatives, users can use the filters on the right side control panel. Moving the Total Sales -- Q4 slider to show only high sales figures and moving % of Quota Met -- Q4 slider to limit the display to those above 100%, we see the top ten sales representatives in bright green (Figure 4). There are strong performers who are doing well above quota in all six regions.
Turning to the problems, users can use the filters to remove all but those doing much below quota (Figure 5). These sixteen are only in the Mountain West and Southwest, so maybe a discussion with those region managers might help to understand what could be done to improve sales for the next quarter.
These are simple cases meant to demonstrate possible analyses. Larger cases with hundreds of products take time to learn but provide managers with unusual powers to analyze their data by region, salesperson, product, and time period. Pharmaceutical companies are doing just that to understand which products are gaining or losing, while insurance companies are analyzing claims to detect patterns of fraud in tens of thousands of claims.
Treemaps for Product Catalogs
Another consumer-friendly application of treemaps is the Hive Group's presentation of the daily status of the iTunes 100 most popular songs, grouped by genre (rock, pop, hip-hop, etc.), shown in Figure 7. The highest ranked songs are larger, and color-coding shows whether a song has moved up or down in the past day. A final consumer example which has proven successful is the Peet's Coffee Selector, shown in Figure 8. It's a small treemap, but a survey of their customers revealed strong preferences for the treemap versus the tabular presentation of products.
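To make the space-filling idea behind these examples concrete, the sketch below computes a simple order-preserving (slice-and-dice) treemap layout: each node's rectangle is divided among its children in proportion to their sizes, alternating between vertical and horizontal cuts at each level. The node format and the sample sales data are illustrative assumptions, not taken from any of the products above.

```python
# Minimal slice-and-dice treemap layout sketch (illustrative only).

def slice_and_dice(node, x, y, w, h, depth=0, out=None):
    """Assign rectangle (x, y, w, h) to `node` and recurse into its children.

    `node` is a dict with "name", numeric "size", and optional "children".
    Children split the parent's rectangle in proportion to their sizes,
    cutting vertically on even depths and horizontally on odd depths.
    """
    if out is None:
        out = []
    out.append((node["name"], x, y, w, h))
    children = node.get("children", [])
    total = sum(c["size"] for c in children)
    offset = 0.0
    for child in children:
        frac = child["size"] / total if total else 0.0
        if depth % 2 == 0:   # vertical cuts: children placed left to right
            slice_and_dice(child, x + offset * w, y, w * frac, h, depth + 1, out)
        else:                # horizontal cuts: children stacked top to bottom
            slice_and_dice(child, x, y + offset * h, w, h * frac, depth + 1, out)
        offset += frac
    return out

# Hypothetical 2-level hierarchy: regions containing salespeople, sized by sales.
tree = {"name": "All", "size": 0, "children": [
    {"name": "Northeast", "size": 0, "children": [
        {"name": "Rep A", "size": 90}, {"name": "Rep B", "size": 60}]},
    {"name": "Southwest", "size": 0, "children": [
        {"name": "Rep C", "size": 30}, {"name": "Rep D", "size": 20}]},
]}
# Parent sizes are derived from their children so areas stay proportional.
for region in tree["children"]:
    region["size"] = sum(c["size"] for c in region["children"])
tree["size"] = sum(r["size"] for r in tree["children"])

for name, x, y, w, h in slice_and_dice(tree, 0, 0, 100, 100):
    print(f"{name:10s} x={x:6.1f} y={y:6.1f} w={w:6.1f} h={h:6.1f}")
```

A real product would add color encoding (for example, percent of quota met) and the squarified layout discussed next, but the proportional-area recursion is the core of every variant.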
Sliders to filter data items allow users to limit the display to just those items that interest them, maybe the high-performing salespeople or the ones who are not meeting quotas in regions where most salespeople are above quota. Another way of zooming in on sections is to use the entire display to show just some branches of the hierarchy.
The treemap algorithm used in many commercial applications is based on the squarified strategy that makes each box as square as possible, usually placing the large squares in the upper left and the small squares in the lower right. This is visually appealing and helpful in understanding the range of size differences. Sometimes it is important to keep the items in order by name or date, in which case the order-preserving treemap algorithms such as slice-and-dice or strip treemaps are helpful.
Supportive evidence comes from a recent controlled experiment comparing spreadsheets to the Hive Group software. This study by Oracle found that treemaps were significantly faster for all eight tasks tested. The author concluded: "These results suggest that treemaps should be included as a standard graphical component in enterprise-level data analysis and monitoring applications."
Improvements are inevitable as users apply treemaps for ever wider sets of problems. The good news is that new ideas and applications for treemaps are emerging weekly. One that I like especially was the cleverly designed newsmap that shows news stories from around the world in a way that makes prominent stories more visible. I wonder what business or consumer application will be the next one to cause excitement on the Web -- maybe it will be yours.
About Ben Shneiderman
Ben Shneiderman is a Professor in the Department of Computer Science, Founding Director (1983-2000) of the Human-Computer Interaction Laboratory, and Member of the Institute for Advanced Computer Studies and the Institute for Systems Research, all at the University of Maryland at College Park. He has taught previously at the State University of New York and at Indiana University. He was made a Fellow of the ACM in 1997, elected a Fellow of the American Association for the Advancement of Science in 2001, and received the ACM CHI (Computer Human Interaction) Lifetime Achievement Award in 2001. He was the Co-Chair of the ACM Policy 98 Conference, May 1998, and is the Founding Chair of the ACM Conference on Universal Usability, November 16-17, 2000. Dr. Shneiderman is the author of Software Psychology: Human Factors in Computer and Information Systems (1980).
Shneiderman, B., "Using Treemap Visualizations for Decision Support", DSSResources.COM, 06/23/2006.
Ben Shneiderman, Stephen Few and Jean M. Schauer provided permission to publish and archive this article at DSSResources.COM on April 11, 2006. A version of the article was originally published on The Business Intelligence Network on April 11, 2006 at www.BeyeNETWORK.com. This article was posted at DSSResources.COM on June 22, 2006.
<urn:uuid:439691de-215f-407e-a185-ab074e55204e>
CC-MAIN-2017-09
http://dssresources.com/papers/features/shneiderman/shneiderman06232006.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171775.73/warc/CC-MAIN-20170219104611-00377-ip-10-171-10-108.ec2.internal.warc.gz
en
0.926375
1,940
2.578125
3
Jobs Never Forgave Google's Eric Schmidt for Backing Android
In reality, the iPhone, as nice as it is, is derivative of the products that preceded it in the market. While Apple did a beautiful job on the user interface, and made a device that's attractive enough to garner a gazillion followers and an ecosystem that was just closed enough to control while being open enough to gain a great deal of external support, the iPhone still depended on the work of others. This is true of Apple's products in general. As nice as the original Macintosh may have been, it depended on Xerox for the original design for the interface. As nice as the Apple II may have been, it too was based on predecessors. But this isn't to suggest that the Macintosh or the Apple II were bad computers or that they shouldn't have been developed using the concepts of others. There really is no alternative. Despite Apple's claims of uniqueness, the company couldn't have been completely unique if it expected to actually sell computers. Apple didn't invent computing after all. The company simply developed software using a different approach from what was emerging elsewhere at the time. Of course, Apple insisted on using a closed platform. The company refused, except for a brief time, to allow clones of its product. And when clones did appear, Apple put them out of business.
<urn:uuid:c8a4369e-7ab2-49aa-b01c-0a7b79706776>
CC-MAIN-2017-09
http://www.eweek.com/c/a/Mobile-and-Wireless/Why-Steve-Jobs-Was-Wrong-About-Android-Being-a-Stolen-Product-221142/1
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171775.73/warc/CC-MAIN-20170219104611-00377-ip-10-171-10-108.ec2.internal.warc.gz
en
0.978456
276
2.5625
3
Servers occupy a place in computing similar to that occupied by minicomputers in the past, which they have largely replaced. The typical server is a computer system that operates continuously on a network and waits for requests for services from other computers on the network. Many servers are dedicated to this role, but some may also be used simultaneously for other purposes, particularly when the demands placed upon them as servers are modest.
White Paper Published By: D-Link Published Date: Jun 17, 2011. Server virtualization is becoming a no-brainer for any business that runs more than one application on servers. Nowadays, a low-end server is 64-bit capable and comes with at least 8GB of memory. Without virtualization, most such servers cruise along at 5 percent of CPU capacity with gigabytes of free memory and some I/O bandwidth to spare. Virtualization helps you better utilize these resources.
If your organization's servers run applications that are critical to your business, chances are that you'd benefit from an application delivery solution. Today's Web applications can be delivered to users anywhere in the world and the devices used to access Web applications have become quite diverse.
At a projected market of over $4B by 2010 (Goldman Sachs), virtualization has firmly established itself as one of the most important trends in Information Technology. Virtualization is expected to have a broad influence on the way IT manages infrastructure. Major areas of impact include capital expenditure and ongoing costs, application deployment, green computing, and storage.
The idea of load balancing is well defined in the IT world: A network device accepts traffic on behalf of a group of servers, and distributes that traffic according to load balancing algorithms and the availability of the services that the servers provide. From network administrators to server administrators to application developers, this is a generally well understood concept.
Application Delivery Controllers understand applications and optimize server performance - offloading compute-intensive tasks that prevent servers from quickly delivering applications. Learn how ADCs have taken over where load balancers left off.
High availability solutions are no longer an all or nothing discussion about expensive, proprietary solutions. Today there is a wide range of affordable alternatives that provide the required level of availability at a cost justified by the risks of downtime.
Many businesses struggle to guarantee application and data availability for Windows applications. They may protect the application from one type of outage (like a disk failure) while ignoring other risks. Or they can end up deploying multiple point solutions to handle different aspects of availability, increasing overall cost and complexity.
There are many expensive, complex technologies that promise high availability for SQL. Fortunately there are also simple, automated ways to get the highest levels of protection. The following five secrets to affordable SQL availability will help you to implement a SQL environment with little or no downtime, zero data loss, and no added IT complexity.
White Paper Published By: Chatsworth Published Date: Oct 22, 2016. By using intelligent and scalable platforms, your organization can improve resource consumption, cloud utilization and more. Solid data center management platforms help empower your business and data center to consume less energy and trim infrastructure costs.
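As a concrete illustration of the load-balancing idea described above, a front-end device choosing a back-end server for each incoming request, here is a minimal sketch. The server names, the round-robin and least-connections policies, and the connection bookkeeping are illustrative assumptions, not the behavior of any specific product listed here; a real Application Delivery Controller layers health checks, session persistence, SSL offload, and content-aware routing on top of this.

```python
# Minimal load-balancer selection sketch (server names and policies are
# made-up examples, not taken from any vendor's product).
import itertools

class LoadBalancer:
    def __init__(self, servers):
        self.servers = list(servers)                 # back-end pool
        self.active = {s: 0 for s in self.servers}   # open connections per server
        self._rr = itertools.cycle(self.servers)

    def pick_round_robin(self):
        """Hand out servers in a fixed rotation."""
        return next(self._rr)

    def pick_least_connections(self):
        """Prefer the server currently handling the fewest connections."""
        return min(self.servers, key=lambda s: self.active[s])

    def open_connection(self, policy="least_connections"):
        server = (self.pick_least_connections()
                  if policy == "least_connections" else self.pick_round_robin())
        self.active[server] += 1
        return server

    def close_connection(self, server):
        self.active[server] -= 1

lb = LoadBalancer(["web-1", "web-2", "web-3"])
for _ in range(7):
    print("request ->", lb.open_connection())
```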
Case Study Published By: Zerto Published Date: May 31, 2016 Download this case study today to learn more about how ARA was able to complete a datacenter migration in a compressed window, and how they continue to use ZVR to deliver aggressive service levels across their infrastructure with a product that is very easy to use. What digital trends are forcing CDN services to evolve? CDN services are changing, and web and application delivery professionals are navigating the evolving CDN landscape. Read the Forrester Research Report, "CDNs Extend to Web Performance Optimization and End-To-End Cloud Services" to learn what new capabilities are required from the Next Generation CDN. This paper proposes standard terminology for categorizing the types of prefabricated modular data centers, defines and compares their key attributes, and provides a framework for choosing the best approach(es) based on business requirements. White Paper Published By: IO Published Date: Dec 31, 2015 The case for a re-envisioned data center is being made every day, and at an increasingly urgent pace. Growing technology demands, transforming global economics, corporate efficiency initiatives, and required business agility are among the drivers making change not merely a strategy, but a prerequisite for survival. This planning guide looks at the importance of environmental concerns in the age of heightened corporate responsibility, and identifies the considerations for moving to a clean energy model for data centers.
<urn:uuid:ddbaa0aa-dea2-477d-bbf1-0b79155ff484>
CC-MAIN-2017-09
http://research.crn.com/technology/networking/servers
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170624.14/warc/CC-MAIN-20170219104610-00021-ip-10-171-10-108.ec2.internal.warc.gz
en
0.91912
930
3
3
First there's a little stutter. Next a program hangs, and a funny noise creeps from your machine. Then that familiar blue screen slaps you in the face. Your computer just crashed, and all you can do is sit in the awkward silence of a restart, and hope it wasn't fatal. There are many possible causes for these hellish episodes, and it's important to be educated on the whys and hows of PC crashes to prevent them in the future. After all, the next crash could be your PC's last. Following is a rundown of seven common causes and solutions.
Many blue screens are a result of hardware and installation conflicts. All of your system's components consume IRQs (interrupt request channels) when installed, and every device requires its own channel to function properly. When two devices share the same channel and are used simultaneously, a crash can occur. Thumb through your Device Manager, and look for any devices marked with a yellow exclamation point. These are the ones with issues, and can usually be fixed with a driver update. Just search your device manufacturer's website for the latest driver software, or, in a pinch, reinstall the offending hardware itself.
Bad memory is to blame for many blue screens and failed boots. Fortunately, however, your RAM modules are some of the easiest components to check and replace. First, use the software utility Memtest86+ to confirm that your RAM is the problem. If errors arise, you next need to determine exactly which memory stick is to blame. To do this, remove all the sticks from your system, save one inserted in the primary memory slot. If the system boots fine, and no errors are detected in Memtest86+, continue testing in the same fashion, one stick at a time in the primary slot, until the system fails to boot, or Memtest86+ indicates problems. Eventually, you'll nail down exactly which memory module is causing trouble, and then you can replace it with a fresh, clean stick (just make sure it's fully compatible with your motherboard and other sticks of RAM).
Heat is thy enemy
Computers get hot. We know this from the loud fans bolted inside our desktops, and the alarming burning sensation we feel on our legs after using a laptop for too long. Everything inside a PC generates heat, and heat can cause components to become unstable and crash your PC. Indeed, computers are designed to crash as a last-ditch effort to protect their own internal components from permanent heat damage. If you suspect your PC isn't effectively dispersing enough heat, first check to make sure all your fans are spinning properly. If one isn't moving, or appears to be spinning abnormally slowly, check its connections to make sure it's properly powered. If all appears fine, but the fan still isn't doing its job, it's best to replace it. Next make sure that all of your PC's vents, grates and filters are unhindered by dust, pet hair and other gross materials that prevent proper airflow. These areas are hotbeds (pun intended) for heat buildup. If you find any problem areas (see the disgusting example below), use a can of compressed air to clear the airways.
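For readers who would rather catch the heat problems described above before they cause a crash, the sketch below polls temperature and fan sensors in a loop. It assumes the third-party psutil package; the sensor calls are exposed mainly on Linux builds of psutil and may return nothing (or not exist) elsewhere, and the 80 °C alert threshold is an arbitrary illustrative choice, not a vendor specification.

```python
# Rough heat-monitoring sketch (assumes `pip install psutil`; treat as
# illustrative -- sensor support varies by platform and hardware).
import time
import psutil

ALERT_C = 80.0  # arbitrary example threshold, not a manufacturer limit

def check_once():
    temps = getattr(psutil, "sensors_temperatures", lambda: {})() or {}
    fans = getattr(psutil, "sensors_fans", lambda: {})() or {}
    for chip, readings in temps.items():
        for r in readings:
            current = r.current or 0.0
            flag = "  <-- HOT" if current >= ALERT_C else ""
            print(f"{(r.label or chip):20s} {current:6.1f} C{flag}")
    for chip, readings in fans.items():
        for r in readings:
            print(f"{(r.label or chip):20s} {r.current or 0:6d} RPM")

if __name__ == "__main__":
    while True:
        check_once()
        time.sleep(5)
```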
<urn:uuid:34c3045d-9997-4f61-ac1c-67636bb45360>
CC-MAIN-2017-09
http://www.itworld.com/article/2715178/hardware/anatomy-of-a-pc-crash--7-scenarios--and-how-to-avoid-them.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170925.44/warc/CC-MAIN-20170219104610-00545-ip-10-171-10-108.ec2.internal.warc.gz
en
0.940262
657
2.5625
3
The UC Berkeley Seismological Laboratory issued an alert about the recent earthquake in California's Napa Valley 10 seconds before it struck. That may not seem like much time -- unless you're a child of the 1950s and 1960s who was trained in school to duck and cover the second you saw a large bright nuclear flash. Earthquake early warning systems can deliver alerts of impending seismic activity a few seconds to as long as four minutes before the tremors begin. The systems don't predict earthquakes, but a quake's energy waves move slowly enough to create an opportunity for a warning. The length of warning depends on the distance from the earthquake's center. Even if it sends an alert just a few seconds before an event, an earthquake warning system can help save lives and prevent property damage. But the U.S. has yet to fund an earthquake early warning system. That's not the case in Japan; that nation has a warning system that issued alerts that triggered the shutdown of the transit system when a 9.0-magnitude earthquake struck offshore in the Pacific Ocean on March 11, 2011. No trains derailed. The cost of building and operating an alert system for the West Coast of the United States has been estimated at approximately $120 million for the first five years. But investing in a fully built alert system that's integrated into schools, offices and other types of buildings could give rise to a new industry, said William Leith, a senior science adviser at the U.S. Geological Survey. Leith offered testimony on the subject of early warning systems to a subcommittee of the U.S. House of Representatives last June. "Consultants will advise users on how to use alerts to take protective actions," he said. "Mass notification companies will customize alerts for their clients. Automated control producers will make and install equipment to take actions and sound alarms at user facilities. Entrepreneurs will undoubtedly develop creative new applications specific to various industry sectors." Joshua Bashioum, the founder of Early Warning Labs in Santa Monica, Calif., is in the vanguard of this industry. His year-old, privately funded company is building hardware systems that can interface with building operational systems and IT networks. What Early Warning Labs intends to do is take earthquake alert data, calculate the intensity at client locations and project the risk of damage at those locations. It will then push out machine-to-machine commands. The possible action scenarios are almost endless. An automated command could turn on a data center's emergency generators and begin disaster recovery procedures. It could alert surgeons in an operating room of an impending quake, get schoolchildren to take cover, automatically open garage doors at fire houses to prevent jamming, turn off gas pipelines, shut down high-tech manufacturing assembly lines and brace equipment, stop elevators and set off audible alarms. That's not to mention the potential for triggering messages on TV and radio and even activating citywide sirens. Bashioum said that until the government makes earthquake early warning systems fully functional, private-sector companies can't use the data to issue alerts. All they do now are test installations. "Our efforts right now are focused on identifying what needs to be done," said Bashioum, who is among those speaking next week at the International Conference on Earthquake Early Warning at the University of California, Berkeley. Saving lives is the main goal, he said. 
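The size of the warning window described above falls out of simple wave physics: the damaging S-waves travel more slowly than the P-waves that seismometers detect first, and the alert itself travels over networks essentially instantaneously. The sketch below estimates the window; the wave speeds, sensor distance, and processing delay are rough assumed values for illustration, not figures from ShakeAlert, Early Warning Labs, or any deployed system.

```python
# Back-of-the-envelope earthquake warning-time estimate.
# All constants are assumptions: typical crustal wave speeds and a guessed
# detection/processing delay for the alerting network.

P_WAVE_KM_S = 6.0    # fast, weak wave that triggers detection
S_WAVE_KM_S = 3.5    # slower wave that carries most of the damaging shaking
PROCESSING_S = 4.0   # assumed time to detect, locate, and push the alert

def warning_seconds(user_km, sensor_km=10.0):
    """Seconds of warning for a user `user_km` from the epicenter, assuming
    the nearest seismometer sits `sensor_km` from the epicenter."""
    detect_time = sensor_km / P_WAVE_KM_S + PROCESSING_S
    shaking_time = user_km / S_WAVE_KM_S
    return max(0.0, shaking_time - detect_time)

for d in (10, 50, 100, 200):
    print(f"{d:4d} km from the epicenter: ~{warning_seconds(d):5.1f} s of warning")
```

The zero at short distances reflects the "blind zone" close to the epicenter, which is why even a 10-second alert, as in the Napa quake, counts as a success.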
Large companies may be interested as well. David Jonker, the senior director of Big Data Initiatives at SAP, said that in an earthquake early warning, where every second counts, the challenge is to issue a warning as fast as possible. There won't be time to read disks, and he believes in-memory systems will be the preferred approach for processing warning data. SAP is already in the warning business; it provides technology for NY-Alert, New York's all-hazard alert and notification system. Jonker said SMS, as well as user responses, is too slow. "You are very much talking about a machine-to-machine play," he said. What is clear is that building an early warning system will take time, and that includes the time needed to train the public in how to respond to warnings. "We're going to be focused on getting the science right and the warning generated correctly, and then we're going to depend on our public sector and corporate partners to figure out how we are going to push it out," said Bill Steele, director of outreach and information services at the Pacific Northwest Seismic Network. This story, "In Earthquakes, Alerts May Turn Machines Into Action Heroes," was originally published by Computerworld.
<urn:uuid:d4e0752e-3fa9-4e51-90e6-07a87e9e9e94>
CC-MAIN-2017-09
http://www.cio.com/article/2600612/disaster-recovery/in-earthquakes-alerts-may-turn-machines-into-action-heroes.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170404.1/warc/CC-MAIN-20170219104610-00541-ip-10-171-10-108.ec2.internal.warc.gz
en
0.956189
949
3.265625
3
WASHINGTON, DC--(Marketwired - November 11, 2016) - In honor of Veterans Day, 18 new World War II lesson plans are being released on the award-winning ABMCeducation.org. This free resource for teachers was produced out of a partnership between National History Day®, the American Battle Monuments Commission (ABMC), and the Roy Rosenzweig Center for History and New Media. Designed to reinvigorate the study of World War II in American classrooms, the site features lesson plans on a variety of subjects from art to science and everything in between.
The new lesson plans were created by 18 extraordinary teachers who participated in the 2016 Understanding Sacrifice program. Each teacher chose one local American service member who made the ultimate sacrifice and is buried or memorialized at an ABMC cemetery in Southern Europe or North Africa. Teachers spent a year uncovering the life story of their fallen hero. Concurrently, teachers developed in-depth lesson plans utilizing their research that focused on one element of World War II. Because immersive experiences create richer teaching materials, the group then journeyed to southern Europe to walk in the footsteps of history to see first-hand the places that influenced the outcome of the war. Using this experience, the teachers designed lesson plans specific to their teaching discipline.
These lesson plans are a free resource designed to help American students better understand the sacrifices that soldiers made during World War II. Designed for middle and high school classrooms, the lesson plans are multi-disciplinary and can be applied in history, art, math, science and English classrooms. Using primary and secondary sources, videos, and hands-on activities, students are transported from the modern-day home front to the war front of the past. From determining supply priorities for the troops to role-playing the challenges faced by a paratrooper to gaining an understanding of the roles of women and minorities, students will walk away with a vivid understanding of the vast scope of World War II history and the men and women who risked everything.
"This partnership with the American Battle Monuments Commission and the Roy Rosenzweig Center for History and New Media at George Mason University has allowed us to take 18 extraordinary teachers to battlefields and memorials of northern Europe," said National History Day Executive Director Dr. Cathy Gorn. "Their unique experiences can now help teachers around the world bring history to life with the materials added to ABMCeducation.org."
Each lesson plan is based on solid scholarship, integrated with Common Core Standards, and makes use of interpretive materials provided by the ABMC. They are accompanied by research and eulogies about fallen heroes of World War II who are honored at ABMC cemeteries in southern Europe and north Africa.
Established by Congress in 1923, the American Battle Monuments Commission commemorates the service, achievements, and sacrifice of U.S. armed forces. ABMC administers 25 overseas military cemeteries, and 27 memorials, monuments, and markers.
About National History Day®: National History Day® is a non-profit education organization based out of College Park, MD. Established in 1973, National History Day® seeks to promote the learning and teaching of history through a variety of curricular and extra-curricular programs that engage over half a million secondary students around the world each year. More information is at nhd.org.
About the Roy Rosenzweig Center for History and New Media: The Roy Rosenzweig Center for History and New Media at George Mason University uses digital media and computer technology to democratize history -- incorporating multiple voices, reaching diverse audiences, and encouraging popular participation in presenting and preserving the past. For more information, visit http://rrchnm.org. Image Available: http://www.marketwire.com/library/MwGo/2016/11/11/11G121721/Images/DSCN0931-f054ba4a5a45b389b509616f82b1c5d6.jpg Embedded Video Available: https://www.youtube.com/watch?v=KJ4A1RAPpu4
<urn:uuid:a5af7036-3a01-43bd-a56b-e6c9ff1c4180>
CC-MAIN-2017-09
http://www.marketwired.com/press-release/award-winning-website-premieres-new-free-teacher-resources-2174943.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.92/warc/CC-MAIN-20170219104611-00593-ip-10-171-10-108.ec2.internal.warc.gz
en
0.923087
848
3.140625
3
Bridging is a method of path selection that contrasts with routing. In a bridged network, no correspondence is required between addresses and paths. Put another way, addresses don't imply anything about where hosts are physically attached to the network. Any address can appear at any location. In contrast, routing requires more thoughtful address assignment, corresponding to physical placement.
Bridging relies heavily on broadcasting. Since a packet may contain no information other than the destination address, and that implies nothing about the path that should be used, the only option may be to send the packet everywhere! This is one of bridging's most severe limitations, since this is a very inefficient method of data delivery, and can trigger broadcast storms. In networks with low speed links, this can introduce crippling overhead.
IP, designed as a wide-area networking protocol, is rarely bridged because of the large networks it typically interconnects. The broadcast overhead of bridging would be prohibitive on such networks. However, the link layer protocols IP functions over, particularly Ethernet and Token Ring, are often bridged. Due to the pseudo-random fashion in which Ethernet and Token Ring addresses are assigned, bridging is usually the only option for switching among multiple networks at this level.
Bridging is most commonly used to separate high-traffic areas on a LAN. It is not very useful for dispersed traffic patterns. Expect it to work best on networks with multiple servers, each with a distinct clientele that seldom communicate with any servers but their “home”.
Two types of bridging exist, corresponding to the distinction outlined earlier. Transparent bridging is used in Ethernet environments and relies on switching nodes. Token Ring networks use source-route bridging (SRB), in which end systems actively participate by finding paths to destinations, then including this path in data packets.
Transparent bridging, the type used in Ethernet and documented in IEEE 802.1D, is based on the concept of a spanning tree. This is a tree of Ethernet links and bridges, spanning the entire bridged network. The tree originates at a root bridge, which is determined by election, based either on Ethernet addresses or engineer-defined preference. The tree expands outward from there. Any bridge interfaces that would cause loops to form are shut down. If several interfaces could be deactivated, the one farthest from the root is chosen. This process continues until the entire network has been traversed, and every bridge interface is either assigned a role in the tree, or deactivated.
Since the topology is now loop-free, we can broadcast across the entire network without too much worry, and any Ethernet broadcasts are flooded in this manner. All other packets are flooded throughout the network, like broadcasts, until more definite information is determined about their destination. Each bridge finds such information by monitoring source addresses of packets, and matching them with the interfaces each was received on. This tells each bridge which of its interfaces leads to the source host. The bridge recalls this when it needs to bridge a packet sent to that address. Over time, the bridges build complete tables for forwarding packets along the tree without extraneous transmissions.
There are several disadvantages to transparent bridging. First, the spanning tree protocol must be fairly conservative about activating new links, or loops can develop.
Also, all the forwarding tables must be cleared every time the spanning tree reconfigures, which triggers a broadcast storm as the tables are reconstructed. This limits the usefulness of transparent bridging in environments with fluid topologies. Redundant links can sit unused, unless careful attention is given to root bridge selection. In such a network (with loops), some bridges will always sit idle anyway. Finally, like all bridging schemes, the unnecessary broadcasting can affect overall performance. Its use is not recommended in conjunction with low-speed serial links.
On the pro side, transparent bridging gives the engineer a powerful tool to effectively isolate high-traffic areas such as local workgroups. It does this without any host reconfiguration or interaction, and without changes to packet format. It has no addressing requirements, and can provide a “quick fix” to certain network performance problems. As usual, careful analysis is needed by the network engineer, with particular attention given to bridge placement. Again, note that for IP purposes the entire spanning tree is regarded as a single link. All bridging decisions are based on the 48-bit Ethernet address.
Source-route bridging (SRB)
Source-route bridging (SRB) is popular in Token Ring environments, and is documented in IEEE 802.5. Unlike transparent bridging, SRB puts most of the smarts in the hosts and uses fairly simple bridges. SRB bridges recognize a routing information field (RIF) in packet headers, essentially a list of bridges a packet should traverse to reach its destination. Each bridge/interface pair is represented by a Route Designator (RD), the two-byte number used in the RIF. An All Rings Broadcast (ARB) is forwarded through every path in the network. Bridges add their RDs to the end of an ARB's RIF field, and use this information to prevent loops (by never crossing the same RD twice). When the ARB arrives at the destination (and several copies may arrive), the RIF contains an RD path through the bridges, from source to destination. Flipping the RIF's Direction Bit (D) turns the RIF into a path from destination to source. See RFC 1042 for the format of the RIF field and a discussion of SRB's use to transport IP packets.
Source-route bridging has its problems. It is even more broadcast-intensive than transparent bridging, since each host must broadcast to find paths, as opposed to each bridge having to broadcast. It requires support in host software for managing RIF fields. To take advantage of a redundant network, a host must remember multiple RIF paths for each remote host it communicates with, and have some method of retiring paths that appear to be failing. Since few SRB host implementations do this, SRB networks are notorious for requiring workstation reboots after a bridge failure. On the other hand, if you want to bridge a Token Ring network, SRB is just about your only choice. Like transparent bridging, it does allow the savvy engineer to quickly improve network performance in situations where high-traffic areas can be segmented behind bridges.
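The address-learning step of transparent bridging described earlier can be sketched in a few lines: the bridge records which port each source address was last seen on, forwards frames for known destinations out of the recorded port, and floods unknown or broadcast destinations everywhere else. The port names and MAC addresses below are made-up examples, and a real bridge also ages table entries out and runs the spanning tree protocol alongside this logic, none of which is shown.

```python
# Toy transparent (learning) bridge sketch -- illustrative only.

BROADCAST = "ff:ff:ff:ff:ff:ff"

class LearningBridge:
    def __init__(self, ports):
        self.ports = set(ports)
        self.table = {}          # MAC address -> port it was last seen on

    def handle_frame(self, src, dst, in_port):
        self.table[src] = in_port                     # learn where the source lives
        if dst != BROADCAST and dst in self.table:
            out = self.table[dst]
            return [] if out == in_port else [out]    # filter local traffic
        return sorted(self.ports - {in_port})         # flood unknown/broadcast

bridge = LearningBridge(["p1", "p2", "p3"])
print(bridge.handle_frame("aa:aa", "bb:bb", "p1"))    # unknown dst -> flood p2, p3
print(bridge.handle_frame("bb:bb", "aa:aa", "p2"))    # learned aa:aa -> ['p1']
print(bridge.handle_frame("cc:cc", "bb:bb", "p3"))    # learned bb:bb -> ['p2']
```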
<urn:uuid:28ee483a-7e36-4d58-bdc9-8427e987c812>
CC-MAIN-2017-09
https://www.certificationkits.com/cisco-bridging/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174154.34/warc/CC-MAIN-20170219104614-00469-ip-10-171-10-108.ec2.internal.warc.gz
en
0.943303
1,317
4.25
4
New Research Gives Hard Numbers on How Cloud Computing Improves Environment
The study assessed the carbon footprint of server, networking and storage infrastructure for three different deployment sizes. Researchers are now compiling hard numbers that prove running enterprise applications in the cloud actually does complete a data center triple play by reducing costs, use of electricity and carbon emissions. A new study conducted on behalf of Microsoft, Accenture and WSP Environment & Energy released Nov. 4 shows enterprises running business applications in the cloud can cut energy consumption and carbon emissions by a net 30 percent or more as opposed to running that same software on their own infrastructure. Large data centers, such as those run by Microsoft, IBM, Google, Yahoo, Fujitsu and others, can benefit greatly from economies of scale and operational efficiencies beyond what corporate IT departments can achieve, the study reported.
- Dynamic provisioning: Large operations enable better matching of server capacity to demand on an ongoing basis.
- Multitenancy: Large public cloud environments are able to serve millions of users at thousands of companies simultaneously on one massive shared infrastructure.
- Server utilization: Cloud providers can drive efficiencies by increasing the portion of a server's capacity that an application actively uses, thereby performing higher workloads with a smaller infrastructure footprint.
- Data center efficiency: Through innovation and continuous improvement, cloud providers are leading the way in designing, building and operating data centers that minimize energy use for a given amount of computing power.
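The server-utilization point in the list above can be made concrete with a rough consolidation calculation. Every number below (server count, utilization levels, wattage, emissions factor) is an assumed, illustrative figure, not data from the Microsoft/Accenture/WSP study.

```python
# Rough server-consolidation estimate. All inputs are assumptions chosen only
# to illustrate the arithmetic, not figures from the study described above.
import math

dedicated_servers = 100        # lightly loaded physical servers today
avg_utilization = 0.05         # ~5% CPU use each
target_utilization = 0.60      # what a virtualized/cloud host is driven to
watts_per_server = 400         # assumed average draw, including facility overhead
kg_co2_per_kwh = 0.5           # assumed grid emissions factor

hosts_needed = math.ceil(dedicated_servers * avg_utilization / target_utilization)

def annual_kwh(n_servers):
    return n_servers * watts_per_server * 24 * 365 / 1000.0

before, after = annual_kwh(dedicated_servers), annual_kwh(hosts_needed)
print(f"hosts after consolidation: {hosts_needed}")
print(f"energy: {before:,.0f} kWh/yr -> {after:,.0f} kWh/yr "
      f"({100 * (1 - after / before):.0f}% less)")
print(f"emissions avoided: ~{(before - after) * kg_co2_per_kwh / 1000:,.1f} t CO2/yr")
```

Real-world net savings, like the roughly 30 percent figure reported in the study, come out far lower than this toy case because production servers are not all idling at 5 percent and cloud providers add networking, storage and cooling overheads of their own.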
<urn:uuid:39f929de-56e8-404c-94b3-cffb0352528a>
CC-MAIN-2017-09
http://www.eweek.com/c/a/Cloud-Computing/New-Research-Gives-Hard-Numbers-on-How-Cloud-Computing-Improves-Environment-578989
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171632.91/warc/CC-MAIN-20170219104611-00289-ip-10-171-10-108.ec2.internal.warc.gz
en
0.891617
292
2.796875
3
The march of technology continues to accelerate, and we must respond. The response has to be: Feed the screens. A couple of decades ago, while driving through New York City, I wondered if there were more bricks in the world than transistors. I have no idea why such a question entered my mind. My best guess is I was thinking about Moore’s Law and how far it had progressed and how far it might go. You will recall that Moore’s Law is an observation of Gordon Moore, a founder of Intel, that the number of digital transistors you can buy on a chip for a given price doubles every 18 to 24 months. The number grows very rapidly. So maybe that’s why I wondered about the ratio of transistors to bricks. (It made a good geek conversation topic.) Currently, there are billions of transistors on large integrated circuits. Some of the recent laptop computers no longer have hard drives. They have “solid-state drives” instead with, to quote Carl Sagan, “billions and billions” of transistors. Not only are there more transistors than bricks in the world, but likely I personally own more transistors than there are bricks in the world! And to think that my first transistor, the CK722, was a precious, expensive gift from a parent to a teenager not so very long ago. Google CK722 for your amusement. A similar wonderment concerned the number of calculators with trigonometric functions compared to the number of people in the world. My first calculator’s most “scientific” function was the square root. Later, and for much more money, I got one with trigonometric functions. That was a precious thing. I have a Texas Instruments SR-10 in its original box sitting on a display shelf in my office next to my college-days Dietzgen slide rule and my Dad’s Keuffel & Esser slide rule. All three have dear memories attached. Google SR-10 for your amusement, too. Subsequently, calculators with trigonometric functions became very inexpensive, and even so cheap that they were given away as tchotchkes at trade shows. I have concluded that more calculators with trigonometric functions have been produced (and even lost) than there are people who know what trigonometric functions are. Likely at this point, more of these calculators have been made than there are people in the world. I recently saw a Hello Kitty full-function scientific calculator. Those with young daughters and granddaughters know that Hello Kitty is a line of little girls’ toys. So the idea of a functional Hello Kitty scientific calculator bends the mind. When I started working in the consumer electronics industry, more monochrome television receivers were sold than color televisions. If you wanted a color television set, you placed an order and waited for it. If you were impatient, you could pay a premium over the list price to get it sooner. And list price was a substantial fraction of the cost of a new car. I was fascinated with the technology of color television and learned about human color perception and the various color television approaches to satisfy the eye’s challenging critical demands. I studied how color picture tubes were made and about the tradeoffs required to make them affordable and the circuits and communication theory that made “compatible” color television possible. When I left the consumer electronics industry, full-function color television receivers were sold at supermarkets for $5 an inch of display size! 
Recently, advertisements for bedroom furniture included a “free” flat-panel TV as an inducement to “buy now.” Back in the research and development labs were engineers working on flat-panel televisions that were always promised to be “just 10 years away.” Now they are everywhere. Nearly all cell phones have great color displays. And one can wonder when the threshold will be crossed of more cell phones produced with these displays than there are people in the world. Maybe it already has. I commonly see people on airplanes with two cell phones turned on as soon as the wheels hit the ground, and it’s permissible to have them on. The iPad revolution has put color displays with more resolution than HDTV in the hands of the masses. It’s amazing how many times I see an infant in a stroller with an iPad instead of a teddy bear. So what does this all mean for us in the cable television industry? Simply this: The march of technology continues to accelerate, and we must respond. The response has to be: Feed the screens. The consumer doesn’t care where the video and images come from, they just want more and more of them and in more convenient forms. Likely, more video will soon be watched on portable devices than on fixed television displays. Some video programming will come from ever-more-pervasive Wi-Fi, other video will come from the cell phone signal, and maybe even some from the digital broadcast signal. Cable needs to be the major supplier of those signals or we will go the way of the individual transistor, the slide rule, the scientific calculator and the color video display and just be a vestige of a former great industry.
<urn:uuid:3236ee4c-8470-4565-9727-2bb3ecb6eac8>
CC-MAIN-2017-09
https://www.cedmagazine.com/article/2012/08/cicioras-corner-feed-screens
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170794.46/warc/CC-MAIN-20170219104610-00109-ip-10-171-10-108.ec2.internal.warc.gz
en
0.96423
1,079
2.671875
3
Amnesty International is using satellite cameras to monitor highly vulnerable villages in war-torn Darfur, Sudan. The human rights organization is inviting ordinary people worldwide to monitor 12 villages by visiting the Eyes on Darfur project Web site and put the Sudanese Government on notice that these and other areas in the region are being watched around the clock. "Despite four years of outrage over the death and destruction in Darfur, the Sudanese government has refused worldwide demands and a U.N. resolution to send peacekeepers to the region," said Irene Khan, Secretary General of Amnesty International. "Darfur needs peacekeepers to stop the human rights violations. In the meantime, we are taking advantage of satellite technology to tell President al-Bashir that we will be watching closely to expose new violations. Our goal is to continue to put pressure on Sudan to allow the peacekeepers to deploy and to make a difference in the lives of vulnerable civilians on the ground in Darfur." According to Ariela Blätter, director of the Crisis Prevention and Response Center for Amnesty International USA (AIUSA), new images of the same villages are being added currently within days of each other. This time frame offers the potential for spotting new destruction. Amnesty International worked with noted researchers to identify vulnerable areas based on proximity to important resources like water supplies, threats by militias or nearby attacks. Amnesty International worked closely on the project with the American Association for the Advancement of Science (AAAS), which offered expertise on satellite imagery and other cutting edge geospatial technologies. The images from commercial satellites can reveal visual information about conditions on the ground for objects as small as two feet across. According to Lars Bromley, project director for the AAAS Science and Human Rights Project who advised Blätter on technical matters, the photos could show destroyed huts, massing soldiers or fleeing refugees. Amnesty International has been at the forefront of efforts to wed human rights work with satellite technology. For example, Amnesty, the AAAS and the Zimbabwe Lawyers for Human Rights joined in a ground-breaking project in 2006 to document the destruction of a settlement by the Zimbabwean government. The groups presented evidence that the government destroyed entire settlements, including the informal settlement of Porta Farm, forcing thousands of civilians to flee. Eyes on Darfur also includes an archival feature, which shows destroyed villages since the conflict began in 2003 and includes expert testimony. For example, an image of the village of Donkey Dereis in south Darfur taken in 2004 shows an intact landscape with hundreds of huts. Two years later, a satellite image shows the near total destruction of the villages -- 1,171 homes gone and the landscape overgrown with vegetation.
<urn:uuid:ec742bf1-b5b0-4dc8-b8f8-056154f98922>
CC-MAIN-2017-09
http://www.govtech.com/geospatial/Amnesty-International-Adopts-Powerful-Technology-in.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171043.28/warc/CC-MAIN-20170219104611-00285-ip-10-171-10-108.ec2.internal.warc.gz
en
0.940407
549
2.671875
3
One useful definition for the unstructured data that underlies most existing and theoretical big data projects is that it was often collected for some purpose other than what the researchers are using it for. That definition was provided by Chris Barrett, executive director of the Virginia Bioinformatics Institute during a series of presentations before the President’s Council of Advisors on Science and Technology on Thursday focused on the value of data mining for public policy. Data that was initially collected to measure educational achievement, for instance, could be used to analyze how educational achievement relates to obesity or incarceration rates in a particular community. This definition points to the potential of big data analysis as more and more information is gathered online and elsewhere, but it also points to some challenges as outlined by Duncan Watts, a principal researcher at Microsoft’s research division. First off, a large portion of the data that might be valuable to social scientists, policymakers, urban planners and others is held by private companies that release only portions of it to researchers. Facebook, Amazon, Google, email providers and ratings companies all know certain things about you and about society, in other words, but there’s no way to aggregate that data to draw global insights. “Many of the questions that are of interest to social science really require us being able to join these different modes of data and to see who are your friends what are they thinking and what does that mean about what you end up doing,” Watts said. “You cannot answer these questions in any but the most limited way with the data that’s currently assembled.” Second, even if social scientists were able to draw on that aggregated data, it would raise significant privacy concerns among the public. “This is a very sensitive point because, to some extent, this is what the NSA has been reputedly doing, joining together different sorts of data,” Watts said. “And you can understand how sensitive people are about that. Precisely the reason why this is scientifically interesting is also the reason why it’s so sensitive from a privacy perspective.” Finally, because much of the data that’s useful to social scientists was gathered for other purposes, there’s often some bias in the data itself, Watts said. “When you go to Facebook, you’re not seeing some kind of unfiltered representation of what your friends are interested in,” he said. “What you’re seeing is what Facebook’s news ranking algorithm thinks that you'll find interesting. So when you click on something and the social scientist sees you do that and makes some inference about what you’re sharing and why, it’s hopelessly confounded.”
<urn:uuid:bc006de0-c377-4847-b15c-eb6f4bd35195>
CC-MAIN-2017-09
http://www.nextgov.com/big-data/2014/04/limits-big-data-social-science/81940/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171936.32/warc/CC-MAIN-20170219104611-00161-ip-10-171-10-108.ec2.internal.warc.gz
en
0.96188
569
2.921875
3
IBM may be onto something with its predictions. In 2010, the tech giant predicted that within five years there will be mobile phones that project holographic images of callers. It also envisioned the popularization of 3D imaging technology in flat panel televisions and video chat in mobile phones. It's that time of year again, and IBM has announced five new predictions for five years from now. This year's list, based on ideas contributed by IBM biologists, engineers, mathematicians and medical doctors, includes one prediction for each of the five senses. Computers, IBM predicted, will use sound to identify structural weaknesses in buildings before they collapse and sensors that can “smell”, using olfactory data to analyze personal health. IBM predicted that infrared and touch technologies will evolve to simulate the actual feeling of touching something on a screen -- a fabric, a road surface, animal fur, etc. IBM also predicted that small anomalies seen in images through the use of Big Data analytics will allow for faster and more accurate medical diagnoses. A new computing system will utilize digital taste buds to encourage healthy food choices, and help build perfect meals for people around the world, IBM predicted. An archive of IBM's past predictions going back to 2006 can be found on their website. Some predictions proved correct, such as the 2006 prediction of the availability of remote health care by 2011. Other predictions, like the one that foretold a 3D Internet, came true in a more limited fashion -- the picture IBM painted in 2006 was one far loftier than the seldom-used online 3D interfaces that now exist. Check out IBM's 2012 predictions, complete with videos and story maps for each of the five senses.
<urn:uuid:17a7c0bc-a19c-44ec-b587-ae6e1b3c33ae>
CC-MAIN-2017-09
http://www.govtech.com/IBM-Five-Predictions-for-2017.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171670.52/warc/CC-MAIN-20170219104611-00334-ip-10-171-10-108.ec2.internal.warc.gz
en
0.916789
346
2.765625
3
When reports of the Dark Seoul attack on South Korean financial services and media firms emerged in the wake of the attack on March 20, 2013, most of the focus was on the Master Boot Record (MBR) wiping functionality. PCs infected by the attack had all of the data on their hard drives erased. McAfee Labs, however, has discovered that the Dark Seoul attack includes a broad range of technology and tactics beyond the MBR functionality. The forensic data indicates that Dark Seoul is actually just the latest attack to emerge from a malware development project that has been named Operation Troy. The name Troy comes from repeated citations of the ancient city found in the compile path strings of the malware. The primary suspect group in these attacks is the New Romanic Cyber Army Team, which makes significant use of Roman terms in their code. The McAfee Labs investigation into the Dark Seoul incident uncovered a long-term domestic spying operation, based on the same code base, against South Korean targets.
Software developers (both legitimate and criminal) tend to leave fingerprints and sometimes even footprints in their code. Forensic researchers can use these prints to identify where and when the code was developed. It’s rare that a researcher can trace a product back to individual developers (unless they’re unusually careless). But frequently these artifacts can be used to determine the original source and development legacy of a new “product.” Sometimes, as in the case of the New Romanic Cyber Army Team or the Poetry Group, the developers insert such fingerprints on purpose to establish “ownership” of a new threat. McAfee Labs uses sophisticated code analysis and forensic techniques to identify the sources of new threats because such analysis frequently sheds light on how to best mitigate an attack or predicts how the threat might evolve in the future.
History of Troy
The history of Operation Troy starts in 2010, with the appearance of the NSTAR Trojan. Since the appearance of NSTAR, seven known variants have been identified. (See following diagram.) Despite the rather rapid release cycle, the core functionality of Operation Troy has not evolved much. In fact, the main differences between NSTAR, Chang/Eagle, and HTTP Troy had more to do with programming technique than functionality. The first real functional improvements appeared in the Concealment Troy release, in early 2013. Concealment Troy changed the control architecture and did a better job of concealing its presence from standard security techniques. The 3RAT client was the first version of Troy to inject itself into Internet Explorer, and Dark Seoul added the disk-wiper functionality that disrupted financial services and media companies in South Korea. Dark Seoul was also the first Troy attack to conduct international espionage; all previous versions were simple domestic cybercrime/cyberespionage weapons.
As interesting as the legacy of Operation Troy is, even more enlightening are the fingerprints and footprints that allow McAfee Labs to trace its legacy. In the “fingerprint” category is what developers term the compile path. This is simply the path through the developer’s computer file directory to the location at which the source code is stored. An early Troy variant in 2010, related to NSTAR and HTTP Troy via reused components, used this compile path. A second variant from 2010, compiled May 27, also contained a very similar compile path. We were able to obtain some traffic with the control server.
McAfee Labs has consistently seen the Work directory involved, just as throughout the other post-2010 malware used in this campaign. By analyzing attributes such as compile path, McAfee Labs researchers have been able to establish connections between the Troy variants and document functional and design changes programmed into the variants. Both the Chang and EagleXP variants are based on the same code that created NSTAR and the later Troy variants. The use of the same code also confirms the attackers have been operating for more than three years against South Korean targets.
In the “footprint” category, McAfee Labs documented the most significant functional change, which occurred in the 2013 release of Concealment Troy. Historically, the Operation Troy control process involved routing operating commands through concealed Internet Relay Chat (IRC) servers. The first three Troy variants were managed through a Korean manufacturing website on which the attackers installed an IRC server. From the attacker’s perspective there are two problems with this approach. The first is that if the owners of infected servers discover the rogue IRC process, they would remove it and the attacker would lose control of the Troy-infected clients. The second is that the Troy developers actually hardcoded the name of the IRC server into each Troy variant. This means that they had to first find a vulnerable server, install an IRC server, and then recompile the Troy source into a new variant controlled by that specific server. For this reason nearly all Troy variants needed to be controlled by a separate control server.
The Concealment Troy variant was the first to break this dependency on a hardcoded IRC control server. Concealment Troy presumably gets its operating instructions from a more sophisticated (and likely more distributed) botnet that is also under the control of the Troy syndicate.
This investigation into the cyberattacks on March 20, 2013, revealed ongoing covert intelligence-gathering operations. McAfee Labs concludes that the attacks on March 20 were not an isolated event strictly tied to the destruction of systems, but the latest in a series of attacks dating to 2010. These operations remained hidden for years and evaded the technical defenses that the targeted organizations had in place. Much of the malware from a technical standpoint is rather old, with the exception of Concealment Troy, which was released in early 2013. A copy of the full report can be found here.
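A very simplified illustration of the compile-path fingerprinting described above is to pull Windows-style debug (PDB) path strings out of a binary so that samples can be grouped by shared build directories. The regular expression and command-line handling below are assumptions for a toy example; production analysis parses the PE debug directory properly (for example with a PE parsing library) and correlates many more attributes than a single string.

```python
# Toy compile-path extractor: scans a file for Windows-style paths ending in
# .pdb (where Visual Studio embeds the debug/compile path at build time).
# Illustrative sketch only, not a substitute for real PE parsing.
import re
import sys

PDB_PATH = re.compile(rb'[A-Za-z]:\\[ -~]{2,200}?\.pdb', re.IGNORECASE)

def compile_paths(path):
    with open(path, "rb") as f:
        data = f.read()
    return sorted({m.group().decode("ascii", "replace")
                   for m in PDB_PATH.finditer(data)})

if __name__ == "__main__":
    for sample in sys.argv[1:]:
        for p in compile_paths(sample):
            print(f"{sample}: {p}")
```

Samples whose extracted paths share the same working directory (such as the Work directory noted above) can then be clustered as likely products of the same development project.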
<urn:uuid:da307667-2a60-4ddc-ae18-d5dbca6fc16a>
CC-MAIN-2017-09
https://www.helpnetsecurity.com/2013/07/08/dissecting-operation-troy-cyberespionage-in-south-korea/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174215.11/warc/CC-MAIN-20170219104614-00210-ip-10-171-10-108.ec2.internal.warc.gz
en
0.946488
1,161
2.96875
3
Any questions? Ask Google. In a time of crisis of authority, it's the only place left where people want to seek answers. It is as if an unimaginable store of knowledge were hidden behind a magical gate, and the only key to it sat just next to the colorful "g" in the corner of our web browsers. Yet it seems that, as in most cases, the Internet also has a second depth, and it's deeper than we have ever imagined.
Last month, I read an interesting article about Ian Clarke and his invention. This modest student of Artificial Intelligence and Computer Science at the University of Edinburgh devised a smart way for web users to stay completely anonymous. The idea was that all shared data would be encoded in such a way that no user could ever recognize from whom they receive information or to whom they send it. That's how Freenet was invented, as a sort of dark, impossible-to-control division of the Internet. What's interesting is that it is now far bigger than its original prototype, and access to it is protected by mysterious meta-browsers.
Of course, it is clear what kind of data can be found in Freenet. All kinds of agents, weirdos and criminals exchange such interesting documents as, for instance, "The Handbook Of Terrorism: The Practical Guide For Explosive Materials" or "The Companion Of Animal Rights Defender: How To Deal With Fire" :) But it's not the content that really drew my attention. What really shocked me was the size of this phenomenon. It turns out that, using standard web browsers, we only get access to a tiny part of the web's resources. The familiar search box next to the "g" mentioned at the beginning of this article skims only the surface of the vast sea of all information stored. Some sources say that, using Google, we see only 0.003% of all Internet data!
So how big is the Internet? Is it possible to measure? And if so, is there a mind that could imagine its real size? When you consider those questions, an analogy to the Universe comes easily to mind. Whenever anyone has tried to draw its borders, they have ended up with nothing. It seems that humans, in their race to rule the world, have created another parallel reality that can't be controlled anymore.
<urn:uuid:d849ac6a-302c-4174-bd30-30b978e8f796>
CC-MAIN-2017-09
https://www.codetwo.com/blog/which-universe-is-bigger-2/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170864.16/warc/CC-MAIN-20170219104610-00154-ip-10-171-10-108.ec2.internal.warc.gz
en
0.946104
494
2.671875
3
Flash! Supercomputing goes solid-state Lawrence Livermore lab is shaping the next generation of supercomputers - By Henry Kenyon - Jun 24, 2010 A prototype computer system is demonstrating the use of flash memory in supercomputing. The Hyperion Data Intensive Testbed at Lawrence Livermore National Laboratory uses more than 100 terabytes of flash memory. Hyperion is designed to support the development of new computing capabilities for the next generation of supercomputers as part of the Energy Department's high-performance computing initiatives. Specifically, it will help test the technologies that will be a part of Lawrence Livermore’s upcoming Sequoia supercomputer. The Hyperion testbed is a 1,152-node Linux cluster, said Mark Seager, assistant department head for advanced technology at Lawrence Livermore. It was delivered in 2008, but is only now at the point where serious operational testing can begin with the recent addition of the solid-state flash input/output memory. Flash memory is a key component of the Hyperion system, Seager said. The memory is in the form of 320-gigabyte enterprise MLC ioMemory modules and cards developed by Fusion-io. Supercomputers access data from long-term memory stored on disks to augment what is in their active memory. Designers typically use dynamic random access memory chips to serve as a temporary repository for active data in use before it is stored. Shortening this transfer time between long-term storage and accessible memory is key to higher supercomputer speeds. Flash memory eliminates the need for DRAMs, shortening the transfer time; it also greatly reduces the amount of hardware needed, thereby significantly cutting space and power requirements. Unlike DRAMs, flash memory chips retain data when electrical current is cut off. Seager said that the testbed is a partnership between Lawrence Livermore and 10 participating commercial firms that are testing technologies that will be used in Sequoia. He noted that Red Hat has been testing its Linux kernel and Oracle has been testing and developing its Lustre 1.8 and 2.0 releases on the machine for six months. Other Linux-based technologies being evaluated include cluster distributions of Linux software and the Infiniband software stack. Testing for the Hyperion system will include trials of the Lustre object storage code on the array’s devices. Seager said the goal is to see how much faster various processes can be made to operate by using flash memory. He added that Lawrence Livermore researchers also want to use an open source project called FlashDisk, which combines flash memory with rotating media in a transparent, hierarchical storage device behind the Lustre server. Seager said that the project will also examine methods to directly use flash memory without a file system. “We think that that will probably give us the best random [input/output operations per second] performance,” he said. Achieving a performance in excess of 40 million IOPS is a key goal of the effort. The Hyperion system uses 80 1U servers occupying two racks and not even filling them. A similar system using conventional data storage technology would occupy about 46 racks, Seager said. This provides a power savings that is an order of magnitude better than current systems, he added. All of these technologies are used to support Lawrence Livermore’s large, high-performance computing efforts. 
The data intensive testbed extension of Hyperion was designed to meet the goals of the Sequoia next generation advanced strategic computing system being built by IBM and scheduled for delivery in mid-2011. Sequoia will be a third-generation Blue Gene system with a compute capability of about 20 petaflops and 1.6 petabytes of memory. Another goal is achieving one terabyte per second random IO bandwidth performance. When Hyperion’s technologies are used in Sequoia, the supercomputer will take up relatively little space and save power. Seager said that IBM’s Blue Gene line is focused on exceptional flops per watt. He noted that one of the goals of the Blue Gene line is high end performance at low power. Sequoia is a third generation Blue Gene computer. The Lawrence Livermore research is funded by the National Nuclear Security Administration. Lawrence Livermore, Sandia National Laboratory and Los Alamos National Laboratory will be using Sequoia to support the Stockpile Stewardship mission to test the security and reliability of the nation’s nuclear stockpile without the need for underground testing.
<urn:uuid:f370b4df-6028-4bec-8f42-28f97d8fd80c>
CC-MAIN-2017-09
https://gcn.com/articles/2010/06/24/lawrence-livermore-hyperion.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171758.35/warc/CC-MAIN-20170219104611-00030-ip-10-171-10-108.ec2.internal.warc.gz
en
0.918728
943
2.875
3
Inline asm: manually encoding instructions The basic premise of inline asm is to be able to write assembly code within your C or C++ program (using asm operands to connect the asm code to the parent program), delegate the translation of that assembly to the compiler and/or the system assembler, and consequently have the resulting machine code embedded within your program at the location you specified. The reasons for using inline asm can be many, but one reason can be to utilize special instructions that are too new or too novel for either the compiler or assembler to know about. That seems like an impossible quandary: inline asm is the perfect mechanism for utilizing new instructions, but if even the assembler doesn’t know about them, how can you get your inline asm translated? The answer is to do some of the translation yourself. If you’ve ever used the mc_func pragma, then you will have had experience with manual encoding of instructions. Inline asm is a more straightforward way of doing this for two reasons: 1) you have easy access to your asm operands, and 2) there are bitwise operations available that assist with the encoding. As an opening example, let’s start with an instruction that doesn’t take operands, such as isync. Note that isync is not an “unknown” instruction – it is standard in the PowerPC architecture, but it will serve as a good example of how to do encoding. If one looks in the Assembly Language Reference (link given below) one will see that the primary opcode for isync is decimal 19 (in bits 0 to 5) and the extended opcode is decimal 150 (in bits 21 through 30). The rest of the bits are don’t-cares, which we will put as zero. Calculating this on a hexadecimal-capable calculator yields “0x4C00012C” as the whole 32-bit instruction. I’ll add this to the end of the two-instruction sequence from my previous blog entry: asm ("addc %0, %2, %3 \n" "adde %1, %4, %5 \n" ".long 0x4C00012C \n" This may seem a little strange – using a “.long” pseudo-op in the middle of some text, but it is perfectly acceptable. We are not using .long here to define data (which wouldn’t be supported), but merely to encode an instruction. If we disassembled the inline asm after it had been processed, it would look like the same three instructions with real register numbers in place of the %n operands. Which specific registers are chosen is up to the compiler. This is a significant difference from the mc_func pragma, which forces the user to use argument registers r3, r4, etc., and is restricted to a single result, returned in r3. This code snippet has two results (r0 and r3), which is already outside the capabilities of mc_func. The biggest complication in putting together the above inline asm was coming up with the correct eight-digit hexadecimal number. In fact, it isn’t really necessary to do that manually if one uses the bitwise operations made available by the system assembler: the “|” bitwise or and “<” bitwise shift operations (note: on Linux the shift operator is “<<”). In the case of isync, putting decimal 19 into bits 0-5 and decimal 150 into bits 21-30 can be done directly as such: ".long 19<26 | 150<1 \n" The final piece of the puzzle for manual encoding is using operands. The isync was relatively easy to encode as it doesn’t take operands, but what if we wanted to use manual encoding for, say, the adde instruction in our asm? Again – adde is not an “unknown” instruction, but it serves well as an example. Looking at the Assembly Language Reference gives us the basic layout of the instruction. 
There are three fields in the instruction that are to be filled in with register numbers. For inline asm, it is (typically) the compiler that chooses registers, but we can utilize what is chosen through the “%n” specifiers. We have been using these all along for standard asm instructions, but we can use them for manually encoded instructions as well. Just as in a printf statement, %0, %1, etc will be replaced with the correct text – in our case a register number. Doing this with adde in the above asm snippet yields the following correct asm: asm ("addc %0, %2, %3 \n" ".long 31<26 | (%1)<21 | (%4)<16 | (%5)<11 | 138<1 \n" ".long 0x4C00012C \n" Here we have one regular instruction (addc) and two manually encoded instructions: adde and isync. Notice how we’ve used %1, %4 and %5 exactly as we did before, for the "RT," "RA," and "RB" register operands to adde. TIP: carefully check the documentation regarding the placement of the register operands in the instruction – personal experience has taught me that sometimes there are surprises. Of course, if you are encoding a new or novel instruction, you’ll hopefully have some documentation to consult :-) Till next time
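For readers who want to try this end to end, here is a minimal, self-contained sketch of what the complete statement could look like. It is an assumption on my part rather than the exact code from the earlier blog entry: the operand list, the variable names (a_hi, a_lo, lo, hi and so on) and the GCC-style constraint syntax are mine, and the example assumes a 32-bit PowerPC target whose assembler spells the shift operator "<<" (on the AIX assembler you would write "<" instead, as noted above). The addc/adde pair implements a 64-bit add from 32-bit halves, with adde and isync encoded by hand exactly as described:

#include <stdio.h>

/* 64-bit add built from 32-bit halves: addc produces the carry, adde consumes it.
 * adde  : primary opcode 31, RT<<21 | RA<<16 | RB<<11, extended opcode 138<<1
 * isync : primary opcode 19, extended opcode 150  ->  0x4C00012C              */
static void add64(unsigned int a_hi, unsigned int a_lo,
                  unsigned int b_hi, unsigned int b_lo,
                  unsigned int *r_hi, unsigned int *r_lo)
{
    unsigned int lo, hi;

    __asm__("addc %0, %2, %3 \n"
            ".long 31<<26 | (%1)<<21 | (%4)<<16 | (%5)<<11 | 138<<1 \n"
            ".long 0x4C00012C \n"          /* manually encoded isync            */
            : "=&r"(lo),                   /* %0: low word; early-clobber because
                                              it is written before %4/%5 are read */
              "=r"(hi)                     /* %1: high word                     */
            : "r"(a_lo), "r"(b_lo),        /* %2, %3                            */
              "r"(a_hi), "r"(b_hi));       /* %4, %5                            */

    *r_hi = hi;
    *r_lo = lo;
}

int main(void)
{
    unsigned int hi, lo;
    add64(0x00000001u, 0xFFFFFFFFu, 0x00000000u, 0x00000001u, &hi, &lo);
    printf("result = 0x%08X%08X\n", hi, lo);   /* expect 0x0000000200000000 */
    return 0;
}

The point of the sketch is only to show how the %n specifiers flow into the .long expression; check your own compiler's inline asm documentation before relying on the constraint details.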
<urn:uuid:ede943bb-52c3-47f1-8c53-f7855518c55f>
CC-MAIN-2017-09
https://www.ibm.com/developerworks/community/blogs/5894415f-be62-4bc0-81c5-3956e82276f3/entry/inline_asm_manually_encoding_instructions10?lang=en
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172017.60/warc/CC-MAIN-20170219104612-00206-ip-10-171-10-108.ec2.internal.warc.gz
en
0.912224
1,193
3.1875
3
Multi-Protocol Label Switching (MPLS) was created to improve packet performance in the core of the network and is widely used for that purpose. It has also been adapted for other use cases, and one of the most important is traffic engineering. If you already have MPLS deployed in your network -- perhaps for a VPN -- MPLS traffic engineering can be very beneficial. Here we'll discuss the additional steps that must be taken, the design criteria, and other design-centric questions that must be answered to do so. In MPLS traffic engineering, all configurations are done on a specific network node called the headend or ingress node. Here is where all tunnels and constraints are created. The tunnel destination address is also specified at the headend. For example, if an MPLS traffic engineering tunnel will be set up between R2 and R6 in Figure 1, all the definitions are done at R2. The tunnel destination is called the tailend or egress node. MPLS traffic engineering tunnels are unidirectional and not necessarily congruent. This means that if one tunnel is created to carry traffic between R2 and R6, the return tunnel from R6 to R2 is not created automatically. Reverse tunnels must also be created, but this time R6 is used as the headend and R2 as the tailend. The tailend has no configuration. Four steps are required for MPLS traffic engineering to take place:
- Link-state protocols carry link attributes in their link-state advertisements (LSAs) or link-state packets (LSPs).
- Based on the constraints defined, the traffic path is calculated with the help of Constrained Shortest Path First (CSPF).
- The path is signaled by Resource Reservation Protocol (RSVP).
- Traffic is then sent to the MPLS traffic engineering tunnel.
Let's take a look at these steps in detail: 1. By default, link-state protocols send only connected interface addresses and metric information to their neighbors. Based on this information, the Shortest Path First (SPF) algorithm creates a tree and builds the topology of the network. MPLS traffic engineering allows us to add some constraints. In Figure 1 above, let's assume the R2-R5 link is 5 Mbit/s; R5-R6 is 10 Mbit/s; and all the interfaces between the bottom routers are 6 Mbit/s. If we want to set up a 6-Mbit/s tunnel, SPF will not even take the R2-R5-R6 path into consideration, because the link from R2 to R5 does not satisfy the minimum requirement. In addition, we could assign an administrative attribute, also called a "color," to the link. For example, the R2-R5-R6 interfaces could be designated blue, and the R2-R3-R4-R6 route could be assigned red. At the headend, the constraint can then specify whether to use a path that contains a red or blue color. The color/affinity information, as well as how much bandwidth must be available, reserved, and unreserved for the tunnel, is carried within the link-state packet. In order to carry this information, some extensions have been added to the link-state protocols. Open Shortest Path First (OSPF) carries this information in the Opaque LSA (or Type 10 LSA), and Intermediate System to Intermediate System (IS-IS) uses TLV 22 and 135 for traffic engineering information. 2. As we stated earlier, SPF is used to calculate the path for destinations. For traffic engineering, a slightly modified version of SPF is used, called constrained SPF (CSPF). With the extensions to link state protocols that Opaque LSAs and TLVs provide, a traffic engineering database is created that is only accessible by CSPF. 
CSPF can understand that the link from R2 to R5 is 5 Mbit/s and does not satisfy the 6 Mbit/s tunnel constraint. So it will not take that path into consideration in its calculation. 3. If there is an appropriate path, the path is signaled by RSVP. Previously used to provide Integrated Services QoS, RSVP incorporated new messages, including path and reservation messages, to enable MPLS traffic engineering. Label information is carried within the reservation messages. 4. Once a path is signaled, traffic is put into the tunnel. This can be accomplished via many methods including static routing, policy-based routing, class-of-service-based tunnel selection (CBTS), policy-based tunnel selection (PBTS), autoroute, and forwarding adjacency. I'll discuss these methods in detail in a future post. And in the next part of this series, I explain how to use MPLS traffic engineering path selection for bandwidth optimization.
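To make step 2 a little more concrete, here is a small, self-contained sketch in C of the pruning idea behind CSPF. This is an illustration only, not how a router actually implements it: a real CSPF runs against the traffic engineering database, honors IGP metrics, affinities and other constraints, whereas the toy below simply drops every link whose available bandwidth is below the requested amount and then runs an ordinary shortest-path search on whatever is left. The topology and the 5/10/6 Mbit/s figures mirror the Figure 1 example in the text; the node numbering and the hop-count metric are my own simplifications.

#include <stdio.h>
#include <limits.h>

#define N 6
#define NO_LINK 0

static int bw[N][N];   /* available bandwidth per link, in Mbit/s; 0 = no link */

static void add_link(int a, int b, int mbps) { bw[a][b] = bw[b][a] = mbps; }

/* Dijkstra on hop count, skipping links whose bandwidth is below the constraint */
static int cspf(int src, int dst, int constraint, int prev[N])
{
    int dist[N], done[N] = {0};
    for (int i = 0; i < N; i++) { dist[i] = INT_MAX; prev[i] = -1; }
    dist[src] = 0;

    for (int iter = 0; iter < N; iter++) {
        int u = -1;
        for (int i = 0; i < N; i++)
            if (!done[i] && dist[i] != INT_MAX && (u < 0 || dist[i] < dist[u]))
                u = i;
        if (u < 0)
            break;
        done[u] = 1;
        for (int v = 0; v < N; v++) {
            if (bw[u][v] == NO_LINK || bw[u][v] < constraint)
                continue;                       /* the CSPF pruning step */
            if (dist[u] + 1 < dist[v]) {
                dist[v] = dist[u] + 1;
                prev[v] = u;
            }
        }
    }
    return dist[dst] == INT_MAX ? -1 : dist[dst];
}

int main(void)
{
    /* R2=1, R3=2, R4=3, R5=4, R6=5 (index 0 is unused here) */
    add_link(1, 4, 5);                          /* R2-R5, 5 Mbit/s  */
    add_link(4, 5, 10);                         /* R5-R6, 10 Mbit/s */
    add_link(1, 2, 6);                          /* R2-R3, 6 Mbit/s  */
    add_link(2, 3, 6);                          /* R3-R4, 6 Mbit/s  */
    add_link(3, 5, 6);                          /* R4-R6, 6 Mbit/s  */

    int prev[N];
    int hops = cspf(1, 5, 6, prev);             /* 6 Mbit/s tunnel, R2 to R6 */
    if (hops < 0) {
        printf("no path satisfies the constraint\n");
        return 0;
    }
    printf("path found (%d hops), read back from the tailend:", hops);
    for (int v = 5; v != -1; v = prev[v])
        printf(" R%d", v + 1);
    printf("\n");                               /* expected: R6 R4 R3 R2 */
    return 0;
}

With a 6 Mbit/s constraint the R2-R5 link is pruned, so the only path returned is the bottom one through R3 and R4, which is exactly the behavior described above.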
<urn:uuid:e9db05bd-b190-4dec-8fe8-4c59b377ced2>
CC-MAIN-2017-09
http://www.networkcomputing.com/networking/mpls-traffic-engineering-tunnel-setup/442703769?piddl_msgorder=thrd
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170875.20/warc/CC-MAIN-20170219104610-00502-ip-10-171-10-108.ec2.internal.warc.gz
en
0.934056
1,014
2.703125
3
Slamming a vehicle into an elk or deer is more harmful for humans than many realize. These creatures cause approximately 1.5 million collisions each year, resulting in around 150 deaths, according to the Insurance Institute for Highway Safety. Another 10,000 people suffer injuries from collisions with the animals. Animal-car collisions can also cost governments money. For instance, the Arizona Department of Transportation (ADOT) has a financial stake in this problem because the state "owns" all wildlife, meaning the government is financially liable when an elk or deer obstructs a car or truck. The state paid more than $3 million for one such lawsuit in 2003. For the past seven years, Arizona has been moving to solve the problem using a GIS tool that determines where deer and elk cross highways so underpasses can be built for the animals. Arizona's Game and Fish Department (AGFD) implemented the project in 2002 in partnership with ADOT, which funded most of the project. Statistics suggest the program is effective. Deer and elk collisions dropped from 56 to eight on one major Arizona highway in one year after strategically placed underpasses were built, according to Jeff Gagnon, research technician for the AGFD. Another highway averaged 12 deer or elk accidents a year before it was targeted by the project; that highway has only seen one animal collision in the past two years, Gagnon said. The state hasn't yet measured what the project has done to the percentage of vehicle collisions statewide. Gagnon said GIS helps his agency persuade ADOT to invest in the underpasses, which cost more than $1 million each. "They really buy into it when you pull out a map and say, 'This is where these animals are crossing,'" Gagnon said. In 2004, ADOT and the AGFD organized the Arizona Wildlife Linkages Workgroup. This group expanded the project to include nine organizations with relevant input, such as the U.S. Department of Interior's Bureau of Land Management, the Federal Highway Administration, the U.S. Fish and Wildlife Service, Northern Arizona University, the Sky Island Alliance and other private environmental organizations. After pooling its resources, this team created a more developed GIS tool that's becoming a model for other states interested in solving animal-car collisions. Before Arizona could do a GIS analysis of where to install underpasses, officials needed data from the animals. Gagnon and others collared elk and deer with GPS devices. The resulting data showed animals crossing highways at areas where pastures or water waited on the other side. ADOT began installing underpasses. Conveniently some of them already existed for other purposes, like transporting water. GIS maps gave guidance to the state on how far to extend the fencing necessary for funneling the animals into the underpasses. "We've had video cameras on some of those underpasses for about six years now. We've documented [more than] 6,000 animals using them. Most of those are elk, some deer -- 11 different species," Gagnon said. He added that even longhorn sheep and desert tortoises use the underpasses. GIS technicians and fieldworkers attempting this in other states should expect to stay connected to the project throughout its life cycle, according to Susan Boe, GIS spatial analyst for the AGFD. With each highway the state converted for animal passage, she ran GIS tests of animal movements before, during and after construction. To run the analysis, Boe used ESRI's ArcGIS 9.2 software. 
To complete the job, she downloaded free software called Animal Movement, which is an extension of ESRI's ArcView application. "Using Animal Movement, I was able to connect the dots to follow the path of movement," Boe explained. "It was what I used to find out where the animals were crossing the highway." Arizona will use the project's findings to make informed decisions about where it builds future highways, Gagnon explained. "If we're building a new highway, we could say, 'Hey, this meadow's going to cause problems. If we have options, let's take the highway over here,'" Gagnon said. ADOT would know from the beginning where it should build underpasses. A critical layer in Arizona's GIS tool was one identifying the different types of property owners connected to land alongside highways -- ADOT contributed that information. By viewing a GIS map, the Arizona Wildlife Linkages Workgroup saw what land was federally, privately and state owned, as well as what was part of an Indian reservation. This helped the team organize its time and resources more efficiently, because building underpasses on state-owned land comes with additional challenges. For example, Arizona's Constitution lets the state auction its land to commercial developers to raise money for public schools. In many cases, by the time the workgroup identified a parcel of state land that needed an underpass, the sale to the private sector was already in progress. This meant the Arizona State Land Department had to ensure the wildlife underpasses wouldn't conflict with the winning buyer's development plans. "We're in a race against time. The land is getting developed fast. A lot of these plans started before we began our planning," said Bruce Eilerts, manager of the Natural Resources Management Group within ADOT. Geographic data from nonprofit organizations also inform Arizona's GIS tool. Environmental groups on the team alert the workgroup to prospective sites for highway underpasses. For example, environmental groups alerted the workgroup to an expansion project on Highway 77 near Tucson, Ariz. ADOT was making changes to that highway, the environmental group said, that would increase vehicle collisions with animals. "It wasn't something on our radar screen at first, but the community development is happening so fast up there," Eilerts explained. "The community was screaming about all of the wildlife hits and how it impacted the land and animals in the area, which had some state parks." The environmental bodies in his workgroup coordinated meetings with community organizations from the area to develop a strategy for expanding Highway 77 without harming wildlife. Many view a state's wildlife as part of its identity. Gagnon cautioned that Arizona's growing population and busy roads will affect wildlife. "Once you put a highway in an area causing an animal to not cross the road very often, you isolate it," Gagnon said. "It can't get across to resources. It becomes genetically isolated, and if you genetically isolate the animals, they start to inbreed more. Instead of having two fawns they have one or none. They're more susceptible to diseases." Eilerts said many European countries have killed off much of their wildlife due to property development. "They actually have toad crossings in England and butterfly crossings in Germany," Eilerts said. "Good for them, but do we [the United States] want to wait until we're down to our field mice?"
<urn:uuid:489e1ff7-b111-403f-ad40-a7e4e9e421de>
CC-MAIN-2017-09
http://www.govtech.com/magazines/gt/GIS-Maps-Prevent-Vehicle-Collisions-with.html?page=2
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171162.4/warc/CC-MAIN-20170219104611-00026-ip-10-171-10-108.ec2.internal.warc.gz
en
0.966609
1,449
2.984375
3
Shortly after an EF5 tornado flattened Moore, Oklahoma, this past May, the Department of Homeland Security called Jim Lux at NASA's Jet Propulsion Lab. "We were asked to come out with our machine," Lux says. The machine in question unfortunately wasn't ready. It will be next time. Short for "Finding Individuals for Disaster and Emergency Response," NASA's FINDER is a prototype portable radar system, small enough and light enough to be carried by a single person, and powerful enough to detect a heartbeat under 30 feet of rubble. Assuming the federal government contracts with a manufacturer in a timely manner, first responders at the local and state level should be able to buy FINDERs starting in spring 2014 for about $10,000 each. "People have done this for a while," Lux says of radar technology that can detect heartbeats and breathing. "There are products that look for sleep apnea in infants, and there’s been people who have built laboratory systems that can detect heartbeats but have to be moved into the field for an experiment." The difference between previous life-detecting radar technology and FINDER is like the difference between the first super computer and an iPhone: ease of use.
<urn:uuid:8cec2d38-552b-416f-8588-156a0e3ed59c>
CC-MAIN-2017-09
http://www.nextgov.com/defense/2013/09/nasa-machine-can-detect-human-heartbeat-under-30-feet-rubble/70482/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170624.14/warc/CC-MAIN-20170219104610-00022-ip-10-171-10-108.ec2.internal.warc.gz
en
0.965922
251
2.875
3
As we know, the fiber optic connector is an important fiber optic component used to link two fiber optic lines together. Besides the connector, there is another item, the fiber optic adapter, whose panels connect multiple fiber optic lines. Specifically, the fiber optic adapter is a small device used to terminate or link fiber optic cables or fiber optic connectors between two fiber optic lines. To achieve a smooth fiber optic connection, the adapter panel shapes and types must match the fiber optic connectors or cables. Common adapter shapes are square, rectangular, or round, in FC, LC, ST, SC, and MTRJ types. There are also single mode and multimode fiber optic adapters for single mode and multimode fiber optic connections. So when purchasing fiber optic adapters, it is essential to choose the right adapter for the fiber optic connector or cable in use. The standard or flange fiber optic adapter is a typical type used to connect the same kind of optical connector on both sides; SC, ST, LC, and MTRJ fiber optic cable adapter types are available. These adapters are comprised of two or more female connections that fiber optic cables can be plugged into. Flange fiber optic adapters typically use ceramic sleeves and fit both single mode and multimode fiber optic connections. Hybrid fiber optic adapters are another type, used to link two different kinds of fiber connectors or cable assemblies. For example, an LC to SC hybrid adapter accepts an LC connector on one side and an SC connector on the other. Hybrid fiber adapters can also be used for single mode and multimode fiber optic connections, with PC or APC sleeves, in simplex and duplex styles. Hybrid fiber adapters use high-precision ceramic sleeves because they provide reliable ferrule mating and ensure low insertion loss and return loss during the connection. This type of optical fiber adapter is compact and widely used in network environments that integrate different configurations, as well as in telecommunications networks. The bare fiber adapter has bare optical fiber on one side and an adapter (connector) on the other. It is used to link bare optical fiber cable to fiber optic equipment. The adapter side is a connector that can plug into the equipment and enable a quick and easy termination for the bare fiber. Because of this feature, bare fiber adapters are widely used in emergency situations that call for fast, temporary, or urgent fiber optic connections, and for testing bare fiber, fiber on the reel, and fiber before and after installation. SC, FC, LC, and ST bare fiber adapters are now available in the market. A single optical fiber adapter can usually hold a dozen cables, and if you splice multiple adapters together, they can support hundreds or even thousands of connections. Knowing what kind of connection you need (multimode or single mode, simplex or duplex) as well as the connector types will help you choose the right optical fiber adapter for your application.
<urn:uuid:ded29d6b-820f-41ef-9bad-d390fac2f16c>
CC-MAIN-2017-09
http://www.fs.com/blog/how-to-choose-the-fiber-optic-adapter.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171608.86/warc/CC-MAIN-20170219104611-00550-ip-10-171-10-108.ec2.internal.warc.gz
en
0.891484
591
2.53125
3
Avoiding Thin Ice The idea of driving a truck across a frozen expanse of water may sound a little iffy to people living in warmer climates. But ice bridges, which traverse rivers and lakes, and ice roads, which travel along frozen rivers, are common transportation links in the northern regions of Canada, Alaska, Europe and Russia. In Canada's far north, ice roads and bridges are often the only economical way to get materials to and from remote towns and mining operations. Canada's Government of the North West Territories and contractors actually build and maintain ice bridges and roads throughout the winter season until the thaw comes in April, said Peter Dyck, fleet facilities officer of the Department of Highways. The bridges are built in layers, he said, and the Department of Highways and contractors use a ground-penetrating radar (GPR) system to track the thickness of the ice. The GPR system is pulled behind a snowmobile or other small vehicle. The system combines GPR data with GPS data so government officials can get a precise correlation between ice depth and physical location of the bridge. The data also can be fed into a GIS software package to create color-coded maps that display weak spots. Road repair and maintenance crews use the maps to target areas that need attention, Dyck said, and officials scan the ice for potentially fatal faults. "There's a certain panic to move loads across these roads before we close them in mid-April. In previous years, we've had traffic jams as a result of people showing up as late as midnight on the last day," he said. The department estimates that more than 4,000 heavy loads crossed ice bridges between January and March last year. Eyes on Emissions For the past five years, the Nevada Department of Motor Vehicles' emission control program has monitored vehicle emissions in the Las Vegas metropolitan area under a mandate of the EPA. Nevada has used remote-sensing technology -- roadside cameras and emissions-monitoring equipment -- to track the tailpipes of approximately 20,000 autos in the Las Vegas area, said Lloyd Nelson, program manager. Every year, some public grumbling accompanies newspaper stories about the testing, Nelson said. But this year, the grumbling is decidedly louder, focusing on the roadside cameras. In April, state Sen. Mark James, R-Las Vegas, told the Las Vegas Review Journal that using the roadside cameras could violate a state law banning the use of cameras for traffic enforcement -- a law that James co-authored. The law prohibits the use of roadside cameras for traffic enforcement unless the camera is held by a law enforcement officer or mounted on a law enforcement facility or vehicle. But Nelson said the cameras are essential because the DMV needs to gather license plate numbers to get information on vehicles' year of manufacture. "The remote sensing is being used to evaluate [our] emissions program's performance, general research, evaluating the fleet in the area [and] evaluating certain vehicles that are high emitters. That's the focus that the DMV has taken over the last five years," Nelson said. James contends that using unmanned cameras to gather information that ultimately could result in a notification of suspended registration for failure to pass emissions tests is ultimately an enforcement action and doesn't comply with state law. 
Counties Quake at Cable Revenue Shortfall In March the Federal Communications Commission reclassified cable Internet connections as an information service instead of a cable service, and fallout from that ruling already is hitting Maryland counties. Comcast Cable told several counties they would no longer receive cable-modem franchise fees from the company because the FCC decision means the company is no longer obligated to collect the money. Counties say that could cost them a sizable chunk of change. For example, Baltimore County, Md., officials said they could lose $830,000 next year if Comcast stops collecting cable-modem franchise fees. "The definition we created [in the franchise agreement] was that the county would receive a percentage of the gross revenue derived, in essence, from the wire," said Kevin Kamenetz, an eight-year member of the Baltimore County Council and the lead negotiator on cable issues for the county. "Any sources of income that our local Comcast entity receives as gross revenue derived from the transmission over the county rights of way would be subject to our franchise fee." Officials said the county told Comcast that its decision to cease collecting the fee is premature, given that the FCC's ruling isn't final and that if the FCC does reverse its decision, the county is due a refund of fees that haven't been paid. Several counties, through the state's association of counties, will lobby the FCC to change its decision about the classification of cable Internet services. "Congress has been pretty clear that they want to take any negotiating leverage from the local jurisdictions in the guise of free market competition," Kamenetz said. "Obviously, this may be a position that will be resolved in the courts or by Congress." State Commission Addresses DSL Regulation As broadband Internet connectivity becomes more commonplace in homes, public utilities' commissions could well play a larger role in regulating broadband providers. The California Public Utilities Commission waded into the broadband fray in March, ruling that CPUC has jurisdiction over quality of service issues, marketing of broadband services and business practices of providers. "State commissions are the place that people go to when they have complaints about the quality of their telephone service, and they don't differentiate DSL from their regular voice-grade telephone service," said Tom Long, adviser to CPUC President Loretta Lynch. The decision stems from a formal complaint filed by smaller DSL providers against Pacific Bell, part of telecommunications giant SBC, about issues related to Pacific Bell's DSL service. "The issue for us, that was raised by the motion to dismiss by SBC, was whether federal actions or federal law had preempted the ability and right of the state commission to address the claims," Long said. "The answer is no. ISPs have raised claims that come under state law. Nothing under federal law says that we're the wrong place to address those -- the claims related to service quality, discrimination of service." SBC had argued that the FCC alone has jurisdiction over such issues because the services are provided under tariffs that the FCC oversees, and, based on that, states don't have regulatory authority over that service. Long noted the decision is not final; a full administrative hearing before the commission is scheduled and a final ruling should come in about six months. 
He said approximately 15 states have delved into some form of regulation for DSL providers, and about seven states have reached similar conclusions to the CPUC's. "As DSL is becoming a more important service around the country, commissions around the country are facing the same kinds of claims and cases," he said. "These issues are getting sorted out, and, in California, we're just starting to see formal complaints filed by parties."
<urn:uuid:5fe880f1-2642-4c21-8a44-b84c5c75819b>
CC-MAIN-2017-09
http://www.govtech.com/public-safety/99403654.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171608.86/warc/CC-MAIN-20170219104611-00550-ip-10-171-10-108.ec2.internal.warc.gz
en
0.965299
1,415
2.984375
3
Everything You Wanted to Know About Blockchain Potential Impacts of Blockchains Reducing paperwork in the transaction pipeline reduces processing time with huge volumes of transactions. As long as transactions can be validated automatically, Blockchains can be used to process them. This means that Blockchains will give rise to increased automation with an internet of agents and smart peers who will analyze and approve marketplace actions. This agent-managed, peer-to-peer automation will make it possible to scale significantly but requires planning. As with any automation, clear business rules are needed, but the potential to reduce paperwork and time to market is significant. Using a secure hashed history provides better security and a heightened ability to audit and validate transactions. This makes it much easier for the business to pass audits. Blockchains also have the potential to eliminate the need for bank clearing houses, as each bank could have its own copy of the ledger, which would allow them to automatically approve transactions. This is because Blockchains plus business logic can be used not only to validate transactions but also to provide a tamperproof history of the transaction record. Challenges Around Blockchains Implementing Blockchains has several associated challenges. The two most obvious are (1) the storage required as the chains grow and (2) the time needed for synchronization and mining as the chains and networks of nodes grow. Additionally, participants need to agree on a common network protocol and technology stack as well as a consensus mechanism. Businesses will also need to change the way they perform some functions. As with any form of automation, processes need to be repeatable so that they can be programmed. For example, smart contracts need to be approved from a legal perspective so that the business can be sure they can be validated and honored. In October 2015, Docusign and Visa showcased a Blockchain application that allows a person to enter a car, sign all purchase or lease documents, and pay for the car electronically, within minutes and without leaving the car (bit.ly/2d9xVUa). This will provide a much better customer experience and will streamline the purchase process. Banks and credit card companies, like Visa and MasterCard, are also experimenting with Blockchain as a way to safely move money between banks and between banks and businesses. Everledger is using Blockchain to track diamonds from the mine to consumers as a way to combat insurance fraud as well as avoid conflict diamonds. According to one Wired article (bit.ly/2fbJXhG), over 980,000 diamonds have been registered since 2015 and the company plans to expand into the wine and fine art market. Blockchains have the potential to be a disruptive technology that can streamline tasks and processes in a secure and auditable way. The ability to have a distributed ledger of transactions also provides better reliability, as there is not a single point of failure. Blockchains also open the door to the use of smart contracts, which can be used in the financial services, public sector and healthcare services areas. Bitcoin has been around since 2008 and is based on Blockchain, but Blockchain can provide much more than support for Bitcoin, which is why companies are now starting to seriously consider how they can take advantage of it to gain competitive advantage and streamline business processes. 
Moving forward in this internet of things world, where autonomous vehicles are becoming a reality, I expect to see Blockchains being used to monitor, record and trace ownership of cars. I also expect to see Blockchains heavily used in fraud prevention and intellectual property protection services. I highly recommend evaluating Blockchains to anyone involved in industries where authenticity is critical.
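To illustrate why a hashed history is tamper-evident, here is a deliberately tiny sketch of a hash-chained ledger in C. It illustrates the chaining idea only and is not how any production Blockchain works: real systems use a cryptographic hash such as SHA-256, digital signatures and a distributed consensus protocol, whereas this toy uses the non-cryptographic FNV-1a hash and a fixed-size in-memory array purely to keep the example self-contained.

#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define MAX_BLOCKS 16

typedef struct {
    char     payload[64];   /* the transaction text                  */
    uint64_t prev_hash;     /* hash of the previous block            */
    uint64_t hash;          /* hash of this payload chained to prev  */
} Block;

/* FNV-1a, seeded with the previous block's hash so the chain links up */
static uint64_t chained_hash(const char *payload, uint64_t prev)
{
    uint64_t h = 1469598103934665603ULL ^ prev;   /* offset basis mixed with prev */
    for (const unsigned char *p = (const unsigned char *)payload; *p; p++) {
        h ^= *p;
        h *= 1099511628211ULL;                    /* FNV prime */
    }
    return h;
}

static void append_block(Block *chain, int n, const char *payload)
{
    Block *b = &chain[n];
    snprintf(b->payload, sizeof b->payload, "%s", payload);
    b->prev_hash = (n == 0) ? 0 : chain[n - 1].hash;
    b->hash = chained_hash(b->payload, b->prev_hash);
}

/* returns the index of the first inconsistent block, or -1 if the chain checks out */
static int verify_chain(const Block *chain, int n)
{
    for (int i = 0; i < n; i++) {
        uint64_t expect_prev = (i == 0) ? 0 : chain[i - 1].hash;
        if (chain[i].prev_hash != expect_prev ||
            chain[i].hash != chained_hash(chain[i].payload, expect_prev))
            return i;
    }
    return -1;
}

int main(void)
{
    Block ledger[MAX_BLOCKS];
    append_block(ledger, 0, "Alice pays Bob 10");
    append_block(ledger, 1, "Bob pays Carol 4");
    append_block(ledger, 2, "Carol pays Dave 1");

    printf("first bad block before tampering: %d\n", verify_chain(ledger, 3));

    strcpy(ledger[1].payload, "Bob pays Carol 400");   /* rewrite history */
    printf("first bad block after tampering:  %d\n", verify_chain(ledger, 3));
    return 0;
}

Because each block's hash covers the previous block's hash, changing any historical record invalidates every block after it, which is the property that makes this kind of ledger easy to audit.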
<urn:uuid:a0596f35-70ee-43b5-9967-22b088db9211>
CC-MAIN-2017-09
http://ibmsystemsmag.com/aix/trends/whatsnew/blockchain/?page=2
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170651.78/warc/CC-MAIN-20170219104610-00370-ip-10-171-10-108.ec2.internal.warc.gz
en
0.951901
768
2.5625
3
All murderers have an important and obvious thing in common: They have deliberately taken at least one human life. But researchers are discovering vast differences between the minds of impulsive killers and premeditated murderers. "Impulsive murderers were much more mentally impaired, particularly cognitively impaired, in terms of both their intelligence and other cognitive functions," said Robert Hanlon, associate professor of clinical psychiatry and clinical neurology at Northwestern University Feinberg School of Medicine. Hanlon is the lead author of a study published in the online journal Criminal Justice and Behavior. The research team examined neuropsychological and intelligence differences between impulse killers and those who murder as the result of a premeditated plan. Among the findings:
* Compared to impulsive murderers, premeditated murderers are almost twice as likely to have a history of mood disorders or psychotic disorders -- 61% versus 34%.
* Compared to predatory murderers, impulsive murderers are more likely to be developmentally disabled and have cognitive and intellectual impairments -- 59% versus 36%.
* Nearly all of the impulsive murderers have a history of alcohol or drug abuse and/or were intoxicated at the time of the crime -- 93% versus 76% of those who strategized about their crimes.
What I find interesting in the data above is that premeditated murderers are more likely to have mood and/or psychotic disorders. I would have expected impulsive murderers to be more burdened by mood and psychotic issues. Hanlon's team studied 77 murderers from prisons in Illinois and Missouri, administering tests for intelligence, memory, attention and other neuropsychological factors. By studying the minds of murderers, Hanlon says, "We may be able to increase our rates of prevention and also assist the courts, particularly helping judges and juries be more informed about the minds and the mental abnormalities of the people who commit these violent crimes."
<urn:uuid:530edb2c-cf2a-4506-aac3-8e1edeeb869f>
CC-MAIN-2017-09
http://www.itworld.com/article/2706637/hardware/understanding-the-minds-of-murderers.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170925.44/warc/CC-MAIN-20170219104610-00546-ip-10-171-10-108.ec2.internal.warc.gz
en
0.952244
376
2.625
3
The latest tools from Apple have the potential to drive the educational digital shift and transform classroom practices. But, with any advancement in education technology comes the concerns of misuse or propagating inferior teaching habits. When technology like Apple’s Classroom app can improve the way teachers teach and students learn, these concerns should be mitigated rather than used as an excuse to not implement certain technology advantages. Here are a few benefits to share with the skeptics out there and provide more freedom for teachers. Apple’s Classroom app comes with the ability to view a student’s screen while in the classroom. This functionality allows teachers to be mobile while still being able to check in on students’ progress. Untethering teachers from their desk, whiteboard, or podium enables them to meet students' learning needs. Teachers are free to move about the room working one-on-one or with small groups of students. With screen view, the possibilities are endless In addition to increased mobility, Classroom app comes with a variety of features that promote positive and effective teaching practices: - Real-time checks for understanding. Acting as a student response system, teachers can see student progress, notes, or answers to questions displayed on their iPads in real time. Instead of waiting until test day, teachers can check for understanding multiple times throughout a lesson to ensure students are on track. - Academic achievement for every student. Seeing the progress of individual students through screen view helps teachers recognize which students are progressing adequately and which students may need more assistance. Being able to identify which students may fall behind earlier in a lesson increases the likelihood that they’ll get the help they need. - More student-to-student engagement. With the ability to AirPlay screens, teachers who observe students’ screens have the ability to recognize opportunities where student work can be spontaneously shared with the class; setting students up to be better, lifelong contributors and collaborators. - Less interruptions or conflict when students get off task. If a teacher suspects a student is off task, they can quickly and unobtrusively check in and pause their screen if necessary, thereby reducing escalation, frustration, and conflict that could arise from a negative encounter. And, this streamlined interaction ultimately leads to more active learning and a better experience for students and teachers. Fear, uncertainty, and doubt should not drive decisions. Identify concerns and risks and put plans in place to mitigate them, especially when tools enable a more engaged environment that supports student learning. While these tools bring a change to the classroom environment, they support a teacher’s need to check for understanding and ensure students are progressing as expected—all while minimizing interruptions and distractions. While the fear of new technology or the unknown may come into play, schools and teachers should consider how the benefits (creating a more engaged environment for students) far outweigh any uncertainty.
<urn:uuid:26c0d734-6a22-44a7-9396-e7c43062b223>
CC-MAIN-2017-09
https://www.jamf.com/blog/dont-fear-the-screen-how-screen-sharing-benefits-teachers/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171171.24/warc/CC-MAIN-20170219104611-00070-ip-10-171-10-108.ec2.internal.warc.gz
en
0.940945
584
2.984375
3
How Hadoop Works By David F. Carr | Posted 2007-08-20 Initiative for distributed data processing may give the No. 2 search service some of the "geek cred" it's been lacking. The Hadoop runtime environment takes into account the fact that when computing jobs are spread across hundreds or thousands of relatively cheap computers, some of those computers are likely to fail in mid-task. So one of the main things Hadoop tries to automate is the process for detecting and correcting for those failures. A master server within the grid of computers tracks the handoffs of tasks from one computer to another and reassigns tasks, if necessary, when any one of those computers locks up or fails. The same task can also be assigned to multiple computers, with the one that finishes first contributing to the final result (while the computations produced by the laggards get thrown away). This technique turns out to be a good match for massive data analysis challenges like producing an index of the entire Web. So far, at least, this style of distributed computing is not as central to Yahoo's day-to-day operations as it is said to be at Google. For example, Hadoop has not been integrated into the process for indexing the Web crawl data that feeds the Yahoo search engine—although "that would be the idea" in the long run, Cutting says. However, Yahoo is analyzing that same Web crawl data and other log files with Hadoop for other purposes, such as market research and product planning. Where Hadoop comes into play is for ad-hoc analysis of data—answering a question that wasn't necessarily anticipated when the data gathering system was designed. For example, instead of looking for keywords and links, a market researcher might want to comb through the Web crawl data to see how many sites include a Flickr "badge"—the snippet of code used to display thumbnails of recent images posted to the photo sharing service. From its first experiments with 20-node clusters, Yahoo has tested the system with as many as 2,000 computers working in tandem. Overall, Yahoo has about 10,000 computers running Hadoop, and the largest cluster in production use is 1,600 machines. "We're confident at this point that we can get fairly linear scaling to several thousand nodes," Baldeschwieler says. "We ran about 10,000 jobs last week. Now, a good number of those come from a small group of people who run a job every minute. But we do have several hundred users." Although Yahoo had previously created its own systems for distributing work across a grid of computers for specific applications, Hadoop has given Yahoo a generally useful framework for this type of computing, Baldeschwieler says. And while there is nothing simple about running these large grids, Hadoop helps simplify some of the hardest problems. By itself, Hadoop does nothing to enhance Yahoo's reputation as a technology innovator, since by definition this project is focused on replicating techniques pioneered at Google. But Cutting says that's beside the point. "What open source tends to be most useful for is giving us commodity systems, as opposed to special sauce systems," he says. "And besides, I'm sure we're doing it differently."
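As a rough illustration of the "first finisher wins" idea described above, here is a small C sketch using POSIX threads. It is not Hadoop code and makes no attempt to model a real cluster: Hadoop distributes the duplicate attempts across machines and can kill the stragglers, while this toy simply runs the same task in several threads within one process, keeps whichever result arrives first and ignores the rest.

/* build with: cc -pthread speculative.c */
#include <stdio.h>
#include <unistd.h>
#include <pthread.h>

#define ATTEMPTS 3          /* the same task is attempted three times */

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int  winner = -1;    /* which attempt supplied the result      */
static long result = 0;

static void *run_task(void *arg)
{
    int id = (int)(long)arg;

    sleep((unsigned)id + 1);      /* pretend some machines are slower (or hung) */
    long answer = 42;             /* every attempt computes the same task       */

    pthread_mutex_lock(&lock);
    if (winner < 0) {             /* first finisher wins ...                    */
        winner = id;
        result = answer;
    }                             /* ... later finishers are simply ignored     */
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t t[ATTEMPTS];
    for (long i = 0; i < ATTEMPTS; i++)
        pthread_create(&t[i], NULL, run_task, (void *)i);
    for (int i = 0; i < ATTEMPTS; i++)
        pthread_join(t[i], NULL);   /* a real system would kill the stragglers  */

    printf("took result %ld from attempt %d; discarded %d duplicate attempts\n",
           result, winner, ATTEMPTS - 1);
    return 0;
}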
<urn:uuid:e0a05dcd-9591-411d-a086-673a3681f88e>
CC-MAIN-2017-09
http://www.baselinemag.com/c/a/Projects-Enterprise-Planning/Yahoo-Challenge-to-Google-Has-Roots-in-Open-Source/2
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170404.1/warc/CC-MAIN-20170219104610-00542-ip-10-171-10-108.ec2.internal.warc.gz
en
0.954458
683
2.5625
3
Energy aims to retake supercomputing lead from China Department's next high-performance system, being built by IBM, expected to surpass the current leader - By Henry Kenyon - Feb 11, 2011 China currently holds the lead position for the world’s fastest supercomputer, but not for long. The U.S. is working on a new class of computers that will greatly outperform all of the planet’s current supercomputers. These machines themselves will pave the way for even faster computers scheduled to appear by the end of the decade. Commissioned by the Energy Department’s Argonne National Laboratory, the computer will be able to execute 10 quadrillion calculations per second, or 10 petaflops. Nicknamed Mira, the machine will be built by IBM and based on the upcoming version of the firm’s Blue Gene supercomputer architecture, called Blue Gene/Q, Computerworld reported. The supercomputer will be operational in 2012. According to Computerworld, the 10-petaflop performance will be vastly higher than today’s most powerful machine, the Tianjin National Supercomputer Center’s Tianhe-1A system, which has a peak performance of 2.67 petaflops. The added speed and computing muscle will allow Mira to conduct a variety of modeling and simulation tests that current machines cannot do. In a statement, IBM said the computer could be used in a variety of applications, such as modeling new, highly efficient batteries for electric cars or developing better climate models. Argonne officials expect that Mira will not only be the fastest computer in the world, but the most energy efficient as well. These efficiencies will be achieved by a combination of new microchip designs and very efficient water cooling. The Argonne Leadership Computing Facility (ALCF), which will house Mira, won an Environmental Sustainability (EStar) award in 2010 for the innovative and energy efficient cooling designed for its current system. Laboratory officials predict that Mira will be even more efficient. Mira is also a stepping stone in U.S. efforts to develop exascale computers — a class of machines that would be a thousand times faster than the upcoming petascale systems. Computerworld noted that by 2012, Mira will be one of three IBM systems able to operate at 10 petaflops or higher. The company is also developing a 20-petaflop machine called Sequoia for the DOE’s Lawrence Livermore National Laboratory. Another IBM-built 10 petaflop machine in production is the Blue Waters system for the National Science Foundation-funded National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign.
<urn:uuid:90d91c17-8221-4702-b79d-3b63c130055d>
CC-MAIN-2017-09
https://gcn.com/articles/2011/02/11/energy-supercomputer-to-break-performance-records.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171932.64/warc/CC-MAIN-20170219104611-00118-ip-10-171-10-108.ec2.internal.warc.gz
en
0.919184
586
2.921875
3
On Thursday, the world learned that attackers were breaking into computers using a previously undocumented security hole in Java, a program that is installed on hundreds of millions of computers worldwide. This post aims to answer some of the most frequently asked questions about the vulnerability, and to outline simple steps that users can take to protect themselves. Q: What is Java, anyway? A: Java is a programming language and computing platform that powers programs including utilities, games, and business applications. According to Java maker Oracle Corp., Java runs on more than 850 million personal computers worldwide, and on billions of devices worldwide, including mobile and TV devices. It is required by some Web sites that use it to run interactive games and applications. Q: So what is all the fuss about? A: Researchers have discovered that cybercrooks are attacking a previously unknown security hole in Java 7 that can be used to seize control over a computer if a user visits a compromised or malicious Web site. Q: Yikes. How do I protect my computer? A: The version of Java that runs on most consumer PCs includes a browser plug-in. According to researchers at Carnegie Mellon University‘s CERT, unplugging the Java plugin from the browser essentially prevents exploitation of the vulnerability. Not long ago, disconnecting Java from the browser was not straightforward, but with the release of the latest version of Java 7 — Update 10 — Oracle included a very simple method for removing Java from the browser. You can find their instructions for doing this here. Q: How do I know if I have Java installed, and if so, which version? A: The simplest way is to visit this link and click the “Do I have Java” link, just below the big red “Download Java” button. Q: I’m using Java 6. Does that mean I don’t have to worry about this? A: There have been conflicting findings on this front. The description of this bug at the National Vulnerability Database (NVD), for example, states that the vulnerability is present in Java versions going back several years, including version 4 and 5. Analysts at vulnerability research firm Immunity say the bug could impact Java 6 and possibly earlier versions. But Will Dormann, a security expert who’s been examining this flaw closely for CERT, said the NVD’s advisory is incorrect: CERT maintains that this vulnerability stems from a component that Oracle introduced with Java 7. Dormann points to a detailed technical analysis of the Java flaw by Adam Gowdiak of Security Explorations, a security research team that has alerted Java maker Oracle about a large number of flaws in Java. Gowdiak says Oracle tried to fix this particular flaw in a previous update but failed to address it completely. Either way, it’s important not to get too hung up on which versions are affected, as this could become a moving target. Also, a new zero-day flaw is discovered in Java several times a year. That’s why I’ve urged readers to either uninstall Java completely or unplug it from the browser no matter what version you’re using. Q: A site I use often requires the Java plugin to be enabled. What should I do? A: You could downgrade to Java 6, but that is not a very good solution. Oracle will stop supporting Java 6 at the end of February 2013, and will soon be transitioning Java 6 users to Java 7 anyway. If you need Java for specific Web sites, a better solution is to adopt a two-browser approach. 
If you normally browse the Web with Firefox, for example, consider disabling the Java plugin in Firefox, and then using an alternative browser (Chrome, IE9, Safari, etc.) with Java enabled to browse only the site(s) that require(s) it. Q: I am using a Mac, so I should be okay, right? A: Not exactly. Experts have found that this flaw in Java 7 can be exploited to foist malware on Mac and Linux systems, in addition to Microsoft Windows machines. Java is made to run programs across multiple platforms, which makes it especially dangerous when new flaws in it are discovered. For instance, the Flashback worm that infected more than 600,000 Macs wiggled into OS X systems via a Java flaw. Oracle’s instructions include advice on how to unplug Java from Safari. I should note that Apple has not provided a version of Java for OS X beyond 6, but users can still download and install Java 7 on Mac systems. However, it appears that in response to this threat, Apple has taken steps to block Java from running on OS X systems. Q: I don’t browse random sites or visit dodgy porn sites, so I shouldn’t have to worry about this, correct? A: Wrong. This vulnerability is mainly being exploited by exploit packs, which are crimeware tools made to be stitched into Web sites so that when visitors come to the site with vulnerable/outdated browser plugins (like this Java bug), the site can silently install malware on the visitor’s PC. Exploit packs can be just as easily stitched into porn sites as they can be inserted into legitimate, hacked Web sites. All it takes is for the attackers to be able to insert one line of code into a compromised Web site. Q: I’ve read in several places that this is the first time that the U.S. government has urged computer users to remove or wholesale avoid using a particular piece of software because of a widespread threat. Is this true? A: Not really. During previous high-alert situations, CERT has advised Windows users to avoid using Internet Explorer. In this case, CERT is not really recommending that users uninstall Java: just that users unplug Java from their Web browser. Q: I’m pretty sure that my Windows PC has Java installed, but I can’t seem to locate the Java Control Panel from the Windows Start Menu or Windows Control Panel. What gives? A: According to CERT’s Dormann, due to what appears to potentially be a bug in the Java installer, the Java Control Panel applet may be missing on some Windows systems. In such cases, the Java Control Panel applet may be launched by finding and executing javacpl.exe manually. This file is likely to be found in C:\Program Files\Java\jre7\bin or C:\Program Files (x86)\Java\jre7\bin. Q: I can’t remember the last time I used Java, and it doesn’t look like I even need this program anymore. Should I keep it? A: Java is not as widely used as it once was, and most users probably can get by without having the program installed at all. I have long recommended that users remove Java unless they have a specific use for it. If you discover later that you really do need Java, it is trivial and free to reinstall it. Q: This is all well and good advice for consumers, but I manage many PCs in a business environment. Is there a way to deploy Java but keep the plugin disconnected from the browser? A: CERT advises that system administrators wishing to deploy Java 7 Update 10 or later with the “Enable Java content in the browser” feature disabled can invoke the Java installer with the WEB_JAVA=0 command-line option. 
More details are available in the Java documentation. Get your personal as well as office laptops encrypted by Alertsec. Unencrypted laptops present a major risk of data loss. 80% of information theft is due to lost or stolen laptops and other equipment. About 50% of network intrusions are performed with credentials gathered from lost or stolen devices. The penalties for a data breach are severe not only in terms of the monetary fines imposed on the organization, but also in terms of the potential loss of trust from customers and suppliers. Encryption software greatly enhances the security of your organization's data, as the information is not compromised if a laptop is lost or stolen. Alertsec Xpress is the full disk encryption service that delivers a mobile data protection system for all information stored on laptops used throughout your organization.
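Relating back to the earlier question about checking whether Java is present: beyond visiting java.com, you can also check locally from the command line. The short sketch below is an illustrative, hedged example (not from the original post) that shells out to the standard "java -version" command, which prints its version banner to stderr.

# Hedged sketch (not from the original article): check for a locally installed
# Java runtime by running "java -version", which prints its banner to stderr.
import subprocess

def installed_java_version():
    try:
        result = subprocess.run(["java", "-version"], capture_output=True, text=True)
    except FileNotFoundError:
        return None  # no "java" binary on the PATH
    lines = (result.stderr or result.stdout).splitlines()
    return lines[0] if lines else "unknown version"

if __name__ == "__main__":
    version = installed_java_version()
    print("Java not found." if version is None else f"Found: {version}")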
<urn:uuid:63abd15e-bd52-4bdc-9d80-a60dbdb21c0f>
CC-MAIN-2017-09
http://blog.alertsec.com/2013/02/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172649.58/warc/CC-MAIN-20170219104612-00642-ip-10-171-10-108.ec2.internal.warc.gz
en
0.92723
1,717
2.796875
3
Global cloud computing traffic is expected to grow 12-fold from 130 exabytes to reach a total of 1.6 zettabytes annually by 2015 — a 66% compound annual growth rate — according to Cisco's Global Cloud Index. 1.6 zettabytes is approximately equivalent to 22 trillion hours of streaming music; 5 trillion hours of business Web conferencing with a webcam; 1.6 trillion hours of online high-definition (HD) video streaming. From the report: "The vast majority of the data center traffic is not caused by end users but by the data centers and clouds themselves undertaking activities that are largely non-transparent to end users — like backup and replication. By 2015, 76 percent of data center traffic will remain within the data center itself as workloads migrate between various virtual machines and background tasks take place, 17 percent of the total traffic leaves the data center to be delivered to the end user, while an additional 7 percent of total traffic is generated between data centers through activities such as cloud-bursting, data replication and updates." Cisco Global Cloud Index (2010 - 2015) Infographic Cisco YouTube Animation: How Big Will Cloud Computing be in 2015? More on Cisco's Cloud Index can be found here.
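As a quick back-of-the-envelope check of the figures above (the 130 exabyte and 1.6 zettabyte annual totals come from the article; the arithmetic is mine), the growth factor and compound annual growth rate line up with the quoted "12-fold" and roughly 66% CAGR:

# Back-of-the-envelope sketch; only the 2010 and 2015 totals come from the article.
baseline_eb = 130.0           # exabytes per year in 2010
target_zb = 1.6               # zettabytes per year in 2015
target_eb = target_zb * 1000  # 1 zettabyte = 1000 exabytes

growth_factor = target_eb / baseline_eb   # ~12.3x, i.e. "12-fold"
years = 5                                 # 2010 -> 2015
cagr = growth_factor ** (1 / years) - 1   # compound annual growth rate

print(f"Growth factor: {growth_factor:.1f}x")  # ~12.3x
print(f"CAGR: {cagr:.0%}")                     # ~65%, in line with the ~66% quoted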
<urn:uuid:5fb12c26-8085-4a90-8b10-6f3c7fe3845d>
CC-MAIN-2017-09
http://www.circleid.com/posts/20111130_cloud_computing_traffic_expected_to_grow_12_fold_by_2015/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170521.30/warc/CC-MAIN-20170219104610-00586-ip-10-171-10-108.ec2.internal.warc.gz
en
0.790241
319
2.515625
3
By Sam Grobart June 13, 2013 The age of wearable computing is upon us: There are now wristbands from Nike (NKE), clip-on devices from Fitbit, and eyewear from Google (GOOG). We’ve come this far because we’ve been able to shrink computing power from something the size of a room to a box that sat atop a desk, to a smaller box that fits in the palm of our hand, to now an even smaller box we can wear on our bodies. But they’re still boxes, more or less: Rigid devices that stick out because they don’t conform to the human shape. A startup in Cambridge, Mass., called MC10 aims to change that. The 70-person company is developing a manufacturing technology that will allow digital circuits to be embedded in fabric or flexible plastic. MC10’s approach means we will no longer “wear” technology like jewelry but have it sit unobtrusively on our skin or inside our bodies. “By embedding technology in bendable, stretchable materials, you can start to think about entirely new form factors for electronics,” says Benjamin Schlatka, a co-founder of MC10. The BioStamp is MC10’s first flexible computing prototype. It’s a collection of sensors that can be applied to the skin like a Band-Aid or, because it’s even thinner than that, a temporary tattoo. The sensors within collect data such as body temperature, heart rate, brain activity, and exposure to ultraviolet radiation. Using near field communication—a wireless technology that allows devices to share data (think E-ZPass)—the BioStamp can upload its information to a nearby smartphone for analysis. Besides being unobtrusive, a device such as the BioStamp can be worn constantly (each lasts about two weeks), which changes the nature of medical diagnosis. Until now, understanding what’s happening inside a body only happens when that body is being actively examined. Implantable sensors can provide full-time monitoring. “You want it to be happening in the background, without thinking about it,” says MC10 Chief Executive Officer David Icke, who worked in the chip and cleantech industries before joining MC10 just over four years ago. “The idea behind continuous pickup of information is you get access to health care when you need it.” This kind of constant monitoring fuels sci-fi visions of the future, when an ambulance may pull up next to you because the implanted sensors in your body are picking up the earliest indications of a heart attack. The BioStamp is expected to cost less than $10 per unit, and MC10 aims to have a commercial product in the next five years. MC10 is developing another device that will be available sooner. The Checklight measures velocity and impact to help diagnose concussions in sports. Although not flexible, it’s quite small (about the size of a camera’s memory card) and can be tucked into a skullcap and worn under any type of helmet. Checklight was developed with Reebok (ADS), who will begin marketing it later this year. “A lot of the products we try to create are transparent in their use but apparent in their effectiveness,” says Paul Litchfield, Reebok’s vice president for advanced products. “If you take these hard, plastic pieces and make them work organically with the human body, the sky’s the limit as to what they can do.” As it has with Reebok, MC10 plans to license its technology to third parties that have the scale and expertise to bring products to market. “We think of ourselves as a latter-day Intel (INTC),” says Icke. 
“We want to power the next generation of wearable electronics, no matter where they come from.” Another version of the technology in the BioStamp is used in a catheter being developed with Medtronic (MDT), a maker of medical devices that’s an investor in MC10. The catheter can be inserted through a vein in the leg and run up into a patient’s heart, inflated like a balloon to expose its sensor-laden surface, and then used to collect electrical data about the heart’s rhythm, which can be useful to electrophysiologists when diagnosing rare occurrences of tachycardia. Tests on humans are expected to start within a year. “Today’s catheters don’t have the kind of electronics that we take for granted in many of our consumer devices,” says Schlatka. “By adding that intelligence, doctors can make better decisions about how they are performing the procedure.” The applications go beyond health care. At AllThingsD.com’s D11 tech conference last month, Regina Dugan, senior vice president for advanced technology and products at Motorola Mobility (GOOG), demonstrated how MC10’s BioStamp could be used to verify a person’s identity to a computer or mobile device. Users now rely on key chain fobs or credit-card-size displays that authenticate a user’s access. But wearing a flexible microprocessor that contains an encrypted code could put that function directly on your skin. “Electronics are boxy and rigid,” says Dugan. “Humans are curvy and soft.” The bottom line: Startup MC10 miniaturizes medical diagnostic devices and has enlisted big-name partners in the medical and sports world. Grobart is a senior writer for Bloomberg Businessweek. Follow him on Twitter @samgrobart.
<urn:uuid:9fcff713-b657-45e6-bdcc-7c002b079944>
CC-MAIN-2017-09
http://www.northbridge.com/mc10s-biostamp-new-frontier-medical-diagnostics
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170521.30/warc/CC-MAIN-20170219104610-00586-ip-10-171-10-108.ec2.internal.warc.gz
en
0.93792
1,200
2.875
3
Using bioinformatics tools, cancer researchers can now search for common protein markers shared among afflicted patients. The process involves tracking thousands of proteins over the course of the disease and identifying the ones that map to patient survival. In a recent study, researchers believe they have a better chance of finding these relationships with a program inspired by Google’s famous PageRank algorithm. Last week, the Txchnologist covered the scientists’ unconventional method. Christof Winter, one of the study’s researchers and a computational biologist at Lund University in Sweden, explained how cells respond to protein and gene interactions: “A cell integrates many different inputs from the inside and outside and makes decisions based on them — grow, divide, migrate, differentiate, and so on. These decisions are mostly the result of proteins talking to each other, and if we want to predict what the cell does next, we have to, besides measuring the protein levels, take into account and better understand these networks of interactions.” The researchers attempted to create their own algorithm before realizing that Google’s PageRank algorithm solved essentially the same problem for the web. They modified the algorithm somewhat, and came up with NetRank, an algorithm that analyzes the relationship between proteins and gene expression. Initially they used it to study pancreatic cancer. They found that out of 20,000 proteins they looked at, seven seemed to correlate most strongly with how aggressive the cancer became. That information could then be used as criteria for patient treatment. Significantly, the researchers found that NetRank was able to produce a prognosis that was 6 to 9 percent more accurate than conventional medical practices. Unfortunately, the program only applies to patients already diagnosed with the disease and does not allow for early detection. And more testing is required before the software can be used in real-world clinical environments. According to their paper, published in the PLoS Computational Biology journal, the scientists view the application as a tool for medical professionals to improve individualized care. The researchers conclude that the technology can be used in a clinical setting to help decide if a cancer patient should receive chemotherapy. “Reliable prediction of survival and response to therapy based on molecular markers bears a great potential to improve and personalize patient therapies in the future,” they write. Beyond predicting patient outcomes, information gleaned from NetRank could assist in the development of new cancer-fighting drugs. For example, the program identified a protein named STAT3, believed to shorten the survival rate of a patient. With the protein identified, pharmaceutical manufacturers can begin to develop and test STAT3-inhibiting drugs, which might slow or reverse the cancer’s progression.
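To make the PageRank analogy concrete, here is a generic power-iteration PageRank sketch over a tiny, made-up protein-interaction graph. This is not the authors' NetRank code (their modifications and data are not described here in enough detail to reproduce); it only illustrates how rank flows through a network of interactions, and the node names and edges are hypothetical.

# Generic PageRank by power iteration on a toy, undirected protein-interaction
# graph. Illustrative only -- not the NetRank implementation from the paper.
damping = 0.85
graph = {                      # hypothetical interaction partners
    "STAT3": ["JAK2", "EGFR"],
    "JAK2":  ["STAT3"],
    "EGFR":  ["STAT3", "GRB2"],
    "GRB2":  ["EGFR"],
}

rank = {p: 1.0 / len(graph) for p in graph}

for _ in range(50):            # iterate until roughly converged
    new_rank = {}
    for protein in graph:
        # Sum the rank flowing in from every neighbor that links to this protein.
        incoming = sum(
            rank[other] / len(neighbors)
            for other, neighbors in graph.items()
            if protein in neighbors
        )
        new_rank[protein] = (1 - damping) / len(graph) + damping * incoming
    rank = new_rank

for protein, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{protein}: {score:.3f}")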
<urn:uuid:a022cac1-edec-4562-b8a6-ad4e6d127945>
CC-MAIN-2017-09
https://www.hpcwire.com/2012/05/23/can_googles_page_ranking_algorithm_cure_cancer_/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171251.27/warc/CC-MAIN-20170219104611-00462-ip-10-171-10-108.ec2.internal.warc.gz
en
0.939105
547
3.359375
3
During the Great Depression, President Franklin Delano Roosevelt conceived of Social Security as a program for senior citizens, the disabled, the unemployed, widows and orphans who lacked financial protection. However, when Roosevelt signed the Social Security Act into law in August, 1935, the document did not say how the details would play out. The task of creating and managing more than 26 million individual accounts had yet to be determined. The sheer scale of this early “Big Data” project was daunting enough; press reports labeled it as the largest bookkeeping job of all time. In addition, the seemingly unrealistic timeframes – the law dictated that the program be in place by January 1, 1937 – were equally frightening. Some experts felt the task was impossible, and recommended that Roosevelt abandon it. But the Social Security Administration stayed the course. In the summer of 1936, the agency collected proposals from various accounting equipment vendors, each suggesting their own approach to record-keeping. IBM was ready to handle the challenge because it had a proven track record in large scale government accounting projects dating back to the 1920s. The company had the systems and process knowledge necessary to ensure that the Social Security program’s policies and procedures could be quickly developed and rapidly deployed. The depth of IBM’s proposal, as well as the government’s familiarity with IBM’s skills and equipment, convinced the Agency that the company had the most viable solution, and in September 1936, IBM was awarded the contract. There was another factor. IBM’s CEO, Thomas Watson, Sr., continued to invest in research & development throughout the Depression. So when the Agency awarded IBM the contract and asked the company to invent a machine that would automatically and rapidly integrate payroll contributions into millions of individual accounts – something that was essential to the success of the program – IBM engineers were ready for the task. They developed the IBM 077 Collator, the machine that made Social Security a reality. The invention of a new machine wasn’t the only challenge facing Social Security; the logistics of the program were equally daunting. The paper records alone took up 24,000 square feet of floor space. In fact, the weight of the paper records and IBM machines was so great that no building in Washington had floors sturdy enough to hold them, so operations were set up in an old Coca-Cola bottling plant on Baltimore’s waterfront. The building was far from people friendly. It was cold in the winter, and hot in the summer. Plus, the summer heat brought with it the overpowering smells of rotting fish from the docks and spices from a local spice factory. The Social Security employees in the building also were plagued by sand fleas that lived in the sound-deadening sand barriers between floors. When the IBM collators were put into action in June 1937, there was still much work to be done before the first Social Security check would be mailed to Miss Ida May Fuller in 1940. However, there were no longer doubts that the program was possible. It was the close partnership between IBM and the Social Security Administration that created the record keeping system that made Roosevelt’s vision a reality. The partnership improved the quality of life for generations of Americans. It also catapulted IBM from a mid-sized company to the world’s leading information management provider. 
But beyond the monumental size and scope of the project, the real significance of Social Security was that it proved that public-private partnerships could roll out enormous solutions to meet grand challenges, promote economic growth and help society. Public-private partnerships aren’t easy. You need to balance different concerns and learn to work together. But when you do, these partnerships work, and they are essential for driving business and societal growth for the long term. From Social Security to IBM’s work with smarter cities around the world, public-private partnerships demonstrate that collaboration is the key to innovation. Jonathan Fanton, Ph.D., is the Franklin Delano Roosevelt Visiting Fellow at the Roosevelt House Public Policy Institute at Hunter College in New York City. Dr. Fanton previously served as President of the John D. and Catherine T. MacArthur Foundation, and as President of the New School for Social Research.
<urn:uuid:c17c655d-cc5e-4f57-95b1-aca2a272ca02>
CC-MAIN-2017-09
https://www.ibm.com/blogs/citizen-ibm/2012/06/social-security-turns-75-the-mother-of-all-big-data-projects.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171936.32/warc/CC-MAIN-20170219104611-00162-ip-10-171-10-108.ec2.internal.warc.gz
en
0.971171
868
3.703125
4
Public health officials face ongoing challenges in identifying, preparing for, and managing outbreaks of diseases and other illnesses. The Centers for Disease Control and Prevention (CDC) have released a newly developed Community Assessment for Public Health Emergency Response (CASPER) toolkit to help epidemiologists and other public health professionals collect pertinent health information during a large-scale emergency or natural disaster. During a disaster, existing infrastructure is compromised and communication and transportation systems may be inoperable. Finding a method to collect health data to detect and prevent outbreaks and to minimize health risks within the community becomes an important task for public health professionals. The CDC is encouraging local public health agencies to adopt the CASPER protocol to better prepare for and respond to future emergencies by increasing their capability to quickly establish surveillance systems during disasters. An innovative application of this protocol is being implemented by the City of Nashua, New Hampshire's Division of Public Health & Community Services. The Division is using the protocol to develop field procedures to gather community health information for a local health assessment. By utilizing this tool during non-emergencies, local health departments can practice using the toolkit, train volunteers and staff to use the toolkit and increase their ability to use this method to respond more efficiently during a disaster.
<urn:uuid:a1f136ef-4de9-4d33-8e62-2d1062897f30>
CC-MAIN-2017-09
https://www.bsminfo.com/doc/city-saves-time-and-money-completing-survey-0001
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174215.11/warc/CC-MAIN-20170219104614-00211-ip-10-171-10-108.ec2.internal.warc.gz
en
0.925413
266
3.28125
3
Ettercap Automates the Malicious Middleman
Man in the middle (MITM) attacks can be devastatingly effective, providing hackers with all kinds of confidential information and, just as seriously, giving them the opportunity to feed false information to victims. These attacks involve a hacker diverting packets which are meant to flow between a victim's computer and another machine - usually an Internet gateway - so that they flow through the attacker's computer, where they can be inspected and changed before being passed on. The easiest and most effective way to achieve a MITM attack is through an Address Resolution Protocol (ARP) spoofing attack. Recall that on an Ethernet network local IP addresses are associated with hosts' network adapter MAC addresses, and that hosts send out ARP requests to find out the MAC address that any arbitrary local IP address has been assigned to. These requests take the form "who-has 192.168.1.150, tell 192.168.1.1". Since ARP responses, in the form "192.168.1.150 is 00:11:22:33:44:55", do not get authenticated and will be acted on even if there has not previously been an ARP request, it is possible to send a spoof ARP response telling the victim's machine that the gateway IP address 192.168.1.1 is associated with the MAC address of the attacker's machine, and to send one to the gateway effectively informing it that all traffic for the victim's IP address should be sent to the adapter with the attacker's MAC address, not the victim's. All that's then needed is for the attacker to forward the packets on to their intended destinations, and the victim will be none the wiser - any delay due to this diversion is usually far too small to be detectable. Let's think about the implications of a MITM attack. Any packets sent from the victim's machine to the gateway go through the attacker's machine, where they can be inspected. The sorts of packets the attacker may be after include POP, SMTP and FTP logins and passwords, or any other type of data that is not encrypted. It's possible to carry out an ARP poisoning MITM attack manually using Wireshark (Ethereal) to intercept and edit ARP requests, but actually it's very easy for anyone who can get on to your network (using Aircrack-ng to get on wirelessly, for example) to carry out such an attack using automated open source tools. The best known one of these is called Ettercap.
Taking Ettercap for a Spin
Let's take a look at how Ettercap works. You can install the GTK GUI version of Ettercap from Synaptic in Ubuntu, and you'll also find it pre-installed in BackTrack 2 and 3 beta. To start Ettercap, open a console window and, as root, type "ettercap -G" (you can do this in Ubuntu using the sudo command). The rather empty Ettercap GUI will start, ready for you to begin. The first step is to click the "Sniff" menu, choose "Unified sniffing", and select the network interface you want from the dropdown box - probably eth0 for a wired connection, or something like wifi0, wlan0 or ath0 for a wireless one. Next it's time to see what other hosts are on the network, by clicking the "Hosts" menu, and choosing "Scan for hosts". You may want to do this twice, to ensure that no hosts are missed, before displaying Ettercap's findings by clicking the "Hosts" menu again and choosing "Hosts list". To choose a victim machine, click on its IP address, and click on "Add to Target 1." Then select the Internet gateway, and click "Add to Target 2."
Any packets flowing between Target 1 and Target 2 will now travel via Ettercap, once the attack is launched by choosing the "MITM" menu and choosing "Arp poisoning," selecting "Sniff remote connections," and finally clicking "Start sniffing" from the "Start" menu. To see the power of an attack like this, simply check e-mail from the victim machine. In the bottom half of Ettercap, you'll immediately see the user name and password that's been used on the victim machine, along with the IP address of the server. Everything you need, in fact, to snoop on the victim's e-mail. Connect to an FTP site and the same thing happens. (This is not the case when you check using a secure connection, however.) Things get more insidious when, instead of just snooping on passing traffic, we change the packets that are requested by the victim. One way to do this is through DNS spoofing. Using this attack, we can intercept DNS requests, and change the IP addresses returned for certain domains. A victim's browser sending a DNS request to resolve the domain "bigbank.com" could be given the IP address of a phishing site that looks identical to the bank's real site, and since the victim has actually typed "www.bigbank.com" into his browser, he or she is unlikely to suspect that anything is amiss. Ettercap has a ready-made module for DNS spoofing, accessed from the "Plugin" menu. But the first step is to open the etter.dns file located in /usr/share/ettercap and edit it to point the domains you want to divert to the IP addresses you want to divert them to. You can open the file as root in a text editor, make the changes you want, e.g.:
bigbank.com A 192.168.1.100
*.bigbank.com A 192.168.1.100
www.bigbank.com PTR 192.168.1.100
and save the file again. (Note you'll have to be root or use sudo to do this.) Now click the Plugins menu, choose "Manage plugins," and double click on "dns_spoof." Scarily, that's it! Try to go anywhere on the domain you spoofed (bbc.co.uk in the illustration) and your browser will take you to the IP address you specified in etter.dns. Note that the address that appears in the browser's address bar is the bbc address, even though the page displayed is completely different (in the illustration, www.enterprisenetworkingplanet.com). If the change doesn't work immediately then wait a few minutes and try again - sometimes the address will already have been cached and you'll need to wait till it expires.
Bundled With Badness
There are plenty of other harmful Ettercap plugins bundled with the software which do everything from launching a denial of service attack against a particular IP address to reporting on the URLs visited by the victim's browser. Ettercap can also filter packets and change individual words - you can set up filters to scan every web page requested by the victim, replacing a particular telephone number with the hacker's own, for example. By now it should be pretty clear that, thanks to tools like Ettercap, anyone accessing your network can wreak havoc with your users, stealing passwords and altering the information they receive over the Internet. So how do you defend against MITM attacks launched with Ettercap? The answer is that it is very difficult - preventing unauthorized network access is easier than preventing a hacker with network access from carrying out such attacks. But the good news is that Ettercap offers some lines of defence against itself.
One useful Ettercap plugin bundled with the application is arp_cop, which is designed to report suspicious ARP activity by passively monitoring ARP requests and replies. It can report ARP poisoning attempts, or simple IP conflicts or IP changes. Changes in IP-MAC address associations may be an indication that ARP spoofing is going on. There are also other open source tools (for example arpwatch) which monitor ARP requests and e-mail administrators when anything fishy is going on. Another useful Ettercap plugin is find_ettercap. This plugin tries to identify Ettercap packets traversing the LAN, and so can be used to detect if an intruder is using Ettercap. However, since it only looks for certain packets, it cannot always detect when Ettercap is being used. Search_promisc can also be a useful plug-in to try - it attempts to discover if any host on the network is sniffing the network in promiscuous mode, something ordinary hosts would not normally need to do. Another possible way of defending against ARP spoofing is to configure your hosts with static ARP tables, which can't be changed by spoofed ARP replies. In Windows, from a command prompt, you can do this by typing something like: arp -s 192.168.0.1 11-22-33-44-55-66 or in Linux using arp -s 192.168.0.1 11:22:33:44:55:66 - changing the IP and MAC addresses as appropriate. The command for your router will depend on your router manufacturer. Unfortunately, static ARP tables are not very convenient for an administrator to set up, and cause problems with laptops which are moved from one network to another. What Ettercap demonstrates quite clearly is that there are open source tools out there which can be used as formidable weapons to attack your network. But by becoming familiar with them you can use them defensively to prevent hackers causing havoc on your network. Hopefully this article has demonstrated how easily an ARP spoofing attack can be carried out by anyone who gains access to your network and how devastating such attacks can be. Preventing unauthorized access to your network is the best form of defense, but knowing how to detect an ARP spoofing attack could end up saving the company.
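In the same defensive spirit as arp_cop and arpwatch, the sketch below passively watches ARP replies and warns when an IP address suddenly maps to a different MAC address - a common sign of ARP poisoning. It is a minimal illustration using the Scapy library (which must be installed and typically needs root privileges to sniff), not a replacement for a hardened monitoring tool.

# Passive ARP-spoofing detector: warn when an IP -> MAC mapping changes.
# Requires scapy (pip install scapy) and root privileges to sniff traffic.
from scapy.all import ARP, sniff

seen = {}  # IP address -> MAC address last seen for it

def check_arp(packet):
    if not packet.haslayer(ARP) or packet[ARP].op != 2:  # 2 = ARP reply ("is-at")
        return
    ip, mac = packet[ARP].psrc, packet[ARP].hwsrc
    if ip in seen and seen[ip] != mac:
        print(f"WARNING: {ip} changed from {seen[ip]} to {mac} - possible ARP spoofing")
    seen[ip] = mac

sniff(filter="arp", prn=check_arp, store=0)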
<urn:uuid:138f29b2-c0b5-4f8d-8bd0-fcf25dd026d7>
CC-MAIN-2017-09
http://www.enterprisenetworkingplanet.com/print/netos/article.php/3724916/Ettercap-Automates-the-Malicious-Middleman.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170600.29/warc/CC-MAIN-20170219104610-00631-ip-10-171-10-108.ec2.internal.warc.gz
en
0.935542
2,055
2.75
3
John Matherly’s Shodan, a search engine that finds Internet-connected devices, can be used for many things: gauging the impact of policies and network security efforts (e.g. patching), finding malware C&C servers, checking how a company we want to do business with is handling security, checking which devices our competitors are deploying (market research), and much more. For Matherly, Shodan is a means to measure things that couldn’t be measured before. And with the advent of the Internet of Things, the available data set will keep growing day by day. “The Internet of Things is happening. The world is becoming hyper-connected, whether we want it or not – security be damned!” Matherly pointed out to the audience at the Hack In The Box conference in Amsterdam. An Internet connection is being added to “pretty much everything,” whether it’s a good idea or not. “Who needs to Tweet from their fridge?” he wondered aloud, but admitted that sometimes an Internet connection for certain devices can be helpful. Securing the Internet of Things will be an enormous endeavor, but it has to be done. The stakes are much higher – security failures can lead to serious real-world consequences. Still, making administrators take unsecured IoT devices offline or secure them properly is difficult, as Shodan can’t really tell who their owner is (dynamic IP addresses tell you little). But, generally, manufacturers are still not that interested in security, he says. Many of the IoT devices they create are accessible over the Internet by default, often so that updates can be easily delivered and problems fixed remotely. Effectively, they open a backdoor to the device, without the users’ knowledge. Connecting to these devices is also often done via insecure means. For example, the popularity of telnet for remote logins is still high, even though it provides no traffic encryption, (usually) no authentication option, and has many vulnerabilities. Most users fail to realize that IoT devices – fridges, TVs, thermostats, cameras, billboards, and so on – now come with computers inside them, which means they will have many of the problems “regular” computers have. They see the fact that they are connected to the Internet as a great functionality, and fail to realize the dangers it brings. They do not think about the huge amount of data these computers collect: usage data, health data, and more. It’s interesting to note that users are usually not comfortable revealing some of this data to a person, but they are somehow comfortable giving it up to a computer. They also fail to realize that this data is sold and used – anonymized, to be sure, but anonymization is not foolproof, as we’re finding out – and occasionally stored in databases in the cloud without any protection, there for the taking for those who know how to find it. And even if some users are worried about their privacy, and avoid having these devices in their home or on their person, there is little they can do about IoT devices that are not theirs and surround them when they walk down the street or visit a mall – cameras, trackers, beacons. As an example of what data can be found lying around, and how easy it is to collect it, Matherly used Shodan to find license plate capture cameras all over the US. And given that many of them store these images insecurely in the cloud, he managed to create a database of over 63,000 license plates in a mere five days.
He stopped there, and notified the authorities about this problem, but found out that they knew already – they had been told about it by other researchers years ago. And nothing has changed. “IoT is still full of huge, gaping holes everywhere you look,” he concluded. Many say that this initial phase will pass, that manufacturers will stop making the most obvious mistakes (whether they do it intentionally or not), and that they will begin to consider security a priority from the very beginning of a project, but it’s hard to believe this. Luckily, we have security advocates – initiatives like BuildItSecure.ly, which tries to push IoT vendors towards security best practices and to build partnerships between them and the security community – on our side.
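As a small illustration of the kind of measurement Matherly describes, the official Shodan Python library can run a search and iterate over matching devices. The query string and result fields below are only examples; you need your own API key, and what comes back depends entirely on your account's privileges.

# Minimal Shodan search sketch using the official Python library (pip install shodan).
# The query here is only an example.
import shodan

API_KEY = "YOUR_API_KEY"  # placeholder - use your own key
api = shodan.Shodan(API_KEY)

try:
    results = api.search("port:23")          # e.g. devices exposing telnet
    print("Total results:", results["total"])
    for match in results["matches"][:10]:
        print(match["ip_str"], match.get("org"), match.get("product"))
except shodan.APIError as exc:
    print("Shodan error:", exc)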
<urn:uuid:e3eff941-80cc-4ba9-8914-e9ade1d45275>
CC-MAIN-2017-09
https://www.helpnetsecurity.com/2015/06/09/iot-is-full-of-gaping-security-holes-says-shodan-creator/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172017.60/warc/CC-MAIN-20170219104612-00207-ip-10-171-10-108.ec2.internal.warc.gz
en
0.955913
910
2.53125
3
DARPA seeks ways to rebuild space junk - By Henry Kenyon - Jan 20, 2012 The Defense Department’s research and development shop is developing a new method to get more detailed images of orbiting satellites. Better imaging would allow the DOD to select dead or deactivated spacecraft for a related program that would use robots to build new satellites in orbit from parts salvaged from deactivated craft. The primary goal of the Defense Advanced Research Projects Agency’s Galileo program is to get better and timely images of objects in geosynchronous orbit from the ground, said program manager Air Force Lt. Col. Travis Blake. But Galileo is also intended to support the agency’s Phoenix program, which aims to salvage usable antennas and other components from retired satellites. Being able to image spacecraft is key to the planning aspect of the Phoenix program, Blake said. The Phoenix program aims to save on the cost of launching new satellites when older ones have died by robotically removing and re-using space apertures and antennas from the old satellites. The program plans to develop a new class of small "satlets," or nano satellites, that could "ride along" with a commercial satellite and then be “attached to the antenna of a non-functional cooperating satellite robotically, essentially creating a new space system,” DARPA said. The main challenge Galileo faces is that using ground-based telescopes to get detailed views of objects in geosynchronous orbit (22,000 miles) would require mirrors that are too large to build or use efficiently. Instead, DARPA is working on a different imaging technique, interferometric imaging, to get detailed images. Astronomers use interferometry techniques to track and image objects in space with multiple telescopes. However, this process currently takes time and requires extensive infrastructure such as long light tubes, mirrors and other equipment that inhibits participating telescopes’ range of movement. DARPA’s goal is to replace the light tubes with flexible fiber optic cable, which would allow telescopes to move more freely on multiple axes and could significantly speed up the imaging of objects in orbit, Blake said. Besides checking out non-functional satellites, another benefit of Galileo would be to allow satellite operators to determine if components such as solar panels have deployed properly, which could significantly help in resolving any problems that occur once a vehicle is deployed, he said. DARPA currently plans to run Galileo in two phases. The first will look at proposals for advanced concepts in precision fiber optic control and mobile telescopes. Phase two will last longer and include development, fabrication and testing of systems and end with an imaging demonstration, Blake said.
<urn:uuid:667b7c81-2106-4930-a2af-38db9aed672e>
CC-MAIN-2017-09
https://gcn.com/articles/2012/01/20/darpa-to-develop-better-ways-to-view-satellites-in-orbit.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170249.75/warc/CC-MAIN-20170219104610-00151-ip-10-171-10-108.ec2.internal.warc.gz
en
0.931391
559
3.03125
3
MikroTik Traffic-Flow is a system that provides statistical information about packets which pass through the router. Besides network monitoring and accounting, system administrators can identify various problems that may occur in the network. With the help of Traffic-Flow, it is possible to analyze and optimize the overall network performance. As Traffic-Flow is compatible with Cisco NetFlow, it can be used with various utilities which are designed for Cisco's NetFlow. Traffic-Flow supports the following NetFlow formats:
- version 1 - the first version of the NetFlow data format; do not use it unless you have to
- version 5 - in addition to version 1, version 5 has the BGP AS and flow sequence number information included
- version 9 - a new format which can be extended with new fields and record types thanks to its template-style design
This section lists the configuration properties of Traffic-Flow.
interfaces (string | all; Default: all) - Names of those interfaces which will be used to gather statistics for traffic-flow. To specify more than one interface, separate them with a comma.
cache-entries (128k | 16k | 1k | 256k | 2k | ... ; Default: 4k) - Number of flows which can be in the router's memory simultaneously.
active-flow-timeout (time; Default: 30m) - Maximum life-time of a flow.
inactive-flow-timeout (time; Default: 15s) - How long to keep the flow active, if it is idle. If the connection does not see any packet within this timeout, then traffic-flow will send the packet out as a new flow. If this timeout is too small it can create a significant amount of flows and overflow the buffer.
With Traffic-Flow targets we specify those hosts which will gather the Traffic-Flow information from the router.
address (IP:port; Default: ) - IP address and port (UDP) of the host which receives Traffic-Flow statistic packets from the router.
v9-template-refresh (integer; Default: 20) - Number of packets after which the template is sent to the receiving host (only for NetFlow version 9)
v9-template-timeout (time; Default: ) - After how long to send the template, if it has not been sent.
version (1 | 5 | 9; Default: ) - Which version format of NetFlow to use
By looking at the packet flow diagram you can see that traffic flow is at the end of the input, forward and output chain stack. It means that traffic flow will count only traffic that reaches one of those chains. For example, say you set up a mirror port on a switch, connect the mirror port to the router and set traffic flow to count mirrored packets. Unfortunately such a setup will not work, because mirrored packets are dropped before they reach the input chain.
Examples
This example shows how to configure Traffic-Flow on a router. Enable Traffic-Flow on the router:
[admin@MikroTik] ip traffic-flow> set enabled=yes
[admin@MikroTik] ip traffic-flow> print
[admin@MikroTik] ip traffic-flow>
Specify the IP address and port (UDP) of the host which will receive Traffic-Flow statistic packets:
[admin@MikroTik] ip traffic-flow target> add address=192.168.0.2:2055 \
[admin@MikroTik] ip traffic-flow target> print
Flags: X - disabled
# ADDRESS VERSION
0 192.168.0.2:2055 9
[admin@MikroTik] ip traffic-flow target>
Now the router starts to send packets with Traffic-Flow information.
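To verify that the router is actually exporting flows to the configured target, a throwaway listener on the receiving host can confirm that UDP datagrams are arriving on port 2055. This is only a reachability-check sketch - it does not decode NetFlow records (use a real collector for that) - and the port simply mirrors the example configuration above.

# Throwaway check that Traffic-Flow/NetFlow datagrams are arriving on UDP 2055.
# This does not decode NetFlow -- use a real collector for that.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 2055))   # same port as in the target example above
print("Listening for NetFlow export packets on UDP/2055 ...")

while True:
    data, (src_ip, src_port) = sock.recvfrom(65535)
    print(f"{len(data)} bytes from {src_ip}:{src_port}")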
<urn:uuid:1fb75590-dbe8-4f4f-bf5c-23dbe8da6fea>
CC-MAIN-2017-09
http://www.netflowauditor.com/forum/viewtopic.php?f=42&t=157
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170613.8/warc/CC-MAIN-20170219104610-00327-ip-10-171-10-108.ec2.internal.warc.gz
en
0.840998
789
2.6875
3
Material shows potential for data storage in flash memories
South Korean researchers have used graphene quantum dots instead of nanocrystals as the discrete charge trap material for storing data in commercial flash memories. The researchers, Soong Sin Joo, et al., at Kyung Hee University and Samsung Electronics, based in Yongin, South Korea, published their findings in a paper on graphene quantum dot flash memories in a recent issue of Nanotechnology. Data is usually stored as electric charge in polysilicon layers in today's commercial flash memories. As polysilicon is a single continuous material, defects in the material can interfere with the desired charge movement, which can limit data retention and density, reports Phys.org. The researchers focused on solving this problem by storing charge in discrete charge traps, such as nanocrystals, which prevent unwanted charge movement because of their lower sensitivity to local defects. Now they have used graphene, which is already seen as an attractive material for next-generation electronics and photonics. Graphene quantum dots of three different sizes (6, 12, and 27 nm diameters) were incorporated between silicon dioxide layers to test them as the charge-trapping material. It was found that the memory properties of the dots differ depending on their sizes. While talking to Phys.org, Suk-Ho Choi at Kyung Hee University said this is the first successful application of graphene quantum dots in practical devices, including electronic and optical devices. "This is the first report of charge-trap flash nonvolatile memories made by employing structurally characterized graphene quantum dots, even though their nonvolatile memory properties are currently below the commercial standard," Ho Choi added. Graphene quantum dot memories have shown potential, with an electron density comparable to that of memory devices based on semiconductor and metal nanocrystals. Future improvements to the devices are expected to further enhance performance and lead to the discovery of new applications, say the researchers.
<urn:uuid:c3c3b4eb-fbf0-49c9-bcb3-ff9ee773d93a>
CC-MAIN-2017-09
http://www.cbronline.com/news/enterprise-it/storage/south-korean-researchers-use-graphene-for-data-storage-190614-4297567
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172018.95/warc/CC-MAIN-20170219104612-00555-ip-10-171-10-108.ec2.internal.warc.gz
en
0.940466
397
3.1875
3
Why Use R? - Page 4
Connections With R
So far the tools I've mentioned have been focused on the database portion of the problem -- gathering the data and performing some queries. This is a very important part of the Big Data process (if there is such a thing), but it's not everything. You must take the results of the queries and perform some computations, usually statistical, on them, such as: What is the average age of people buying a certain product in the middle of Kansas? What was the weather like when most socks were purchased (e.g., temperature, humidity and cloudiness all being factors)? What section of a genome is the most common between people in Texas and people in Germany? Answering questions like these takes analytical computations. Moreover, much of this computation is statistical in nature (i.e., heavily math oriented). Without much of a doubt, the most popular statistical analysis package is called R. R is really a programming language and environment. It is particularly focused on statistical analysis. To add to the previous discussion of R, it has a wide variety of built-in capabilities, including linear and non-linear modeling, a huge library of classical statistical tests, time-series analysis, classification, clustering and a number of other analysis techniques. It also has a very good graphical capability, allowing you to visualize the results. R is an interpreted language, which means that you can run it interactively or write scripts that R processes. It is also very extensible, allowing you to write code in C, C++, Fortran, R itself or even Java. For much of Big Data's existence, R has been adopted as the lingua franca for analysis, and the integration between R and database tools is a bit bumpy but getting smoother. A number of the tools mentioned in this article series have been integrated with R or have articles explaining how to get R and that tool to interact. Since this is an important topic, I have a list of links below giving a few pointers, but basically, if you Google for "R+[tool]" where [tool] is the tool you are interested in, you will likely find something.
- Column Stores
- Key-Value Store/Tuple Store + R
- CouchDB + R
- MongoDB + R
- Terrastore + R
- Article about Teradata add-on package for R
But R isn't the only analytical tool available or used. Matlab is also a commonly used tool. There are some connections between Matlab and some of the databases. There are also some connections with SciPy, which is a scientific tool built with Python. A number of tools can also integrate with Python, so integration with SciPy is trivial. Just a quick comment about programming languages for Big Data. If you look through a number of the tools mentioned, including Hadoop, you will see that the most common language is Java. Hadoop itself is written in Java, and a number of the database tools are either written in Java or have Java connectors. Some people view this as a benefit, while others view it as an issue. After Java, the most popular programming languages are C or C++ and Python. All of these tools are really useful for analyzing data and can be used to convert data into information. However, one thing that is missing is good visualization tools. How do you visualize the information you create from the data? How do you visually tell which information is important and which isn't? How do you present this information easily? How do you visualize information that has more than three dimensions or three variables?
These are very important topics that must be addressed in the industry. Whether you realize it or not, visualization can have an impact on storage and data access. Do you store the information or data within the database tool or somewhere else? How can you recall the information and then process it for visualization? Questions such as these impact the design of your storage solution and its performance. Don't take storage lightly.
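As a toy illustration of the "query the database, then analyze the results" step discussed above - written in Python rather than R purely so the example is self-contained, and using an invented table - the sketch below pulls rows out of a small SQLite database and computes the kind of summary statistic (average buyer age) mentioned earlier.

# Toy "query then analyze" example: pull rows from a database and compute a
# summary statistic. The table and values are invented for illustration.
import sqlite3
import statistics

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE purchases (buyer_age INTEGER, product TEXT, state TEXT)")
conn.executemany(
    "INSERT INTO purchases VALUES (?, ?, ?)",
    [(34, "socks", "KS"), (41, "socks", "KS"), (29, "socks", "KS"), (52, "hats", "TX")],
)

ages = [row[0] for row in conn.execute(
    "SELECT buyer_age FROM purchases WHERE product = 'socks' AND state = 'KS'"
)]
print("Average age of sock buyers in Kansas:", statistics.mean(ages))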
<urn:uuid:06f53fcd-79d8-4419-a2eb-b87a48f7ae87>
CC-MAIN-2017-09
http://www.enterprisestorageforum.com/storage-management/why-use-r.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170624.14/warc/CC-MAIN-20170219104610-00023-ip-10-171-10-108.ec2.internal.warc.gz
en
0.955377
821
2.640625
3
Feeling under the weather? You’ve probably looked up your symptoms on the Internet and self-diagnosed your ailment, according to a new report released by the Pew Research Center’s Internet and American Life Project. The survey found that 72 percent of U.S. adults with Internet access looked online for health information in 2012. Of the 35 percent of American adults who have used the Web to gauge their medical condition, 46 percent said their search led them to consult a medical professional. Women are more likely than men to check the Web for medical diagnoses, as are younger people and individuals with post-secondary education. “Many have now added the Internet to their personal health toolbox, helping themselves and their loved ones better understand what might be ailing them,” Susannah Fox and Maeve Duggan wrote. Nearly 80 percent of online health inquiries began on a major search engine like Google, Bing or Yahoo, while 13 percent began on a site specialized for medical information such as WebMD or Mayo Clinic. Two percent of searches began on general information websites, and 1 percent began through social media. “Consulting online reviews of particular drugs or medical treatments, however, took a noticeable dip in the last two years,” the authors added. The researchers said many consumers are willing to write reviews for general products or services – such as a purchase on Amazon.com – but they’re less likely to have written a review of their medical treatments. Only 3 to 4 percent of Internet users write such reviews, according to Fox and Duggan. Even with the Internet, the authors noted, offline interaction with clinicians and other health care professionals remained an important component of everyday medical care. “And, since a majority of adults consult the Internet when they have health questions, these communications with clinicians, family, and fellow patients joined the stream of information flowing in,” Fox and Duggan wrote.
<urn:uuid:388c3138-5828-4834-a1bf-bca4dce47892>
CC-MAIN-2017-09
http://www.nextgov.com/health/health-it/2013/01/prescription-strength-google/60673/?oref=ng-channelriver
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171163.39/warc/CC-MAIN-20170219104611-00375-ip-10-171-10-108.ec2.internal.warc.gz
en
0.966705
395
2.78125
3
Before Americans can realize Newt Gingrich's dream of building a moon colony, they must first send other living things to the rocky celestial body to test whether long-term survival is possible. NASA plans to start by gardening on the moon. Studying plant growth, known as germination, in the lunar environment can help us predict how humans may grow too, said the space agency in a recent announcement of the experiment. NASA hopes to coax basil, turnips, and Arabidopsis, a small flowering plant, from tiny seedlings to hearty greens in one-sixth of the gravity they're used to here on Earth. Plants, like humans, are sensitive to environmental conditions when they are seedlings. Their genetic material can be damaged by radiation in outer space, as well as by a gravitational pull unlike that of Earth. "If we send plants and they thrive, then we probably can," the statement read. Humans would depend on plant life to live out their days in an extraterrestrial world, just like they do on their home planet. Plants would provide moon dwellers with food, air, and medicine. They would also, as previous research has shown, make them feel better by reducing stress, and even improve concentration—welcome side effects for those aware that their new home is built to kill them. NASA hopes to cultivate its green thumb by sending a sealed growth chamber to the moon on the Moon Express lander, a privately funded commercial spacecraft, in 2015. The 2.2-pound habitat will contain enough oxygen to support five to 10 days of growth and filter paper, infused with dissolved nutrients, to hold the seeds. When the spacecraft lands in late 2015, water will surge into the chamber's filter paper. The seedlings will use the natural sunlight that falls on the moon for energy. An identical growth chamber will be mirroring the experiment on Earth, and the twin experiments will be monitored and compared. Astronauts have been tinkering with plants in space for some time now, growing (and even glowing in the dark) aboard the International Space Station. Cultivating a garden on the moon, however, is the first genuine life sciences experiment on another world.
<urn:uuid:043d22b1-deb6-434c-9fa8-54523466940a>
CC-MAIN-2017-09
http://www.nextgov.com/emerging-tech/2013/12/nasa-sending-basil-moon/74945/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171608.86/warc/CC-MAIN-20170219104611-00551-ip-10-171-10-108.ec2.internal.warc.gz
en
0.948406
443
4.1875
4
Motion detection is an important element of many, if not most, surveillance systems. It plays a central role in both storage reduction and search time reduction. Storage is routinely reduced by 30% - 80% by using motion-based rather than continuous recording. Likewise, an investigator can often find a relevant event much faster by simply scanning through areas of motion rather than watching through all the video. At the same time, there are a number of challenges associated with using motion detection:
- Scene Conditions: The accuracy of motion detection and the amount of times motion is detected can vary depending on what's in the scene - people, cars, trees, leaves, etc. - and the time of day - night time with lots of noise, sunrise and sunset with direct sunlight into a camera, etc.
- Performance of Detector: Motion detection is built into many surveillance products - from DVRs to VMS systems and now IP cameras. As such, how well each one works can vary significantly.
In this report, we share our results from a series of tests we performed to better understand motion detection performance. We did a series of tests in different locations:
- Indoor well-lit scene to simulate the simplest scene possible
- Indoor dark scene (<1 lux) to examine what problems low light caused
- Outdoor parking lot to see how a complex scene with trees, cars and people would perform
- Roadway to see how a moderately complex scene with periodic cars would perform
Three IP cameras were used with their motion detection enabled to see differences in performance.
With these tests, we answered the following questions:
- How can one estimate motion percentage accurately?
- Does motion estimation vary significantly by scene?
- How accurate was motion detection in each scene?
- Did certain cameras exhibit greater false motion detection than others? What scenes or conditions drove those problems?
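For readers who want to see what basic motion detection looks like under the hood, here is a bare-bones frame-differencing sketch using OpenCV. It only illustrates the general idea - the thresholds are arbitrary, and real cameras and VMSes add noise filtering, zones and sensitivity tuning on top of this.

# Bare-bones frame-differencing motion detector with OpenCV (pip install opencv-python).
# Thresholds are arbitrary; real products add noise filtering, zones, etc.
import cv2

cap = cv2.VideoCapture(0)          # 0 = first attached camera; a video file path also works
ok, prev = cap.read()
if not ok:
    raise SystemExit("Could not read from the camera/video source")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)                  # per-pixel change vs. last frame
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    changed = cv2.countNonZero(mask) / mask.size         # fraction of pixels that changed
    if changed > 0.01:                                   # more than 1% of the frame moved
        print(f"Motion detected: {changed:.1%} of pixels changed")
    prev_gray = gray

cap.release()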
<urn:uuid:90b9aac1-786a-40f1-a438-ef229efd4668>
CC-MAIN-2017-09
https://ipvm.com/reports/motion-detection-performance-tested
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171608.86/warc/CC-MAIN-20170219104611-00551-ip-10-171-10-108.ec2.internal.warc.gz
en
0.956949
376
2.921875
3
HTML 5: What's new about it?
Precise elements and application programming interfaces - By Joab Jackson - Aug 28, 2009
HTML 5 will maintain backward compatibility with all former versions, while cleaning up some ambiguities of the previous version of the markup language. It will also offer a number of new elements, or markup symbols, that can more precisely define the elements of a Web page. And for the first time, HTML will come with a set of application programming interfaces (APIs) that assist developers in setting up Web applications. Here are some highlights:
- Article and Aside: Elements for marking the main body of text for a page and for additional sidebars of text, respectively.
- Audio and Video: Elements for marking video and audio files. With these elements in place, application authors can write their own interfaces or use a browser's built-in functions for actions such as fast-forwarding or rewinding.
- Canvas: An element that can be used for rendering dynamic bitmap graphics on the fly, such as charts or games.
- Details: An element that could be put in place to allow users to obtain additional information upon demand.
- Dialog: An element that defines written dialog on a Web page.
- Header and Footer: Elements for rendering headers and footers to a Web page.
- Meter: An element that can be used to render some form of measurement.
- Section: This element can be used to define different sections within a Web page.
- Nav: An element for aiding in navigation around a site.
- Progress: An element that can be used to represent completion of a task, such as downloading a file.
- Time: An element to represent time and/or a date.
- An API for allowing Web applications to run off-line.
- An API for cross-document messaging, which allows two parts of a Web page that come from different sources to communicate information.
- An API for dragging and dropping content across a Web page.
- An API for drawing 2-D images for the canvas tag.
- An API for playing audio and video, used in conjunction with the audio and video tags.
Source: "HTML 5 differences from HTML 4" (http://dev.w3.org/html5/html4-differences/) and the HTML 5 Draft (http://dev.w3.org/html5/spec/Overview.html)
Joab Jackson is the senior technology editor for Government Computer News.
<urn:uuid:8f4a7eb1-b2f8-4c82-a6ab-a5e80f981c26>
CC-MAIN-2017-09
https://gcn.com/articles/2009/08/31/html-5-sidebar-new-elements.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171807.25/warc/CC-MAIN-20170219104611-00075-ip-10-171-10-108.ec2.internal.warc.gz
en
0.823421
531
3.09375
3
Distributed denial of service is a type of DoS attack in which multiple compromised systems, often infected with a Trojan, are used to target a single system. The DDoS attack itself may be a bit more sinister, according to NSFOCUS IB. A DDoS attack is an attempt to exhaust resources so that you deny access to resources for legitimate users. “It has never been easier to launch a sustained attack designed to debilitate, humiliate or steal from any company or organization connected to the Internet. These attacks often threaten the availability of both network and application resources, and result in loss of revenue, loss of customers, damage to brand and theft of vital data,” NSFOCUS Global wrote in a business white paper. In a question-and-answer session, Dave Martin, director of product marketing at NSFOCUS IB, explained the different types of DDoS attacks and how to detect and respond to these attacks. What are some of the most common types of DDoS attacks? There are actually three styles of attacks that we see often: application-layer, volumetric, and hybrid. Can you explain the differences in each method? An application-layer attack is less volumetric but still tries to consume resources. Attackers connect to a website and are asked for a password. They send data and get a response from the server. Rather than send all the data at once, they send a character at a time. As an attacker, you can create hundreds of thousands of connections at a time. They are opening up a secure connection to a website that appears normal but is consuming memory. A volumetric attack attempts to overwhelm the target with traffic. The hybrid attack is often application-layer and volumetric used in combination. The consequence is loss of revenue, loss of customers, and damage to reputation. These are not even about denial of service. These are smoke screens for exfiltration of data. Because of the distraction, attackers are able to plant back doors in other areas of the network. How can security teams detect these attacks? Detecting the DDoS attack itself really requires specialized hardware that will send alerts like emails or management tracks. The goal is to get these notifications before the resource becomes unavailable. If you don’t have anti-DDoS detection, you won’t know until the service goes down. How do security teams respond once they identify these attacks? It takes a while for service providers to identify and clean that traffic. A lot of service providers black hole the traffic so that all of your traffic is offline. How can security professionals differentiate when an attack is DDoS? These attacks are advanced persistent threats. Often the bad actors install a back door and sit on a network, making them difficult to detect. Why are these attacks so persistent? These DDoS attacks are very easy to pull off. There are botnets available that criminals can rent for as little as $10 a month, and they require no technical expertise. These can generate a very large attack. Also, a lot of folks think they can handle these attacks with a firewall, but many people are finding that those types of general purpose tools fall over in the face of an attack. People are starting to recognize that existing security equipment is not going to provide adequate protection. A firewall is great, you have to have it, but it’s not a panacea. How do security teams determine what tools are best in mitigating the risks of these attacks?
They first have to ask, “Is it a good solution that fits in my budget?” Be sure that the technology has been battle-tested. While enterprises like major banks have enormous budgets for their security strategy, small to midsize organizations are working with more limited resources.
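To make the detection discussion above more concrete, the short Python sketch below shows how a monitoring script might flag the two behaviours Martin describes: a volumetric flood of requests from a single source, and slow, long-lived connections that trickle in a character at a time. It is only an illustration, not an NSFOCUS product or a substitute for purpose-built hardware, and the thresholds, class name and function names are assumptions chosen for the example.

import time
from collections import defaultdict, deque

# Illustrative thresholds; a real deployment would tune these against normal traffic.
REQS_PER_MINUTE_LIMIT = 600     # volumetric: too many requests from one source
SLOW_CONN_SECONDS = 120         # application-layer: connection held open too long...
SLOW_CONN_MAX_BYTES = 512       # ...while sending almost no data

class DdosMonitor:
    def __init__(self):
        self.requests = defaultdict(deque)   # source IP -> recent request timestamps
        self.connections = {}                # connection id -> (start_time, bytes_seen)

    def record_request(self, src_ip, now=None):
        now = time.time() if now is None else now
        window = self.requests[src_ip]
        window.append(now)
        while window and now - window[0] > 60:   # keep a 60-second sliding window
            window.popleft()
        if len(window) > REQS_PER_MINUTE_LIMIT:
            self.alert("volumetric pattern: %s sent %d requests in 60s" % (src_ip, len(window)))

    def record_bytes(self, conn_id, nbytes, now=None):
        now = time.time() if now is None else now
        start, seen = self.connections.get(conn_id, (now, 0))
        seen += nbytes
        self.connections[conn_id] = (start, seen)
        if now - start > SLOW_CONN_SECONDS and seen < SLOW_CONN_MAX_BYTES:
            self.alert("slow-connection pattern: %s open %ds but only %d bytes received"
                       % (conn_id, now - start, seen))

    def alert(self, message):
        # In practice this would send an email or a management-system notification.
        print("ALERT:", message)

Purpose-built anti-DDoS appliances do far more than this (baselining, traffic scrubbing, behavioural analysis), but even this toy version shows why detection has to watch both raw volume and connection behaviour rather than volume alone.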
<urn:uuid:fb2a3b30-e664-45f8-b114-d5461f161229>
CC-MAIN-2017-09
http://www.csoonline.com/article/3036742/advanced-persistent-threats/ddos-attacks-how-to-mitigate-these-persistent-threats.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172077.66/warc/CC-MAIN-20170219104612-00251-ip-10-171-10-108.ec2.internal.warc.gz
en
0.953072
808
2.703125
3
A U.S. soldier is on patrol with his squad when he kneels to check something out, unknowingly putting his knee into a puddle of contaminants. The soldier isn't harmed, though, because he or she is wearing a smart suit that immediately senses the threat and transforms the material covering his knee into a protective state that repels the potential deadly bacteria. Scientists at the Lawrence Livermore National Laboratory, a federal government research facility in Livermore, Calif., are using nanotechnology to create clothing designed to protect U.S. soldiers from chemical and biological attacks. The researchers turned to nanotechnology to overcome the tough task of creating military-grade protective clothing that's breathable and isn't heavy to wear. "The threat is nanoscale so we need to work in the nano realm, which helps to keep it light and breathable," said Francesco Fornasiero, a staff scientist at the lab. "If you have a nano-size threat, you need a nano-sized defense." For a little more than a year, the team of scientists has focused on developing a proof of concept suit that's both tough and inexpensive to manufacture. The lab group is teaming up with scientists from MIT, Rutgers University, the University of Massachusetts at Amherst and other schools to get it done. Fornasiero said the task is a difficult one, and the suits may not be ready for the field for another 10 to 20 years. Ross Kozarsky, a senior analyst with Boston-based Lux Research, said the effort could also lead to a lot of other uses for smart nano-based clothing or devices. "I think it's definitely innovative. It's a pretty powerful platform technology," he added. "Materials that intelligently react to their external surroundings -- that is certainly an interesting class of materials. This is at the front end of the tunnel. Imagine an athlete wearing some kind of clothing that reacts to humidity or temperature and can make itself a lighter or warmer shirt." Kozarsky also noted that smart clothing could be used for personal tasks, like measuring a user's heart beat, pulse and blood pressure. The technology could also lead to smart footwear, which could, for example, transform itself to repel potential danger found in water and keeping the user's feet dry. The military also might consider adapting the base technology so instead of a nano-infused fabric transforming itself to protect a human from a biological or chemical attack, the smart material could be body armor that automatically strengthens itself based on the stress it's under. "This is a big step forward for nanotech," said Ming Su, an associate professor of biomedical engineering at Worcester Polytechnic Institute. "It can lead to a big area of bionics. Basically, you are dealing with man-made stuff that ... can achieve certain biological functions -- having a self-sensing ability or self-healing abilities, or localized protection from toxic materials." Think, he added, of a baby blanket or baby clothes that could become warmer when the temperature drops. The same technology could be used to make gloves that can detect high heat or hazmat suits that become more protective when they detect toxins. "This is very good work, definitely," said Su. "I would say it will have a large impact." Building better protection The U.S. military today does have protective gear for soldiers who are under threat of biological or chemical attacks, but it's big, bulky, heavy and hot to wear. Today's suits can only be worn for an hour or two at a time, according to Fornasiero. 
"Your physical abilities drop and you can get heat stroke [wearing them]," he said. "It's a big problem." The Lawrence Livermore team isn't taking just one track to make that happen. They're working on at least two different options for the carbon nanotubes. One option is to use carbon nanotubes in a layer of the suit's fabric. Sweat and air would be able to easily move through the nanotubes. However, the diameter of the nanotubes is smaller than the diameter of bacteria and viruses. That means they would not be able to pass through the tubes and reach the person wearing the suit. However, chemicals that might be used in a chemical attack are small enough to fit through the nanotubes. To block them, researchers are adding a layer of polymer threads that extend up from the top of the nanotubes, like stalks of grass coming up from the ground. The threads are designed to recognize the presence of chemical agents. When that happens, they swell and collapse on top of the nanotubes, blocking anything from entering them. A second option that the Lawrence Livermore scientists are working on involves similar carbon nanotubes but with catalytic components in a polymer mesh that sits on top of the nanotubes. The components would destroy any chemical agents they come in contact with. After the chemicals are destroyed, they are shed off, enabling the suit to handle multiple attacks. "We are not selecting either option," said Fornasiero. "We have multiple options and we don't know what will work so we will keep looking." This story, "Gov't developing smart suits to protect U.S. troops from bio attacks" was originally published by Computerworld.
<urn:uuid:936a766d-b27e-4d9a-ad36-e8be456bbd17>
CC-MAIN-2017-09
http://www.networkworld.com/article/2175393/data-center/gov--39-t-developing-smart-suits-to-protect-u-s--troops-from-bio-attacks.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174135.70/warc/CC-MAIN-20170219104614-00123-ip-10-171-10-108.ec2.internal.warc.gz
en
0.961629
1,103
2.984375
3
Transportation researchers affiliated with the University of California, Berkeley, have used roadway sensor data to come to a surprising conclusion: Discontinuing a program that gave solo drivers of hybrid vehicles access to carpool lanes has slowed traffic in all lanes. Conventional wisdom would lead one to believe that with fewer hybrids in the carpool lane, the traffic in that lane would speed up. But that hasn’t been the case. Everybody has slowed down — the drivers of hybrid vehicles and all other motorists on the road. “Drivers of low-emission vehicles are worse off, drivers in the regular lanes are worse off, and drivers in the carpool lanes are worse off. Nobody wins," said Michael Cassidy, University of California, Berkeley, professor of civil and environmental engineering, in a news announcement from the university. Cassidy and a graduate student studied six months’ worth of data from roadway sensors in the San Francisco Bay Area before and after the carpool lane privileges were revoked for hybrid cars. For one stretch of freeway in Hayward, Calif., the researchers concluded that carpool lane speeds were 15 percent slower after hybrids were expelled. One, the researchers found that when hybrids moved back into the regular traffic lanes, those lanes were slower — and that contributed to a slowdown in the adjacent carpool lane. "As vehicles move out of the carpool lane and into a regular lane, they have to slow down to match the speed of the congested lane," explained Kitae Jang, the doctoral student who contributed to the research. "Likewise, as cars from a slow-moving regular lane try to slip into a carpool lane, they can take time to pick up speed, which also slows down the carpool lane vehicles." Two, in Cassidy’s words, “Drivers probably feel nervous going 70 miles per hour next to lanes where traffic is stopped or crawling along at 10 or 20 miles per hour. Carpoolers may slow down for fear that a regular-lane car might suddenly enter their lane.” The researchers said that in order to improve traffic flow, more vehicles — not fewer — should be allowed into carpool lanes. The researchers presented their results in a report published by UC-Berkeley’s Institute of Transportation Studies. The researchers’ paper is available here. According to the university, in 2005 California began giving low-emission vehicles, including hybrids, a yellow sticker that qualified them to drive legally in the carpool lane. An estimated 85,000 hybrids in the state had the passes. The program was discontinued July 1 in order to comply with a federal regulation that, according to the Institute of Transportation Studies, requires low-emitting vehicles “be expelled from a carpool lane when traffic slows to below 45 mph on any portion of that lane during more than 10 percent of its operating time.”
<urn:uuid:cddad4ea-e9ea-4ab5-a60c-9dfc5418d8d9>
CC-MAIN-2017-09
http://www.govtech.com/transportation/Kicking-Hybrids-from-Carpool-Lanes-Slows-Traffic.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170992.17/warc/CC-MAIN-20170219104610-00243-ip-10-171-10-108.ec2.internal.warc.gz
en
0.965962
588
2.9375
3
National Cyber Security Month, celebrated every October, is history. Did you implement any special awareness activities for your employees? At a minimum, did you require that your employees change their passwords? Check out these facts from the Multi-State Information Sharing and Analysis Center: - During 2010, more than 12 million records were involved in data breaches. - During 2010, cyber attacks on social networks doubled from 2009. - More than 100 million computers are infected with malware. - 32% of teens have experienced online harassment. - 42% of younger children (ages 4-8) have been victims of cyber bullies. Yet, with the increase of online data breaches, cyber attacks, and online harassment, we continue to participate on social networking sites without HTTPS protection, without checking privacy controls on a regular basis, and without performing due diligence on strangers who send us invitations to connect. So what should you do? Here are good ways to protect yourself everyday, not just during Cyber Security Awareness Month: - Use virus protection on your computer. - Don’t open emails when you don’t recognize the sender – and definitely don’t open attachments when you don’t recognize the sender. - Update your software on a regular basis. - If you use a computer or mobile device for purchases, only provide confidential information (personally identifiable information) if the URL has HTTPS security. - Secure your computer, smartphone, and mobile device with a password. - Learn how to disable the geotagging function on your mobile phone so that you don’t share your location unintentionally. - Don’t use your laptop at Wi-Fi locations since your data may be accessible to anyone. - Consider backing up your files to an external hard drive or other media on a regular basis – weekly if possible. And when Cyber Security Awareness Month begins next October, you can take the National Cyber Pledge and promote safe online computing to friends and family. Allan Pratt, an infosec consultant, represents the alignment of marketing, management, and technology. With an MBA Degree and four CompTIA certs in hardware, software, networking, and security, Allan translates tech issues into everyday language that is easily understandable by all business units. Expertise includes installation and maintenance of hardware, software, peripherals, printers, and wireless networking; development and implementation of integration and security plans; project management; and development of technical marketing and web strategies in the IT industry. Follow Allan on Twitter (http://www.twitter.com/Tips4Tech) and on Facebook (http://www.facebook.com/Tips4Tech). Cross-posted from Tips4Tech
<urn:uuid:0bd02369-3ef8-40af-a974-c8ce6c6c0f53>
CC-MAIN-2017-09
http://www.infosecisland.com/blogview/17813-Did-You-Take-the-National-Cyber-Pledge-During-October.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171176.3/warc/CC-MAIN-20170219104611-00419-ip-10-171-10-108.ec2.internal.warc.gz
en
0.866494
550
2.9375
3
Cell, AMD Chips to Power New IBM Supercomputer For the past three consecutive years, a computer IBM built for the US Department of Energy called BlueGene/L, powered by over 131,000 massively parallel PowerPC processors, has reigned supreme over the Top 500 Supercomputers list, now published twice annually by the University of Mannheim. While Intel processors have dominated the rest of the list, with as many as one-third of the systems running parallel Xeons, Itaniums or Itanium 2s, Intel's dream to topple the aging PowerPC architecture may be crushed by an alliance between its two arch-rivals, IBM and AMD. Many expected IBM at some time in the near future to be commissioned to build a new supercomputer based on its own Cell architecture, which utilizes PowerPCs in tandem with newer "synergistic processing elements" (SPEs). But buried amid yesterday's news from IBM is the revelation that "Roadrunner" -- the current designation for the computer that may run circles around BlueGene/L -- will be a hybrid, mixing over 16,000 Cells with at least as many AMD Opteron processors. While AMD chips have generally led in single-processor performance tests -- at least up until last June, when Intel unveiled its Woodcrest architecture -- they have not been a major factor in the Top 500, with only 80 systems on last June's list running Opterons. Despite that fact, IBM is predicting Roadrunner will be capable of surpassing a modern milestone: specifically, the next great metric prefix. It is aiming for a peak performance of 1.6 quadrillion calculations per second, or 1.6 petaflops, vaulting it past the 1 petaflop mark for the first time. Today, BlueGene/L has a theoretical peak performance of 367,000 gigaflops (billions of calculations per second), based on Top 500 estimates. Last year, at least a few IBM engineers were boasting of the possibility of replacing PowerPCs in the basic BlueGene design with Cell processors, and perhaps boosting peak performance past that magical milestone. But are AMD processors really necessary to achieve this goal? If you had asked these same engineers last year, they would have said no. As IBM has explained in the past, both Intel's EM64T and AMD64 architectures approach the problem of compounding processor power using a linear scale: for more power, you pack on more cores. By contrast, Cell processors use SPEs, which rely upon a PowerPC (PPE) to divide tasks into subtasks, but then work not so much in parallel as in tandem. As a result, we've been told, to improve performance you don't stack more Cells on top of Cells, but instead you build clusters of SPEs, the total number of which, for efficiency's sake, should be a power of 2. Each cluster is managed by a PPE. Theoretically, performance is improved exponentially rather than linearly. Yesterday, IBM cited the value "over 16,000," and 16,384 would certainly be an appropriate power of 2 -- specifically, 2 to the 14th. But with Opteron processors scaling linearly, using the same number on the AMD side presents a bit of a mystery: What is IBM trying to accomplish, and who is it really accomplishing this for? IBM's statement yesterday did offer a few clues. So-called "typical computing processes," the company said, will be handled by the Opteron CPU bank, including file I/O and communication. Some at AMD might say just one Opteron processor handles that job well enough, let alone 16,384 of them. 
Meanwhile, tasks that typically consume the majority of computing resources will be delegated to the Cell bank. Exactly what handles the delegating in this case was left a mystery. Is it a Cell PPE that treats Opterons the same as it does SPEs? At this point, even Intel might be interested in the answers. BetaNews has contacted IBM, and is arranging for these issues and others to be addressed in forthcoming stories. Stay in touch as we follow the evolution of Roadrunner.
<urn:uuid:9ca9a1e1-e19b-441d-a831-dfabec665201>
CC-MAIN-2017-09
https://betanews.com/2006/09/07/cell-amd-chips-to-power-new-ibm-supercomputer/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171176.3/warc/CC-MAIN-20170219104611-00419-ip-10-171-10-108.ec2.internal.warc.gz
en
0.951483
865
2.703125
3
As Storm Imogen hits south-west England and south and mid-Wales, 13,000 homes have been left without power. The Environment Agency has nearly sixty flood warnings in place and more than 170 flood alerts. These storms are occurring more frequently: Storm Henry hit only recently, and in early December Storm Desmond had a devastating impact on Cumbria and Lancashire, leaving more than 43,000 homes with power cuts, flooding an estimated 5,200 homes and causing untold damage to properties and transport infrastructure. With this in mind, the question arises as to what organisations can do to mitigate the damage inflicted by these increasingly prevalent natural catastrophes. The way that emergency services, housing associations and property insurers use GIS (Geographic Information Systems) mapping and analytics to plan ahead, manage risk and respond proactively to severe weather events can be a lesson to the wider business community. Interactive maps are able to provide real-time geospatial information throughout the entire development of a weather system: from tracking its path, to identifying where and when it is going to hit, to aiding disaster response and evacuation. GIS mapping can play a vital role in emergency services’ evacuation planning. North Wales Police use GIS to provide a live, real-time view of emergency situations. Used in conjunction with general mapping of the area, the location of units and live feedback from mobile devices, emergency services are able to execute safe and efficient evacuation protocols swiftly. Housing associations also make good use of GIS technology to pre-emptively put in place anti-flooding measures around susceptible areas, minimising future damage and the associated costs. Moreover, housing associations are able to assess locations for potential new housing developments according to whether the area is at risk of flooding in the future. GIS mapping also plays a vital role in disaster response. Insurance companies can access a wealth of information about a specific geographic area and implement a protocol to be followed in the event of a disaster. For example, if a severe weather event is due to hit a specific area, GIS mapping could be used to locate policy holders, showing the individual value of each insured property, as displayed in this insurance risk damage demo. Using this data, they could identify concentrations of at-risk policy holders via a heat map. This would allow the insurer to warn all at-risk policy holders so they can protect their families and property. Not only would the insurance company reduce claims and minimise damage to property, it would simultaneously be providing excellent customer service. Market-leading property insurers RSA, Direct Line Group and Aviva all use GIS in this way to respond promptly to claims and weed out fraudulent ones. They are also able to price policies competitively and more fairly based on the precise location, as opposed to the postcode area, for example. That precise location can be intersected with data such as flood plains, physical hazards, geopolitical risk and historic claims in order to assess the overall risk of a property. Insurers can even decide not to insure a property if their overall exposure to flood events or other risks in the area where that property is located would then be too high. 
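The kind of spatial intersection described above (checking which insured properties fall inside a flood-risk area) can be sketched with a standard ray-casting point-in-polygon test. The coordinates, policy identifiers and flood-zone shape below are invented for the example; in practice an insurer would intersect precise property locations with Environment Agency flood-plain layers inside a GIS platform rather than a hand-rolled script.

def point_in_polygon(x, y, polygon):
    # Ray-casting test: returns True if (x, y) lies inside the polygon,
    # where the polygon is given as a list of (x, y) vertices.
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        if (yi > y) != (yj > y):
            x_cross = (xj - xi) * (y - yi) / (yj - yi) + xi
            if x < x_cross:
                inside = not inside
        j = i
    return inside

# Hypothetical flood-risk zone and policy holders (coordinates are made up).
flood_zone = [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (0.0, 3.0)]
policy_holders = {"policy-001": (1.5, 1.0), "policy-002": (5.2, 2.0)}

at_risk = [p for p, (x, y) in policy_holders.items()
           if point_in_polygon(x, y, flood_zone)]
print("Policies inside the flood zone:", at_risk)

A GIS platform performs this same intersection against millions of property points and far more complex polygons, then layers the results into heat maps of the kind described above.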
When devastating floods swept across central Europe in 2013, Esri UK worked with international insurance broker Willis Group Holdings to gain detailed, real-time insight into the disaster as it happened. Willis was then able to provide well-informed advice to its increasingly fraught customers and strengthen its reputation as an expert in flood risk and mitigation. The ramifications of a natural disaster can be endless. Land Rover had serious issues with its supply chain back in 2011 when its paint supplier in Japan was affected by the earthquake. This meant that it was unable to deliver cars on time because it could not get hold of the specific paint pigment it needed. Now, supply chains span the globe and are exposed to many environmental, geopolitical, economic and other risks. Avoiding disruptions is critical to meeting customer demands and avoiding the scenario outlined above. Using GIS for supply chain management can help optimise distribution networks and facility locations, balance inventories based on demand or events, and react to unexpected disruptions quickly and effectively, making disaster response proactive rather than purely reactive. Visualising an area and applying spatial analysis with GIS enables risk to be more accurately assessed, informs the advance planning and identification of preventative measures, enhances emergency management and facilitates faster recovery. The increasing frequency and prevalence of these natural catastrophes in the UK cannot be ignored. Rather than relying on a timely response alone, proactive measures need to be put in place to truly mitigate the potential damage that can be inflicted. Whether it is to strategically plan building developments or to ensure the safe and efficient evacuation of civilians in the event of a catastrophe – GIS has a key role to play.
<urn:uuid:766fb7b6-37c3-4eac-9cf0-6227ba092392>
CC-MAIN-2017-09
http://www.information-age.com/weathering-storm-why-data-mapping-capabilities-are-key-catastrophe-planning-123460908/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.92/warc/CC-MAIN-20170219104611-00595-ip-10-171-10-108.ec2.internal.warc.gz
en
0.945276
991
2.953125
3
California's judicial system -- with 170 courts spread over 158,000 square miles in 58 counties -- dwarfs that of many countries in size and complexity. Since this sprawling system costs California more than $1.5 billion a year, even small increases in efficiency can mean tremendous savings. And efficiency was a problem. A 1991 report by the Commission on the Future of the California Courts pointed out many problems inherent in this cumbersome system and said that technology would be the key to improving it. "The courts have it within their power to transform their relationship with technology," said the report. "If they do not, their inefficiency will at best frustrate the public. At worst, disputants will seek out private dispute resolution forums ...." Since then, the state has taken some major steps to establishing a statewide judicial information infrastructure. As with most technology projects, good planning and design are important, and California placed its emphasis on laying a firm foundation. In December 1991, the Judicial Council, the administrative arm of the courts, formed the Commission on the Future of the California Courts. According to the commission's final report, its method of operation was a "planning process fairly novel in the nation's courts at the time, one known as 'alternative futures planning.'" This method was designed to embrace "conventional forecasting, trend analysis and scenario construction" to help decision makers "anticipate what the future might be in order to propose what it should be." "The commission looked into the court system and where we should be in the year 2020," said Justice Gary Hastings of the California Court of Appeal, 2nd District. "One result of the commission was a report that suggested a body be set up to address where technology should be." The Judicial Council accepted the 2020 report and established the Court Technology Task Force (CTECH) for the purpose of formulating the "design, charge and process for a permanent governing body that would oversee the planning for and implementation of technology in California's trial and appellate courts." According to Ron Titus, manager of the Office of Court Technology and Information (OCTI) for the Judicial Council, who is also principle staff of the task force, "Our first effort was to try to get a handle on the level of technology. We went to the National Center for State Courts and asked for good courts to go see. There were little pockets of technology in California, but the state as a whole wasn't very advanced." Members of CTECH traveled to other states -- including Utah and Washington -- to study successful court technology systems and management practices. They did short courses at Harvard's John F. Kennedy School of Government and MIT, and attended seminars and trade shows. The result was a report containing general strategic guidelines for the courts as well as specific tasks and goals. Then, Judicial Council Rule 1033 was issued in November 1994, which established the Court Technology Standing Advisory Committee, chaired by Justice Hastings. The committee is charged with achieving the objectives defined by the task force. It soon became apparent, however, that in order to make further progress, the committee needed to know more about the existing state of technology in the courts. At the same time, the courts in each county were facing a deadline imposed by the Judicial Council's Rule 991 which mandated that by Sept. 
1, 1996, "the trial courts within each county shall develop a common plan for countywide implementation of information and other technologies." The council allocated $2 million to help counties develop those plans. "We had no idea what courts had what technology," noted Hastings. "We figured there would be no way to allocate [technology] between the counties, and decided that if we held the money and provided services to them, we could probably get more bang for the buck." A project, consisting of three phases, was developed and went to bid. The first phase was to hold five regional workshops to help explain the purpose and duties of the committee and the state's technology goals to county leaders. The second phase was to visit each court to asses the technology already in place. The final phase was to visit officials in each county and focus on a strategic plan. Tech Prose, a relatively small Walnut Creek, Calif.-based firm, won the contract. Meryl Natchez, Tech Prose president, gave the workshop presentations. All but two counties participated, and the workshops received high marks from attendees. This helped to set the groundwork for the survey of existing technology. The court visits and technology surveys found a remarkably wide range of technology. As expected, the more densely populated, affluent counties had the better technology. But the average age of existing technology was seven years, and 30 of the 58 counties were running systems more than 10 years old. Nineteen counties had two or fewer staff in charge of data processing -- not just for the courts, but for the entire county. Eleven counties had no one in charge of data processing -- in those cases either court administrators or vendors were making the decisions. "We were surprised how many didn't even have their own fax machine," said Jolly Young, Tech Prose's Project Manager for the Courts Project. "We'd bring out our own printers. A lot of them have obstacles you wouldn't even think of, historical things like you couldn't put wiring in the building [because it was a historical site]." One county shared the town hall with a local social group. This usually worked okay, except for one month when there was a scheduling mix up and they ended up holding court in the parking lot. According to Young, the courts were generally very receptive to the surveying, especially once they realized it was intended to help them get their strategic plans together. The surveying phase also became more than merely gathering information -- surveyors were able to relate the experiences from one county to officials from other counties, thereby helping to "cross-pollinate" ideas. All the survey information was transferred to a database and made available for officials to see what was being done where and by whom. The database also contains information on satisfaction levels with specific software and plans for new installations. This helped counties evaluate their own software and expansion plans. "Planners were sent out into the counties to talk with the courts and begin to develop a strategic plan," said Titus. "We provided a template as to what we wanted to develop. It was very successful in terms of providing a service from a state level to the local level." Based on the template, each county did develop and file its own strategic plan by the Sept. 1, 1996 deadline. "All the counties have filed their reports," said Hastings. "They have their own plans they have signed off to. 
We now have a $4.6 million project to help the courts implement those plans. Any requests they make must be consistent with their long-range and short-range plans." The project seems to have put California in a good position to move forward. The central court administrators have a strategic plan which describes where the state should be in the next century and local officials have a clear view of where they stand and what they must do to bring their county up to the goals listed in the strategic plan. Although there remains a tremendous amount of work to accomplish the overall goals and the specific county plans, at least the state knows where it stands and where it wants to go. Instead of issuing unreal targets to overloaded county officials, the state has put itself into a position where state and local officials can work together to achieve mutually agreed upon goals. PROBLEM/SITUATION: California courts needed a coherent technology plan. SOLUTION: A project surveyed existing court technology and helped develop strategic plans for each county. VENDOR: Tech Prose. CONTACT: Court Technology Hotline, 415/396-9315. An organizational diagram of the courts may be seen at: .
<urn:uuid:7901c82f-ed6b-46db-a1cf-2571cd448808>
CC-MAIN-2017-09
http://www.govtech.com/magazines/gt/Courts-Develop-Plans-for-2020.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170708.51/warc/CC-MAIN-20170219104610-00415-ip-10-171-10-108.ec2.internal.warc.gz
en
0.973833
1,604
2.53125
3
For the automotive and aerospace industries, crash and safety analysis by finite elements is used to shorten the design cycle and reduce costs. Recently, a popular crash/safety simulation set a new speed record. Over on the Cray blog, Greg Clifford, manufacturing segment manager at the historic supercomputing company, explains how the LS-DYNA “car2car” simulation reached new heights running on a Cray supercomputer, pointing the way for engineering simulations that can take advantage of the massive computing power offered by next-generation systems. The Cray XC30 supercomputer, outfitted with Intel Xeon processors and bolstered by the scalability of the Aries interconnect, enabled engineers to run the “car2car” model, a 2.4-million-element crash/safety simulation, in under 1,000 seconds. The results of the LS-DYNA simulation are posted on topcrunch.org, which documents the performance of HPC systems running engineering codes. The record-setting job turnaround time was 931 seconds, but equally important, the simulation broke new ground by harnessing 3,000 cores. “As the automotive and aerospace industries continue to run larger and more complex simulations, the performance and scalability of the applications must keep pace,” notes Clifford. Clocking in under 1,000 seconds marks a significant milestone in the ongoing effort to enhance performance. Over the past quarter-century, model sizes for crash safety simulations have increased by a factor of 500. At first, the computing power only enabled single load cases, like frontal crashes. Over time, the models grew to support 30 load cases at once, and now incorporate frontal, side, rear and offset impacts. As further detailed in this paper, researchers from Cray and Livermore Software Technology Corporation found the key to improving LS-DYNA scalability was to employ HYBRID LS-DYNA, which combines distributed-memory parallelism using MPI with shared-memory parallelism using OpenMP. This was preferable to using MPP LS-DYNA, which only scales to about 1,000 to 2,000 cores depending on the size of the problem. Clifford writes that, over time, crash/safety simulation has evolved from being mainly a research endeavor to becoming a crucial part of the design process – a change that followed the democratization of HPC, as ushered in by Moore’s law-prescribed progress. The automotive and aerospace fields have become full-fledged HPC-driven enterprises, and have reaped the benefits of shorter design times and safer, better-performing end products. The MPI framework for parallel simulations and the increase in processor frequency provided the foundation for this transformation. But the playing field is changing. With chip speeds leveling off, software must now be mined for hidden inefficiencies. This is why, in Clifford’s opinion, the recent car2car benchmark performance is so significant. It signifies a changing paradigm and shows where the focus must shift. Some of the models in use today incorporate millions of elements. Take the THUMS human body model, with 1.8 million elements, and safety simulations that are headed to over 50 million elements. “Models of this size will require scaling to thousands of cores just to maintain the current turnaround time,” observes Clifford. “The introduction of new materials, including aluminum, composites and plastics, means more simulations are required to explore the design space and account for variability in material properties. 
Using average material properties can predict an adequate design, but an unfortunate combination of material variability can result in a failed certification test. Hence the increased requirement for stochastic simulation methods to ensure robust design. This in turn will require dozens of separate runs for a given design and a significant increase in compute capacity — but that’s a small cost compared to the impact of reworking the design of a new vehicle.”
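The HYBRID LS-DYNA approach mentioned above pairs MPI ranks spread across nodes with OpenMP threads inside each rank, so the core count is simply ranks multiplied by threads. As a hedged illustration of that arithmetic, and not of LS-DYNA or Cray's tooling itself, the small helper below works out a rank/thread layout for a given machine; the 24-core node size, the solver name and the launch line are assumptions made for the example.

def hybrid_layout(total_cores, cores_per_node, threads_per_rank):
    # Split a core budget into MPI ranks x OpenMP threads for a hybrid run.
    if total_cores % cores_per_node != 0:
        raise ValueError("core budget should be a whole number of nodes")
    if cores_per_node % threads_per_rank != 0:
        raise ValueError("threads per rank must divide the cores on a node")
    nodes = total_cores // cores_per_node
    ranks_per_node = cores_per_node // threads_per_rank
    return nodes, ranks_per_node, nodes * ranks_per_node

# Example: a 3,000-core job on hypothetical 24-core nodes with 6 threads per rank.
threads = 6
nodes, ranks_per_node, total_ranks = hybrid_layout(3000, 24, threads)
print("%d nodes, %d ranks per node: %d MPI ranks x %d threads = %d cores"
      % (nodes, ranks_per_node, total_ranks, threads, total_ranks * threads))
print("OMP_NUM_THREADS=%d mpirun -np %d ./crash_solver   # illustrative launch line"
      % (threads, total_ranks))

Keeping the MPI rank count down and letting OpenMP threads fill each node reduces inter-domain communication, which is broadly why the hybrid version can push past the 1,000-to-2,000-core ceiling of the pure MPI decomposition.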
<urn:uuid:81a86b4a-2885-4d07-ae4f-8bb0e0954fd2>
CC-MAIN-2017-09
https://www.hpcwire.com/2014/05/08/safety-simulation-sets-speed-record/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171646.15/warc/CC-MAIN-20170219104611-00639-ip-10-171-10-108.ec2.internal.warc.gz
en
0.91282
803
2.5625
3
Using the Pupil as a Crime-Fighting Tool / December 31, 2013 In the U.K., scientists are using the human pupil as a crime-fighting tool -- one that allows perpetrators to be pinpointed simply by looking more closely at the reflections in their victims' eyes. As initially reported in the journal PLOS ONE, the researchers wrote that for crimes in which the victims are photographed, such as hostage taking or child sex abuse, reflections in the eyes of the photographic subject could help to identify perpetrators. “The pupil of the eye is like a black mirror," Dr. Rob Jenkins, a psychologist at the University of York, said in a statement. "To enhance the image, you have to zoom in and adjust the contrast. A face image that is recovered from a reflection in the subject’s eye is about 30,000 times smaller than the subject’s face."
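Jenkins' comment about zooming in and adjusting the contrast corresponds to routine image-processing steps. The sketch below shows one very simple version, cropping the pupil region, stretching its contrast and upscaling it with NumPy and Pillow; it is only an illustration, the file name and crop box are placeholders, and it does not reproduce the actual enhancement pipeline used in the PLOS ONE study.

import numpy as np
from PIL import Image

def stretch_contrast(gray):
    # Linearly rescale pixel intensities so the darkest pixel maps to 0
    # and the brightest to 255.
    lo, hi = int(gray.min()), int(gray.max())
    if hi == lo:
        return gray
    return ((gray.astype(np.float32) - lo) * (255.0 / (hi - lo))).astype(np.uint8)

# Placeholder file name and crop box for the reflection region in the pupil.
photo = Image.open("bystander_photo.jpg").convert("L")     # load as greyscale
pupil = photo.crop((120, 80, 180, 140))                    # (left, upper, right, lower)
enhanced = Image.fromarray(stretch_contrast(np.array(pupil)))
enhanced = enhanced.resize((pupil.width * 8, pupil.height * 8), Image.LANCZOS)
enhanced.save("pupil_reflection_enhanced.png")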
<urn:uuid:1578d0f9-9577-4753-9149-ffcda729f0d0>
CC-MAIN-2017-09
http://www.govtech.com/photos/Photo-of-the-Week-Using-the-Pupil-as-a-Crime-Fighting-Tool.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170569.99/warc/CC-MAIN-20170219104610-00284-ip-10-171-10-108.ec2.internal.warc.gz
en
0.955123
185
3.15625
3
Computer forensics is the practice of collecting, analysing and reporting on digital data in a way that is legally admissible. It can be used in the detection and prevention of crime and in any dispute where evidence is stored digitally. Computer forensics follows a similar process to other forensic disciplines, and faces similar issues. - Uses of computer forensics - Live acquisition - Stages of an examination About this guide This guide discusses computer forensics from a neutral perspective. It is not linked to particular legislation or intended to promote a particular company or product, and is not written in bias of either law enforcement or commercial computer forensics. The guide is aimed at a non-technical audience and provides a high-level view of computer forensics. Although the term “computer” is used, the concepts apply to any device capable of storing digital information. Where methodologies have been mentioned they are provided as examples only, and do not constitute recommendations or advice. Copying and publishing the whole or part of this article is licensed solely under the terms of the Creative Commons – Attribution Non-Commercial 4.0 license Uses of computer forensics There are few areas of crime or dispute where computer forensics cannot be applied. Law enforcement agencies have been among the earliest and heaviest users of computer forensics and consequently have often been at the forefront of developments in the field. Computers may constitute a ‘scene of a crime’, for example with hacking or denial of service attacks or they may hold evidence in the form of emails, internet history, documents or other files relevant to crimes such as murder, kidnap, fraud and drug trafficking. It is not just the content of emails, documents and other files which may be of interest to investigators but also the ‘metadata’ associated with those files. A computer forensic examination may reveal when a document first appeared on a computer, when it was last edited, when it was last saved or printed and which user carried out these actions. More recently, commercial organisations have used computer forensics to their benefit in a variety of cases such as; * Intellectual Property theft * Industrial espionage * Employment disputes * Fraud investigations * Bankruptcy investigations * Inappropriate email and internet use in the work place * Regulatory compliance For evidence to be admissible it must be reliable and not prejudicial, meaning that at all stages of a computer forensic investigation admissibility should be at the forefront of the examiner’s mind. A widely used and respected set of guidelines which can guide the investigator in this area is the Association of Chief Police Officers Good Practice Guide for Digital Evidence [PDF], or ACPO Guide for short. Although the ACPO Guide is aimed at United Kingdom law enforcement, its main principles are applicable to all computer forensics. The four main principles from this guide (with references to law enforcement removed) are as follows: 1. No action should change data held on a computer or storage media which may be subsequently relied upon in court. 2. In circumstances where a person finds it necessary to access original data held on a computer or storage media, that person must be competent to do so and be able to give evidence explaining the relevance and the implications of their actions. 3. An audit trail or other record of all processes applied to computer-based electronic evidence should be created and preserved. 
An independent third party should be able to examine those processes and achieve the same result. 4. The person in charge of the investigation has overall responsibility for ensuring that the law and these principles are adhered to. In what situations would changes to a suspect’s computer by a computer forensic examiner be necessary? Traditionally, the computer forensic examiner would make a copy of (or acquire) information from a device which is turned off. A write-blocker would be used to make an exact bit-for-bit copy of the original storage medium. The examiner would work from this copy, leaving the original demonstrably unchanged. However, sometimes it is not possible or desirable to switch a computer off. It may not be possible if doing so would, for example, result in considerable financial or other loss for the owner. The examiner may also wish to avoid a situation whereby turning a device off could cause valuable evidence to be permanently lost. In both these circumstances, the computer forensic examiner would need to carry out a ‘live acquisition’, which would involve running a small program on the suspect computer in order to copy (or acquire) the data to the examiner’s hard drive. By running such a program and attaching a destination drive to the suspect computer, the examiner will make changes and/or additions to the state of the computer which were not present before their actions. However, the evidence produced would still usually be considered admissible if the examiner was able to show why such actions were considered necessary, that they recorded those actions and that they are able to explain to a court the consequences of those actions. Stages of an examination We’ve divided the computer forensic examination process into six stages, presented in their usual chronological order. Forensic readiness is an important and occasionally overlooked stage in the examination process. In commercial computer forensics it can include educating clients about system preparedness; for example, forensic examinations will provide stronger evidence if a device’s auditing features have been activated prior to any incident occurring. For the forensic examiner themself, readiness will include appropriate training, regular testing and verification of their software and equipment, familiarity with legislation, dealing with unexpected issues (e.g., what to do if indecent images of children are found during a commercial job) and ensuring that the on-site acquisition (data extraction) kit is complete and in working order. The evaluation stage includes the receiving of instructions, the clarification of those instructions if unclear or ambiguous, risk analysis and the allocation of roles and resources. Risk analysis for law enforcement may include an assessment of the likelihood of physical threat on entering a suspect’s property and how best to counter it. Commercial organisations also need to be aware of health and safety issues, conflict of interest issues and of possible risks – financial and to their reputation – on accepting a particular project. The main part of the collection stage, acquisition, has been introduced above. If acquisition is to be carried out on-site rather than in a computer forensic laboratory, then this stage would include identifying and securing devices which may store evidence and documenting the scene. 
Interviews or meetings with personnel who may hold information relevant to the examination (which could include the end users of the computer, and the manager and person responsible for providing computer services, such as an IT administrator) would usually be carried out at this stage. The collection stage also involves the labelling and bagging of evidential items from the site, to be sealed in numbered tamper-evident bags. Consideration should be given to securely and safely transporting the material to the examiner’s laboratory. Analysis depends on the specifics of each job. The examiner usually provides feedback to the client during analysis and from this dialogue the analysis may take a different path or be narrowed to specific areas. Analysis must be accurate, thorough, impartial, recorded, repeatable and completed within the time-scales available and resources allocated. There are myriad tools available for computer forensics analysis. It is our opinion that the examiner should use any tool they feel comfortable with as long as they can justify their choice. The main requirements of a computer forensic tool is that it does what it is meant to do and the only way for examiners to be sure of this is for them to regularly test and calibrate the tools they rely on before analysis takes place. Dual-tool verification can confirm result integrity during analysis (if with tool ‘A’ the examiner finds artefact ‘X’ at location ‘Y’, then tool ‘B’ should replicate these results). This stage usually involves the examiner producing a structured report on their findings, addressing the points in the initial instructions along with any subsequent instructions. It would also cover any other information which the examiner deems relevant to the investigation. The report must be written with the end reader in mind; in many cases the reader will be non-technical, and so reader-appropriate terminology should be used. The examiner should also be prepared to participate in meetings or telephone conferences to discuss and elaborate on the report. As with the readiness stage, the review stage is often overlooked or disregarded. This may be due to the perceived costs of doing work that is not billable, or the need ‘to get on with the next job’. However, a review stage incorporated into each examination can help save money and raise the level of quality by making future examinations more efficient and time effective. A review of an examination can be simple, quick and can begin during any of the above stages. It may include a basic analysis of what went wrong, what went well, and how the learning from this can be incorporated into future examinations’. Feedback from the instructing party should also be sought. Any lessons learnt from this stage should be applied to the next examination and fed into the readiness stage. Issues facing computer forensics The issues facing computer forensics examiners can be broken down into three broad categories: technical, legal and administrative. Encryption – Encrypted data can be impossible to view without the correct key or password. Examiners should consider that the key or password may be stored elsewhere on the computer or on another computer which the suspect has had access to. It could also reside in the volatile memory of a computer (known as RAM ) which is usually lost on computer shut-down; another reason to consider using live acquisition techniques, as outlined above. 
Increasing storage space – Storage media hold ever greater amounts of data, which for the examiner means that their analysis computers need to have sufficient processing power and available storage capacity to efficiently deal with searching and analysing large amounts of data. New technologies – Computing is a continually evolving field, with new hardware, software and operating systems emerging constantly. No single computer forensic examiner can be an expert on all areas, though they may frequently be expected to analyse something which they haven’t previously encountered. In order to deal with this situation, the examiner should be prepared and able to test and experiment with the behaviour of new technologies. Networking and sharing knowledge with other computer forensic examiners is very useful in this respect as it’s likely someone else has already come across the same issue. Anti-forensics – Anti-forensics is the practice of attempting to thwart computer forensic analysis. This may include encryption, the over-writing of data to make it unrecoverable, the modification of files’ metadata and file obfuscation (disguising files). As with encryption, the evidence that such methods have been used may be stored elsewhere on the computer or on another computer which the suspect has had access to. In our experience, it is very rare to see anti-forensics tools used correctly and frequently enough to totally obscure either their presence or the presence of the evidence that they were used to hide. Legal issues may confuse or distract from a computer examiner’s findings. An example here would be the ‘Trojan Defence’. A Trojan is a piece of computer code disguised as something benign but which carries a hidden and malicious purpose. Trojans have many uses, and include key-logging ), uploading and downloading of files and installation of viruses. A lawyer may be able to argue that actions on a computer were not carried out by a user but were automated by a Trojan without the user’s knowledge; such a Trojan Defence has been successfully used even when no trace of a Trojan or other malicious code was found on the suspect’s computer. In such cases, a competent opposing lawyer, supplied with evidence from a competent computer forensic analyst, should be able to dismiss such an argument. A good examiner will have identified and addressed possible arguments from the “opposition” while carrying out the analysis and in writing their report. Accepted standards – There are a plethora of standards and guidelines in computer forensics, few of which appear to be universally accepted. The reasons for this include: standard-setting bodies being tied to particular legislations; standards being aimed either at law enforcement or commercial forensics but not at both; the authors of such standards not being accepted by their peers; or high joining fees for professional bodies dissuading practitioners from participating. Fit to practice – In many jurisdictions there is no qualifying body to check the competence and integrity of computer forensics professionals. In such cases anyone may present themselves as a computer forensic expert, which may result in computer forensic examinations of questionable quality and a negative view of the profession as a whole. Resources and further reading There does not appear to be very much material covering computer forensics which is aimed at a non-technical readership. 
However the following links may prove useful: Forensic Focus An excellent resource with a popular message board. Includes a list of training courses in various locations. NIST Computer Forensic Tool Testing Program The National Institute of Standards and Technology (America) provides an industry respected testing of tools, checking that they consistently produce accurate and objective test results. Computer Forensics World A computer forensic community web site with message boards. Free computer forensic tools A list of free tools useful to computer forensic analysts, selected by Forensic Control. The First Forensic Forum (F3) A UK based non-profit organisation for forensic computing practitioners. Organises workshops and training. - Hacking: modifying a computer in a way which was not originally intended in order to benefit the hacker’s goals. - Denial of Service attack: an attempt to prevent legitimate users of a computer system from having access to that system’s information or services. - Metadata: data about data. It can be embedded within files or stored externally in a separate file and may contain information about the file’s author, format, creation date and so on. - Write blocker: a hardware device or software application which prevents any data from being modified or added to the storage medium being examined. - Bit copy: ‘bit’ is a contraction of the term ‘binary digit’ and is the fundamental unit of computing. A bit copy refers to a sequential copy of every bit on a storage medium, which includes areas of the medium ‘invisible’ to the user. - RAM: Random Access Memory. RAM is a computer’s temporary workspace and is volatile, which means its contents are lost when the computer is powered off. - Key-logging: the recording of keyboard input giving the ability to read a user’s typed passwords, emails and other confidential information.
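As a concrete illustration of the integrity ideas discussed in the collection and analysis stages above (the bit-for-bit copy, the audit trail, and the principle that an independent third party should be able to reproduce results), the sketch below recomputes the hash of an acquired image file and compares it with the value recorded at acquisition time. It is a simplified, assumed workflow with placeholder file names and hash value; in practice validated forensic tools, write-blockers and documented procedures perform and evidence this step.

import hashlib

def sha256_of(path, chunk_size=1024 * 1024):
    # Stream a potentially very large image file and return its SHA-256 hex digest.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder path and value: the acquired image and the hash noted in the audit trail.
image_path = "evidence/HDD001.dd"
recorded_hash = "..."   # the digest written into the contemporaneous notes

current_hash = sha256_of(image_path)
if current_hash == recorded_hash:
    print("Image verifies: the working copy still matches the acquired data.")
else:
    print("Hash mismatch: the working copy can no longer be shown to be identical.")

Running the same calculation with a second, independent tool (the dual-tool verification mentioned earlier) gives the examiner a defensible answer if the integrity of the copy is ever challenged in court.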
<urn:uuid:04ac57f7-cd84-4f14-9a17-552cdb4bf133>
CC-MAIN-2017-09
https://forensiccontrol.com/resources/beginners-guide-computer-forensics/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174215.11/warc/CC-MAIN-20170219104614-00212-ip-10-171-10-108.ec2.internal.warc.gz
en
0.946125
3,047
3.359375
3
By Jolize Gerber, Healthcare Analyst, Frost & Sullivan The Southern African Development Community (SADC) was formed with the vision of establishing a regional community that will ensure economic well-being and improve the standards of living and quality of life for the people of Southern Africa. However, what does the quality of life really look like for the majority of the SADC population? The majority of countries within the SADC region continue to face great challenges with regards to poverty, and the region remains heavily dependent on donor aid. Despite economic improvement in the past two decades, a great percentage of the region's population still lives below the poverty line. In Malawi, 50.0 percent of the population lives below the poverty line. In other countries such as Zambia, this percentage is even higher, averaging 86.0 percent. But it is not all doom and gloom for countries in the SADC region. There are also outliers. In Mauritius, only 8.0 percent of the population lives below the poverty line. However, bear in mind that Mauritius is ranked as the best-governed country in Africa and is considered to be one of the developing world's most successful democracies, with a well-developed legal and commercial infrastructure. For most of the other countries in the SADC region, poverty and poor living conditions for the majority of the population remain the order of the day. These conditions have also caused healthcare in the region to remain largely underdeveloped. The picture of healthcare: Too many thorns
<urn:uuid:5a5cad8f-7df4-42aa-b259-fde5479ed7bf>
CC-MAIN-2017-09
https://www.frost.com/sublib/display-market-insight.do?id=209147778
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174215.11/warc/CC-MAIN-20170219104614-00212-ip-10-171-10-108.ec2.internal.warc.gz
en
0.936625
313
2.828125
3
What did the rocket scientist say to the Free/Open Source Software developer? Let's do launch! It's only natural that they'd want to work together. Both communities are focused on the cutting edge: creating tools and capabilities that did not previously exist. Both dedicate their work to expanding humanity's pool of information and want that information to float freely through society. I am a software developer currently working on the NASA/JPL MSL (Mars Science Laboratory) rover, which launches in 2009. These are personal observations of how I encounter Free/Open Source Software (FOSS), and what I think about it. Free floating information feeds a cycle of knowledge. Where the FOSS community donates code, algorithms and products, NASA and other organizations reciprocate with knowledge about weather systems, climate and basic science. Everyone contributes what they're best at, and tightly chartered organizations can stay focused on deeper penetration of hard problems, confident that others are doing the same. Space exploration is necessarily a cooperative venture; it's much too hard for anything less than all of humanity. Look at these statements side by side, and you'll see the philosophical similarities: NASA codifies its dedication in Congress' Space Act Charter: [NASA shall] ... provide for the widest practicable and appropriate dissemination of information concerning its activities and the results thereof... The Open Source Initiative criteria for "Open Source" includes: - Allow free redistribution - Provide access to source code - Allow modifications and the creation of "derived works" FOSS developers codify that dedication in copyrights, copy-lefts, and license agreements like the GPL (GNU Public License), which says in part: When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things. Open Source in Space Need a few examples? FOSS explores our Solar System. We send robots to the moon, Mars and beyond to flyby, orbit or land. FOSS goes with them, pervasive in the real-time operating systems, math libraries and file systems. Consider the robotic decisions of where to rove, and realize the power given the human race by the Free Software Foundation's (FSF) compilers, libraries, build scripts and so on. "Electra" is NASA/JPL's Software Defined Radio (SDR) product created to support the Mars Network, and the InterPlanetary Internet. Electra provides UHF radio links in compliance with Consultative Committee for Space Data Systems (CCSDS) protocols Proximity-1 (data link) and CFDP (file delivery). The neat thing about SDRs is that we can still reconfigure the protocol and signal processing functions after launch. For example, on MRO some other hardware started "leaking" electro-magnetically, which interfered with the Electra radio. We sent up a software fix to Electra to reduce the impact. We already know MRO will act as a radio relay for future Mars probes not yet built, such as the Mars Science Laboratory (MSL), which will arrive in 2010. MSL and others will use protocols not yet invented, and MRO will have to be updated to learn them. Please bear with me a moment; I'd like to make a point about how much FOSS enters into this. 
Each of the following is a FOSS project technology used in the program. The flight software is mostly C, with some assembly for trap handling. It lives on file servers running kerberized OpenAFS, in a CVS repository, and is cross-compiled on Linux for RTEMS on a SPARCv7 target chip. The code is built with gcc, make and libtools, and linked with newlib. (There, that wasn't so bad. I'll mention it again later.)

(Image: Hubble "Pillars of Creation")

FOSS observes our Universe. All the striking Mars rover navigation images, all the roiling clouds of Jupiter, each new view of Saturn's spectacular rings, every azure picture of deep-blue Neptune; the distant stars, far-away galaxies and stupendous galaxy clusters; all of these come to us touched in some way by FOSS. Think about the JPEG image formats themselves, the X11 workstation software to process them, and the MySQL databases to hold them.

(Image: Deep Space Network antenna)

FOSS moves and analyzes ground data. When we prepare, check and double-check command sequences for uplink, the inputs and outputs travel variously across multiple sendmail e-mail systems, Linux platforms, and of course Internet Protocol stacks. The downlinked data, after its tiring journey across the solar system, bounces those last miles in mostly the same way the rest of our Web-connected world works. In the 1960s, NASA engineers had to invent lots of ways to move data around, and it became expensive to maintain.

FOSS methodologies develop space software. Some FOSS project efforts are very widely distributed, driving development methodologies to leverage all those eyes while integrating all that expertise and smoothing together all those styles. That problem is much tougher than, but similar to, our allocation and integration of functions across organizations, teams and contractors. Those methodologies are very attractive to us. There are several NASA examples of using an "agile software lifecycle," and we look to open communities to show us that it can be done, and how to do it best. The only way for the public to participate in seeing fresh images in near-real time is through open architectures for public outreach.

FOSS develops the next-generation cutting-edge technology. Why can we put a man on the moon, but we still don't have robot cars? Some key challenges are:

- Robots have different physical characteristics.
- Robots have different hardware architectures.
- Contributions made by multiple institutions.
- Advanced research requires a flexible framework.
- Software must support various platforms.
- Lack of common low-cost robotic platforms.
- Software must be unrestricted and accessible (ITAR and IP).
- Software must integrate legacy code bases.

The Coupled Layer Architecture for Robotic Autonomy (CLARAty) project brings together folks from many institutions. They develop unified and reusable software that provides robotic functionality and simplifies the integration of new technologies on robotic platforms. (They also have some funky movies of robots doing funny things.)

Our dirty little secret is that space agencies are companies just like everybody else. We too (am I shocking you?) use e-mail, Web servers, and all the usual non-space-qualified suspects.
Here are some examples:

- Operating Systems, Systems Management: Rocks (cluster Linux), Ganglia, amanda
- Software Management: Depot, Subversion, Trac, Bugzilla
- Communications: OpenSSH, Apache, Jabber, Firefox/Mozilla, Sendmail, Mailman, Procmail, CUPS, OpenOffice, wikis (various)
- Data Visualization: ImageMagick, GMT, MatPlotLib
- Compilers, languages, code checkers: SunStudio, splint, Doxygen, valgrind, Java, Perl (some JPL history there), Python, Ruby
- Databases: MySQL

The Open Advantage

OK, so given that our cultures are similar, how does that translate into our bottom line? Why does FOSS have such a large role in space exploration? Here's the top-10 list of what I see.

1. Schedule Margin

Planets move; launch windows don't. The Spirit and Opportunity Mars Rovers had to go in the summer of 2003 or never. They are simply too massive to throw that far, for that budget, unless the planets aligned just so. (Mars and Earth line up every 26 months or so, but in 2003 they were unusually close together.) Procurement cycles for spending lots of government money can be months long, and they can dominate critical paths. Quickly obtainable FOSS relieves that pressure and gives us some elbow room.

Bug fix turnaround times can be critical. If we can fix the source code ourselves, we can keep a whole team moving forward. If the fix is accepted by the open-source community, we avoid long-term maintenance costs and have it for the next project. Feature additions ("Gee, if it only did this, too...") have the same advantage but take longer to give back. Oddly, we can contract for new features but cannot easily give them away. The FOSS spirit hasn't yet pervaded government contracting rules.

2. Risk Mitigation

Full system visibility is key to risk identification, characterization and resolution. The Mars robots are sent to encounter unfamiliar situations. Think how much information system engineers need to mitigate those risks. This is no place for a closed system.

All flight software goes through rigorous review, including the software (compilers) that builds the software (command sequence generators) that builds the software (commands). We do code walk-throughs, which perforce means having the source code. We design white-box test plans by analysis of software decision paths, which is easier to do with the source code in hand. Our review process requires "outside experts, not working on the project" to review the code; well, that's exactly what a FOSS community is all about, isn't it? In essence, the open-source community is the world's largest Review Board, only we don't have to buy the doughnuts.

When you leave Earth's orbit, you also leave "push the reset button" and "reload from CD" far behind. We tend to find bugs that don't bother other customers. We live at or beyond the border cases, and we push frontiers in all senses of the word. So all the critical bugs have to be found and squashed before we go. The best way to shake out software bugs is to have lots of testers independent of development try it out in unfamiliar environments and in ways unforeseen—which pretty much describes the FOSS user community. By the time something's on its 2.1 release, it's usually been beaten up pretty thoroughly. And the beauty is, you have full disclosure about what broke in the 1.0 release, under what conditions, how it was fixed and what tests prove it's gone.

Space exploration takes a lot of brain power—more brains than any one company or nation commands.
Our industry, academic and international partners each have specialized expertise vital to the effort. And each partner, it seems, uses a different platform, language or protocol from the rest that's optimized for that particular piece. Each builds a sub-assembly, and the thing has to work when you bolt it all together. This is interoperability by definition. Software must be designed from the start to "play well with others" beyond your organizational control. Attempts to dictate uniform development platforms are not infrequent, and always fail. At worst, they represent a willful desire to ignore strict interface control, and interfaces are precisely what call for the greatest care.

As we build the space shuttle replacement and new spacecraft for the moon, interoperability is a top-level requirement. Under the current "Vision for Space Exploration," the "Constellation Program's Communications, Command, Control and Information (C3I) Interoperability Specification" was drafted early. Not surprisingly, the specification calls on open standards.

The Pioneer and Voyager spacecraft are older than disco, though younger than the Beatles. They are further from Silicon Valley than anything made by human hands, and getting further. Data from them continues to puzzle us. So, software to analyze that data has been ported to myriad computers. There's never enough money to upgrade routinely, so we stick with a platform until it dies and/or its manufacturer goes out of business. This is only barely tenable through strict portability conventions.

Spacecraft parts are hideously expensive, what with radiation tolerance, quality screening and so on. Software usually has to be developed on simulators and ported to a number of similar but not identical units. For every flight article, there may be a "qualification unit" (for testing to failure), two or three "form/fit" units for functional testing, and some "engineering units" for development. A simple radar algorithm has seen development on Mac OS X, Microsoft Windows and Linux (that I know of)—none of which is the final environment. It's been coded in Python, Perl and C. The work would take far longer had we been locked into one vendor.

The Electra platform and code described earlier have been ported/inherited/reused for:

- a landing radar on MSL
- a spectrometer interface on ISRO's (Indian Space Research Organisation) Chandrayaan-1 Moon Mineralogy Mapper
- a lunar radio architecture prototype C3I (Command, Control, Communications, and Information) Communications Adaptor (CCA)
- Radio Atmospheric Sounding and Scattering Instrument (RASSI)

—all in the space of a few years, each time by a different team.

I've seen technical information retrieved from hard copies of presentations because people were unable to open files that were only a few years old. The (closed) format had changed when a desktop computer was upgraded. That just won't fly.
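To make the earlier point about post-launch reconfigurability more concrete, here is a small Python sketch of the general idea behind a software-defined radio whose protocol and preprocessing functions live in replaceable software modules. Everything in it (the class name, protocol labels and handler logic) is invented for illustration; it is not Electra flight code, which, as noted above, is C cross-compiled for RTEMS.

```python
# Illustrative sketch only: a software-defined radio whose protocol and
# signal-processing functions are replaceable software, not fixed hardware.
class SoftwareDefinedRadio:
    def __init__(self):
        self.protocols = {}                         # protocol name -> handler function
        self.preprocess = lambda samples: samples   # identity preprocessing by default

    def register_protocol(self, name, handler):
        self.protocols[name] = handler

    def uplink_software_update(self, name=None, handler=None, preprocess=None):
        """Simulate patching the radio after launch."""
        if name and handler:
            self.protocols[name] = handler          # new or revised protocol
        if preprocess:
            self.preprocess = preprocess            # e.g., work around interference

    def receive(self, name, samples):
        cleaned = self.preprocess(samples)
        return self.protocols[name](cleaned)


# Initial capability at launch: a toy "proximity link" framer.
def proximity_link_v1(samples):
    return {"frames": [samples[i:i + 4] for i in range(0, len(samples), 4)]}

radio = SoftwareDefinedRadio()
radio.register_protocol("proximity-1", proximity_link_v1)

# Later, ground discovers interference and uplinks a fix: drop saturated samples.
radio.uplink_software_update(preprocess=lambda s: [x for x in s if abs(x) < 100])

# Still later, a protocol that did not exist at launch is added.
radio.uplink_software_update(name="file-delivery",
                             handler=lambda s: {"file_bytes": bytes(abs(x) % 256 for x in s)})

print(radio.receive("proximity-1", [1, 2, 300, 4, 5, 6, 7, 8]))
print(radio.receive("file-delivery", [65, 66, 67]))
```

The design point is simply that behavior lives in data and replaceable functions rather than in fixed circuitry, which is what makes an uplinked fix or a brand-new protocol possible years after launch.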
<urn:uuid:f58e3ae2-febb-469f-bc31-ef148adc45ad>
CC-MAIN-2017-09
http://www.cio.com/article/2438926/open-source-tools/open-source-software-and-its-role-in-space-exploration.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169776.21/warc/CC-MAIN-20170219104609-00456-ip-10-171-10-108.ec2.internal.warc.gz
en
0.929716
2,815
2.578125
3
Universities and colleges are places of higher learning. That doesn't just mean for students. Some of the greatest discoveries of the 21st century have been made in staff and student research laboratories. In fact, U.S. universities received almost 70 billion dollars in funding for research in science and humanities last year. Learning doesn't come just through teaching; it comes through discovery as well.

The problem is that faculty members have long said that they spend as much time on administrative tasks related to their research as they do on the research itself. This takes time away from other, more fruitful activities. According to an article from Forbes:

A commissioned review of the University of California Los Angeles pointed out that the university spends about a billion dollars a year on research but lacks a "formal, guiding technology strategy for research administration." In aggregate, administrative inefficiencies impose burdens on researchers. It diverts time and attention from more productive activities; it delays experiments and raises the costs of doing research.

There are roughly a million faculty and academic researchers in the U.S. – which means reclaiming even one hour every week from admin by using technology would be worth billions of dollars annually.

Scientific research in universities contributes to new medicines, breakthroughs in scientific fields, and the advent of new technologies (some of which could cut down on administrative tasks for researchers). It is true that faculty should be worried about teaching students, but students are involved in this research as well. Not only that, by cutting down the time faculty spends on administrative tasks, it gives them more time for all endeavors – teaching, researching, mentoring and writing scholarly papers.

Forbes mentions a few spots where technology could help unburden faculty when it comes to research:

- Building Research Teams – To build a research team you need to find team members, and this task often falls to faculty. Better matching platforms could help faculty discover students who are willing and qualified to aid in research.
- Enabling Collaboration – Interdisciplinary collaboration can allow researchers to pool resources, conduct peer reviews, and diversify the workforce. Most research papers today feature multiple authors, so faculty are already working together. Sharing data, equipment, and information through collaboration platforms will boost all research efforts.
- Harnessing the Cloud – Cloud-based digital platforms offer a centralized location for data, documents, new findings, peer reviews, and more. They let faculty track inventory and manage procurement. Faculty can keep track of where things are, what's happening, and what's being discovered in other labs. Not only that, researchers from different institutions can work together or check each other's work much more quickly by having a central location where information and critiques can be stored.

At higher education institutions the student should be the first priority. That doesn't mean other priorities don't exist. We rely on colleges and universities to further our understanding of the universe, the planet, medicine, science and more. By cutting down time on administrative tasks that further no learning, we can help faculty make discoveries and teach students.
<urn:uuid:65a3f8c3-356b-41d7-9339-a63a66aa8e4f>
CC-MAIN-2017-09
https://techdecisions.co/unified-communications/technology-education-can-improve-faculty-research/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169776.21/warc/CC-MAIN-20170219104609-00456-ip-10-171-10-108.ec2.internal.warc.gz
en
0.948611
631
3.078125
3
The Non–Inverting Buffer

We now spend some time investigating useful circuit elements that do not directly implement Boolean functions. The first element is the non–inverting buffer. This is logically equivalent to two NOT gates in a row. There are engineering differences between the two, most notably that the non–inverting buffer delays the signal less than a chain of two NOT gates.

The buffer is best considered a "voltage adjuster". A logic 1 (voltage in the range 2.0 – 5.0 volts) will be output as 5.0 volts. A logic 0 (voltage in the range 0.0 – 0.8 volts) will be output as 0.0 volts.

The output of a circuit element does not change instantaneously with the input, but only after a delay time during which the circuit processes the signal. This delay interval, called "gate delay", is about 10 nanoseconds for most simple TTL circuits and about 25 nanoseconds for TTL flip–flops.

The simplest example is the NOT gate. Here is a trace of the input and output. Note that the output does not reflect the input until one gate delay after the input changes. For one gate delay time we have both X = 1 and Y = 1.

For some advanced designs, it is desirable to delay a signal by a fixed amount. One simple circuit to achieve this effect is based on the Boolean identity X = (X')': double negation returns the original value, but only after the circuit's gate delays have elapsed. A circuit to implement this delay might appear as follows. Here is the time trace of the input and output.

The Pulse Generator circuit represents one important application of the gate delay principle. We shall present this circuit now and use it when we develop flip–flops. This circuit, which I call a "pulse generator", is based on the Boolean identity X·X' = 0: ideally the output is always 0, but the gate delay produces a brief pulse. Here is the circuit, and here is a time plot of the circuit's behavior. The pulse is due to the fact that for one gate delay we have both X = 1 and Y = 1. This is the time it takes the NOT gate to respond to its input and change Y.

The Tri–State Buffer

Some time ago, we considered relays as automatic switches. The tri–state buffer is also an automatic switch. Here are the diagrams for two of the four most popular tri–state buffers. An enabled–low buffer is the same as an enabled–high buffer with a NOT gate on the enable input.

What does a tri–state buffer do when it is enabled? What does a tri–state buffer do when it is not enabled? What is this third state implied by the name "tri–state"?

An Enabled–High Tri–State Buffer

Consider an enabled–high tri–state buffer, with the enable signal called "C". When C = 1, the buffer is enabled. When C = 0, the buffer is not enabled. What does the buffer do?

The buffer should be considered a switch. When C = 0, there is no connection between the input A and the output F. When C = 1, the output F is connected to the input A via what appears to be a non–inverting buffer. Strictly speaking, when C = 0 the output F remains connected to input A, but through a circuit that offers very high resistance to the flow of electricity. For this reason, the state is often called "high impedance", "impedance" being an engineer's word for "resistance".

What is This Third State?

Consider a light attached to a battery. We specify the battery as 5 volts, due only to the fact that this course is focused on TTL circuitry.

(Figure: three lamp circuits, showing 0 volts to the lamp, the third state, and 5 volts to the lamp.)

When the switch is closed and the lamp is connected to the battery, there is a voltage of +5 volts on one side, 0 volts on the other, and the lamp is on. In the case at left, both sides of the lamp are connected to 0 volts. Obviously, it does nothing. The middle diagram shows the third state. The top part of the lamp is not directly connected to either 0 volts or 5 volts.
In this third state, the lamp is not illuminated, as there is no power to it. This is similar to the state in which the top is set to 0 volts, but not the same.

Understanding Tri–State Buffers

The best way to understand a tri–state buffer is to consider this circuit.

When C = 0, the top buffer is outputting the value of A (logic 0 or logic 1), the bottom tri–state buffer is not active, and F = A.

When C = 1, the top tri–state buffer is not active, the bottom buffer is outputting the value of B, and F = B.

Due to the arrangement, exactly one tri–state buffer is active at any time. We shall use tri–state buffers to attach circuit elements to a common bus, and trust the control circuitry to activate at most one buffer at a time.
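Because the schematics referred to above are not reproduced here, the following short Python model may help. It captures only the logical behavior just described: an enabled-high tri-state buffer either passes its input or presents a high-impedance state (written "Z"), and two such buffers can share one output line so that F follows A or B depending on C. It is a conceptual sketch, not a timing- or voltage-accurate simulation, and the function names are mine.

```python
# Conceptual model of tri-state buffers sharing one line.
# "Z" stands for the high-impedance (disconnected) third state.

def tristate(enable, value):
    """Enabled-high tri-state buffer: pass 'value' when enabled, else high impedance."""
    return value if enable == 1 else "Z"

def bus(*drivers):
    """Resolve a shared line: at most one driver may be active at a time."""
    active = [d for d in drivers if d != "Z"]
    if len(active) > 1:
        raise ValueError("bus contention: more than one active driver")
    return active[0] if active else "Z"

def two_buffer_circuit(A, B, C):
    # When C = 0 the top buffer drives the line with A; when C = 1 the bottom drives it with B.
    top = tristate(1 - C, A)
    bottom = tristate(C, B)
    return bus(top, bottom)

for C in (0, 1):
    print(f"C={C}: F={two_buffer_circuit(A=1, B=0, C=C)}")
# C=0: F=1  (F follows A)
# C=1: F=0  (F follows B)
```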
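Similarly, the gate-delay glitch behind the pulse generator described earlier can be reproduced with a tiny discrete-time simulation. Here each loop step stands for one gate delay, which is a modeling choice made only for this illustration rather than real TTL timing.

```python
# Discrete-time sketch of the pulse generator: output = X AND (NOT X, delayed one gate).
def simulate(x_trace):
    y = 1              # NOT-gate output, starting from NOT(0) = 1 for an initial X of 0
    pulses = []
    for x in x_trace:
        out = x & y            # the AND gate sees the *old* (delayed) NOT-gate output
        pulses.append(out)
        y = 1 - x              # the NOT gate updates one gate delay later
    return pulses

# X goes 0 -> 1 and stays high; each list index is one gate delay.
x_trace = [0, 0, 1, 1, 1, 1, 0, 0]
print("X     :", x_trace)
print("output:", simulate(x_trace))
# output: [0, 0, 1, 0, 0, 0, 0, 0]  <- a single one-gate-delay pulse on the rising edge
```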
<urn:uuid:e0a642c7-960a-4d6b-822f-d04c969501d7>
CC-MAIN-2017-09
http://edwardbosworth.com/My5155_Slides/Chapter03/OtherCircuitElements.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171416.74/warc/CC-MAIN-20170219104611-00508-ip-10-171-10-108.ec2.internal.warc.gz
en
0.916919
1,073
3.96875
4
Full disk encryption software uses a symmetrical encryption algorithm to encrypt every block on a hard disk or other persistent storage media (e.g., flash drives, etc.). The idea is that even if the storage device is lost or stolen, none of the contents of the filesystem will be compromised.

A key consideration with full disk encryption is generating and securing the encryption key. Normally a single, long, pseudo-random encryption key is used to encrypt the storage device. User keys are used to encrypt/decrypt the disk encryption key. User keys, in turn, may take several forms, for example a key derived from a password or a key stored on a hardware token.

The most common approach to key management on personal computers (i.e., not servers, where system startup typically must proceed unattended) is to prompt the user to enter a password prior to starting the PC's operating system. The password decrypts the user's key, which in turn decrypts the data key that encrypts/decrypts hard drive contents.

Where pre-boot password authentication is used, the pre-boot password may be synchronized with the user's primary network login password -- usually an Active Directory password. This reduces the number of distinct passwords users must remember and type.

If a user forgets his pre-boot password, he must go through an unlock process. Typically the full disk encryption software presents the user with a challenge string, which the user communicates to an IT support person with access to a key recovery application. The support person enters the challenge string and reads back a response, which the user must type. A correct response will unlock the user's PC, at which time the user should choose a new password (and remember it this time!).

Hitachi ID Password Manager enables users whose PC is protected with disk encryption software, and who have forgotten the password they type to unlock their computer, to reactivate their PC. The process for key recovery follows the same challenge/response pattern described above.
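To illustrate the key-wrapping idea described above (a password-derived user key that encrypts a randomly generated data key), here is a simplified Python sketch built on PBKDF2 and the `cryptography` package's Fernet recipe. It is a conceptual demonstration only, not the scheme used by Hitachi ID or any particular disk encryption product; real products operate on raw disk blocks, add hardware protection such as a TPM, and keep separate recovery keys.

```python
# Conceptual sketch of full-disk-encryption key wrapping (not a real product's scheme).
import os, base64, hashlib
from cryptography.fernet import Fernet

def derive_user_key(password: str, salt: bytes) -> bytes:
    # Slow, salted derivation so the pre-boot password can't be brute-forced cheaply.
    raw = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return base64.urlsafe_b64encode(raw)  # Fernet expects a base64-encoded 32-byte key

# 1. Generate the long random data key that actually encrypts disk blocks.
data_key = Fernet.generate_key()

# 2. Wrap (encrypt) the data key with the user's password-derived key.
salt = os.urandom(16)
user_key = derive_user_key("correct horse battery staple", salt)
wrapped_data_key = Fernet(user_key).encrypt(data_key)

# 3. At boot, the typed password unwraps the data key; a wrong password fails.
unwrapped = Fernet(derive_user_key("correct horse battery staple", salt)).decrypt(wrapped_data_key)
assert unwrapped == data_key

# 4. The data key (never the password itself) encrypts/decrypts disk contents.
block_cipher = Fernet(data_key)
ciphertext = block_cipher.encrypt(b"contents of one filesystem block")
print(block_cipher.decrypt(ciphertext))
```

One consequence of this design is that the user can change the password without re-encrypting the disk: only the small wrapped copy of the data key needs to be rewritten.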
<urn:uuid:2a7dd9fa-a893-4461-b3f2-b2c8cd42a1c2>
CC-MAIN-2017-09
http://hitachi-id.com/resource/concepts/full-disk-encryption.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171463.89/warc/CC-MAIN-20170219104611-00204-ip-10-171-10-108.ec2.internal.warc.gz
en
0.908844
378
2.890625
3
Voyager 2, one of two NASA spacecraft to travel farthest from Earth, marked 36 years in space Tuesday. The spacecraft, which launched on Aug. 20, 1977 from Cape Canaveral, Fla., aboard a Titan-Centaur rocket, is more than 9 billion miles away from the sun, according to NASA.

Voyager 2 and its twin, Voyager 1, were launched to explore the outer solar system. Both spacecraft have flown past Jupiter, Saturn, Uranus and Neptune, along with 48 of their moons and their magnetic fields. Voyager 2 is the only NASA spacecraft to have visited and explored Uranus and Neptune.

In 1990, Voyager 1 and Voyager 2 both embarked on a mission to enter interstellar space, the space between star systems in a galaxy. NASA noted that both spacecraft still are sending scientific information about their surroundings back through the Deep Space Network, an international network of large antennas and communication facilities.

There's a scientific debate going on about where Voyager 1 is. NASA reported late in June that Voyager 1, which was launched on Sep. 5, 1977, was nearing the edge of the solar system, flying near the edge of the heliosphere, which is akin to a bubble around the sun. The spacecraft is so close to the edge of the solar system that it is sending back more information about charged particles from outside the solar system and less from those inside it, according to the space agency.

However, a team of researchers from the University of Maryland last week reported that they believe the spacecraft has already left the solar system and entered interstellar space. Voyager 1, according to university researchers, has begun the first exploration of our galaxy beyond the sun's influence. Despite the debate over Voyager 1, scientists seem to consistently believe that Voyager 2 still is within the heliosphere.

Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed. Her email address is [email protected].

This story, "NASA's Voyager 2 marks 36 years on its space odyssey," was originally published by Computerworld.
<urn:uuid:a7d28805-685c-4dac-9fca-07809af1e49c>
CC-MAIN-2017-09
http://www.networkworld.com/article/2169227/data-center/nasa--39-s-voyager-2-marks-36-years-on-its-space-odyssey.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170624.14/warc/CC-MAIN-20170219104610-00024-ip-10-171-10-108.ec2.internal.warc.gz
en
0.941808
507
3.734375
4
A quick pointer to today's A1 New York Times story on a phenomenon we've been following on this blog for the past year: as algorithmic entities explode across the web, humans remain central to their operation. Automation only goes so far and for all Watson's Jeopardy wins, there are still many, many tasks on which computers are terrible and humans are effortlessly amazing. Like understanding language, say, or knowing what's happening in a photograph. There is an analogy to be made to one of Google's other impressive projects: Google Translate. What looks like machine intelligence is actually only a recombination of human intelligence. Translate relies on massive bodies of text that have been translated into different languages by humans; it then is able to extract words and phrases that match up. The algorithms are not actually that complex, but they work because of the massive amounts of data (i.e. human intelligence) that go into the task on the front end. Google Maps has executed a similar operation. Humans are coding every bit of the logic of the road onto a representation of the world so that computers can simply duplicate (infinitely, instantly) the judgments that a person already made.
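A toy example makes the "recombination of human intelligence" point concrete. The snippet below is emphatically not how Google Translate works internally; it is a deliberately crude phrase-table lookup showing that once humans have supplied the translated fragments, a program can produce "new" translations purely by matching and recombining them, and it fails exactly where the human-supplied data runs out.

```python
# Toy phrase-table "translation": all the intelligence lives in the human-supplied pairs.
human_translations = {
    "good morning": "bonjour",
    "the cat": "le chat",
    "is sleeping": "dort",
    "on the sofa": "sur le canapé",
}

def translate(sentence: str) -> str:
    words = sentence.lower().split()
    out, i = [], 0
    while i < len(words):
        # Greedily match the longest known phrase starting at position i.
        for j in range(len(words), i, -1):
            phrase = " ".join(words[i:j])
            if phrase in human_translations:
                out.append(human_translations[phrase])
                i = j
                break
        else:
            out.append(f"[{words[i]}?]")  # no human ever translated this piece
            i += 1
    return " ".join(out)

print(translate("The cat is sleeping on the sofa"))  # le chat dort sur le canapé
print(translate("The cat is sleeping on the moon"))  # gaps appear where human data runs out
```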
<urn:uuid:00f09d83-6d5f-448c-8e8d-7f42d239a886>
CC-MAIN-2017-09
http://www.nextgov.com/emerging-tech/2013/03/best-intelligence-cyborg-intelligence/61816/?oref=ng-dropdown
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172077.66/warc/CC-MAIN-20170219104612-00252-ip-10-171-10-108.ec2.internal.warc.gz
en
0.955933
241
2.734375
3
The patent application, filed in August and published today, summarizes the invention thus: "The portable computing device includes an enclosure that surrounds and protects the internal operational components of the portable computing device. The enclosure includes a structural wall formed from a ceramic material that permits wireless communications therethrough. The wireless communications may for example correspond to RF communications, and further the ceramic material may be radio-transparent thereby allowing RF communications therethrough." With the introduction of Microsoft's wireless-capable Zune media player, analysts have anticipated Apple would add similar capabilities to its iPod. At the very least, this filing demonstrates that Apple's engineers are working on it. In some ways, this filing is more about materials science than electronics. The patent application is focused on the company's innovative use of ceramics as a housing for electronic components. "It should be noted that ceramics have been used in a wide variety of products including electronic devices such as watches, phones, and medical instruments," the filing states. "In all of these cases, however, the ceramic material (sic) have not been used as structural components. In most of these cases they have been used as cosmetic accoutrements. It is believed up till now ceramic materials have never been used as a structural element including structural frames, walls or main body of a consumer electronic device, and more particularly an enclosure of a portable electronic device such as a media player or cell phone."
<urn:uuid:e7e0b0a4-566d-4f73-a374-9dd4944db1d0>
CC-MAIN-2017-09
http://www.networkcomputing.com/wireless/apple-seeks-patent-wireless-handheld/1251957944
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171171.24/warc/CC-MAIN-20170219104611-00072-ip-10-171-10-108.ec2.internal.warc.gz
en
0.953812
291
3.109375
3
By Panda Security According to cloudcomputing.org , "Cloud Computing is a nebulous term covering an array of technologies and services including: grid computing, utility computing, Software as a Service (SaaS), storage in the cloud and virtualization. There is no shortage of buzzwords and definitions differ depending on who you talk to." Leading analysts have also sought to define the term, offering varying explanations which, although they don't coincide completely, have much in common. IDC defines Cloud Computing as "Consumer and business products, services and solutions delivered and consumed in real-time over the Internet." It also defines eight attributes that a solution should have in order to be cataloged under cloud computing.
<urn:uuid:e81cd02d-7339-4f2e-b6fe-883c8b5154d0>
CC-MAIN-2017-09
https://www.bsminfo.com/doc/a-new-technological-paradigm-sets-the-trend-0002
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171932.64/warc/CC-MAIN-20170219104611-00120-ip-10-171-10-108.ec2.internal.warc.gz
en
0.953542
145
2.640625
3
Researchers in the U.S. have developed integrated circuits that can stick to the skin like a child's tattoo and in some cases dissolve in water when they're no longer needed.

The "bio chips" can be worn comfortably on the body to help diagnose and treat illnesses, said John Rogers, a professor of materials science at the University of Illinois at Urbana-Champaign, who described the research at an IEEE conference in San Francisco on Monday. He and his students are working at the intersection of biology and electronics, experimenting with elements and compounds to come up with "epidermal electronics" that are soft and flexible, yet durable enough to be worn like a second skin. The circuits are so thin that when they're peeled away from the body they hang like a sliver of dead skin, with a tangle of fine wires visible under a microscope. Similar circuits could one day be wrapped around the heart like "an electronic pericardium" to correct irregularities such as arrhythmia, Rogers said.

Silicon is usually too rigid to be molded to the body, but sliced to nanometer thickness (a nanometer is a billionth of a meter), it becomes a "floppy" membrane that can bend and twist, Rogers said. It's still fragile, however, so it needs to be laid on a rubber-like substrate that gives it strength. And it still won't stretch, so the researchers form the circuits into ribbed structures that can flex back and forth like an accordion.

The circuits can be applied like a child's temporary tattoo, Rogers said, by laying them on the skin and washing off a thin, soluble backing. The resulting circuit is about 5 microns thick and can stretch by about 30 percent, equivalent to how much skin will stretch. To show the technology, Rogers rolled up his sleeve during his talk and, using a microscope and an overhead projector, revealed a circuit stuck on his arm. It looked like a clear tattoo, with a spaghetti-like mass of wires embedded in the surface.

The researchers are also working on "transient" circuits that dissolve in water when they're no longer needed. Some are variations of the tattoo-like circuits, but they can also take other forms. Silicon, it turns out, is soluble in water when it's sliced thin enough, and a sliver of silicon 35 nanometers thick will dissolve in about two weeks, Rogers said. The substrate can be made from silk, magnesium, silicon dioxide or some other material that also becomes soluble when thin enough.

The soluble circuits contain less silicon, magnesium and other minerals than a daily vitamin pill, so they are safe in the body, Rogers said. To illustrate his point, he produced and then ate a tiny RF oscillator 5 millimeters across.

One possible application of the soluble electronics is to help prevent infections forming at surgical sites. A device could be implanted in the wound and programmed to emit bursts of heat sufficient to kill off bacteria. Because the device dissolves, there's no need for further surgery -- and further risk of infection -- to remove it. Soluble electronics could also be used for non-medical purposes, such as environmental monitors at a chemical spill that eventually dissolve. Or they could be used in consumer electronics to reduce hazardous waste.

Rogers received the US$500,000 Lemelson-MIT Prize in 2011 for his work in bio-electronics.
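For a rough sense of scale, the two figures quoted above (a 35-nanometer sliver dissolving in about two weeks) imply a dissolution rate of roughly 2.5 nanometers per day. The calculation below is nothing more than back-of-the-envelope arithmetic on those quoted numbers; it is not based on published dissolution-rate data, and real rates depend on the material, temperature and surrounding fluid.

```python
# Back-of-the-envelope only: rate inferred from "35 nm dissolves in about two weeks".
thickness_nm = 35
days_to_dissolve = 14
rate_nm_per_day = thickness_nm / days_to_dissolve   # ~2.5 nm/day

for t in (35, 70, 100):   # hypothetical layer thicknesses in nanometers
    print(f"{t} nm layer -> roughly {t / rate_nm_per_day:.0f} days to dissolve")
```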
<urn:uuid:867deccc-dae5-43a1-9e3b-fa3c353f29e3>
CC-MAIN-2017-09
http://www.computerworld.com/article/2493667/healthcare-it/researchers-develop-featherweight-chips-that-dissolve-in-water.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170993.54/warc/CC-MAIN-20170219104610-00592-ip-10-171-10-108.ec2.internal.warc.gz
en
0.95548
699
3.3125
3
TADOUSSAC, QUEBEC--(Marketwire - Oct. 10, 2012) - A three- or four-metre long white or greyish beluga whale has been seen several times near the Old Port since September 28. No photos have been taken yet, but the Québec Marine Mammals Emergency Network feels that these observations are reliable. Where did this beluga come from? The closest population of belugas lives in the St. Lawrence Estuary. It is a small population, isolated from other northern populations, and is considered threatened. The beluga in Montréal could be a young animal from this group that has gone exploring, which is normal behaviour. Why is it being monitored? Belugas are social animals. If this beluga were at home, it would be in constant contact with other belugas. Now that it is on its own, it may try to interact with boats and humans. In the summer of 2012, for example, we saw two young belugas travelling around the Gaspé Peninsula, interacting with boats and swimmers in every small town. Luckily, they returned to their natural habitat and those abnormal behaviours ceased. Other isolated belugas, spotted off the Lower North Shore and around Nova Scotia or Newfoundland, have been less lucky; they were eventually wounded or killed by a boat. Will it go back to where it came from? The best thing that could happen to this beluga is for it to swim back down the Saint Lawrence, find a group of belugas and return to its normal habitat. There is a good chance that this will occur. To help ensure its return, we must avoid it becoming used to humans and, therefore, we should not interact with it. How can you help? If you see the beluga, immediately call the Marine Mammals Emergency Network at 1-877-722-5346. It is important to stay at least 400 m away, not to approach it, not to lure it close to humans, not to make noise or stimulate or attract its attention, and not to try to feed it. It is also advisable to avoid boating in the area it has been seen. By limiting its interaction with humans, we can maximize the chance that it will return to its natural habitat in good health. The Quebec Marine Mammal Emergency Response Network is made up of a dozen private and governmental organizations. It has been mandated to organize, coordinate and implement measures to reduce the accidental death of marine mammals, help animals in trouble and gather information in cases of beached or drifting carcasses in waters bordering the province of Quebec.
<urn:uuid:ad357b64-0c1d-449f-b952-4dc30954b7b0>
CC-MAIN-2017-09
http://www.marketwired.com/press-release/beluga-sighting-in-montreal-1712010.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171232.43/warc/CC-MAIN-20170219104611-00116-ip-10-171-10-108.ec2.internal.warc.gz
en
0.962484
540
3.1875
3
Cloud Computing to Slice Data Center Energy Consumption by 2020

Cloud computing is set to reduce data center energy consumption by 31% from 2010 to 2020, according to a recent report from Pike Research. Pike Research is a market research and consulting firm that provides in-depth analysis of global clean technology markets. Its newly released report, "Cloud Computing Energy Efficiency", provides an in-depth analysis of the energy efficiency benefits of cloud computing, including an assessment of the software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS) markets.

The report indicates that companies have recognized the significant energy-efficiency benefits of cloud computing, and that its growth in the market will have important implications for both energy consumption and greenhouse gas (GHG) emissions. "Cloud computing revenue will grow strongly over the next decade, with a CAGR of almost 29%," said senior analyst Eric Woods. "But the reduction in energy consumption will be even more significant. Massive investments in new data center technologies and computing clouds are leading to unprecedented efficiencies."

Pike Research notes that the transition to the cloud will continue to accelerate because clouds are less expensive to operate, consume less energy, and have higher utilization rates than traditional data centers. The research firm forecasts that much of the work done today in internal data centers will be outsourced to the cloud by the end of the decade.

Today, a large number of suppliers of servers, network equipment, disk drives, and cooling and power equipment have begun to design their products to suit the needs of large cloud operators. This has resulted in improved operating margins through better use of electricity, and in turn in wider adoption. "Cloud Computing Energy Efficiency" also adds that several products designed specifically to optimize cloud computing have only recently begun to reach the market.

By Anuradha Shukla
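To put the quoted figures in perspective, compounding a growth rate of roughly 29% per year over a decade multiplies revenue by more than twelve times. The snippet below is ordinary compound-growth arithmetic applied to the percentages quoted above; it uses no additional Pike Research data.

```python
# Compound-growth arithmetic using the figures quoted above (illustrative only).
cagr = 0.29                      # ~29% compound annual growth rate
years = 10
growth_multiple = (1 + cagr) ** years
print(f"Revenue multiple after {years} years at {cagr:.0%} CAGR: {growth_multiple:.1f}x")

baseline_energy = 100.0          # index data center energy use in 2010 at 100
reduction = 0.31                 # 31% reduction forecast from 2010 to 2020
print(f"2020 energy index: {baseline_energy * (1 - reduction):.0f} (vs. 100 in 2010)")
```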
<urn:uuid:84589fe7-d1a1-4385-a728-c63d131ba99b>
CC-MAIN-2017-09
https://cloudtweaks.com/2011/09/cloud-computing-to-slice-data-center-energy-consumption-by-2020/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171632.91/warc/CC-MAIN-20170219104611-00292-ip-10-171-10-108.ec2.internal.warc.gz
en
0.952512
389
2.546875
3
PNRP, the Peer Name Resolution Protocol, is a new protocol from Microsoft and one of the first technologies that will change the way we think about name resolution in computer networking, possibly becoming the next DNS (Domain Name System)-like technology. PNRP is the new DNS, but there are so many differences between them that it deserves an article on this blog.

Just as a reminder, in a few simple words: DNS is a technology that enables us to type a domain name in the browser and leaves it to the Domain Name System to translate that domain name into the IP address of the server where the web page is published.

As the whole world steps toward IPv6 implementation in the next years, there are technologies and future services that will not function at their best using DNS. In this case Microsoft was one of the first to develop a new, decentralized technology that relies on neighboring computers for name resolution and relies completely on IPv6 addressing. The Peer Name Resolution Protocol was the answer.

DNS depends on a hierarchical structure of naming, while PNRP depends on peer systems in order to resolve a computer system's location. Mainly, PNRP is a referral system that performs lookups on the basis of data it is already familiar with.

Here is a simple example: if you need to find Computer 1 and you are close to Computers 2 and 3, it is important for your system to know whether Computer 2 knows Computer 1 or not. If the response of Computer 2 is positive, only then is a link to Computer 1 provided to you. If the reply is negative, then the system asks Computer 3 whether it knows Computer 1, and the same method is used as with Computer 2. If none of the computers knows Computer 1, then the request is sent to other computers close to the system until it successfully finds one that is familiar with Computer 1.

There are a number of ways in which PNRP is different from the DNS service:
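The referral pattern in the example above can be sketched in a few lines of Python. This is not the real PNRP algorithm, which distributes names across a structured cloud of IPv6 peers and caches certified peer addresses; it is only a simplified model of the "ask your neighbors, then their contacts" lookup described in the paragraph above, with invented peer names and addresses.

```python
# Simplified model of referral-based name lookup among peers (not real PNRP).
from collections import deque

# Each peer knows the addresses of a few other peers (invented IPv6-style labels).
peers = {
    "computer2": {"computer4": "fe80::4", "computer5": "fe80::5"},
    "computer3": {"computer1": "fe80::1", "computer6": "fe80::6"},
    "computer4": {},
    "computer5": {"computer3": "fe80::3"},
    "computer6": {},
}

def resolve(target, neighbors):
    """Ask our neighbors, then their contacts, until someone knows the target."""
    queue, asked = deque(neighbors), set()
    while queue:
        peer = queue.popleft()
        if peer in asked or peer not in peers:
            continue
        asked.add(peer)
        known = peers[peer]
        if target in known:
            return known[target], peer           # address plus who referred us
        queue.extend(known)                      # widen the search to their contacts
    return None, None

address, referrer = resolve("computer1", neighbors=["computer2", "computer3"])
print(f"computer1 is at {address}, learned via {referrer}")
```

The key contrast with DNS is that no fixed hierarchy of servers is consulted; the answer emerges from whichever nearby peer happens to know the name.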
<urn:uuid:bcfdec05-548f-45b2-a5cf-e63faf228efa>
CC-MAIN-2017-09
https://howdoesinternetwork.com/tag/name-resolution
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171632.91/warc/CC-MAIN-20170219104611-00292-ip-10-171-10-108.ec2.internal.warc.gz
en
0.937558
401
3.671875
4