Power Network Services with MySQL

Last week we learned how to create and populate a user authentication database. Today we'll dig into making changes, making backups, connecting to remote MySQL servers and plugging MySQL in to servers like Postfix and Samba.

In Part 1 we learned some commands for making changes to tables and data. Making changes to data, such as deleting or modifying users, means knowing how to find the entries to change. MySQL provides the select, where and update commands for this. For example, suppose user Alice Smith wins the lottery and in a fit of sanity walks (or more likely, runs) off the job never to return. So you have to delete her from your user authentication database. Remember, you have to select the database and table (which we created in Part 1), and the MySQL commands are performed at the MySQL command line. In these examples the administrative user created in Part 1, sqllackey01, will do the work:

$ mysql -u sqllackey01 -p
mysql> use samba_auth;
mysql> select * from users where last_name='smith' and first_name='alice';
| uid | login  | password   | first_name | last_name |
| 611 | asmith | jVqfGYRRSm | alice      | smith     |

Easy peasey. Then delete Alice using the field containing the primary key, which in the example table is the uid:

mysql> delete from users where uid=611;

If you don't know exactly what to search for, use the like keyword:

mysql> select * from users where last_name like '%sm%th%';

The % are wildcards; this example will find all last names containing the strings sm and th. So you'll get Smythe, Smithers, Smoothoperator, Smithee, and so forth. If you need to match a literal percent character, escape it with a backslash: \%.

A common chore is resetting passwords, which you might do to disable Alice's account instead of erasing her:

mysql> update users set password=(encrypt('newpass')) where uid=611;

Any data can be updated the same way, for example:

mysql> update users set first_name='RichAlice', last_name='SmytheSupreme' where uid=611;

Backing Up & Restoring

The easy way to make a backup of a database is to use MySQL Backup. MySQL Backup is a Perl script that uses mysqldump, tar, and gzip. The documentation is in the script, and it's simple to use. Starting at around line 104, comment out the three lines referencing CGI commands. These are for running backups from a Web browser, which is not a secure way to run them. The easiest thing to do is set everything up in the script, then run it automatically from a cron job. Anywhere a program or file is named, be sure to use the full absolute path name. You'll have the option to back up all tables, or to select certain ones. The backups are stored locally by default, and can be uploaded via FTP to another location. There is even an option to email the backups to whatever lucky soul is elected to receive them. This cron job runs the script every midnight:

# crontab -e
0 0 * * * /usr/sbin/scripts/mysql_backup

Restoring a database from backup is done by redirecting the contents of the backup file to the original location:

# mysql -u sqllackey01 -p [password] samba_auth < /backups/samba_auth_backup.sql

Logging in to a remote MySQL server is the same as logging in to a local server, except you must specify the hostname or IP:

$ mysql -h windbag -u sqllackey01 -p

Administrative user sqllackey01 must have already been granted remote login privileges by the MySQL root user.
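If you prefer to script routine chores rather than type them at the mysql prompt, the same remote login works from a short program. Here is a minimal sketch using the third-party mysql-connector-python package (an assumption on my part; any MySQL client library with parameterized queries would do), reusing the windbag host, the sqllackey01 account and the users table from the examples above:

import getpass
import mysql.connector

# Connect to the remote server exactly as "mysql -h windbag -u sqllackey01 -p" would.
conn = mysql.connector.connect(
    host="windbag",
    user="sqllackey01",
    password=getpass.getpass("MySQL password: "),
    database="samba_auth",
)
cur = conn.cursor()

# Parameterized query: the driver quotes the values, so odd characters in
# the new password cannot break the statement.
cur.execute(
    "UPDATE users SET password = ENCRYPT(%s) WHERE uid = %s",
    ("newpass", 611),
)
conn.commit()
print(cur.rowcount, "row(s) updated")

cur.close()
conn.close()

Whether you connect from the command-line client or from a script like this, the account still needs the remote access grant described next.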
Using the % wildcard allows sqllackey01 to log in from anywhere, and of course you don't want to use the word "password" for the password:

mysql> grant all privileges on samba_auth.* to sqllackey01@'%' identified by 'password';

You might want to restrict sqllackey01 to the local network by replacing the wildcard with either the domain name or subnet: '%.domain.com', '192.168.1.%'. Opening a MySQL database to the Internet is a bad idea. For remote administration over untrusted networks use SSH (see Resources).

Debian users will probably get the dreaded "ERROR 2003 (HY000): Can't connect to the MySQL server" error, because by default MySQL accepts TCP connections only from localhost. Fix this by commenting out the "bind_address = 127.0.0.1" line in /etc/mysql/my.cnf. (Remember to restart MySQL after making changes to configuration files.) Another way to prevent MySQL from accepting remote connections is to put this entry in /etc/mysql/my.cnf:

skip-networking

This tells it to not accept TCP connections, but only local UNIX sockets. You can see what MySQL is listening for with netstat:

$ netstat -an --inet
tcp   0   0 0.0.0.0:3306   0.0.0.0:*   LISTEN

That's the output from commenting out "bind_address = 127.0.0.1", showing that MySQL is accepting connections from all network interfaces. Using skip-networking should show no listening TCP ports at all.

What MySQL Version?

mysql> select version();

Now that you're a wizard at the MySQL command line, check out the MySQL Administrator. It is a very nice graphical interface for MySQL that lets you perform all the common administrative functions: user management, backups and restores, and connection and server health monitoring.

Using MySQL with Samba

The samba-doc package comes with a script for creating the table that holds your user accounts, examples/pdb/mysql/mysql.dump. Of course you can create your own table from scratch; this is how to use the script to create the table:

$ mysql -u sqllackey01 -p samba_auth < /usr/share/doc/samba-doc/examples/pdb/mysql/mysql.dump

Then Samba needs some configuration tweaks; see the Samba/MySQL howto.

Using MySQL With Postfix

There are no official Postfix scripts for creating tables, but you can find a good third-party script here. (See the Postfix howto page for a nice assortment of excellent howtos.) One way or another, create your database and populate it with your desired tables, like transport, virtual, and users. Then edit a bunch of Postfix configuration files so Postfix knows where to find everything, and you'll have a flexible backend that can be used by all the components of a mail server: POP, IMAP and SMTP. And all the users and domains are completely virtual, so you can easily add, remove and change accounts, just like an ISP.

Another fast, fully-featured, Free/Open Source database worth checking out is PostgreSQL. SQLite is also worth a look. It's small and embeddable, which means you don't have to hassle with setting up a server.
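As a quick illustration of that last point, an SQLite database is just a file on disk; there is no daemon to install or secure. The following sketch uses Python's built-in sqlite3 module, with a made-up file name and a table that loosely mirrors the users table from the MySQL examples above:

import sqlite3

conn = sqlite3.connect("samba_auth.db")   # the whole database lives in this file
cur = conn.cursor()

cur.execute(
    """CREATE TABLE IF NOT EXISTS users (
           uid        INTEGER PRIMARY KEY,
           login      TEXT,
           password   TEXT,
           first_name TEXT,
           last_name  TEXT
       )"""
)
cur.execute(
    "INSERT OR REPLACE INTO users (uid, login, password, first_name, last_name) "
    "VALUES (?, ?, ?, ?, ?)",
    (611, "asmith", "jVqfGYRRSm", "alice", "smith"),
)
conn.commit()

# The familiar wildcard search works here too.
for row in cur.execute("SELECT * FROM users WHERE last_name LIKE '%sm%th%'"):
    print(row)

conn.close()

For a single application that owns its own data, this is often all the database you need; for shared network services like Samba and Postfix, a server such as MySQL or PostgreSQL remains the better fit.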
The makings of a federal police e-academy

Training for federal law enforcement officers might soon incorporate many of the online technologies that universities have been using to make instruction more accessible, comprehensive and affordable.

John Besselman is leading a program at the Homeland Security Department's Federal Law Enforcement Training Center that is exploring using virtual and digital learning capabilities to improve the education of the more than 60,000 students who receive training at the center's campuses each year. He said he hopes the program, named Train 21, can help the center save space and time, make more efficient use of its employees, and improve the overall effectiveness of its training. "We have an opportunity to take advantage of the Digital Age," he said.

The new approach to training could involve offering distance learning programs to some groups of students, online continuing education courses and digital learning resources. The center published a request for information Nov. 10 in the Federal Register.

The center's students come from more than 80 federal agencies. They typically stay at one of the center's campuses while they attend training sessions for anywhere from a few days to several months. Although the center differs from traditional universities, Besselman said the benefits of electronic learning are similar for all students. For example, he said, allowing established law enforcement officers to complete advanced training remotely could mean significant savings in time and money for employees and agencies. Although students will still need to travel to a campus for basic training, using information technology could mean they spend less time away from their jobs. Furthermore, the roughly 500 professors who work at the center's campuses will have more tools for delivering information and interacting with students. "The generation is such that they are willing, ready and capable of receiving their information in different formats," Besselman said.

Implementing all the ideas that he envisions could cost as much as $70 million. However, he said many of the initiatives cost relatively little and are easy to implement. For example, making better use of the center's TV station for training purposes, using podcasts and digitizing materials are low-cost endeavors. Besselman said he knows that new things can be scary, but since he opened the Train 21 office at the center's Glynco, Ga., headquarters campus, he has been surprised by how many people have stopped by with good ideas.

Train 21 is still in its early planning phase and focused on developing detailed plans, prototypes, a proof of concept and a long-term implementation road map, the RFI states. Officials are seeking information about whether organizations can help the center:

• Explore e-learning best practices that have been successful elsewhere and could be applied to law enforcement training.
• Identify e-learning solutions that are successful and cost-effective.
• Design and develop pilot training, learning solutions and acquisition plans.
• Document training and technology prototypes that can serve as blueprints for larger-scale implementations.
• Develop a road map for the long-term evolution of e-learning in a major, multicampus training environment.

Responses to the RFI are due by Nov. 21.

Ben Bain is a reporter for Federal Computer Week.
In today’s market, segments of the broader IoT ecosystem have been underserved, especially small-to-medium businesses and mid-sized cities. However, this oversight will change over the next several years as these entities seek to embrace the efficiencies and cost reductions that enterprises and government agencies are achieving through IoT implementations. In addition to their high-priority issues of affordability and seamless IT integration, a key requirement we’ve seen for IoT adoption by SMBs and municipalities is assurances of fail-safe security measures. While current solutions can ensure effective security today, there is a looming long-term threat to security as the IoT ecosystem proliferates – the Domain Naming System registry. The current DNS registry is used to ensure websites can be accessed simply by typing in their name rather than the series of numbers of the site’s IP address – 18.104.22.168. In a world in which just about any device you can name will have an IP address, however, it’s time for a new registry dedicated to IoT devices. A stark example of the ineffective security of current DNS protocols in the IoT age was seen recently in the Mirai botnet attack that brought down much of the internet, including CNN, Netflix, Reddit, Twitter, and many other sites. The main target of the distributed denial of service attack was the servers of Dyn, a company that controls much of the DNS for internet infrastructure. However, unlike other DDoS botnets, which take advantage of computers, Mirai was able to gather strength from IoT devices such as DVR players and IP cameras with little security protection and then throw junk traffic at Dyn’s servers until they could no longer support valid users. The current DNS registry was never intended for the IoT era, especially as the IoT ecosystem becomes inseparable from fog computing. Fog computing is a new paradigm for analyzing and acting on the most time-sensitive data at the network edge, close to where it is generated instead of sending vast amounts of IoT data to the cloud. It helps machines, on their own, act on IoT data in milliseconds based on human-set policies. In smart cities, this can mean landscape sensors noting the deluge of a recent rainstorm and shutting off irrigation systems. Or it could mean a connected trash receptacle sending a message to an autonomous trash truck that it should be included in the day’s pick-up schedule. This immediate, machine-to-machine communication can also be a major target for disruption by hackers, especially in mission-critical industries such as energy and transportation. At a time when cyberattacks can be launched via the most innocuous connection, the industry should focus on building a registry for every single IoT device, ensuring the legitimacy of the device and that the device can be easily monitored to stop and capture perpetrators of an attack. As every cybersecurity professional knows, any system’s security is only as effective as its weakest link. With disparate organizations implementing IoT systems throughout the world, we face a huge but urgent task to create a new registry of IP addresses for IoT devices. There are precedents and organizations capable of achieving this type of undertaking. For example, oneM2M, a global initiative to create standards for IoT security and interoperability, might be one answer. 
Formed in 2012, the body is composed of eight of the world's top telecommunications and IT standards bodies and has more than 200 member organizations, including Cisco Systems, General Electric, Intel, MediaTek, and Samsung. With the assistance of telecommunications services firm iconectiv, oneM2M has already started an App-ID registry for IoT software installations. With its role in enabling mobile phone number portability and maintaining mobile device registries to protect against fraud and theft, it's not a huge leap to see iconectiv or a similar organization, with the support of standards bodies like oneM2M, creating a dedicated IP address registry for IoT devices as well.

Bob Bilbruck is CEO of B2 Group/Directed IoT/Captjur (www.b2groupglobal.com). Edited by Ken Briodagh
The recent hacking of the Associated Press' Twitter account has raised the question: how secure are social media accounts? A study released by IObit reveals that 30% of users always accept "Keep Me Logged-in" when they are logging into Facebook, Twitter, Pinterest and other social media sites. The study also found that 10% of users never clear browser cookies and cache. This data shows that there are still many people who choose "Keep Me Logged-in" features no matter what risks they pose to their online privacy and security.

While more and more people are active users of social media and online shopping, it appears many are still not doing enough to protect their personal information and privacy. For example, 45% of people will change their passwords only when they are required to do so, which means that their social media accounts may suffer a malicious attack at any time. Moreover, 15% of people never change their password. At this time, millions of people are still in danger of having their social accounts attacked and personal, possibly embarrassing, information exposed to the public by hackers.

"This survey is open to all the people all over the world. 10,157 people joined it. Keeping a strong, frequently changed password is the best guardian for one's social media accounts. It should therefore be taken seriously and kept well protected. However, many people aren't consciously aware that this small activity is threatening their personal privacy and security," said Michael Zhao, Marketing Director at IObit. "We shouldn't wait until something bad happens before we take action to protect our accounts. We will continue to remind users about this issue. We have strong confidence that our users will be following best-practices for keeping their privacy and online assets protected. A strong password and a good habit in password management is the simplest and the most effective method," Zhao added.
Everyday Science Quiz Questions & Answers - Part 3

51. Question: Why is it difficult to cook rice or potatoes at higher altitudes?
Answer: Atmospheric pressure at higher altitudes is low and boils water below 100°C. The boiling point of water rises and falls with the pressure on its surface.

52. Question: Why is it difficult to breathe at higher altitudes?
Answer: Because of the low air pressure at higher altitudes, the quantity of air is less, and so is that of oxygen.

53. Question: Why are winter nights and summer nights warmer during cloudy weather than when the sky is clear?
Answer: Clouds, being bad conductors of heat, do not permit the heat radiated from the land to escape into the sky. As this heat remains in the atmosphere, cloudy nights are warmer.

54. Question: Why is a metal tyre heated before it is fixed on wooden wheels?
Answer: On heating, the metal tyre expands and its circumference increases. This makes fitting it over the wheel easier; cooling then shrinks it, fixing the tyre tightly.

55. Question: Why is it easier to swim in the sea than in a river?
Answer: The density of sea water is higher; hence the upthrust is more than that of river water.

56. Question: Who will possibly learn swimming faster - a fat person or a thin person?
Answer: The fat person displaces more water, which will help him float much more freely compared to a thin person.

57. Question: Why is a flash of lightning seen before thunder?
Answer: Because light travels faster than sound, it reaches the earth before the sound of thunder.

58. Question: Why cannot a petrol fire be extinguished by water?
Answer: Water, which is heavier than petrol, slips down, permitting the petrol to rise to the surface and continue to burn. Besides, the existing temperature is so high that the water poured on the fire evaporates even before it can extinguish the fire. The latter is true if a small quantity of water is poured.

59. Question: Why does water remain cold in an earthen pot?
Answer: There are pores in an earthen pot which allow water to percolate to the outer surface. Here evaporation of water takes place, thereby producing a cooling effect.

60. Question: Why do we place a wet cloth on the forehead of a patient suffering from high temperature?
Answer: Because of the body's temperature, water evaporating from the wet cloth produces a cooling effect and brings the temperature down.

61. Question: When a needle is placed on a small piece of blotting paper which is placed on the surface of clean water, the blotting paper sinks after a few minutes but the needle floats. However, in a soap solution the needle sinks. Why?
Answer: The surface tension of clean water being higher than that of a soap solution, it can support the weight of a needle due to its surface tension. By the addition of soap, the surface tension of water reduces, thereby resulting in the sinking of the needle.

62. Question: To prevent multiplication of mosquitoes, it is recommended to sprinkle oil in ponds with stagnant water. Why?
Answer: Mosquitoes breed in stagnant water. The larvae of mosquitoes keep floating on the surface of water due to surface tension. However, when oil is sprinkled, the surface tension is lowered, resulting in the drowning and death of the larvae.

63. Question: Why does oil rise up the cloth tape of an oil lamp?
Answer: The pores in the cloth tape suck up the oil by capillary action.

64. Question: Why are ventilators in a room always made near the roof?
Answer: The hot air, being lighter, tends to rise and escape from the ventilators at the top. This allows cool air to come into the room to take its place.

65. Question: How does ink get filled in a fountain pen?
Answer: When the rubber tube of a fountain pen immersed in ink is pressed, the air inside the tube comes out, and when the pressure is released the ink rushes in to fill the air space in the tube.

66. Question: Why are air coolers less effective during the rainy season?
Answer: During the rainy season, the atmospheric air is saturated with moisture. Therefore, the process of evaporation of water from the moist pads of the cooler slows down, so the air blown out from the cooler is not cooled.

67. Question: Why does grass gather more dew at night than metallic objects such as stones?
Answer: Grass, being a good radiator, enables water vapour in the air to condense on it. Moreover, grass gives out water constantly (transpiration), which appears in the form of dew because the air near the grass is saturated with water vapour and slows evaporation. Dew is formed on objects which are good radiators and bad conductors.

68. Question: If a lighted paper is introduced in a jar of carbon dioxide, its flame extinguishes. Why?
Answer: Because carbon dioxide does not help in burning. For burning, oxygen is required.

69. Question: Why does the mass of an iron rod increase on rusting?
Answer: Because rust is hydrated ferric oxide, which adds to the mass of the iron rod. The process of rusting involves the addition of hydrogen and oxygen elements to iron.

70. Question: Why does milk curdle?
Answer: Lactose (milk sugar) in milk undergoes fermentation and changes into lactic acid, which on reacting with milk protein (casein) forms curd.

71. Question: Why does hard water not lather soap profusely?
Answer: Hard water contains sulphates and chlorides of magnesium and calcium, which form insoluble compounds with soap. Therefore, soap does not lather with hard water.

72. Question: Why is it dangerous to have a charcoal fire burning in a closed room?
Answer: When charcoal burns it produces carbon monoxide, which is suffocating and can cause death.

73. Question: Why is it dangerous to sleep under trees at night?
Answer: Plants respire at night and give out carbon dioxide, which reduces the oxygen content of the air required for breathing.

74. Question: Why does ENO's salt effervesce on addition of water?
Answer: It contains tartaric acid and sodium bicarbonate. On adding water, carbon dioxide is produced, which when released into water causes effervescence.

75. Question: Why does milk turn sour?
Answer: The microbes react with milk and grow. They turn lactose into lactic acid, which is sour in taste.
In order to protect your assets, you must first know what they are, where they are, and understand how they are tracked and managed. Are they secured? Who has access to them? Who tracks and manages them? Do you have functional procedures in place to respond and recover from a security breach quickly? Do you have a process improvement cycle to prevent re-occurrence? These are all important issues related to assets. It's important to remember what an asset is — it's anything used in a business task. Generally, asset protection involves identification of assets, assessment of an asset's value, and a determination of the technologies needed to provide sufficient security for that asset. There are many facets to the job of asset security including:

- Cloud Computing
- Secure Coding
- Identity Management
- Information Assurance
- Public Key Infrastructure

The cloud offers computing services as a commodity. This involves a wide range of capabilities including online storage and backup, virtual/remote desktop, collaboration services, software as a service, platform as a service, and infrastructure as a service. Popular services include online office productivity (such as Google Docs or Office 365), computing services for custom applications (such as Engine Yard or Windows Azure), or complete back-end scalable datacenters (such as GoGrid or Rackspace). While cloud computing can greatly benefit an organization, it also introduces new and unique security concerns. Cloud services are at odds with some regulations and security standards. Each organization is responsible for its own compliance with issues like the prohibition of commingling of certain data types, hardware types, or data locations. Also, traffic flow must be understood. Is your sensitive and critical data encrypted in transit and while stored/processed in the cloud? Who has access to the encryption keys? What procedures are in place to manage ease of access, recovery options, downtime concerns, backup, privacy protections, and speed of interaction and throughput? Cloud computing revolutionizes technology. The benefits and drawbacks need to be considered carefully before shifting aspects of your infrastructure into the cloud.

Virtualization is the creation and/or support of a simulated copy of a real machine or environment. Virtualization can be used to provide virtual hardware platforms, operating systems/platforms, storage capacity, network resources, and applications. Virtualization can also be used to host applications on a different OS than the one they were originally designed for, or to allow a single set of server hardware to host several server operating systems in memory simultaneously. Virtualization offers the benefits of lower hardware costs, reduced operating costs, efficient backups/restoration, high availability, portability of services, faster deployment, scalability, and more. Virtualization adds security to the computing environment by permitting servers to be logically separated from each other. However, virtualization can cause problems with licensing, patch management, and regulation compliance, which may lead to slower performance of services, a greater potential for a single point of failure, and potential security concerns due to hardware re-use or sharing.

Secure coding practices are essential to reducing the threat caused by the exploitation of processes, bad/poor coding, and flaws in design.
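To make the input-handling practices described below concrete, here is a minimal, illustrative Python sketch contrasting an injection-prone query with a parameterized one. The standard library's sqlite3 module stands in for any database engine, and the table, column, and payload are invented for the example:

import sqlite3

conn = sqlite3.connect(":memory:")   # throwaway in-memory database for the demo
conn.execute("CREATE TABLE accounts (login TEXT, secret TEXT)")
conn.execute("INSERT INTO accounts VALUES ('alice', 's3kr1t')")

user_input = "x' OR '1'='1"          # classic injection payload

# Vulnerable: user input is concatenated straight into the SQL text, so the
# payload rewrites the WHERE clause and the query matches every row.
rows = conn.execute(
    "SELECT * FROM accounts WHERE login = '" + user_input + "'"
).fetchall()
print("concatenated query returned", len(rows), "row(s)")

# Safer: a parameterized query treats the input strictly as data.
rows = conn.execute(
    "SELECT * FROM accounts WHERE login = ?", (user_input,)
).fetchall()
print("parameterized query returned", len(rows), "row(s)")

conn.close()

The same principle, validating and safely passing untrusted input, underlies the broader secure coding practices that follow.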
Secure coding includes the consideration of appropriate controls at the onset of development, proper consideration given to design, robust code and error routines, minimizing verbose error messages, eliminating programmer back doors, bounds checking, input validation, separation of duties, and comprehensive change management. Failure to use secure coding practices leads to software that is susceptible to buffer overflow attacks, DoS attacks, and malicious code injection attacks. Non-robust code can also provide a path for database and command injection attacks. Secure coding practices can include many aspects of secure design integration and attack prevention. For example, software can be designed to authenticate all resource requests and processing actions before allowing a task to operate. Additionally, limiting and sanitizing input to prevent scripting, meta-character, and command injection attacks is an essential part of secure coding. Secure coding is more than just a few extra lines of code; it is an entire process and architecture of software development. Secure coding is an essential security practice not just for vendors that sell/release products to the world-wide market but also for internal software developers that develop code for use exclusively by internal users or which is exposed to the world via an Internet service.

One of the biggest mistakes companies make in relation to the Internet is assuming their Internet servers are secure and cannot be compromised, and that if they were ever compromised it would not lead to serious consequences or a breach of their private network. This is usually a poor assumption. With the growing popularity of fuzzing tools to find coding errors, the proliferation and distribution of buffer overflow exploit code, and with several variants of code injection attacks (including SQL, command, XML, LDAP, SIP, etc.), no Internet service can ever be assumed to be immune from breach.

Companies collect a lot of customer and employee data. Identity management involves the protection of all personally identifiable information (PII). This protection includes proper classification of information, delineation of the lines of communication, and strict policies and procedures for access control. Accountability is a key requirement to hold all information requestors ('subjects', both internal users and outside attackers) liable for their actions. Credentials are a popular form of PII subject to attack. All repositories of personal information, access channels to those repositories, and exchange of information with those repositories need to be protected with strong authentication and encryption. Today's sharing of information, transient locations of data repositories, and society's acceptance of weak authentication set the stage for transitive attacks. Transitive attacks occur when a trust is allowed without realizing that it includes other trusts that you were unaware of, and that can defeat your security.

Information assurance satisfies management's desire for a given security profile, indicating that all data is properly protected and able to be accepted as accurate and readily available. The set of processes needed to support this assurance requires the establishment of a reliable means to lock down assets and track their usage. Specifically, information assurance is focused on the security of data or information typically stored in files. It is important to properly manage the risk of using, processing, transmitting, and storing these data files.
Secure data management addresses not just electronic or digital issues, but physical storage media (especially portable media) as well.

Public Key Infrastructure

Public Key Infrastructure (PKI) is a security framework generally comprising four main components: symmetric encryption, asymmetric encryption (often public key cryptography), hashing, and a reliable method of authentication. Symmetric encryption is used for bulk encryption for storage or transmission of information. Asymmetric encryption is used for digital signatures and digital envelopes (i.e., secure exchange of symmetric keys). Hashing is used to check and verify integrity. How will you assure that reliable authentication is used to ensure that only valid entities participate in the PKI environment, and that key delivery, key use, and key revocation are handled securely? Customers' belief in the credibility of certificates, and therefore in the security of transactions with your website, depends on the reputation and reliability of the CA. Due to recent attacks by hackers, blind use of digital certificates has been called into question. As with any protection measure, companies need to understand what PKI technology affords us in terms of protection, as well as to be cognizant of the technology's limitations and vulnerabilities.
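Since hashing carries the integrity-checking role in a PKI, here is a small, generic Python sketch (standard library hashlib only; the file name and the expected digest are placeholders) showing how a stored SHA-256 digest can reveal whether a file was altered in transit or at rest:

import hashlib

def sha256_of_file(path, chunk_size=65536):
    """Return the hex SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# In practice the expected digest comes from a trusted channel, such as a
# digitally signed manifest; this value is only a placeholder.
expected = "0000000000000000000000000000000000000000000000000000000000000000"

actual = sha256_of_file("backup.tar.gz")   # illustrative file name
if actual == expected:
    print("integrity check passed")
else:
    print("file has been altered, corrupted, or the wrong digest was supplied")

In a full PKI the digest itself would be signed with a private key, so that both the integrity of the file and the identity of its publisher can be verified.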
Unmanned Aerial Vehicles

The Defense Department is increasingly using a technology known as UAVs -- commonly called "drones" -- in the Middle East. However, other agencies are using drones in other capacities; Homeland Security, for example, is using them to patrol the border. The use of drones is controversial, as some have expressed privacy and other concerns.

November 17, 2014: This biodegradable vehicle, developed to protect sensitive ecosystems, invited a closer look at the positive uses of drone technology.
November 17, 2014: Will the laser work in actual warfare? Nobody knows exactly.
October 15, 2014: Many enthusiasts see a future in which drones could prove as transformational—and as popular—as the personal computer.
October 14, 2014: Experts say the highly classified plane—which returns to Earth today—could be testing spacecraft longevity or developing anti-satellite weapons.
September 29, 2014: This device, now in development, introduces a new way to use drones.
August 28, 2014: One pastor is using drones as a metaphor.
August 20, 2014: Why so many groups are against them, but haven't agreed on what they are.
August 7, 2014: This drone project could save lives in far-flung rural communities—and perhaps pioneer the system globally.
August 5, 2014: An online installation asks us to accept an all-but-certain future of drones in cities, and to rethink our relationship to them.
August 1, 2014: It's difficult to cross a man with details on every secret drone strike you've authorized—especially the legally dubious ones.
July 18, 2014: Drones can take photos, even stand in as the ring bearer. But are they legal?
July 14, 2014: Once a symbol of outright American military superiority, drones are on their way to becoming an ordinary weapon of war.
July 11, 2014: Just don't expect one at your front porch anytime soon.
July 8, 2014: Proposed emergency funding contains $39 million for aerial surveillance, including unmanned aircraft operations.
June 24, 2014: Instead of a library card, you'll need training, a professor's endorsement, and a willingness to assume liability for accidents.
June 3, 2014: If the FAA approves, filmmakers could be among the first to use commercial drones.
May 15, 2014: Last week's near-collision between a hobby drone and a passenger aircraft portends what could happen as the sky gets a lot more crowded.
May 8, 2014: The discovery has fueled worry that the secretive and erratic northern country might soon use armed drones against South Korea.
May 7, 2014: The National Park Service says its ban on drones doesn't just apply to Yosemite.
April 30, 2014: Out in the desert it's hard to know when one solar panel among millions has failed. Unless you're a drone.
April 22, 2014: Sprawling, citywide events such as marathons present a particularly thorny security problem.
April 17, 2014: People are, unsurprisingly, distrustful of unmanned aerial vehicles.
April 15, 2014: DARPA's turning unmanned aerial vehicles into very mobile hotspots.
April 15, 2014: The drone maker will help Google with its "Project Loon" initiative, which looks to provide internet access to remote areas.
March 28, 2014: Mark Zuckerberg sets his sights on bringing the Web to "every person in the world."
March 25, 2014: The drones collect data on crops from hundreds of feet above.
March 20, 2014: The FAA wants to clear up the confusion. Here's a helpful guide.
March 17, 2014: The Flying Donkey Challenge will support the development of drones that can be used to carry goods to market and deliver medication to remote villages.
March 11, 2014: Drones will cause an upheaval of society like we haven't seen in 700 years.
March 10, 2014: The FAA is appealing a judge's ruling that a ban on commercial drones lacks proper authority.
March 7, 2014: Today, Florida. Tomorrow, Mars.
March 7, 2014: The fine was the first leveled by the Federal Aviation Administration over the commercial use of unmanned aircraft.
March 4, 2014: Facebook is in talks to buy a drone company Titan Aerospace, which is developing autonomous solar-powered aircraft.
February 26, 2014: But security is still Jeh Johnson's priority, the department's new secretary tells Congress.
February 25, 2014: A recent case in Connecticut is testing whether remote-controlled reporting could one day be a real option.
February 24, 2014: The U.S. military is becoming more digital, specialized, and automated—just like the rest of the world.
February 18, 2014: The use of unmanned aerial vehicles for sports photography is far from a passing gimmick.
February 12, 2014: Your red roses and blue violets just got grounded.
January 30, 2014: It seemed like a foolproof business model, but the FAA says it's illegal.
January 30, 2014: They're quickly changing the art of visualizing buildings.
Researchers from the U.S. Army Armament Research, Development and Engineering Center recently patented a new type of bullet capable of self-destructing after traveling over a predetermined distance. The idea behind the new and advanced projectile is that it might help limit the extent of collateral damage (read: innocents dying) during battle or in other operational settings and environments. As for how it all works, the U.S. Army explains that when one of these limited-range projectiles is fired, a pyrotechnical material is ignited at the same time and reacts with a special coating on the bullet. The pyrotechnic material ignites the reactive material, and if the projectile reaches a maximum desired range prior to impact with a target, the ignited reactive material transforms the projectile into an aerodynamically unstable object. The transformation into an aerodynamically unstable object renders the projectile incapable of continued flight. The researchers add that the desired range of its limited-range projectile can be adjusted by switching up the reactive materials used. Put simply, the Army has come up with what effectively amounts to a self-destructing bullet that is rendered ineffective over certain distances. Currently, the invention is nothing more than a proof of concept, but the Army researchers involved are confident that they're onto something transformative. "The biggest advantage is reduced risk of collateral damage," researcher Stephen McFarlane said. "In today's urban environments others could become significantly hurt or killed, especially by a round the size of a .50 caliber, if it goes too far." The Army notes that the project currently lacks any funding from the U.S. Government, so it may be a while before this proof of concept becomes a working prototype, let alone an actual tool used in a combat setting.
Path MTU Discovery When a host needs to transmit data out an interface, it references the interface's Maximum Transmission Unit (MTU) to determine how much data it can put into each packet. Ethernet interfaces, for example, have a default MTU of 1500 bytes, not including the Ethernet header or trailer. This means a host needing to send a TCP data stream would typically use the first 20 of these 1500 bytes for the IP header, the next 20 for the TCP header, and as much of the remaining 1460 bytes as necessary for the data payload. Encapsulating data in maximum-size packets like this allows for the least possible consumption of bandwidth by protocol overhead. Unfortunately, not all links which compose the Internet have the same MTU. The MTU offered by a link may vary depending on the physical media type or configured encapsulation (such as GRE tunneling or IPsec encryption). When a router decides to forward an IPv4 packet out an interface, but determines that the packet size exceeds the interface's MTU, the router must fragment the packet to transmit it as two (or more) individual pieces, each within the link MTU. Fragmentation is expensive both in router resources and in bandwidth utilization; new headers must be generated and attached to each fragment. (In fact, the IPv6 specification removes transit packet fragmentation from router operation entirely, but this discussion will be left for another time.) To utilize a path in the most efficient manner possible, hosts must find the path MTU; this is the smallest MTU of any link in the path to the distant end. For example, for two hosts communicating across three routed links with independent MTUs of 1500, 800, and 1200 bytes, the smallest (800 bytes) must be assumed by each end host to avoid fragmentation. Of course, it's impossible to know the MTU of each link through which a packet might travel. RFC 1191 defines path MTU discovery, a simple process through which a host can detect a path MTU smaller than its interface MTU. Two components are key to this process: the Don't Fragment (DF) bit of the IP header, and a subcode of the ICMP Destination Unreachable message, Fragmentation Needed. Setting the DF bit in an IP packet prevents a router from performing fragmentation when it encounters an MTU less than the packet size. Instead, the packet is discarded and an ICMP Fragmentation Needed message is sent to the originating host. Essentially, the router is indicating that it needs to fragment the packet but the DF flag won't allow for it. Conveniently, RFC 1191 expands the Fragmentation Needed message to include the MTU of the link necessitating fragmentation. Now that the actual path MTU has been learned, the host can cache this value and packetize future data for the destination to the appropriate size. Note that path MTU discovery is an ongoing process; the host continues to set the DF flag so that it can detect further decreases in MTU should dynamic routing influence a new path to the destination. RFC 1191 also allows for periodic testing for an increased path MTU, by occasionally attempting to pass a packet larger than the learned MTU. If the packet succeeds, the path MTU will be raised to this higher value. You can test path MTU discovery across a live network with a tool like tracepath (part of the Linux IPutils package) or mturoute (Windows only). 
Here's a sample of tracepath output from the lab pictured above, with the MTU of F0/1 reduced to 1400 bytes using the ip mtu command:

Host$ tracepath -n 192.168.1.2
 1:  192.168.0.2    0.097ms pmtu 1500
 1:  192.168.0.1    0.535ms
 1:  192.168.0.1    0.355ms
 2:  192.168.0.1    0.430ms pmtu 1400
 2:  192.168.1.2    0.763ms reached
     Resume: pmtu 1400 hops 2 back 254
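Applications can also ask the kernel what path MTU it has learned for a destination. The sketch below is a rough, Linux-only illustration in Python: it turns on "always set DF" behavior for a UDP socket, sends one oversized probe, and reads back the cached path MTU. The numeric option values come from the Linux headers, and the destination address and port are just the lab values from above; a real tool such as tracepath probes repeatedly and reacts to the returning ICMP errors rather than reading a single cached value.

import socket

# Linux-specific socket option values from <linux/in.h>.
IP_MTU_DISCOVER = 10
IP_PMTUDISC_DO = 2      # always set the DF bit; never fragment locally
IP_MTU = 14             # read back the kernel's cached path MTU

def cached_path_mtu(host, port=33434):
    """Send one DF-flagged probe and return the kernel's current path MTU."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
    s.connect((host, port))   # connect() only fixes the destination for UDP
    try:
        # 1472 bytes of payload fills a 1500-byte Ethernet MTU once the
        # 20-byte IP and 8-byte UDP headers are added.
        s.send(b"x" * 1472)
    except OSError:
        pass                  # raised locally once a smaller path MTU is known
    mtu = s.getsockopt(socket.IPPROTO_IP, IP_MTU)
    s.close()
    return mtu

print(cached_path_mtu("192.168.1.2"))   # host from the tracepath example above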
History of malware

A brief history of viruses, worms and Trojans, part 3

In 1995 the first macro viruses appeared. Until that time, only executable files and boot sectors had been affected. Macro viruses placed ever higher demands on the detecting scanners. Melissa, Loveletter, Sobig et al. continue to set new speed records for spreading.

With "DMV" and "Night watchman" the first macro viruses appear. "Concept 1995" was the first macro virus which broke out publicly and spread unchecked in English systems. Hunter.c was the first polymorphic macro virus to appear in Germany. Wm.Concept was the first 'in the wild' macro virus for Word (with the exception of the HyperCard infectors). It only contained the message, "That's enough to prove a point.", and shortly afterwards was the most widespread virus in the world. Wm.Concept established the category of "Proof of Concept" viruses. PoC viruses merely demonstrate that it is possible to exploit a given weak point without causing any real damage. Detection of macro viruses puts a lot of demands on virus scanners, not least because of the constantly changing formats of script languages and Office files.

The first macro generators for German or English macro viruses appear. Macro viruses are no longer limited to Word, but also target Excel and AmiPro files. They also cross the boundaries between operating systems and infect both PCs and Macs. Laroux is the first to infect MS Excel files. Boza is the first virus to attack the PE EXE format of Windows 95 files. It was written by Quantum, a member of the Australian virus-writing group VLAD. Viruses are becoming more and more specialised and target specific weak points in programs, operating systems or hardware. The first mIRC scripts appear, which spread in a worm-like manner amongst Internet Relay Chat users. The first virus for the Linux OS appears.

Strange Brew is the first Java virus. With the exception of macro viruses, which are now underway in both Access and other programs, PCs with MacOS had not been plagued with viruses for at least 3 years. That all changes with the Autostart.9805 worm. Autostart uses the QuickTime AutoStart mechanism on Power PCs and copies itself to hard drives and other data media. Certain files are overwritten with junk data and thus corrupted. AutoStart spreads all over the world from Hong Kong. Backdoors appear with Netbus and Back Orifice, with which it is possible to monitor and remotely control a computer without the victim being aware. Where Back Orifice is concerned, a lively discussion rages as to whether it is remote maintenance or remote control software. Since the remote control functions can be executed unbeknown to the user, Back Orifice is classed as a Trojan. In mid 2000, an attacker successfully broke into Microsoft's internal company network using BO. CIH (Spacefiller, Chernobyl) appears in June in Taiwan. It has one of the largest payloads in virus history. It raises the question as to whether viruses are capable of destroying hardware. When its harmful function becomes active (on the 26th of April), it overwrites the Flash BIOS and the partition table of the hard drive. The computer is thus no longer able to boot. On some motherboards, the BIOS components had to be replaced or reprogrammed. But even after restoring the system, the data were lost. The author, Chinese student Chen Ing-Hau, is not legally prosecuted. VBS.Rabbit is the first program to use the Windows Scripting Host (WSH). Written in Visual Basic, it accesses other VBS files.
HTML.Prepend demonstrates that HTML files can be infected using VBScript. Dr. Solomons was bought by Network Associates. As previously with McAfee, customers migrate away from the program. In March, the "Melissa" worm infects tens of thousands of computers and spreads like wildfire worldwide on the first day of its appearance. It sends e-mails to the first 50 addresses in the Outlook address book and many mail-servers crash under the weight of incoming e-mails. In August, David I. Smith admits that he wrote the worm. Happy99 creates a copy of every e-mail sent by the user and sends it again with the same text and the same subject line plus the worm as an attachment. This also operates with Usenet postings. In June, Explore.zip disguises itself as a self-extracting file which is sent as a reply to an e-mail received. It spreads through network sharing and can infect other computers in the network, if only one network user does not take sufficient care. The damaging function scans the hard drive for C and C++ programs, Excel, Word and PowerPoint files and deletes them. Besides email, Pretty Park also spreads through Internet Relay Chats (IRC). It had very effective protection and camouflage mechanisms, which prevented the worm from being able to be deleted. At the next virus scan, the worm was recognized as legitimate. Sometimes virus scans were even blocked. By manipulating the Registry, Pretty Park was executed before EXE files, which therefore resulted in all EXE files being reported as infected. For users of Outlook, the "Good Times" vision that a virus can infect the computer, if an email is opened (even in preview mode) comes true with Bubbleboy. To do this, Bubbleboy uses an error in a program library. Despite all the prophecies, there is no Millennium Bug, which would have been worthy of this name. Palm/Phage and Palm/Liberty-A are certainly rare, but quite capable of attacking PDAs running with Palm OS. The VB-Script worm VBS/KAKworm uses a weak point in scriplets and typelibs in Internet Explorer. Similarly to BubbleBoy, it was spread by opening an email (even in preview mode). In May, a worm sends emails in snowballing fashion from the Outlook address book with the subject line "I love you" and causes billions in damages primarily in large companies' networks. Here too, the networks were quickly completely overloaded. A host of versions are derived from the original version, created by a Filipino student by the name of Onel de Guzman. US experts refer to it as the most malicious virus in computer history. The author of W95/MTX took great pains to remove the worm/virus hybrid from the computer. It sent a PIF file with a double file extension via email. It blocked the browser's access to some anti-virus providers' websites, contaminated files with the virus component and replaced some files with the worm component.After Loveletter and its many versions, emails with the corresponding subject lines were simply filtered out at the email gateways. Stages of Life varied the subject line and thus slipped through the net. In September, Liberty, the first Trojan for PDAs, appears in Sweden. It transfers itself during synchronisation with the PC and then deletes all updates. In February, an email worm circulates, the attachment of which purports to contain a picture of the Russian tennis player Anna Kournikova. Whoever opens it, installs the worm, which sends itself to all addresses in the Outlook address book. Naked also spread by email. It purports to be a flash animation of a naked woman. 
After opening it installs itself and sends itself to all Outlook addresses. As it also deletes Windows and system directories, it makes the computer unusable. The computer can only be used again, once the operating system is reinstalled. In July, Code Red uses a buffer overflow error in the Internet Information Server (IIS) Indexing Service DLL of Windows NT, 2000 und XP. It randomly scans IP addresses on the standard port for Internet connections and transmits a Trojan which, between the 20th and 27th of a month, launches a Denial of Service (DoS) attack against the White House website. Removing the virus requires a great deal of effort and costs billions. In July, SirCam spreads over networks and via Outlook Express and brings in some innovations. It ensures that, each time the computer is started, an EXE file is activated. It is the first worm to bring its own SMTP engine with it. However it not only transmits itself, but also personal files, which it finds on the computer. In September, Nimda distributes an Internet worm, which requires no user interaction. For distribution, it uses security loopholes in programs alongside emails. Numerous web servers are overloaded and infected file systems can be read by the entire world. In November, the memory-resident Badtrans worm uses a security loophole in Outlook and Outlook Express to spread itself. It installs itself as a service, answers emails, spies on passwords and records key sequences. At the beginning of the year the "MyParty" worm proves that not everything that ends with ".com" is a website. Anyone who double-clicks on the e-mail attachment "www.myparty.yahoo.com" gets a worm with backdoor components instead of the expected pictures. In the spring and summer, Klez uses the IFRAME security loophole in Internet Explorer to automatically install itself when an email is viewed. It spreads via email and networks and attaches itself to executable files. On the 13th of even months (or other days in subsequent versions), all files on all accessible drives would be overwritten with random content. The content could only be restored through backups. In May, Benjamin spreads as the initial worm via the KaZaA network. It replicates itself under many different names in a network folder. A website with an advertisement is displayed on infected computers. Prior to this, Gnutella P2P networks were also affected. Lentin is a worm that exploits the fact many people don't know that SCR files are not only screen savers, but that they are also executable files. Compared to Klez, its video effect is just an annoyance as a harmful function. Nor does it manage to spread like Klez. At the end of September Opasoft (also called Brazil) spreads like an epidemic. On port 137, it scans any computer on the network and checks to see if file and/or printer sharing is enabled on any of them. Then it tries to copy itself onto the computer. If there is password protection, a list of passwords is run through and a weak point in the saving of passwords is exploited. Tanatos aka BugBear is the first worm since the spring to edge out Klez from the top position. The worm spreads via email and networks, installs a spyware component and sends records of keystrokes. In January, "SQL slammer" infects at least 75,000 SQL servers and consequently cripples the Internet for hours. It exploits a weak point in the Microsoft SQL server, which has been known about for 6 months, to neutralize database servers. 
As SQL slammer comprises only an incorrect query and is not loaded into the memory as a file, it remains undetected by antivirus programs. The result: in Seattle the emergency numbers for the police and fire brigade fail, Bank of America ATMs cease to function, 14,000 post offices in Italy remain closed, online stock market trading suffers severely. In Korea KT Corp was temporarily completely disconnected from the Net. The index fell by some 3%, in line with the greatly reduced trading volume. In China all foreign network traffic was blocked. In August, Lovesan (alias Blaster) spreads independently over the Internet. It uses a security loophole closed just four weeks previously by Microsoft in the RPC/DCOM service and randomly infects computers selected by IP address. Within a very short time hundreds of thousands of computers were infected (570,000 was bandied about). Shortly afterwards Welchia (alias Nachi) began to remove Lovesan/ Blaster from computers and close the RPC/DCOM security loophole. At the end of August 2003, 18 year-old, Jeffrey Lee Parsons was arrested as the author of Lovesan. In March 2005, he was given a hefty fine, which, with the agreement of Microsoft, was commuted to a weekly period of community service over 3 years. The mass mail worm "Sobig.F" sets a new record for propagation speed with its own mail engine. It spreads ten times faster than previous worms. Viruses become weapons in the armoury of organised crime. Countless Trojans spy on passwords, credit card numbers and other personal information. Backdoors make computers capable of being remotely controlled and integrate them in so-called botnets. Using the zombies of a botnet, denial of service attacks are made on online betting agencies during the European Football Championship. The operators are forced to pay the extortionists' demands. Rugrat is the first virus for 64 bit Windows. Cabir, the first virus for mobile telephones with Symbian OS and Bluetooth interface is developed by Group 29A, the group known for its proof of concept viruses. Shortly thereafter, the same group follows up with WinCE4Dust.A, the first PoC virus for Windows CE. The first worm for Symbian Smartphones, CommWarrior.A, spreads via MMS. The MMS messages are sent to all entries in the phone book and use variable accompanying texts to pose as anti-virus software, games, drivers, emulators, 3D software or interesting pictures.
Cyber-attacks work the same way the Internet does, using the Domain Name System (DNS) to distribute malware, control botnets and phish login credentials. With the mainstream adoption of cloud services, bring-your-own-device programs and off-network workers, the attack surface has expanded beyond the traditional corporate network perimeter. This device and network diversity has created an environment where organizations must protect any device, anywhere it roams. Today's security platforms, which are plagued by reactive intelligence, gaps in enforcement, and the inability to integrate the two, can't keep up. This has paved the way for a new category of cyber-security platform called a Secure Cloud Gateway (SCG).

A Secure Cloud Gateway uses a DNS-based foundation to provide broader security, improved coverage and deeper visibility. Legitimate Web browsing occurs on only two protocol (port) pairs: HTTP (80) and HTTPS (443). Yet malware is occasionally distributed over non-standard ports to infect devices, and botnets regularly use non-Web protocols to breach networks and steal data. A Secure Cloud Gateway uses DNS to provide protection across all ports, protocols and applications.

Today, threats are targeted, but the targets are everywhere. Unmanaged, personal devices routinely connect to the corporate network, while employees take company devices containing sensitive data off the network and roam outside the secure perimeter. By using DNS, a Secure Cloud Gateway provides security coverage for devices regardless of the network or location from which they connect.

The appearance and behavior of cyber threats vary infinitely, yet they all originate from a finite number of Internet hosts, and many share the same criminal infrastructure. To extract accurate security intelligence, a Secure Cloud Gateway uses DNS infrastructure and Anycast routing technology to map every connection request across the Internet both spatially and temporally.

While the vast majority of Web domains can be classified as either safe or malicious, some Internet hosts are harder to classify, because they store both safe and malicious Web content or because their Internet origins are suspicious. However, performing deep inspection for every Web connection significantly reduces performance, and redirecting every Web connection can significantly reduce manageability. A Secure Cloud Gateway identifies high-risk or suspicious domains and uses DNS redirection to route only them for deeper inspection.

Unlike Secure Web Gateway (SWG) appliances or services that send every Web connection through a proxy, a Secure Cloud Gateway only routes risky Web connections for deeper inspection. This concept is called Intelligent Proxy. Here's how it works:

Scenario 1: An employee attempts to visit site #1. A Secure Cloud Gateway has already determined that this domain is malicious, based on the risk score for the host. Perhaps the domain is related to an infrastructure known to be used for criminal attacks, or there is a pattern where the domain is always requested after other malicious host requests. A Secure Cloud Gateway returns the IP address of its block page server instead of the malicious domain, thus protecting the organization's network and data.

Scenario 2: An employee attempts to visit site #2. A Secure Cloud Gateway continually analyzes the Internet origins of the site's content hosts, both spatially (e.g., geography, network) and temporally (e.g., request volume, co-occurrences).
Based on both known data and algorithmic risk predictions, a Secure Cloud Gateway determines that the site #2 domain is too low a risk to proxy, and it returns the IP address needed to connect directly to the site's host. The employee experiences no latency or disruption when accessing this host.

Scenario 3: An employee attempts to visit site #3. A Secure Cloud Gateway has determined that the content host for this domain is too risky and returns the IP address of its proxy. The proxy provides deeper inspection beyond just the host's Internet origins (domain and IP address). After these inspections, if the content is deemed safe, it is sent to the browser, connecting the employee to the domain. If the domain is malicious, a Secure Cloud Gateway sends back a block page and the employee is prevented from accessing a malicious domain.

Integrating Intelligence with Enforcement

Effective security requires both intelligence and enforcement to protect against advanced threats and targeted attacks. Intelligence without timely enforcement will fail to block malware or contain botnets. Meanwhile, enforcement without predictive intelligence will fail to stay ahead of the most complex threats. A Secure Cloud Gateway reconciles intelligence and enforcement in new ways.

Actionable intelligence requires maximum coverage and visibility. A Secure Cloud Gateway, because it uses the DNS infrastructure, can gather a tremendous volume, velocity and variety of data: enough to predict the Internet origins of emerging threats even if the attack, binary file or exploit is unknown. The data it collects reflects patterns of use across all devices regardless of network, location, type or ownership, and across all Internet connections, context and content regardless of port or protocol.

Meanwhile, enforcement requires a security technology with maximum breadth and depth. Using recursive DNS, a Secure Cloud Gateway can enforce security policy on traffic across 65,535 network ports and an unlimited number of protocols and apps. To provide advanced threat protection, a Secure Cloud Gateway redirects high-risk Web requests to its Intelligent Proxy, which performs deeper inspection to detect and block malicious content hidden within Web sessions.

Rather than using a traditional proxy or in-line architecture, a Secure Cloud Gateway uses a cloud-based infrastructure that integrates multiple security enforcement technologies with Internet-scale threat intelligence gathering capabilities. This enables a Secure Cloud Gateway to stay ahead of constantly evolving attacks and emerging threats without sacrificing performance and manageability.

Hubbard is a noted information security researcher and Chief Technology Officer for OpenDNS, provider of the Umbrella cyber-security service.
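To make the three scenarios above concrete, here is a minimal sketch, in Python, of the resolve-time decision an Intelligent Proxy makes. It is not OpenDNS code; the risk scores, thresholds, and addresses are invented for illustration, and a real Secure Cloud Gateway derives them from Internet-scale DNS telemetry rather than a hard-coded table.

# Hypothetical sketch of the "Intelligent Proxy" decision described in the
# three scenarios above. All risk scores, thresholds, and addresses are
# invented; a production system computes them from live DNS telemetry.

BLOCK_PAGE_IP = "203.0.113.10"   # documentation-range address for the block page server
PROXY_IP      = "203.0.113.20"   # documentation-range address for the deep-inspection proxy

# toy risk scores on a 0.0 (safe) .. 1.0 (malicious) scale
RISK_SCORES = {
    "known-bad.example":  0.95,   # scenario 1: clearly malicious
    "ordinary.example":   0.05,   # scenario 2: clearly safe
    "suspicious.example": 0.55,   # scenario 3: mixed or suspect hosting
}

def resolve(domain: str, real_ip: str) -> str:
    """Return the IP address the DNS resolver should hand back for this domain."""
    risk = RISK_SCORES.get(domain, 0.5)      # unknown domains treated as suspicious
    if risk >= 0.8:
        return BLOCK_PAGE_IP                 # scenario 1: block outright
    if risk >= 0.3:
        return PROXY_IP                      # scenario 3: route through the proxy
    return real_ip                           # scenario 2: connect directly

if __name__ == "__main__":
    for name in RISK_SCORES:
        print(name, "->", resolve(name, real_ip="198.51.100.7"))

The point of the design is that the expensive step, deep content inspection, is reserved for the middle band of risk, so most traffic resolves directly and only suspicious domains pay the proxy's latency cost.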
Aug 14, 2012

Harvard Researchers Use D-Wave Quantum Computer to Fold Proteins

Paper published in Nature Scientific Reports shows that optimization problems in biophysics can be solved with a quantum computer.

Burnaby, BC - Milpitas, CA - August 14, 2012 - In a paper published yesterday in Nature Scientific Reports (http://www.nature.com/srep/index.html), a team of Harvard University researchers, led by Professor Alán Aspuru-Guzik, presented results of the largest protein folding problem solved to date using a quantum computer. The researchers ran instances of a lattice protein folding model, known as the Miyazawa-Jernigan model, on a D-Wave One™ quantum computer.

"It's gratifying to see that our machine can be used to serve the scientific community in this way," stated Dr. Geordie Rose, D-Wave CTO and Founder.

"The D-Wave computer found the ground-state conformation of six-amino-acid lattice protein models. This is the first time a quantum device has been used to tackle optimization problems related to the natural sciences," said Professor Alán Aspuru-Guzik from the Department of Chemistry and Chemical Biology at Harvard University.

Proteins contribute to virtually every process that occurs within a cell. The shape of a protein is closely related to its function. Understanding the shape of a protein helps researchers understand how it behaves, accelerating advances in many different areas of life sciences, including drug and vaccine design. A cornerstone of computational biophysics, lattice protein folding models provide useful insight into the energy landscapes of real proteins. Understanding these landscapes, and how real proteins fold into the shapes that help give them their function, is an extremely difficult problem for today's computers to solve.

Dr. Alejandro Perdomo-Ortiz, the lead author of the paper, stated: "Knowing that we can use real quantum computers to solve hard problems in biology is an exciting and important result. The techniques developed in this report can also be used to tackle other biophysical problems such as molecular recognition, protein design, and sequence alignment."

About D-Wave Systems Inc.

Founded in 1999, D-Wave's mission is to integrate new discoveries in physics and computer science into breakthrough new approaches to computation. The company's flagship product, the D-Wave One™, is built around a novel type of superconducting processor that uses quantum mechanics to massively accelerate computation. In 2010, Lockheed Martin purchased serial number 1, completing the world's first sale of a commercial quantum computer. With headquarters near Vancouver, Canada, its U.S. offices, as well as its superconducting chip foundry, are located in Silicon Valley. D-Wave has a blue-chip investor base including Business Development Bank of Canada, Draper Fisher Jurvetson, Goldman Sachs, Growthworks, Harris & Harris Group, International Investment and Underwriting, and Kensington Partners Limited. Gartner Group analysts named D-Wave a 2012 Cool Vendor in High-Performance Computing and Extreme-Low-Energy Servers.

For more information, visit: www.dwavesys.com

Media contact: Janice Odell - 415.738.2165 - [email protected]

This press release may contain forward-looking statements that are subject to risks and uncertainties that could cause actual results to differ materially from those set forth in the forward-looking statements.
A space-time cloak? Time holes? It's not science fiction anymore.
- By William Jackson - Jan 06, 2012

In the "Star Trek" saga, the United Federation of Planets never figured out the cloaking technology that allowed Romulan warships to pop in and out of Federation space without being detected. But 21st-century Earth scientists are working with ways to bend light and time to achieve the same effects. It is a technology the government might well be interested in for masking online snooping.

"Although this sounds like science fiction, the lesson from metamaterials research in the last decade has taught us that, within certain restrictions, such speculations are not fantasy," researchers at Imperial College London wrote in an online article published in 2010 in the Journal of Optics. "We here show how the magic of editing history can be achieved by introducing the concept of the space-time cloak."

It is not just cloaking that could be possible with the technology, they wrote. "The space-time cloak can achieve the illusion of a matter transporter," in which an object appears to move from one location and instantaneously appear in another.

The Imperial College research, which described the theoretical creation of a space-time cloak, or history editor, was funded in part by the Defense Advanced Research Projects Agency. A practical demonstration of the cloak was described by researchers at Cornell University in a recent issue of the journal Nature.

"This approach is based on accelerating the front part of a probe light beam and slowing down its rear part to create a well controlled temporal gap — inside which an event occurs — such that the probe beam is not modified in any way by the event," the Cornell team wrote. "The probe beam is then restored to its original form by the reverse manipulation of the dispersion." They succeeded in creating a time hole in a fiber-optic cable lasting 50 picoseconds (trillionths of a second) in which events could be hidden.

It is easy to imagine why the military would be interested in cloaking. It would come in handy getting an aircraft carrier past those pesky Iranians in the Strait of Hormuz. It is unlikely that the space-time cloak could be implemented on this scale before the 23rd century, however.

But consider that most of the world's data today travels over or is accessed via fiber-optic cable. The ability to open a gap and slip something past sensors without being observed could be a powerful tool for hacking. I don't pretend to understand the mathematics involved in either the theory or the application, but time-cloaking and history-editing on the fly could make system monitoring and event logs useless in understanding what is going on in IT systems, especially as those systems move to optical computing as well as transport. It would be the equivalent of the old (in movies, anyway) trick of foiling a video monitoring system by running a loop image while the bad guys do their thing.

Given the resources required to develop this kind of technology and field it, we are not likely to see it in the hands of garden-variety hackers any time soon. But the U.S. government is interested. Who can say what the practical applications of this research will be?

William Jackson is a Maryland-based freelance writer.
The U.S. military uses unmanned aerial vehicles of all sizes, from the 4-pound, hand-launched Raven to the 15,000-pound Global Hawk that takes off and lands on a runway. Most of the military's mid-sized drones are fixed-wing, which means they need launch and recovery mechanisms that often aren't available in the remote areas where the UAVs are most useful. The Defense Advanced Research Projects Agency and Aurora Flight Sciences recently tested a portable horizontal launch and recovery system that can capture a UAV of up to 1,100 pounds. The SideArm basically snatches a UAV out of the air to recover it without damage. The SideArm fits into a standard 20-foot shipping container for easy transport and can be set up by two to four people. Watch it in action:
Shaders and Rendering

The surface of any object or character is described by a shader, whether it's intended for a game engine or a software renderer. For those who thought that was a texture's job, the texture is just a sub-node of a shader. A simple shader might have a diffuse color file texture with another texture to control the specular highlights and another texture for the self-illumination:

Above, the shader is the iPhone and the textures are attached to different components of the shader.

Teach a man to fish: The advantage of nodal and procedural shaders

The benefits of a nodal/procedural system are most apparent when creating shaders. When you open up the Create Render Node panel in Maya, you can be faced with a daunting set of render elements, shader nodes and utilities:

At the heart of a procedural shader is the procedural texture node, and most 3D texturing/rendering apps have these. Think of these as fractals—you get high and watch them move around. Wait, that's not right. They are like fractals in that they are images created by mathematical algorithms, rather than from pixel-based image files. The main advantage to this is that procedural textures conserve memory and they're resolution-independent. You can imagine that having to load and filter a ton of giant 32-bit images for a 3D IMAX film would get out of hand quickly, so procedural textures are often used in animation, when possible. For something more photo-realistic, like Avatar, painted textures are used, and they are truly huge—I read that one Avatar ship had a texture file that was over 60GB. I usually just use image-based textures unless I need to do something that's heavily tiled or impractical to do with raster images.

Aside from the decreased memory footprint, the real benefit to a procedural approach to shader design is the same as that of the nodal animation system: it's a toolbox for making custom shaders that would be difficult to do otherwise. Instead of waiting for someone to write a velvet shader for your application, you make one yourself by connecting a color ramp to the surface's facing ratio relative to the camera:

The effect isn't physically accurate but it's a good fake:

If your program lacks a layered shader and you need to make a wet look, just use an add node to make a wet specular on top of a different base shader:

Procedural textures also reduce redundancy by letting you do things like use one texture as a source for the color, the bump, specularity, etc. A nodal shader system is essential once you get into really complex shader effects like sand:

A render with that shader:

A particularly elegant example of the power of nodal/procedural shaders is shown in Eric Keller's Maya Visual Effects: The Innovator's Guide. He combines the ambient occlusion (light attenuation) between two objects with a texture to create an X-Ray effect:

This ability to connect seemingly unrelated elements for a particular look is what I mean when I say a particular program is good for visual effects. That's what makes Houdini popular in high-end VFX—it's deeply procedural and nodal, and that combination, along with an excellent particle system, makes it perfect for very tricky effects like the birth of Sandman sequence mentioned before. If your program or renderer doesn't have a nodal system, then it should at least have a layered shader with an "add" option for layers, to do effects like wetness/clear-coat.
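As an aside (not from the article), the arithmetic behind that facing-ratio trick is simple enough to show in a few lines of Python. The vectors and ramp values below are made up; the sketch only illustrates why connecting a ramp to the facing ratio produces a bright rim and a dark center, which is what sells the velvet fake.

# Numeric sketch of a facing-ratio "velvet" fake. The ramp values and the
# example vectors are invented; this mirrors the math a samplerInfo-to-ramp
# node connection performs, not any specific renderer's implementation.
import math

def facing_ratio(normal, view_dir):
    """Cosine of the angle between the surface normal and the view direction."""
    dot = sum(n * v for n, v in zip(normal, view_dir))
    n_len = math.sqrt(sum(n * n for n in normal))
    v_len = math.sqrt(sum(v * v for v in view_dir))
    return max(0.0, dot / (n_len * v_len))

def velvet_ramp(ratio, edge_color=1.0, center_color=0.1):
    """Bright at grazing angles (ratio near 0), dark when facing the camera."""
    return center_color + (edge_color - center_color) * (1.0 - ratio) ** 2

# A surface facing the camera head-on versus one seen edge-on:
print(velvet_ramp(facing_ratio((0, 0, 1), (0, 0, 1))))   # about 0.1, dark center
print(velvet_ramp(facing_ratio((1, 0, 0), (0, 0, 1))))   # about 1.0, bright rim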
Currently on OS X, the only applications with nodal shader networks are Blender, Maya, Cheetah3D, RenderMan's Slim, LightWave and Houdini. Modo and Cinema 4D's shaders are layered. If you do strictly architectural design with few effects, then this won't be a big problem for you, but I personally like having the power of a nodal shader. Also, keep in mind that if you're buying a renderer as a plug-in for a program, it may not work with all aspects of a nodal shader. Maxwell for Maya doesn't work with Maya's procedural shaders, but it works with Cinema 4D's layered one.

Advanced shader options

Aside from the basic needs like normal map support, 16-bit texture file support, etc., there are a number of things that you'll need in a professional-grade shader. This is not a complete list of everything you'd ever want in a 3D shader, but this is what you should look for as a base feature set if you are paying for a professional 3D program and renderer. With these shader options, you should have enough to make almost all types of realistic surfaces.

- Layered materials. You often have to combine different types of materials in one shader and mask out different layers to make something like a clear-coat wood. A layered shader is the easiest way to do this, since you can have a matte diffuse base (the wood) with separate reflective layers (the lacquer):

- Fresnel reflections. The Fresnel effect governs how an object's surface reflects at different viewing angles. As the viewing angle becomes more grazing toward the edges of reflective materials, the reflectivity increases. Without Fresnel options and a layered shader, a car paint shader is not very convincing.

- Lambert roughness options. A Lambert shader is used to simulate materials with diffuse reflections like matte paint, uncoated wood, or the powdery look of a ceramic pot. A renderer should offer control over Lambertian (Oren-Nayar) "roughness."

- Anisotropic reflections (examples). Many reflective surfaces, like hair or brushed metal, have highlights that aren't circular, and specular highlight length is controlled with an anisotropy value.

- Emissive options. If you want to make something like a neon sign, it would be dumb to try and string a bunch of tiny lights through a tube, so you need a light material. If the shader also supports 32-bit textures, you can use HDR images as lights for a nice soft-box:

- Ambient Occlusion. This was mentioned in the modeling and texturing primer and it's important enough to mention again. You'll have a harder time getting realism with detailed surfaces if AO isn't factored into your final rendering. Most renderers do ambient occlusion as a separate render pass, whereas unbiased renderers incorporate it into the rendering.

- Displacement maps with micro-polygon displacement. The technical-sounding "micro-polygon" part just means that your renderer should automatically subdivide your object at render time so you don't have to work with multi-million-face models. You'll also want 16-bit and 32-bit image support for displacement.

- Energy conservation. This is a physics term and a requirement for getting physically accurate light results. To quote the Maya docs: "This means that it makes sure that diffuse + reflection + refraction <= 1, i.e. that no energy is magically created and the incoming light energy is properly distributed to the diffuse, reflection and refraction components in a way that maintains the first law of thermodynamics."

- True refraction and caustics.
In order to accurately mimic refractive materials like glass, water, or clear crystals, you need a shader that lets you input a refractive index value. This is a number that tells the light how much to bend when it enters and leaves the volume. All refractive materials have a different refractive index value, so using a number that corresponds to a real-world measurement saves a lot of guesswork. Caustics are the effect produced after light is bent inwards and hits another surface:

- Subsurface scattering (SSS). This is similar to light refraction, except the light repeatedly scatters inside the volume. In order to make shaders for milk, wax, marble and skin, you'll need an SSS shader. If you're doing character animation, it's going to be hard to make them look realistic without SSS. Some renderers (V-Ray, Maxwell, Mental Ray) also have a "thin" SSS option that makes it a lot easier to do things like leaves on a tree, since you don't need a volume, just a single-sided polygon with a texture on it:

That also helps keep geometry counts down for nature scenes. Since most stock tree models use single-sided textures for tree leaves, it's a help for architectural rendering. And, since SSS can be incredibly slow to calculate, some renderers like V-Ray and Mental Ray have a fast SSS shader that uses tricks to get good results while significantly speeding up the SSS computation.

In a way, that list sets the bar low for a renderer, but it's also a lot to ask of any program. Without the ability to specify refractive indices, absorption, scatter coefficient and caustics, your glass of wine is going to look like shiny air holding shiny red. You'll have an easier time selling the product with drunk Orson Welles. At this point, all of the high-end programs, including Blender, have these shader features. SSS was late coming to Cinema 4D but was added to the Advanced Render 3 module for Cinema 4D R11.5.

Some sweet shader options that are becoming more common

- Vector displacement maps. This should have been mentioned in the sculpting feature but it makes sense here, too. This is a color displacement map that tells your model to deform in three directions. It can be used to make complex shapes with overhanging peaks, where a traditional grayscale displacement map is just in and out. The image map:

And the rendered displacement:

Currently, only V-Ray, Modo, and RenderMan officially support vector displacement maps, and they can only be made from Mudbox. Mental Ray for Maya users can download a shader for vector displacement here.

- Water/Ocean shaders. Some programs offer specialized ocean shaders since it's slow and impractical to render them with a layered shader and refraction.

- CSG shader. This is a render-time Boolean operation that is controlled by shaders. If the meshes being combined are complex, Boolean operations tend to fail in most programs. Having a renderer like RenderMan do it with a CSG shader is a way to avoid the crusty or failed output of polygonal Booleans:

Currently, the only renderers I know of that have CSG shaders are RenderMan, 3Delight and Mental Ray.

- Ptex support. Unlike the other stuff mentioned here, this is in the very early stage of adoption, but it's a crucial feature that no one doubts will change 3D texturing workflows for the better (no more UV seams!). If you want to read about the benefits of Ptex, read my SIGGRAPH 2010 coverage of it here.
Currently, the only programs to work with Ptex on OS X are Pixar's RenderMan Studio, 3D-Coat, Houdini 11, and V-Ray for Maya version 2.0. I would expect to see this as a standard feature of all 3D applications and renderers within the next two years. If your 3D application of choice doesn't have plans for Ptex support, start filing those feature requests. There are more shader features of specific renderers covered further on.
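One more illustration, not taken from the article: the energy-conservation rule quoted from the Maya docs in the feature list above (diffuse + reflection + refraction <= 1) amounts to rescaling a shader's component weights whenever an artist dials in a combination that would "create" light. A minimal sketch, with invented example weights:

# Tiny sketch of one way to enforce the energy-conservation rule quoted above.
# The weights are arbitrary example values, and real renderers may clamp or
# redistribute energy differently; the point is only that the sum of the
# components is never allowed to exceed 1.

def conserve(diffuse, reflection, refraction):
    """Return weights scaled so their sum never exceeds 1.0."""
    total = diffuse + reflection + refraction
    if total <= 1.0:
        return diffuse, reflection, refraction
    scale = 1.0 / total
    return diffuse * scale, reflection * scale, refraction * scale

print(conserve(0.8, 0.5, 0.3))  # sums to 1.6, gets scaled down to sum to 1.0
print(conserve(0.6, 0.2, 0.1))  # already conserving, returned unchanged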
Tech View: Technology for Making TV Viewing Easy

TV viewing has become harder. With massive amounts of available content—some homes have access to 400+ channels, a number that may rise soon to 1,000—and accessory devices such as set-top boxes, DVRs, converter boxes, TiVo®, and VCRs, watching and recording TV requires some effort. Programs of true interest and their times have to be hunted down amid all the other programs, buttons on multiple remotes have to be identified, and the interfaces of the DVR and other devices must be mastered (some more intuitive than others). Cutting through this clutter to make TV viewing easy again is becoming increasingly important.

Solutions include universal remotes, Windows®-type interfaces, and even hand gestures, but they themselves require some learning and may be just a different way to manage the clutter. One solution, however, requires little or no learning, is simple and natural and intuitive, and actually does away with the clutter: speech. Instead of scrolling through lists or navigating a menu or tapping out instructions, a speech interface lets you say what you want to watch. Speaking "The Real Housewives of New Jersey" or "Knicks game" into a microphone-based remote would return a short, selectable list of programs fitting the description, giving times and days.

How long before speech-controlled TV is a reality? Maybe not too long. The technologies are pretty much in place. What's missing to make speech-controlled viewing possible are the computing resources needed to perform speech recognition on the thousands of TV shows and videos available to consumers.

A speech recognition application works by matching spoken words to words (really sounds of words) contained in a language model that has been carefully constructed for a specific application and context, such as a banking call center. For speech-controlled viewing, the language model would contain everything in an electronic program guide. Not only is this a huge collection of words to recognize, it's one that would have to be updated at least nightly to keep it current with programming changes.

While some devices have onboard speech recognition (the iPhone, for example), device-based speech recognition would be impractical for a model as large as the one required by an electronic program guide. Neither the TV nor the set-top box (or remote) has sufficient computing resources for performing speech recognition on all available content. While there are hardware solutions for adding PC-type resources to these devices, new hardware increases the costs to the consumer. In addition, there is the very practical problem of having to download to each device a new language model as programming changes are made. Downloading might occur as often as every night. Software changes would also have to be periodically downloaded.

The preferred solution is to do all the speech processing on networked servers, and connect the TV set-top box to the same network using a residential, or home, gateway. Commands spoken into a remote would be relayed from the gateway over the network to a central location where servers would perform the necessary speech recognition. This scheme is similar to using an iPhone or other smartphone to request a listing from a business search such as YELLOWPAGES.COM®. The spoken request is relayed via the cell network to speech recognition servers, where the spoken request is converted to commands used for the database lookup.
With speech recognition done at a central location, updates to both software and the language models can be done easily and as frequently as needed. There is an added benefit: the spoken commands used by people in the home represent hard-to-get, real-world sample data that can be used in training the models to improve performance. If speech recognition were done on the device, this valuable resource would be more difficult to exploit for refining the models.

Benefits of a connected TV

If server resources become available to at-home TV viewers, a lot more can happen to make TV viewing easy again. With servers performing program lookup, searches can become more complex and involve metadata such as program type, actor, or any combination of search criteria. Thus you could search not just by title but also by genre or actor. Speaking "Late Night Comedy Show," "Basketball games on Sunday night," or "Movies with Kiefer Sutherland" would get back a list of programs fitting the description.

But why stop there? With computing resources available, why not put the server resources to work to suggest programs or videos you might like? You could just say "What's on tonight?" and have the TV return a list of suggestions tailored to your preferences. If Netflix® and book vendors can make reasonable guesses as to what you'd like, TV providers can run similar algorithms on TV programming data stored on the centralized servers.

The servers could also program the DVR for you. If you want to record a program, just say what you want to record, and once the speech recognition is performed, the instructions could be sent directly to the DVR. No need for you to be involved other than to say what you want.

The potential of IPTV

Connecting TV devices to a content provider's servers over a network is the IPTV model, a relatively new paradigm in which a private provider (usually the telephone company) distributes content over an existing broadband infrastructure by first encoding it and relaying it as a series of IP (Internet Protocol) packets over a broadband network, rather than over traditional cable or the airwaves. (Although IP is the same protocol used to relay video over the Internet, IPTV is not TV over the Internet; it's not the same as watching YouTube clips on a PC. Instead, IPTV is high-quality, high-resolution video that's delivered over a broadband connection, which can also deliver Internet content such as web pages, YouTube video, email, etc.)

An example of an IPTV service is AT&T's U-verse℠, where a home gateway connects the TV set-top box to AT&T's broadband network, enabling the set-top box to communicate with servers running AT&T's WATSON ASR (and other speech technologies) as well as with other devices on the network. The TV set-top box is essentially an endpoint on the network, just the same as a PC, laptop, or iPhone.

To stream speech to the network servers, it is advantageous to exploit the Wi-Fi feature of the U-verse gateway. With its Wi-Fi capability, the gateway could also serve as an access point for any Wi-Fi-enabled devices (TV remote, laptop, iPhone) to communicate with other endpoints on the IP-based home network, including any home PCs. Thus computer files, including emails and photos, could be viewable on the TV, and any Wi-Fi-enabled device (such as the iPhone) could control the set-top box and DVR.
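As a toy illustration of the server-side program lookup described above (and not how AT&T's WATSON actually works), the sketch below matches a recognized utterance against a tiny, invented program guide using simple fuzzy string matching. A production system would instead build its language model from the full electronic program guide and refresh it nightly.

# Hypothetical sketch of matching a recognized utterance against an
# electronic program guide. The guide entries, channels, and times are
# invented; real systems do this with statistical language models, not
# simple string similarity.
import difflib

PROGRAM_GUIDE = {
    "The Real Housewives of New Jersey": ("Bravo", "Sun 8:00 PM"),
    "Knicks Basketball":                 ("MSG",   "Fri 7:30 PM"),
    "Late Night Comedy Show":            ("NBC",   "Mon 11:35 PM"),
}

def lookup(utterance: str, max_results: int = 3):
    """Return guide entries whose titles best match the recognized speech."""
    titles = {title.lower(): title for title in PROGRAM_GUIDE}
    matches = difflib.get_close_matches(utterance.lower(), list(titles),
                                        n=max_results, cutoff=0.3)
    return [(titles[m], *PROGRAM_GUIDE[titles[m]]) for m in matches]

print(lookup("knicks game"))   # "Knicks Basketball" ranks first in the results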
If the TV's set-top box is a node on a network, communication between the home and the provider becomes two-way, with commands going out and programs and device instructions coming in. While the full ramifications of two-way communication are not yet known, it is certain that interactivity will be a major benefit of IPTV and will include much more than just shopping or participating in game shows from home.

Speech-controlled viewing is not quite ready, due to several factors. Some are long-standing ASR problems, such as the constant puzzle of predicting what people may say and the difficulty in recognizing uncommon accents or the high-pitched voices of children. In addition, speech-controlled viewing brings its own set of hard problems. But these are problems for the engineers.

What would consumers have to do to make TV viewing easy again? Not much, other than subscribing to an IPTV service that offers speech recognition and a voice remote. For many people, having IPTV means replacing a cable service with an IPTV one. (IPTV service will normally be offered as part of a triple- or quadruple-play package that includes phone (wireline and wireless), Internet, and TV.) From then on everything else is easy, since the provider maintains the servers and software and takes on the responsibility for programming the DVR. All consumers need to do is say what they want to watch, using their own words.

Tech View: Views on Technology, Science and Mathematics
Sponsored by AT&T Labs Research
This series presents articles on technology, science and mathematics, and their impact on society -- written by AT&T Labs scientists and engineers. For more information about articles in this series, contact: [email protected].
Secure Sockets Layer (SSL) is a protocol designed to enable encrypted, authenticated connections across the Internet. SSL is used mostly in communication between web browsers and web servers. URLs that begin with 'https' indicate that an SSL connection will be used. Online shopping sites frequently use SSL technology to safeguard credit card information. In order to use the SSL protocol you need to have an SSL certificate, which is used for server authentication, data encryption, and message integrity checks. Commercial SSL certificates are issued by official Certificate Authorities.

To request and install a commercial SSL certificate through Plesk:

1. Log in to your Plesk Control Panel at https://your-server-ip-address:8443 using your Plesk administrative access.
2. Click the "Domains" link located in the left panel. You will be redirected to a list that contains all domain names you have configured hosting for.
3. Click the domain name that you'd like to manage.
4. Click the "SSL Certificates" icon, located in the Services section.
5. Click the "Add New Certificate" icon.
6. Specify the certificate properties:
- Certificate name. Enter a name of your own choice for the new certificate object in the Certificate name text field. This will help you identify the specific certificate.
- Encryption level. Choose the encryption level of your SSL certificate. Different Certificate Authorities have different requirements, so please check with them for the correct level.
- Location and organization name. The values you enter should not exceed 64 characters.
- Domain name. Specify the domain name for which you wish to purchase an SSL certificate. This should be a fully qualified domain name, for example: www.domain.com
- Domain administrator's e-mail address.
Make sure that all the provided information is correct and accurate, because it will be used to generate your private key and will appear on your certificate.
7. Click "Request" to generate the certificate request.
8. In the Certificate list, click the name of the certificate you need. A page showing the certificate properties will open.
9. Locate the CSR section on the page, and copy the text that starts with -----BEGIN CERTIFICATE REQUEST----- and ends with -----END CERTIFICATE REQUEST----- to the clipboard.
10. Using the CSR provided, go to the Certificate Authority of your choice and purchase a certificate. Once you receive your certificate, save the files to your local computer.
11. Follow steps 1-4 to return to the Certificate list.
12. Click "Browse" in the middle of the page and select the saved certificate from your local computer.
13. Click "Send File" to upload the certificate.
14. Return to your domain Home page (follow steps 2 and 3) and click Web Hosting Settings.
15. Select the SSL certificate that you wish to install from the "Certificate" drop-down.
16. Select the "SSL support" check box and click OK.
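For readers curious about what Plesk does behind the scenes in steps 6 and 7, the following sketch generates a private key and a CSR like the -----BEGIN CERTIFICATE REQUEST----- block you copy in step 9. It uses the third-party Python "cryptography" package (recent versions) with placeholder subject values; it is an illustration, not part of the Plesk procedure, and the certificate you install must still come from a Certificate Authority.

# Illustration only: roughly what happens when Plesk generates a key and CSR.
# Subject values below are placeholders; substitute your own details.
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COUNTRY_NAME, "US"),
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Example Organization"),
        x509.NameAttribute(NameOID.COMMON_NAME, "www.domain.com"),
    ]))
    .sign(key, hashes.SHA256())
)

# The PEM text printed here is what you would paste into the Certificate
# Authority's order form.
print(csr.public_bytes(serialization.Encoding.PEM).decode())

# Keep the private key safe; the issued certificate is useless without it.
with open("www.domain.com.key", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))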
Cookies provide a method for creating a stateful HTTP session, and their recommended use is formally defined within RFC2965 and BCP44. Although they are used for many purposes, they are often used to maintain a Session ID (SID), through which an individual user can be identified throughout their interaction with the site. For a site that requires authentication, this SID is typically passed to the user after they have authenticated, and it effectively maintains the authentication state. If an attacker can use a mechanism (such as sniffing or cross-site scripting) to gain access to the SID, then potentially they can incorporate it within their own session and successfully assume the user's identity.

The cookie specifications provide arguments for restricting the domain and path for which the user agent (browser) will supply the cookie. Both of these should be matched by the request before the user agent sends the cookie data to the server. It is common for the path argument to be specified as the root of the origin server, a practise that can expose the application cookies to unnecessary additional scrutiny. It is worth noting, however, that whilst the various "same origin" security issues still afflict the browser vendors, the specification of the cookie path argument is somewhat of a moot point.

Download the paper in PDF format here.
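As a concrete illustration of scoping a session cookie (a hypothetical example, not taken from the paper), the sketch below uses only the Python standard library. The SID value, domain, and path are placeholders; the point is that Path is set to the application's own path rather than the server root, and the Secure and HttpOnly attributes reduce the SID's exposure to sniffing and cross-site scripting.

# Hypothetical example of a tightly scoped session cookie, built with the
# Python standard library. All values are placeholders.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["SID"] = "d41d8cd98f00b204e9800998ecf8427e"
cookie["SID"]["domain"] = "app.example.com"
cookie["SID"]["path"] = "/webmail"      # not "/": scope the cookie to the application
cookie["SID"]["secure"] = True          # only send over HTTPS
cookie["SID"]["httponly"] = True        # keep it away from client-side script

# Emits a Set-Cookie header carrying the Path, Domain, Secure, and HttpOnly
# attributes for the SID.
print(cookie.output())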
Excerpt from Chapter 6 of Authentication. Copyright © 2002 Addison-Wesley. This paper was also published as an article in CSI's Computer Security Journal, Summer 2002 (see Note 14).

Strong Password Policies

(I cheat and make all my computer accounts use the same password.)
– Donald A. Norman, The Design of Everyday Things

Since passwords were introduced in the 1960s, the notion of a "good" password has evolved in response to attacks against them. At first, there were no rules about passwords except that they should be remembered and kept secret. As attacks increased in sophistication, so did the rules for choosing good passwords. Each new rule had its justification and, when seen in context, each one made sense. People rarely had trouble with any particular rule: the problem was with their combined effect.

The opening quotation illustrates one well-known assumption about proper password usage: it's "cheating" to use the same password for more than one thing. This is because passwords may be intercepted or guessed. If people routinely use a single password for everything, then attackers reap a huge benefit by intercepting a single password. So, our first rule for choosing passwords might be:

1. Each password you choose must be new and different.

An early and important source of password rules was the Department of Defense (DOD) Password Management Guideline (see Note 1). Published in 1985, the Guideline codified the state of the practice for passwords at that time. In addition to various technical recommendations for password implementation and management, the Guideline provided recommendations for how individuals should select and handle passwords. In particular, these recommendations yielded the following password rule:

2. Passwords must be memorized. If a password is written down, it must be locked up.

Password selection rules in the DOD Guideline were based on a simple rationale: attackers can find a password by trying all the possibilities. The DOD's specific guidelines were formulated to prevent a successful attack based on systematic, trial-and-error guessing. The Guideline presented a simple model of a guessing attack that established parameters for password length and duration. This yielded two more password rules:

3. Passwords must be at least six characters long, and probably longer, depending on the size of the password's character set.

4. Passwords must be replaced periodically.

The DOD Guideline included a worked example based on the goal of reducing the risk of a guessed password to one chance in a million over a one-year period. This produced the recommendation to change passwords at least once a year. Passwords must be nine characters long if they consist only of single-case letters, and may be only eight characters long if they also contain digits. Shorter passwords would increase the risk of guessing to more than one in a million, but that still provided good security for most applications. The DOD Guideline didn't actually mandate eight-character passwords or the one-in-a-million level of risk; these decisions were left to the individual sites and systems.

In fact, the chances of guessing were significantly greater than one in a million, even with eight- and nine-character passwords. This is because people tend to choose words for passwords; after all, they are told to choose a word, not a secret numeric code or some other arbitrary value. And there are indeed a finite number of words that people tend to choose. Dictionary attacks exploit this tendency.
By the late 1980s, dictionary attacks caused so much worry that another password rule evolved:

5. Passwords must contain a mixture of letters (both upper- and lowercase), digits, and punctuation characters.

The evolving rules, and the corresponding increases in password complexity, have now left the users behind. None but the most compulsive can comply with such rules week after week, month after month. Ultimately, we can summarize classical password selection rules as follows: The password must be impossible to remember and never written down.

The point isn't that these rules are wrong. Every one of these rules has its proper role, but the rules must be applied in the light of practical human behavior and people's motivations. Most people use computers because they help perform practical business tasks or provide entertainment. There's nothing productive or entertaining about memorizing obscure passwords.

Passwords and Usability

Traditional password systems contain many design features intended to make trial-and-error attacks as hard as possible. Unfortunately, these features also make password systems hard to use. In fact, they violate most of the accepted usability standards for computer systems. Of the eight "Golden Rules" suggested by Ben Shneiderman for user interface design, password interactions break six of them (see Table 1). People can't take shortcuts: the system won't match the first few letters typed and fill in the rest. Most systems only report success or failure: they don't say how close the password guess was, or even distinguish between a mistyped user name and a mistyped password. Many systems keep track of incorrect guesses and take some irreversible action (like locking the person's account) if too many bad guesses take place. To complete the challenge, people rarely have a chance to see the password they type: they can't detect repeated letters or accidental misspellings.

| Golden Rules of User Interface Design (See Note 2) | True for Passwords? |
| 1. Strive for consistency | YES |
| 2. Frequent users can use shortcuts | NO |
| 3. Provide informative feedback | NO |
| 4. Dialogs should yield closure | YES |
| 5. Prevent errors and provide simple error handling | NO |
| 6. Easy reversal of any action | NO |
| 7. Put the user in charge | NO |
| 8. Reduce short-term memory load | NO |

To appreciate another truly fundamental problem with passwords, consider what happens when changing a password. Imagine that a user named Tim needs to change his password, and he wishes to follow all of the rules. While it's possible that he might have a particular password in mind to use the next time the occasion arises, many (perhaps most) people don't think about passwords until they actually need to choose one. For example, Windows NT can force its users to immediately change a password during the logon process, usually because the existing password has become "too old." If Tim hasn't thought of another good password ahead of time, he must think of one, fix it permanently in his mind, and type it in twice without ever seeing it written. This presents a significant mental challenge, especially if Tim tries to follow the classic password selection rules. He has to remember and apply the rules about length, reuse, and content. Then he must remember the password he chose. This is made especially hard since the system won't display the password he chose: Tim must memorize it without the extra help of seeing its visual representation.
Human short-term memory can, on average, remember between five and nine things of a particular kind: letters, digits, words, or other well-recognized categories. The DOD Guideline spoke of eight- or nine-character passwords, which lie on the optimistic end of people's ability to memorize. Moreover, Tim's short-term memory will retain this new password for perhaps only a half minute, so he must immediately work at memorizing it. Studies show that if Tim is interrupted before he fully memorizes the password, then it will fall out of his working memory and be lost. If Tim was in a hurry when the system demanded a new password, he must sacrifice either the concentration he had on his critical task or the recollection of his new password. Or, he can violate a rule and write the password down on a piece of paper (see Note 3).

Passwords were originally words because it's much easier for people to remember words than arbitrary strings of characters. Tim might not remember the password "rgbmrhuea," but he can easily remember the same letters when they spell out "hamburger." Tim more easily remembers a word as his password because it represents a single item in his memory. If Tim chooses an equally long sequence of arbitrary characters to be his password, he must mentally transform that sequence into a single item for him to remember. This is hard for people to do reliably. While there are techniques for improving one's memory, they are difficult to learn and require constant practice to retain. Strong passwords simply aren't practical if they require specialized training to use correctly. Later in this chapter we examine a few simple and practical memory techniques for producing memorable passwords. The techniques do not necessarily provide the strongest possible secrets, but they are within the reach of most people's abilities (see Note 4).

Dictionary Attacks and Password Strength

Note to purists: This section doesn't really appear in Chapter 6 of Authentication. It was added to explain the notion of the average attack space and provide enough context to fully appreciate the weakness of passwords. The material in this section came from Chapters 2 and 3.

In general, strong authentication techniques require a person to prove ownership of a hard-to-guess secret to the target computer. Traditionally, a user would transmit the password during the login operation, and the computer would verify that the password matched its internal records. More sophisticated systems require a cryptographic transformation that the user can only perform successfully if in possession of the appropriate secret data. Traditional challenge-response authentication systems use symmetrically shared secrets for this, while systems based on public key cryptography use the transform to verify that the user possesses the appropriate private key. In all cases, successful authentication depends on the user's possession of a particular piece of secret information. In this discussion, that secret information is called the base secret.

A simple way to compare different authentication techniques is to look at the number of trial-and-error attempts they impose on an attacker. For example, an attacker faced with a four-digit combination lock has a job ten times as hard as one faced with a three-digit lock.
In order to compare how well these locks resist trial-and-error attacks and to compare their strength against the strength of others, we can estimate the number of guesses, on average, the attacker must make to find the base secret. We call this metric the average attack space. Many experts like to perform such comparisons by computing the length of time required, on average, to guess the base secret’s value. The problem with such estimates is that they are perishable. As time goes on, computers get faster, guessing rates increase, and the time to guess a base secret will decrease. The average attack space leaves out the time factor, allowing a comparison of the underlying mechanisms instead of comparing the computing hardware used in attacks. Each item counted in an average attack space represents a single operation with a finite, somewhat predictable duration, like hashing a single password or performing a single attempt to log on. When we look for significant safety margins, like factors of thousands, millions, or more, we can ignore the time difference between two fixed operations like that. If all possible values of a base secret are equally likely to occur, then a trial-and-error attack must, on average, try half of those possible values. Thus, an average attack space reflects the need to search half of the possible base secrets, not all of them. In practice, people’s password choices are often biased in some way. If so, the average attack space should reflect the set of passwords people are likely to choose from. In the case of a four-digit luggage lock, we might want to represent the number of choices that reflect days of the year, since people find it easy to remember significant personal dates, and dates are easily encoded in four digits. This reduces the number of four-digit combinations an attacker must try from 10,000 to 366. When we try to measure the number of likely combinations, we should also take into account the likelihood that people chose one of those combinations to use on their luggage. The average attack space, then, doesn’t estimate how many guesses it might take to guess a particular password or other secret. Instead, it estimates the likelihood that we can guess some base secret, if we pick it randomly from the user community. Biases in password selection are the basis of dictionary attacks, and practical estimates of password strength must take dictionary attacks into account. In the classic dictionary attack, the attacker has intercepted some information that was derived cryptographically from the victim’s password. This may be a hashed version of the password that was stored in the host computer’s user database (i.e. /etc/passwd on classic Unix systems or the SAM database on Windows NT systems) or it may be a set of encrypted responses produced by a challenge response authentication protocol. The attacker reproduces the computation that should have produced the intercepted information, using successive words from the dictionary as candidates. If one of the candidates produces a matching result, the corresponding candidate matches the user’s password closely enough to be used to masquerade as that user. This whole process occurs off-line with respect to the user and computing system being targeted, so the potential victims can’t easily detect that the attack is taking place. Moreover, the speed of the search is limited primarily by the computing power being used and the size of the dictionary. 
In some cases, an attacker can precompile a dictionary of hashed passwords and use this dictionary to search user databases for passwords; while this approach is much more efficient, it can't be applied in every situation.

We can compute an estimate of password strength by looking at the practical properties of off-line dictionary attacks. In particular, we look at dictionary sizes and at statistics regarding the success rates of dictionary attacks. In this case, the success rate would reflect the number of passwords subjected to the dictionary attack and the number that were actually cracked that way.

The 1988 Internet Worm provides us with an early, well-documented password cracking incident. The Internet Worm tried to crack passwords by working through a whole series of word lists. First, it built a customized dictionary of words containing the user name, the person's name (both taken from the Unix password file), and five permutations of them. If those failed, it used an internal dictionary of 432 common, Internet-oriented jargon words. If those failed, it used the Unix on-line dictionary of 24,474 words. The worm also checked for the "null" password. Some sites reported as many as 50% of their passwords were successfully cracked using this strategy (see Note 5). Adding these all up, the worm searched a password space of 24,914 passwords.

To compute the average attack space, we divide the password space by twice the likelihood of finding a password in that space. The factor of two reflects the goal of searching until we find a password with a 50-50 chance, and the likelihood (here 50%) reflects the chance that the password being attacked does in fact appear in the dictionary. This yields the following computation:

24,914 / (2 x 0.5) = 24,914, or a 2^15 average attack space

Since the most significant off-line trial-and-error attacks today are directed against cryptographic systems, and such systems measure sizes in terms of powers of two (or bits), we will represent average attack spaces as powers of two. When assessing average attack spaces, keep in mind that today's computing technology can easily perform an off-line trial-and-error attack involving 2^40 attempts. The successful attack on the Data Encryption Standard (DES) by Deep Crack (see Note 6) involved 2^54 attempts, on average, to attack its 56-bit key (we lose one bit when we take the property of complementation into account).

We can also use the average attack space to compute how long a successful attack might take, on average. If we know the guess rate (guesses per second), we simply divide the average attack space by the guess rate to find the average attack time. For example, if a Pentium P100 is able to perform 65,000 guesses per second, then the P100 can perform the Internet Worm's dictionary attack in a half-second, on average.

The Worm's 50% likelihood figure plays an important role in computing the average attack space: while users are not forced to choose passwords from dictionaries, they are statistically likely to do so. However, the 50% estimate is based solely on anecdotal evidence from the Internet Worm incident. We can develop a more convincing statistic by looking at other measurements of successful password cracking. The first truly comprehensive study of this was performed in 1990 by Daniel V. Klein (see Note 7).
To perform his study, Klein collected encrypted password files from numerous Unix systems, courtesy of friends and colleagues in the United States and the United Kingdom. This collection yielded approximately 15,000 different user account entries, each with its own password. Klein then constructed a set of password dictionaries and a set of mechanisms to systematically permute the dictionary into likely variations. To test his tool, Klein started by looking for “Joe accounts,” that is, accounts in which the user name was used as its password, and quickly cracked 368 passwords (2.7% of the collection).

Klein’s word selection strategies produced a basic dictionary of over 60,000 items. The list included names of people, places, fictional references, mythical references, specialized terms, biblical terms, words from Shakespeare, Yiddish, mnemonics, and so on. After applying strategies to permute the words in typical ways (capitalization, obvious substitutions, and transpositions) he produced a password space containing over 3.3 million possibilities (see Note 8). After systematically searching this space, Klein managed to crack 24.2% of all passwords in the collection of accounts. This yields the following average attack space:

3,300,000 / (2 x 0.242) = about 6.8 million, or roughly a 2^23 average attack space

Klein’s results suggest that the reported Internet Worm experience underestimates the average attack space of Unix passwords by a factor of about 2^8. Still, a 2^23 attack space is not a serious impediment to a reasonably well-equipped attacker, especially when attacking an encrypted password file. At the Pentium P100’s guess rate, that average attack space can be searched in less than two minutes.

The likelihood statistic tells us an important story because it shows how often people pick easy-to-crack passwords. Table 2 summarizes the results of several instances in which someone subjected a collection of passwords to a dictionary attack or other systematic search. Spafford’s study at Purdue took place from 1991 to 1992, and produced a variety of statistics regarding people’s password choices. Of particular interest here, the study tested the passwords against a few dictionaries and simple word lists, and found 20% of the passwords in those lists. Spafford also detected “Joe accounts” 3.9% of the time, a higher rate than Klein found (see Note 9).

Table 2. Reported success rates of systematic password searches

| Report | When | Passwords Searched | Percentage Found |
| Internet Worm (Note 5) | 1988 | thousands | ~50% |
| Study by Klein (Note 7) | 1990 | 15,000 | 24.2% |
| Study by Spafford (Note 9) | 1992 | 13,787 | 20% |
| CERT Incident IN-98-03 (Note 10) | 1998 | 186,126 | 25.6% |
| Study by Yan et al. (Note 11) | 2000 | 195 | 35% |

The CERT statistic shown in Table 2 is based on a password cracking incident uncovered at an Internet site in 1998. The cracker had collected 186,126 user records, and had successfully guessed 47,642 of the passwords (see Note 10). In 2000, a team of researchers at Cambridge University performed password usage experiments designed in accordance with the experimental standards of applied psychology. While the focus of the experiment was on techniques to strengthen passwords, it also examined 195 hashed passwords chosen by students in the experiment’s control group and in the general user population: 35% of their passwords were cracked (see Note 11). Although the statistics from the Internet Worm may be based on a lot of conjecture, the other statistics show that crackable passwords are indeed prevalent. If anything, the prevalence of weak passwords is increasing as more and more people use computers.
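The arithmetic behind these figures generalizes to any measured likelihood and search space. The short sketch below (an illustration, not code from any of the studies cited) computes an average attack space as a power of two, with the Internet Worm and Klein figures plugged in as examples.

import math

def average_attack_space(search_space, likelihood):
    # search_space: number of candidate secrets the attacker will try
    # likelihood:   measured probability that the secret falls in that space
    guesses = search_space / (2 * likelihood)   # average guesses until success
    return math.log2(guesses)                   # express the result as a power of two

print(round(average_attack_space(24914, 0.50)))     # Internet Worm: about 2^15
print(round(average_attack_space(3300000, 0.242)))  # Klein's study: about 2^23

The same function also covers the interactive attacks discussed later, as long as each “guess” is taken to mean one attempt against the target rather than one off-line computation.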
The average attack space lets us estimate the strength of a password system as affected by the threat of dictionary attacks and by people’s measured behavior at choosing passwords. As shown in Table 3, we can also use the average attack space to compare password strength against other mechanisms such as public keys. In fact, we can compute average attack spaces for any trial-and-error attack, although the specific attacks shown here are divided into two types: off-line and interactive. Off-line attacks involve trial-and-error by a computation, as seen in the dictionary attacks. Interactive attacks involve direct trial-and-error with the device that will recognize a correct guess. Properly designed systems can defeat interactive attacks, or at least limit their effectiveness, by responding slowly to incorrect guesses, by sounding an alarm when numerous incorrect guesses are made, and by “locking out” the target of the attack if too many incorrect guesses are made.

Table 3. Average attack spaces for several trial-and-error attacks

| Example | Style of Attack | Average Attack Space |
| Trial-and-error attack on 1024-bit public keys | Off-line | 2^86 |
| Trial-and-error attack on 56-bit DES encryption keys | Off-line | 2^54 |
| Dictionary attack on eight-character Unix passwords | Off-line | 2^23 |
| Trial-and-error attack on four-digit PINs | Interactive | 2^13 |

For an example of an interactive attack, recall the four-digit luggage lock. Its average attack space was reduced when we considered the possibility that people choose combinations that are dates instead of choosing purely random combinations. Even though a trial-and-error attack on such a lock is obviously feasible, it reflects a different type of vulnerability than that of a password attacked with off-line cryptographic computations. The principal benefit of considering the different average attack spaces together is that they all provide insight into the likelihood with which an individual attack might succeed.

Forcing Functions and Mouse Pads

If strong security depends on strong passwords, then one strategy to achieve good security is to implement mechanisms that enforce the use of strong passwords. The mechanisms either generate appropriate passwords automatically or they critique the passwords selected by users. For example, NIST published a standard for automatic password generators. Mechanisms to enforce restrictions on the size and composition of passwords are very common in state-of-the-art operating systems, including Microsoft Windows NT and 2000 as well as major versions of Unix. While these approaches can have some value, they also have limitations. In terms of the user interface, the mechanisms generally work as forcing functions that try to control user password choices (see Note 12).

Unfortunately, forcing functions do not necessarily solve the problem that motivated their implementation. The book Why Things Bite Back, by Edward Tenner, examines unintended consequences of various technological mechanisms. In particular, the book identifies several different patterns by which technology takes revenge on humanity when applied to a difficult problem. A common pattern, for example, is for the technological fix to simply “rearrange” things so that the original problem remains but in a different guise (see Note 13). Forcing functions are prone to rearrangements. In the case of strong password enforcement, we set two intractable forces on a collision course: we can implement software that requires complicated, hard-to-remember passwords, but we can’t change individuals’ memorization skills.
When people require computers to get work done, they will rearrange the problem themselves to reconcile the limits of their memory with the mandates of the password selection mechanism. Coincidentally, mouse pads are shaped like miniature doormats. Just as some people hide house keys under doormats, some hide passwords under mouse pads (Figure 2). The author occasionally performs “mouse pad surveys” at companies using computer systems. The surveys look under mouse pads and superficially among other papers near workstations for written passwords. A significant number are found, at both high-tech and low-tech companies.

[Figure 2. From Authentication, © 2002; used by permission.]

People rarely include little notes with their passwords to explain why they chose to hide the password instead of memorize it. In some cases, several people might be sharing the password and the written copy is the simplest way to keep all users informed. Although many sites discourage such sharing, it often takes place, notably between senior managers and their administrative assistants. More often, people write down passwords because they have so much trouble remembering them. When asked about written passwords, poor memory is the typical excuse.

An interesting relationship noted in these surveys is that people hide written passwords near their workstations more often when the system requires users to periodically change them. In the author’s experience, the likelihood of finding written passwords near a workstation subjected to periodic password changes ranged from 16% to 39%, varying from site to site. At the same sites, however, the likelihood ranged from 4% to 9% for workstations connected to systems that did not enforce periodic password changes. In some cases, over a third of a system’s users rearranged the password problem to adapt to their inability to constantly memorize new passwords.

These surveys also suggest an obvious attack: the attacker can simply search around workstations in an office area for written passwords. This strategy appeared in the motion picture WarGames, in a scene in which a character found the password for the high school computer by looking in a desk. Interestingly, the password was clearly the latest entry in a list of words where the earlier entries were all crossed off. Most likely, the school was required to change its password periodically (for “security” reasons) and the users kept this list so they wouldn’t forget the latest password.

Using the statistics from mouse pad searches, we can estimate the average attack space for the corresponding attack. Table 4 compares the results with other average attack spaces. In the best case, the likelihood is 4%, or one in 25, so the attacker must, on average, search 12 or 13 desks to find a password. That yields an average attack space of 2^4. The worst case is 39%, or more than one in three. Thus, the attacker must, on average, search one or two desks to find a written password.

Table 4. Average attack spaces, including mouse pad searches

| Example | Style of Attack | Average Attack Space |
| Trial-and-error attack on 56-bit DES encryption keys | Off-line | 2^54 |
| Dictionary attack on eight-character Unix passwords | Off-line | 2^23 |
| Trial-and-error attack on four-digit PINs | Interactive | 2^13 |
| Best-case result of a mouse pad search | Interactive | 2^4 |
| Worst-case result of a mouse pad search | Interactive | 2^1 |

The mouse pad problem shows that we can’t always increase the average attack space simply by making passwords more complicated. If we overwhelm people’s memories, we make certain attack risks worse, not better.
The reason we want to discourage single-word passwords is that they’re vulnerable to off-line dictionary attacks. Table 4 shows that such attacks involve a 2^23 attack space. We don’t increase the average attack space if forgettable passwords move to the bottom of people’s mouse pads.

Notes

If you are following the notes to see if they contain more technical details, don’t bother. The notes only provide sources for the information in the text. If you are interested in the sources in general, it’s best to postpone looking at the notes until you’ve read the entire paper. Then just read all of the notes.

1. See the DOD Password Management Guideline, produced by the NCSC (CSC-STD-002-85, Fort Meade, MD: National Computer Security Center, 12 April 1985).

2. See Chapter 2 of Designing the User Interface: Strategies for Effective Human-Computer Interaction by Ben Shneiderman (Reading, MA: Addison-Wesley, 1998). For a point of view more focused on usability and security, see the papers by Alma Whitten and J. D. Tygar: “Usability of Security: A Case Study” (CMU-CS-98-155, Pittsburgh, Pennsylvania: Carnegie Mellon University Computer Science Department, 18 December 1998), and “Why Johnny Can’t Encrypt” (Proceedings of the 8th USENIX Security Symposium, USENIX Association, 1999).

6. The best description of attacks on DES is in Cracking DES: Secrets of Encryption Research, Wiretap Politics, and Chip Design, by the Electronic Frontier Foundation (Sebastopol, CA: O’Reilly & Associates, 1998).

13. Edward Tenner was inspired to write Why Things Bite Back: Technology and the Revenge of Unintended Consequences (New York: Alfred A. Knopf, 1996) after noticing how much more paper gets used in a modern “paperless” office. Tenner summarized his taxonomy of revenge effects in Chapter 1.

14. The CSI Journal actually published the article twice. The first time, in Spring 2002, the printer eliminated all exponents, so that 2^128 became 2128. The Summer 2002 version contains the correct text.
<urn:uuid:3cdb439c-977c-4fd1-8a4a-d273129f51af>
CC-MAIN-2017-09
https://cryptosmith.com/password-sanity/dilemma/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171936.32/warc/CC-MAIN-20170219104611-00165-ip-10-171-10-108.ec2.internal.warc.gz
en
0.934603
6,218
3.34375
3
Warnings are Futile if People Don't Respond

At the time, the PTWC was the only agency to detect that tragic event, but its warnings to governments surrounding the Indian Ocean went mostly ignored, with deadly results. Now, a warning system is in place, and for the moment, it's all run by the U.S. National Weather Service on behalf of the United Nations.

The only other visible sign of this global data network is something you see only if you know what to look for. Back in the late 1990s, when I first started working with the University of Hawaii's Advanced Network Computing Lab to test enterprise-class products for the long-departed CommunicationsWeek, where I was the reviews editor, I happened to be driving along the north shore of O'ahu when I spotted some tall masts topped with sirens. I asked my friend and colleague Brian Chee, who created the lab, what those might be. "They're the tsunami-warning sirens," he told me.

Unfortunately, those sirens aren't everywhere in vulnerable areas. However, the data network operated by the Pacific Tsunami Warning Center is nearly everywhere, and it has the ability to provide timely warnings, which, if heeded, can save the lives of hundreds of thousands of people. But, sadly, if they're ignored as they were in 2004, those same numbers can be lost. In the United States, where warnings are usually heeded, the March 11, 2011, earthquake in Japan and the subsequent tsunami were taken seriously, and people were evacuated.

But a global data network can do only so much. As critical as this infrastructure is, it only works when it's used. The good news is that most governments in the Pacific and in the Indian Ocean now take the threat seriously, they have plans in place to evacuate residents in affected areas, and they probably won't be struck by the unimaginable loss of life that happened in 2004.

But there's another tsunami warning area that gets little attention. It monitors the North Atlantic, the Mediterranean and the seas connected to them. The Atlantic Ocean is also capable of generating tsunamis as the seafloor spreads along the Mid-Atlantic Ridge. Imagine a 36-foot-high tsunami coming ashore in Manhattan, and then ask yourself where the warning system is and where you'd go to evacuate.

On March 11, those sirens began sounding hourly, warning residents of low-lying areas throughout Hawaii to seek higher ground. The sirens were triggered as the last stage of the tsunami-warning data network. You might consider them the human interface of this vast global network, one that starts with reports of earthquakes and continues with measurements of ocean waves by a string of sensors spanning thousands of miles of open ocean.
<urn:uuid:e8353e96-ca30-46e3-9f67-0facccb5ebe1>
CC-MAIN-2017-09
http://www.eweek.com/c/a/IT-Infrastructure/Japan-Earthquake-Sends-Pacific-Tsunami-Warning-Data-Network-into-Action-104446/1
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170823.55/warc/CC-MAIN-20170219104610-00462-ip-10-171-10-108.ec2.internal.warc.gz
en
0.970638
554
2.90625
3
iOS vs. Android: Text selection and copying iOS 4 makes it easy to edit text. Simply insert the text cursor where you want to make a change; iOS even magnifies the area you are touching to make the text more legible (upper left). When you tap and hold on text in any app, iOS provides selection handles and pop-up buttons such as Copy, Delete, and Paste, as appropriate for the current context (lower left). It also can copy graphics. Android OS 2.2 lets you tap in text to move your cursor to a specific location, but if you tap too long, the Edit Text contextual menu appears, taking up the entire screen (upper right). Also, many apps do not allow you to select a range of text; one that does is the browser (lower right). And Android can't select graphics.
<urn:uuid:c0c18754-b200-4e93-9371-28826eba2310>
CC-MAIN-2017-09
http://www.networkworld.com/article/2869706/network-security/mobile-deathmatch--apple-ios-4-vs--android-2-2--side-by-side.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170823.55/warc/CC-MAIN-20170219104610-00462-ip-10-171-10-108.ec2.internal.warc.gz
en
0.906476
171
2.578125
3
I will not, under any conditions, make some snarky Harry Potter quips here. Nor will I descend into Star Trek analogies or Star Wars references. What I will say is that scientists from Duke University have made history by making a cylinder disappear from a certain perspective, essentially by bending light around the object. They managed to bend light around a 1-centimeter-high, 7.5-centimeter-wide cylinder without reflections. “We built the cloak, and it worked,” said Nathan Landy, a graduate student working in the laboratory at Duke’s Pratt School of Engineering. “It split light into two waves which traveled around an object in the center and re-emerged as the single wave with minimal loss due to reflections.” One drawback of the research is that it can only hide objects so small they are not visible to the naked eye.
<urn:uuid:a3a8dfac-14ab-4dea-af62-5e5a598ca4ee>
CC-MAIN-2017-09
http://www.cio.com/article/2370934/internet/duke-scientists-make-an-object--invisible-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169776.21/warc/CC-MAIN-20170219104609-00458-ip-10-171-10-108.ec2.internal.warc.gz
en
0.950202
179
3.203125
3
Five Steps to Resolving Workplace Conflict
By Larry and Meagan Johnson | Posted 2010-12-21

Conflicts often arise in a multigenerational environment, so it’s important for managers to understand the differences among age-groups.

For the first time in history, five generations are working side by side. Since conflicts often arise in a multigenerational environment, it’s important for managers to understand the differences among the generations.

Traditionals (born before 1945): “The Depression Babies” are influenced by the Great Depression and World War II. They are loyal and respectful of authority; stubbornly independent; dependable with a great work ethic; experienced with a lot to offer; high commitment to quality; great communication and interpersonal skills; able and willing to learn.

Baby Boomers (born between 1946 and 1964): “The Woodstock Generation” is influenced by the Sixties, the Vietnam War and postwar social change. They are interested in spirituality and making a difference; pioneers of antidiscrimination policies; well-educated and culturally literate; questioners of authority; good at teamwork, cooperation and politics; seekers of financial prosperity; not in a rush to retire early.

Generation X (born between 1965 and 1980): “The Latchkey Generation” is influenced by pop culture and may be children of divorce. They are highly independent workers who prefer to fly solo; responsible, family-focused; little patience for bureaucracy and what they consider nonsensical policies; constantly preparing for potential next job; hardworking and wanting to contribute; expect to be valued and rewarded; thrive on adrenaline-charged assignments.

Generation Y (born between 1981 and 1995): “The Entitled Generation” is influenced by technology and doting parents. They are into friends and socializing; at ease with technology and multitasking; used to hovering, involved authorities; value social responsibility; expect praise and notice; need constructive feedback routinely; want work-life balance; will stay put if their loyalty is earned.

Linksters (born after 1995): “The Facebook Crowd” is influenced by a chaotic, media-saturated world. They are still living at home; used to taking instruction; best friends with their parents; live and breathe technology; tuned in to pop music and TV culture; tolerant of alternative life styles; involved in green causes and social activism; loathe dress codes.

Resolving Intergenerational Conflicts

Here are five tips for dealing with intergenerational friction:

1. Look at the generational factor. There is almost always a generational component to conflict: Recognizing this offers new ways to resolve it. For example, Traditionals and Baby Boomers don’t like to be micromanaged, while Gen Y employees and Linksters crave specific, detailed instructions about how to do things and are used to hovering authorities. Baby Boomers value teamwork, cooperation and buy-in, while Gen X individuals prefer to make unilateral decisions and move on—preferably solo.

2. Air different generations’ perceptions. When employees of two or more generations are involved in a workplace conflict, invite them to share their perceptions. For instance, a Traditional employee may find a Gen Y worker’s lack of formality and manners offensive, while a Gen Y staffer may feel “dissed” when an older employee fails to respect his or her opinions and input.

3. Find a generationally appropriate fix.
Work with the set of workplace attitudes and expectations that come from everyone’s generational experience. For instance, if you have a knowledgeable Boomer who is frustrated by a Gen Y employee’s lack of experience and sense of entitlement, turn the Boomer into a mentor. Or if you have a Gen X individual who is slacking off, give him or her a super-challenging assignment linked to a tangible reward. 4. Find commonality. Shared and complementary characteristics can be exploited when dealing with intergenerational conflict. For instance, Traditionals and Gen Y employees both tend to value security and stability. Traditionals and Boomers tend to resist change—but crave training and development. Gen X and Gen Y employees place a high value on workplace flexibility and work-life balance. Boomers and Linksters are most comfortable with diversity and alternative life styles. Gen Y employees and Linksters are technologically adept and committed to socially responsible policies. 5. Learn from each other. Traditionals and Boomers have a wealth of knowledge that younger workers need. Gen X employees are known for their fairness and mediation abilities. Gen Y workers are technology wizards. And Linksters hold clues to future workplace, marketing and business trends. Organizations that make an effort to reconcile the differences and emphasize the similarities among the various generations will be rewarded with intergenerational harmony and increased productivity. Larry and Meagan Johnson, a father-daughter team, are partners in the Johnson Training Group. They are experts on managing multigenerational workplaces, and are co-authors of Generations, Inc.: From Boomers to Linksters—Managing the Friction Between Generations at Work.
<urn:uuid:ff538e84-8e2c-400e-b52e-3805ba45abd0>
CC-MAIN-2017-09
http://www.baselinemag.com/careers/Five-Steps-to-Resolving-Workplace-Conflict
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170600.29/warc/CC-MAIN-20170219104610-00634-ip-10-171-10-108.ec2.internal.warc.gz
en
0.946285
1,054
2.765625
3
Your Mobile Phone Is Safe - Don't believe virus hoaxes
06 Nov 2002

Kaspersky Lab is drawing attention to a rumor spreading among Internet users about a new computer virus that supposedly infects mobile telephones and renders them junk. The message being sent around looks as follows:

If you receive a phone call and your mobile phone displays ACE-? on the screen DON'T ANSWER THIS CALL - END THE CALL IMMEDIATELY. IF YOU ANSWER THE CALL, YOUR PHONE WILL BE INFECTED BY THIS VIRUS. This virus will erase all IMEI and IMSI information from both your phone and your SIM card, which will make your phone unable to connect with the telephone network. You will have to buy a new phone. This information has been confirmed by both Motorola and Nokia. There are over 3 million mobile phones being infected by this virus in USA now. You can also check this news in the CNN web site. Please forward this piece of information to all your friends.

Kaspersky Lab reports that this virus does not exist, thereby classifying "Ace-?" with other such virus rumors as a hoax. We recommend that users refrain from further spreading this unfounded rumor and, in turn, inform colleagues and friends that this is actually a "non-existent" or "hoax" virus.

A guide that will help you detect virus hoaxes can be found here. More detailed information about virus hoaxes is contained in the Kaspersky Virus Encyclopedia and can be viewed here.
<urn:uuid:869f5584-9c91-40a8-a078-4636db86870b>
CC-MAIN-2017-09
http://www.kaspersky.com/au/about/news/virus/2002/Your_Mobile_Phone_Is_Safe_Don_t_believe_virus_hoaxes
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170600.29/warc/CC-MAIN-20170219104610-00634-ip-10-171-10-108.ec2.internal.warc.gz
en
0.914597
315
2.75
3
The FCC is making it easier to launch in-flight Internet services on planes in the U.S. by setting up a standard approval process for onboard systems that use satellites. Since 2001, the Federal Communications Commission has approved some satellite based Internet systems for airplanes, called Earth Stations Aboard Aircraft (ESAA), on an ad-hoc basis. On Friday, the agency said it had formalized ESAA as a licensed application, which should cut in half the time required to get services approved, according to the FCC. In-flight Internet access is typically delivered via Wi-Fi in an airplane's cabin, but that access requires a wireless link outside the plane to the larger Internet. Some services make that link via special 3G cellular towers on the ground, while others exchange their data over satellites. Row44, a provider of satellite-based in-flight Wi-Fi, names Southwest Airlines and Allegiant Air as customers on its website. Under the new rules, all it will take for airlines to implement onboard ESAA systems is to test the technology, establish that it meets FCC standards and doesn't interfere with any aircraft systems, and get Federal Aviation Administration approval, the FCC said. The result should be quicker deployments and more competition among in-flight Internet systems, according to the agency.
<urn:uuid:56cb05e8-7346-40f0-b9a7-ccce4e5914cf>
CC-MAIN-2017-09
http://www.computerworld.com/article/2494275/mobile-wireless/fcc-eases-licensing-for-in-flight-internet-gear-on-aircraft.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170864.16/warc/CC-MAIN-20170219104610-00158-ip-10-171-10-108.ec2.internal.warc.gz
en
0.941522
264
2.59375
3
In an effort to further human and robot cooperation in space, a NASA astronaut aboard the International Space Station, flying 240 miles above Earth, controlled a robot on the ground. NASA researchers are working to someday make it possible for astronauts onboard an orbiting spacecraft to control robots working on the moon, an asteroid or even Mars. The project, dubbed Surface Telerobotics, will help scientists figure out the needs for future human-robotic systems.

"A robot on the surface controlled by crew in an orbiting or approaching vehicle could get a lot of the precursor surface exploration work done," said Maria Bualat, a technical lead on the project based at NASA's Ames Research Center. "A robot could prepare a landing site, they could scout for a clear area, make sure the ground is firm or even build a landing strip... But it would need guidance."

Bualat said in a videotaped interview on the NASA website that researchers have built a special control system for astronauts, who have to manage weightlessness and other factors in space, to use with robots on the surface. "There's a communications delay between the station and a robot on the ground," she added. "It makes it very difficult to joy stick because of that delay. We use something called supervisory control. A robot is pretty smart. It can perform tasks and keep itself safe. Then the astronaut can take over if the rover runs into any trouble."

NASA needs to know how a person working in the weightlessness of space, with the stresses and disorientation that can put on a body, reacts to this new robotic system. "We've never done any kind of testing in space," Bualat said. To that end, last month astronaut Chris Cassidy, who is part of the current crew on the space station, completed a two-and-a-half-hour robotics experiment, the first of three planned for this summer.

The first experiment had Cassidy working with a robot in a simulation of the machine deploying a radio telescope on the far side of the moon. The robot actually was working inside the Ames Research Center in an area set up to simulate the surface of the moon. Cassidy, who pressed buttons on a control board to send the robot commands, was able to see live images from the robot's cameras, along with 3D virtual views of the robot.

NASA has not reported on the results of the first experiment with Surface Telerobotics. However, the other two scheduled experiments are set to take place this month and next month. "We will analyze the data to see how the systems work and see if there are any new technologies that will be needed," said Bualat.

This story, "In first, NASA astronaut in space controls robot on Earth," was originally published by Computerworld.
<urn:uuid:c9bcf526-8014-4be2-a34b-059a3ab105ac>
CC-MAIN-2017-09
http://www.networkworld.com/article/2167869/data-center/in-first--nasa-astronaut-in-space-controls-robot-on-earth.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170864.16/warc/CC-MAIN-20170219104610-00158-ip-10-171-10-108.ec2.internal.warc.gz
en
0.941938
672
4.125
4
IBM the Humanitarian
Opinion: By using its expertise in backing the World Community Grid project, IBM also gets the chance to demonstrate the benefits of grid computing.

Making use of unused CPU cycles on your client computers isn't a new idea. Going back into the 1980s, there was network database software that let you install a small piece of agent software on your client computers that, after normal business hours, would allow the database server to distribute its indexing load to any computer that was running its agent. In the early 1990s, graphics software was developed that, using the same agent model, was able to distribute the image-rendering process to many different types of client operating systems, speeding up what is still a very CPU-intensive process, rendering graphic images.

But the end of the 20th century saw not only a massive increase in the number of networked computers, but also freely distributable client software that worked together with a centralized server to complete a specific task. These clients, such as the RSA encryption cracking contest tool from Distributed.net and the search for extraterrestrial intelligence from SETI@Home, provided clients for just about every common client operating system, let the user determine how much CPU resource they would use and when the software would run, and gave users a sense of camaraderie in creating teams that competed to devote the greatest number of excess computing cycles to the selected project.

Now IBM has taken this concept a step further by stepping up as the technical muscle behind the World Community Grid project, joining United Devices (the folks behind SETI@Home) and a host of academic and scientific organizations to create an organization that uses these spare CPU cycles to work on projects designed to benefit humanity.
<urn:uuid:8a252b7e-0f4c-426c-8cb6-1d4239c850d2>
CC-MAIN-2017-09
http://www.eweek.com/c/a/Cloud-Computing/IBM-the-Humanitarian
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172017.60/warc/CC-MAIN-20170219104612-00210-ip-10-171-10-108.ec2.internal.warc.gz
en
0.95634
346
2.640625
3
If Smart Grid technology is to work, the entire communications industry will have to align with the information technology and power and energy industries, with the input of manufacturers, policymakers, educators, academics, governments, engineers, computer scientists, researchers and others. Simply getting everyone involved together is a gargantuan task, so the IEEE is trying to establish a central clearinghouse of information with its Smart Grid Web Portal. Wanda Reder, 2008-09 president of the IEEE Power & Energy Society and chair of the IEEE Smart Grid Task Force, said: “Contributions from across the global power and energy, communications and IT industries, as well as government and academia, are needed to ensure successful implementation of Smart Grid throughout the world. The IEEE Smart Grid Web Portal is designed to be an essential resource for anyone involved in Smart Grid, whatever their industry or technical discipline.” Currently – no pun intended – it is simply not possible to know in full detail what is going on in the worldwide power grid. The concept of a “Smart Grid” is to manage not only the electrical power system and power delivery, but also consumption of electrical energy. The IEEE, through its Smart Grid initiative announced last May, intends to organize, coordinate, leverage and build upon the strength of various entities within and outside of the IEEE with Smart Grid expertise and interest. The IEEE says it alone has more than 100 standards published and in development that are crucial to the Smart Grid, spanning digital information and controls technology, networking, security, reliability, assessment, interconnection of distributed resources, including renewable energy sources to the grid, sensors, electric metering, Broadband over Power Line (BPL) and systems engineering. An overview of those standards is available. Communications standards used by other branches of the electronics industry will also have to be brought in. “The Smart Grid is a revolutionary undertaking, entailing new capabilities for communications and control, integration of new energy sources, distributed generation and adoption of a regulatory structure,” said Erich Gunther, chairman and CTO with EnerNex and a member of the Department of Energy (DOE) GridWise Architecture Council. “Successful rollout requires a phenomenal diversity of expertise and experience, proven standards-development capability and shared vision.”
<urn:uuid:ef54cddd-3d08-4dbe-ac91-52ac7f3edaa9>
CC-MAIN-2017-09
https://www.cedmagazine.com/print/news/2010/01/ieee-sets-up-smart-grid-clearinghouse
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170613.8/warc/CC-MAIN-20170219104610-00330-ip-10-171-10-108.ec2.internal.warc.gz
en
0.907613
596
2.859375
3
One of the less well-known aspects of information technology – but arguably one of the most critical to modern businesses – is the SCADA platform. SCADA stands for Supervisory Control And Data Acquisition, the computer control systems at the heart of many industrial automation and control systems. First developed in the 1960s – and evolving rapidly as the first PCs started shipping in the 1980s – SCADA-driven systems are found in power plants, electricity supply grids, chemical plants and many other industrial systems that require a high degree of computerized control – but also demand total, 100% systems availability.

This is Mission Critical with a giant capital ‘M’ and ‘C.’ Many organizations claim their IT processes are mission critical, but SCADA control systems truly are critical to the national infrastructure. If the national power grid goes down, for example, it can cost a country many hundreds of millions of dollars per hour and, in the case of hospitals, air traffic control systems and the like, will actually place people’s lives in jeopardy. Lost production and commerce is one thing, but lost lives raise the security ballgame to an entirely new level of governance.

Here in the US – as in the UK and Europe – many SCADA-driven systems are connected to the Internet. Where previously these systems were connected using a dial-up modem – with password security that at the time was highly resistant to attack – the trend today is to plug these devices into the Internet using a standard Ethernet connection – or worse yet, by Wi-Fi or some other wireless protocol that lacks the encryption and authentication needed to prevent tampering. This approach, as you might surmise, is a ticking time bomb. Cybercriminals are not stupid – they understand weaknesses, possess the means to guarantee success, and understand the impact of an attack. Until now the only documented exploits in the SCADA security space have targeted foreign infrastructures, but I believe that this is certain to change.

A study carried out at the end of 2012 by Bob Radvanovsky and Jacob Brodsky of InfraCritical, a US-based security consultancy – and conducted with assistance from the US Department of Homeland Security – found that thousands of SCADA-based systems accessible from the Internet have weak default passwords defending them. The two researchers used automated scripts to interrogate the grey-hat SHODAN (Sentient Hyper-Optimised Data Access Network) search engine and identified over 7,000 vulnerable, default logins out of an initial pool of 500,000 SCADA systems.

The good news is that the Department of Homeland Security has now started to reach out to the IT admins of this particular group of vulnerable SCADA-based systems, but reports suggest the remediation progress has been relatively slow. Against this backdrop, there are discussions making the rounds in US IT security markets that, in return for allowing their SCADA systems to be scanned – essentially vetted – by the federal government, the utilities and other critical national infrastructure (CNI) system owners will be protected against legal or regulatory action in the future.

The real issue with the security of SCADA systems is that, while you can employ software patches to make a system more secure, there is, unfortunately, no similar patch against human stupidity. SCADA systems should never, ever, be connected directly to the Internet, because they are simply not resilient enough to hook up to the public network.
They require the use of advanced layers of security – firewalls, privileged identity management, secure proxies – to be implemented as soon as possible for their defence. I believe that the problem is rooted in the fact that – as my research teams repeatedly discover – utility companies almost without exception fail to make the requisite investments in IT security that you’d find in other industries of comparable size – unless, of course, the utilities are forced by federal agencies and auditors to take action.

Making SCADA systems more secure

Given that the very heart of our nation’s infrastructure runs on SCADA, how do we make these systems more secure? Are there really so many active threats out there? Here’s what I believe is the heart of the issue: SCADA systems can be based on a combination of embedded controllers and Windows or Linux systems. This combination isn’t terribly insecure in isolation, but once connected to the Internet (as a matter of convenience and for holistic management), every component now needs to be patched and managed for access and authorization since there are no longer any locked doors keeping the wrong people out.

Corporate IT systems are – most of the time – protected by network firewalls, intrusion and anomaly detection systems, endpoint security software, and other prevailing safeguards. Once they’re connected to the Internet there’s simply no excuse for SCADA networks not to employ – at the very least – those same essential layers of security to protect against external attacks. The bad news is that a great many SCADA deployments do not even begin to utilize these broadly adopted technologies.

And the bottom line is…

The bottom line is that a great many SCADA networks are designed and deployed by electrical engineers who lack IT security training, and I believe that this engineering culture is often naïve when it comes to the threats that foreign powers and sociopaths pose to their designs. Consequently many SCADA networks have a security blind spot, with a healthy dose of attention paid to whether the controls interact safely with their physical environments but far too little focus on how well the systems can withstand cyber attacks.

We’ve also found that management teams – especially at smaller utilities – fail to understand the need to change passwords regularly, believing they can trust everyone because they know everyone. This is a culture of: ‘We need to know the password for everything – because when the power is down, we need access in a hurry.’ Consequently these same admin teams, we find, have a habit of using factory default passwords on their systems to ensure easy levels of access – at all times – for all engineers. This is a cultural issue, and it’s one that security vendors need to address head-on.

There is also an interesting sociological angle here. Criminal gangs might have diminished interest in utilities because there may be little profit in breaking into them. And while hacktivists could conceivably cause problems, our observations suggest that many of these groups will avoid infrastructure targets because of the moral implications. This leaves state-sponsored attackers as a primary threat, and makes CNI security an issue that screams for government oversight.
In the event of an attack on the US infrastructure – in all likelihood originating from a smaller rogue state – the outcome could constitute an act of war as damaging as any action taken with troops and physical armament. In the US there is now a very clear focus on the CNI – and the federal government is starting to probe for vulnerabilities on these SCADA networks and then reporting back to the operators. The question we have to ask is whether it really is the government’s place to complete these probes.

The free pass concept is that, if the government or its agencies complete the scan and give the ‘thumbs up’ to your SCADA system security, then if your systems do subsequently get attacked, you are exempt from possible legal action. This is a positive approach, as it has the potential to bring everyone – from the lowest engineer to the highest security strategist – on board with SCADA security to ensure that we are all working toward a common goal: making our CNI more secure.

Some time ago I believed it was unlikely that any government would footprint or probe other states’ CNIs. My observations have caused me to change my mind, and I now believe it is naive to underestimate any foe. SCADA vulnerability is a central challenge to our national security – and we really do need to address this issue now, before a major incident takes place.

So what are the solutions?

There are a number of recommendations that I would make to ensure that SCADA-based systems are better protected. The good news is that most of these actions can be implemented using existing technologies and legislation, though there may be a need for some tweaks to the statute books. It should be remembered that we are talking about the IT systems that control our national infrastructure.

(1) Take a leaf out of the German statute books on data breach law and impose potential prison sentences on those managers that fail to take their SCADA defense obligations seriously.

(2) Impose hefty financial penalties as a stepping stone to the penalties outlined in (1) above.

(3) Issue comprehensive SCADA security guidance – in the form of white papers and best practices recommendations – and stipulate fines for those that fail to comply. A good model could be the PCI DSS rules that govern processing of payment card credentials.

(4) Use existing government cyber-warfare resources to simulate attacks against CNIs and issue confidential reports to the appropriate managers of the organizations concerned. If the organizations fail to remediate their security problems in a timely fashion (that is, within a few months), local country CERT officials will complete the planning element of the task, and a court-imposed mandate will be placed on the organization to deploy the recommendations in the planning document. Further infractions will be treated as a contempt of court process.

(5) Require CNI-based SCADA system operators to adhere to appropriate integrity verification processes on at least a monthly basis, with continuous compliance as the mainstay of the reporting system. An auditing process similar to the PCI DSS governance rules can also be applied.
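As one concrete illustration of the integrity verification called for in recommendation (5), the sketch below hashes a set of monitored configuration files and compares them with a previously recorded baseline. It is a generic, minimal example under assumed file paths, not a description of any particular utility's tooling or of a commercial product.

import hashlib
import json
from pathlib import Path

# Hypothetical paths; a real deployment would cover controller configurations,
# HMI project files, firmware images and similar artifacts.
MONITORED = [Path("/etc/scada/plc_config.xml"), Path("/etc/scada/hmi_project.cfg")]
BASELINE_FILE = Path("/var/lib/scada-audit/baseline.json")

def sha256_of(path):
    return hashlib.sha256(path.read_bytes()).hexdigest()

def record_baseline():
    # Store known-good hashes; run once after a verified configuration change.
    baseline = {str(p): sha256_of(p) for p in MONITORED}
    BASELINE_FILE.write_text(json.dumps(baseline, indent=2))

def verify_against_baseline():
    # Return the files whose contents no longer match the recorded baseline.
    baseline = json.loads(BASELINE_FILE.read_text())
    return [name for name, digest in baseline.items()
            if sha256_of(Path(name)) != digest]

if __name__ == "__main__":
    changed = verify_against_baseline()
    if changed:
        print("Integrity check FAILED for: " + ", ".join(changed))
    else:
        print("All monitored files match the recorded baseline.")

Run monthly or continuously, with the results feeding the compliance reporting described above, a check like this gives auditors something objective to verify.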
<urn:uuid:b1d44d7b-c53b-42a1-8034-650bcdf449b3>
CC-MAIN-2017-09
https://www.helpnetsecurity.com/2013/03/07/the-scada-security-challenge/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170875.20/warc/CC-MAIN-20170219104610-00506-ip-10-171-10-108.ec2.internal.warc.gz
en
0.951216
2,000
2.84375
3
Cross site scripting (also known as XSS) occurs when a web application gathers malicious data from a user. The data is usually gathered in the form of a hyperlink which contains malicious content within it. The user will most likely click on this link from another website, web board, email, or from an instant message. Usually the attacker will encode the malicious portion of the link to the site in HEX (or other encoding methods) so the request is less suspicious looking to the user when clicked on. After the data is collected by the web application, it creates an output page for the user containing the malicious data that was originally sent to it, but in a manner to make it appear as valid content from the website. Download the paper in TXT format here.
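As a minimal, hypothetical sketch of the pattern described above (the parameter name and page layout are invented for illustration, using only Python's standard library): the first handler echoes a query-string value straight into the page, which is the reflected cross-site scripting mistake, while the second escapes the value before output so injected markup is rendered as harmless text.

import html
from urllib.parse import parse_qs

def vulnerable_page(query_string):
    # Reflects user-supplied data into HTML unmodified; an attacker-crafted link
    # such as ?name=<script>...</script> will execute in the victim's browser.
    name = parse_qs(query_string).get("name", [""])[0]
    return "<p>Results for " + name + "</p>"

def safer_page(query_string):
    # Escaping the data before output neutralizes any embedded markup.
    name = parse_qs(query_string).get("name", [""])[0]
    return "<p>Results for " + html.escape(name) + "</p>"

attack = "name=<script>alert(document.cookie)</script>"
print(vulnerable_page(attack))  # the script tag is reflected verbatim
print(safer_page(attack))       # the script tag is rendered inert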
<urn:uuid:4722d30a-b1c7-4ddd-9ab9-70729d0a112d>
CC-MAIN-2017-09
https://www.helpnetsecurity.com/2002/11/06/the-cross-site-scripting-faq/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171463.89/warc/CC-MAIN-20170219104611-00206-ip-10-171-10-108.ec2.internal.warc.gz
en
0.93427
156
3.578125
4
As silicon-based electronics come up against the physical limitations of nanoscale, researchers are scrambling to find a viable replacement that would breathe new life into Moore’s law and satisfy the demand for ever faster, cheaper and more energy-efficient computers. A new computer made of carbon nanotubes, created by a team of Stanford engineers, may be the first serious silicon challenger.

[Image: A scanning electron microscopy image of a section of the first-ever carbon nanotube computer. Credit: Butch Colyear]

Carbon nanotubes, long chains of carbon atoms, have remarkable material and electronic properties which make them attractive as a potential electronics substrate. The Stanford team, led by Stanford professors Subhasish Mitra and H.-S. Philip Wong, contends that this new semiconductor material holds enormous potential for faster and more energy-efficient computing. “People have been talking about a new era of carbon nanotube electronics moving beyond silicon,” said Mitra, an electrical engineer and computer scientist at Stanford. “But there have been few demonstrations of complete digital systems using this exciting technology. Here is the proof.”

According to a paper in the journal Nature, the simple computer is composed of 142 low-power transistors, each of which contains carbon nanotubes that are about 10 to 200 nanometers long. The prototype has about the same computing power as a 1970s-era chip, the Intel 4004, Intel’s first microprocessor. “The system is a functional universal computer, and represents a significant advance in the field of emerging electronic materials,” write the authors in the Nature article.

The device employs a simple operating system that is capable of multitasking and can perform four tasks (instruction fetch, data fetch, arithmetic operation and write-back). The inclusion of 20 different instructions from the commercial MIPS instruction set highlights the general nature of this computer. For the demonstration, the team ran counting and integer-sorting workloads simultaneously.

Professor Jan Rabaey, a world expert on electronic circuits and systems at the University of California-Berkeley, noted that carbon had long been a promising candidate to replace silicon, but scientists weren’t sure if CNTs would be able to overcome certain hurdles. While the first carbon nanotube-based transistors came on the scene about 15 years ago, the Stanford team showed that they could be used as the basis for more complex circuits. “First, they put in place a process for fabricating CNT-based circuits,” explained Professor Giovanni De Micheli, director of the Institute of Electrical Engineering at École Polytechnique Fédérale de Lausanne in Switzerland. “Second, they built a simple but effective circuit that shows that computation is doable using CNTs.”

By showing that CNTs have a role in designing complex computing systems, other researchers will be more motivated to take the next step, potentially leading to the development of industrial-scale production of carbon nanotube semiconductors. “There is no question that this will get the attention of researchers in the semiconductor community and entice them to explore how this technology can lead to smaller, more energy-efficient processors in the next decade,” observed Rabaey.
<urn:uuid:ea19b181-1c39-4fa5-92e0-0435b5691edf>
CC-MAIN-2017-09
https://www.hpcwire.com/2013/09/27/stanford_debuts_first_carbon_nanotube_computer/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170914.10/warc/CC-MAIN-20170219104610-00202-ip-10-171-10-108.ec2.internal.warc.gz
en
0.918096
681
4.0625
4
Researchers Develop Home Gestural Control Using Wi-Fi
By Barry Levine / CIO Today. Updated June 05, 2013.

You wave your arm while in the shower to lower the music volume, or you make a hand gesture in the air from bed to turn up the thermostat. That vision of a gesture-anywhere life may have taken a step closer toward reality as a result of newly revealed research on using household Wi-Fi signals instead of a camera for gestural interaction. The research, conducted by a team at the University of Washington's computer science department, has been submitted to the 19th Annual International Conference on Mobile Computing and Networking, scheduled to take place in the fall in Miami.

The research team calls the system WiSee, and it leverages Wi-Fi signals to read body movements without special sensors or cameras, such as in Microsoft's Kinect system. Instead, WiSee utilizes an adapted router and some wireless devices connected to household devices and appliances. Lead researcher Shyam Gollakota, an assistant professor at UW in computer science and engineering, said in a statement that WiSee repurposes wireless signals "that already exist in new ways." The team said that, while the concept is similar to Kinect and other gestural input systems, it is simpler, cheaper, does not need cameras or distributed sensors, and does not require users to be in the same room as the device they're controlling, since Wi-Fi signals can travel through walls.

The system involves an intelligent receiver, which could be an adapted Wi-Fi router, that monitors all wireless transmissions from smartphones, laptops, tablets and other devices in the home. Movement by a person in this kind of field creates a slight change in the wireless signal frequencies, similar to the Doppler effect, a change in the perceived frequency of electromagnetic waves based on the relative motion of the source and the observer. The resulting changes are on the order of a few hertz in Wi-Fi signals that operate at 5 gigahertz and have a bandwidth of about 20 MHz. The receiver's software can detect those tiny shifts, as well as account for devices that have stopped transmitting, such as a smartphone that's been turned off.

The software is currently designed to recognize nine body gestures, including pushing, pulling, punching and full-body bowling. In tests with five users in a two-bedroom apartment and in an office environment, the WiSee system could accurately recognize 94 percent of the gestures. The receiver has multiple antennae, each tuned to a specific user's movements, to allow up to five users to perform gestural commands in the same home. The researchers plan next to work on the ability to control multiple devices with one gesture.

'Complementary' to Kinect

In order to avoid random gestures that the receiver might interpret as commands, WiSee would require a gesture sequence prior to a command, the equivalent of the Star Trek crew saying "computer" before they give a command to their all-knowing system. Additionally, a specific gesture could be programmed to refer to a specific device, such as an up-and-down arm motion indicating that the volume on the main sound system should be lowered or raised.

Ross Rubin, principal analyst for Reticle Research, said the system "sounds complementary to systems like Kinect." He noted there are two trends in gestural interaction -- less expensive systems and more precise ones.
Leap Motion, for instance, promises highly accurate movement detection, and the new, high resolution Kinect on the just-unveiled Xbox One supposedly can detect and measure a user's heart rate. The WiSee system, Rubin said, appears to fall into the "less expensive" category. This system could "enable gestural control where it is not feasible today," he noted, and other systems could then provide higher levels of precision when needed for gaming and other specific applications.
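To give a feel for the Doppler-shift numbers described above, here is a back-of-the-envelope calculation (an illustration only, not the WiSee team's code): a reflection off a hand moving at typical gesture speeds shifts a 5 GHz carrier by only a handful of hertz up to a few tens of hertz, tiny compared with the 20 MHz channel bandwidth, which is why the receiver must resolve such small frequency changes.

SPEED_OF_LIGHT = 3.0e8   # meters per second
CARRIER_HZ = 5.0e9       # a 5 GHz Wi-Fi channel

def doppler_shift_hz(reflector_speed_m_s, carrier_hz=CARRIER_HZ):
    # Approximate two-way Doppler shift for a signal reflected off a moving body;
    # the factor of two accounts for the wave traveling to the reflector and back.
    return 2 * reflector_speed_m_s * carrier_hz / SPEED_OF_LIGHT

for speed in (0.25, 0.5, 1.0):   # plausible hand or arm speeds, in m/s
    print(speed, "m/s ->", round(doppler_shift_hz(speed), 1), "Hz")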
<urn:uuid:9cf7d03e-9ee6-4b08-808c-fc7e69cfa781>
CC-MAIN-2017-09
http://www.cio-today.com/article/index.php?story_id=111008IQA7J6
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171608.86/warc/CC-MAIN-20170219104611-00554-ip-10-171-10-108.ec2.internal.warc.gz
en
0.950537
790
2.78125
3
PROBLEM/SITUATION: Digital signatures are not yet legal in many states.
SOLUTION: Utah's digital signature law.
JURISDICTION: Utah Department of Commerce.
VENDORS/GROUPS: Rankin Technology Group, Informix, Four Gen.
USER CONTACT: George R. Danielson, 801/530-6421;

Utah, which in early 1995 was the first state to pass a digital signature act, has created a model now being followed across the nation. California followed Utah's lead last year by passing similar statutes. The American Bar Association has proposed a draft model for national legislation based on Utah's example. The idea is catching on in other nations too. Canadian provinces, Chile, and others are setting up digital signature infrastructures.

The new state laws in California and Utah provide the legal framework to make digital signatures as binding as pen on paper -- a necessary step for widespread employment of electronic commerce -- as well as create a system for ensuring the integrity of digital signatures.

IMPORTANCE OF INFRASTRUCTURE

A digital signature is a way to authenticate both electronic documents and the signatories of the documents. Before the Utah law passed, parties in that state could set up a digital signature system by contract. But this approach has been unworkable for the state's courts, which have been reluctant to sign a new contract every time they wish to accept a digital signature.

With the new guidelines for a digital signature infrastructure, the state courts hope to create a new electronic records system that will allow attorneys to file documents and courts to issue search warrants electronically. Ultimately, police officers in the field may be able to download warrants directly to their laptops and rapidly execute searches.

Other Utah legal experts view access to a wide array of electronic court documents as a boon to attorneys and clients alike. A legal process called discovery now requires the delivery of signed paper documents related to evidence expected to be presented in trial to all parties involved in a case. Pages of questions are typed and retyped by each law office, and couriers race between them. But if electronic discovery documents and digital signatures become the norm, parts of these documents will only have to be entered into a computer once. Then the documents can be distributed electronically to all parties simultaneously, with the parties assured that the documents are official.

Courts and attorneys are not the only ones looking for ways to incorporate the use of digital signatures into their organizations. The Department of Commerce, the state agency responsible for implementing the new law, plans to take corporate and Uniform Commercial Code filings electronically. The Department of Human Services is exploring the use of digital signatures as part of an electronic contracting system. Currently, state agencies often require multiple copies of contracts to be signed by six or more parties. In the future, contracts may be written, distributed, signed and stored all from the signers' e-mail boxes.

The Utah State Tax Commission would like to use digital signatures with electronic tax filings. However, according to Janice Perry, Tax Commission spokeswoman, any future implementation will be coordinated with the IRS so that both agencies are "headed in the same direction."

SAVINGS AND COSTS

Government and industry interest in digital signature technology is expanding, mainly because electronic communication has become widespread. Electronic commerce is also getting more attention from companies looking to get into untapped markets. Digital signatures are needed for electronic commerce and official communication so that electronic transactions can be done with as much confidence as a signed paper contract or document. Digital signatures can also save money by reducing the amount of paper in an office.

George Danielson, digital signatures coordinator for the Utah Division of Corporations and Commercial Code, argues that once an organization purchases the equipment, it quickly pays for itself through a reduction in clerical time. A machine can send, receive, read and sort electronic files with minimal human intervention. Also, using electronic documents can dramatically reduce retrieval time when searching for document types and case numbers. Digital signatures coupled with workflow software can also reduce the time that paper sits on desk in-baskets.

Still, the future is not without pitfalls. Some major issues remaining include who would build and pay for the infrastructure and what the cost for users will be. In Utah, explained Danielson, "the private sector will build it and the government will only lightly regulate it." He believes that the market will determine the cost for creating, authenticating and storing public keys, rather than government regulation. Higher fees will be charged depending on the exposure of the certification authority. For example, a higher fee would be charged for a $100 million signature key than for a $50 key.

"You will have the large national banks that are certification authorities," Danielson continued. "They will do the high-risk things for a high fee. And you will have your Nick and Tony's Body Shop with digital signature capability on the corner that will give you a $250 key that will allow you to make a purchase from the Sears Catalog over the Internet."

Still, the ultimate cost to government for access to digital signatures remains a question mark. But Danielson is convinced that the savings will outweigh the costs. Whatever the cost, digital signatures are coming. Once the infrastructure is built and tested, an electronic John Hancock may become as much a part of your life as your old pen-based signature.

Alan Sherwood is a freelance writer who lives in Salt Lake City.

To explain how digital signatures work, a bank safety deposit box makes a good analogy. Two keys are needed to open and close the document, or deposit box. When a deposit box is locked, the bank retains one key, called a public key, while the client has the other, called a private key. The private key is always to be kept in confidence by its owner. Both keys are used to create a digital signature, which is actually an encrypted message. To decrypt the document, the receiver must get the sender's public key. The software recognizes if the message was opened in transit because the secret key is needed to properly reclose it.

The infrastructure being created in Utah utilizes third parties who register and hold public-key data. The third party creates and certifies the public key, and will provide it to those receiving a signed message from a registered client. The third parties, whose role is similar to notaries public, ensure the identity of the public key holder and are intended to ensure system integrity. For a full explanation of public and private keys, see "Access," Government Technology, December 1995.
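To make the sign-and-verify cycle described above a little more concrete, here is a minimal sketch using the standard OpenSSL command-line tool. The file names are made up for the example, and a real deployment would add certificates issued by the certification authorities discussed in the article.

$ openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out private.pem   # the signer keeps this key secret
$ openssl pkey -in private.pem -pubout -out public.pem                            # the public key can be handed out freely
$ openssl dgst -sha256 -sign private.pem -out contract.sig contract.txt           # sign the document
$ openssl dgst -sha256 -verify public.pem -signature contract.sig contract.txt    # any recipient can check the signature

On success the last command reports "Verified OK"; if the document is altered after signing, verification fails instead.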
Source: http://www.govtech.com/featured/Digital-Signature-Law-Inked.html
A story first posted on ArabNews.com has been making the rounds on the Internet, involving an Indian student who has allegedly found a method of storing compressed digital information on a regular sheet of paper. Sainul Abideen claims that his technique, dubbed Rainbow Technology, can store between 90 and 450 GB on a single sheet of paper. The system allegedly works by encoding data into small geometrical shapes (circles, squares, and triangles) in various colors, then printing them out on a piece of paper. A scanner is used to read the data back into the computer.

Abideen claims that his storage method is more environmentally friendly due to the biodegradable nature of paper, and envisions magazine publishers printing tear-out sheets of paper containing demos and programs, replacing the traditional plastic-wrapped CD or DVD.

Storing digital information on paper dates back to the earliest days of computing. When I was a little kid, my dad used to bring home punched cards from his job programming a mainframe computer at Vancouver General Hospital. The cards had 80 columns (an artifact that remains with us today as the default width for console-mode applications) and could only store a maximum of 120 bytes (about one-eighth of a kilobyte) per card.

[Image: Abideen demonstrates paper storage]

However, despite technological advances in scanning and printing technology since those days, Abideen's claims quite simply do not hold water. A little bit of math is in order here. Starting with a scanner with a maximum resolution of 1,200 dots per inch, this leads to a maximum of 1,440,000 dots per square inch, or just over 134 million dots on a sheet of standard 8.5" by 11" paper (excluding margins). Getting a scanner to accurately pick up the color of a single dot on a page is a difficult affair (it would take near-perfect color calibration, for example, and be prone to errors from ambient light and imperfections in the paper), but let's be generous and say that the scanner can accurately pick out 256 shades of color for each dot. That's a single byte per dot, making the final calculation easy: a maximum theoretical storage of 134MB, which would likely go down to under 100MB after error correction. It's a decent amount of storage, but several orders of magnitude smaller than the 450GB claimed by Abideen.

The claim that "circles, triangles, and squares" can achieve these extra orders of magnitude can be easily challenged. There is a word for using mathematical algorithms to increase the storage space of digital information: it's called compression. No amount of circles and triangles could be better than existing compression algorithms: if it was, those formulas would already be in use! Compression could easily increase the 100MB theoretical paper storage by a factor of two or three, but so could simply compressing the files you wished to store into a .zip archive before converting them to a color printout.

Ultimately, storage is about bits, and the smaller the bits are physically, the more storage can be packed into a given space. The magnetic bits on hard drive platters and the tiny pits in optical media are orders of magnitude smaller than the smallest dot that can be recognized by any optical scanner, and this is the simple reason why they store orders of magnitude more information. Even if a much higher-density printer were used (such as an expensive laser printer or offset printing process), the limiting factor is still the scanner required to get the information back into the computer.
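The back-of-the-envelope arithmetic is easy to reproduce from a shell; the figures below are simply the article's assumptions (a 1,200 dpi scanner, a full 8.5" by 11" sheet, one byte per dot), not measurements.

$ awk 'BEGIN {
    dpi  = 1200                     # assumed scanner resolution
    dots = dpi * dpi * 8.5 * 11     # dots on a full letter-size sheet
    printf "dots: %d  raw capacity: %.1f MB\n", dots, dots / 1e6
}'
dots: 134640000  raw capacity: 134.6 MB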
In the end, a picture may be worth a thousand words, but it cannot be worth half a thousand megabytes.
Source: https://arstechnica.com/gadgets/2006/11/8288/
Interesting article on Analyzing DNS Logs Using Splunk and being able to identify whether Splunk sees a DNS lookup for a known bad domain name.

Again, if you use our data as this article does, do not pull the zone file more than once every 12 hours or you will be banned. Better yet, check to see if the file has changed first (such as via a wget option) BEFORE pulling the zone file. And please DONATE if you consider the list useful. A year's worth of donations does not even equal one month's hosting and infrastructure costs, and we are not sure how much longer we can continue to pay these expenses out-of-pocket.

Article here: http://www.stratumsecurity.com/2012/07/03/splunk-security/

Log DNS queries and the client that requested it: It's been said that DNS is the linchpin of the Internet. It's arguably the most basic and underappreciated human-to-technology interface. It's no different for malware. When you suspect that a device has been compromised on your network, it's important to be able to see what the suspected device has been up to. The DNS logs of a compromised machine will quickly allow responders to identify other machines that may also be infected.
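One way to follow that advice is wget's timestamping option, which skips the download when the remote file has not changed; the URL and local path below are placeholders, not the project's actual download location.

$ wget -N -q http://example.com/path/to/domains.zones -P /var/data/malwaredomains
# -N only re-downloads when the server reports a file newer than the local copy

As a cron entry, in the spirit of a twice-daily check:

0 */12 * * * wget -N -q http://example.com/path/to/domains.zones -P /var/data/malwaredomains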
Source: http://www.malwaredomains.com/?cat=31
In preparation for our CCNA exam, we want to make sure we cover the various concepts that we could see on our Cisco CCNA exam. So to assist you, below we will discuss Cisco Router Basics.

Basics Of Cisco Routers

Cisco is well known for its routers and switches. I must admit they are very good quality products and once they are up and running, you can pretty much forget about them because they rarely fail. We are going to focus on routers here since that's the reason you clicked on this page!

Cisco has a number of different routers; among them are the popular 1600 series, 2500 series and 2600 series. The ranges start from the 600 series and go up to the 12000 series (now we are talking about a lot of money). Below are a few of the routers mentioned:

All the above equipment runs special software called the Cisco Internetwork Operating System or IOS. This is the kernel of Cisco routers and most switches. Cisco has created what they call Cisco Fusion, which is supposed to make all Cisco devices run the same operating system.

We are going to begin with the basic components which make up a Cisco router (and switches) and I will be explaining what they are used for, so grab that tea or coffee and let's get going! The basic components of any Cisco router are:
- The interfaces
- The processor (CPU)
- The Internetwork Operating System (IOS)
- The RXBoot image
- RAM and NVRAM
- ROM
- Flash memory
- The configuration register

Now I just hope you haven't looked at the list and thought "Stuff this, it looks hard and complicated" because I assure you, it's less painful than you might think! In fact, once you read it a couple of times, you will find all of it easy to remember and understand.

The Interfaces

These allow us to use the router! The interfaces are the various serial ports or Ethernet ports which we use to connect the router to our LAN. There are a number of different interfaces but we are going to hit the basic stuff only. Here are some of the names Cisco has given some of the interfaces: E0 (first Ethernet interface), E1 (second Ethernet interface), S0 (first serial interface), S1 (second serial interface), BRI 0 (first B channel for Basic ISDN) and BRI 1 (second B channel for Basic ISDN).

In the picture below you can see the back view of a Cisco router, where you can clearly see the various interfaces it has (we are only looking at ISDN routers). You can see that it even has phone sockets! Yes, that's normal since you have to connect a digital phone to an ISDN line and since this is an ISDN router, it has this option with the router.

I should, however, explain that you don't normally get routers with ISDN S/T and ISDN U interfaces together. Any ISDN line requires a Network Terminator (NT) installed at the customer's premises and you connect your equipment after this terminator. An ISDN S/T interface doesn't have the NT device built in, so you need an NT device in order to use the router. On the other hand, an ISDN U interface has the NT device built into the router. Check the picture below to see how to connect the router using the different ISDN interfaces.

Apart from the ISDN interfaces, we also have an Ethernet interface that connects to a device in your LAN, usually a hub or a computer. If connecting to a hub uplink port, then you set the small switch to "Hub", but if connecting to a PC, you need to set it to "Node". This switch will simply convert the cable from a straight-through (Hub) to an x-over (Node).

The Config or Console port is a female DB9 connector which you connect, using a special cable, to your computer's serial port, and it allows you to directly configure the router.

The Processor (CPU)

All Cisco routers have a main processor that takes care of the main functions of the router. The CPU generates interrupts (IRQ) in order to communicate with the other electronic components in the router. The Cisco routers utilise Motorola RISC processors. Usually the CPU utilisation on a normal router wouldn't exceed 20%.

The IOS

The IOS is the main operating system on which the router runs. The IOS is loaded upon the router's bootup. It usually is around 2 to 5MB in size, but can be a lot larger depending on the router series. The IOS is currently on version 12, and Cisco periodically releases minor versions every couple of months, e.g. 12.1, 12.3, etc., to fix small bugs and also add extra functionality. The IOS gives the router its various capabilities and can also be updated or downloaded from the router for backup purposes. On the 1600 series and above, you get the IOS on a PCMCIA Flash card. This Flash card then plugs into a slot located at the back of the router and the router loads the IOS "image" (as they call it). Usually this image of the operating system is compressed, so the router must decompress the image in its memory in order to use it.

The IOS is one of the most critical parts of the router; without it the router is pretty much useless. Just keep in mind that it is not necessary to have a Flash card (as described above with the 1600 series router) in order to load the IOS. You can actually configure most Cisco routers to load the image off a network tftp server or from another router which might hold multiple IOS images for different routers, in which case it will have a large capacity Flash card to store these images.

The RXBoot Image

The RXBoot image (also known as the bootloader) is nothing more than a "cut-down" version of the IOS located in the router's ROM (Read Only Memory). If you had no Flash card to load the IOS from, you can configure the router to load the RXBoot image, which would give you the ability to perform minor maintenance operations and bring various interfaces up or down.

The RAM

The RAM, or Random Access Memory, is where the router loads the IOS and the configuration file. It works exactly the same way as your computer's memory, where the operating system loads along with all the various programs. The amount of RAM your router needs is subject to the size of the IOS image and configuration file you have. To give you an indication of the amounts of RAM we are talking about, in most cases, smaller routers (up to the 1600 series) are happy with 12 to 16 MB while the bigger routers with larger IOS images would need around 32 to 64 MB of memory. Routing tables are also stored in the system's RAM, so if you have large and complex routing tables, you will obviously need more RAM!

When I tried to upgrade the RAM on a Cisco 1600 router, I unscrewed the case and opened it and was amazed to find a 72 pin SIMM slot where you needed to attach the extra RAM. For those who don't know what a 72 pin SIMM is, it's basically the type of RAM the older Pentium socket 7 CPUs took, back in '95. This type of memory was replaced by today's standard 168 pin DIMMs or SDRAM.

The NVRAM (Non-Volatile RAM)

The NVRAM is a special memory place where the router holds its configuration. When you configure a router and then save the configuration, it is stored in the NVRAM. This memory is not big at all when compared with the system's RAM. On a Cisco 1600 series, it is only 8 KB while on bigger routers, like the 2600 series, it is 32 KB. Normally, when a router starts up, after it loads the IOS image it will look into the NVRAM and load the configuration file in order to configure the router. The NVRAM is not erased when the router is reloaded or even switched off.

ROM (Read Only Memory)

The ROM is used to start and maintain the router. It contains some code, like the Bootstrap and POST, which helps the router do some basic tests and boot up when it's powered on or reloaded. You cannot alter any of the code in this memory as it has been set from the factory and is Read Only.

The Flash Memory

The Flash memory is that card I spoke about in the IOS section. All it is, is an EEPROM (Electrically Erasable Programmable Read Only Memory) card. It fits into a special slot normally located at the back of the router and contains nothing more than the IOS image(s). You can write to it or delete its contents from the router's console. Usually it comes in sizes of 4MB for the smaller routers (1600 series) and goes up from there depending on the router model.

The Configuration Register

Keeping things simple, the Configuration Register determines if the router is going to boot the IOS image from its Flash, from a tftp server, or just load the RXBoot image. This register is a 16-bit register; in other words, it has 16 zeros or ones. A sample of it in hex would be the following: 0x2102, which in binary is 0010 0001 0000 0010.

We hope you found this Cisco certification article helpful. We pride ourselves on not only providing top notch Cisco CCNA exam information, but also providing you with the real world Cisco CCNA skills to advance in your networking career.
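The register arithmetic is easy to check from any shell. The sketch below assumes the commonly documented layout in which the low four bits form the boot field (0x0 = ROM monitor, 0x1 = RXBoot, 0x2 through 0xF = boot an IOS image normally); treat that mapping as a study aid rather than a substitute for Cisco's documentation.

$ reg=0x2102
$ for i in $(seq 15 -1 0); do printf '%d' $(( (reg >> i) & 1 )); done; echo    # print the 16 bits, most significant first
0010000100000010
$ printf 'boot field: 0x%X\n' $(( reg & 0xF ))                                 # the low four bits
boot field: 0x2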
Source: https://www.certificationkits.com/cisco-router-basics/
Spanish I A - Unit 3 Study Guide

Instrucciones: Complete this study guide as you complete the unit. Study the notes from each lesson before quizzes and tests. Page numbers are in parentheses for each question. For vocabulary tables, fill in the blanks with the missing Spanish or English word and take notes on use and pronunciation. To study before quizzes/tests, use another piece of paper to cover up the English side and quiz yourself, and then use the piece of paper to cover the Spanish side and quiz yourself.

el amigo, la amiga

Fill in the blank with the best vocabulary word. "Me gusta ir a la escuela. Me gusta estudiar. Me gusta leer libros. Yo soy ____________ ." (1, 2)
In Spanish-speaking cultures, what are some types of events that families spend time together? (3)
Describe the relationship between families and close family friends. How are close family friends addressed, even though they aren't related to the family by blood? (3)
What is the boulevard in La Habana that is a major tourist attraction? (4)
When was the Catedral de la Habana built? In what architectural style was it built? (4)
What is the Cascada en Río Brazo? In what region of Cuba is it located? (4)

What are you like?
What's his/her name?
Are you . . .?
he/she likes . . .
he/she doesn't like . . .

When salsa music came about in the 1960s, what types of music did it combine? (3)
In the 1970s, what happened to salsa music that helped further define it? (3)
What is cubism? Who cofounded it? (4)
Describe the art style of Fernando Botero. (4)

a, an (feminine)

Do we use the word "muy" before or after the adjective? (2)
What are two ways you learn in this lesson to say "the" in Spanish? When do you use each one? (3)
Complete the phrases with the correct form of the word "the" in Spanish (3):
What are two ways you learn in this lesson to say "a" or "an" in Spanish? When do you use each one? (3)
Complete the phrases with the correct form of the word "a"/"an" in Spanish (3):
What countries participate in the Pan American Games? Are European countries included? (4)
Describe how the Pan American Games were started. When did people start to talk about having something like the Pan American Games? When did the Games actually start? (4)
What are some sports that are in the Pan American Games but not the Olympics? (4)
Where will the 2015 Pan American Games be hosted? (4)
What theory did Charles Darwin develop through study on the Galápagos Islands? (5)
What type of islands are the Galápagos that makes them a particularly harsh environment? (5)

Fill out the chart to describe the general rule for adjective agreement found on page 2.
The adjective ends in –o when the noun is _____________ and ________________ .
The adjective ends in –a when the noun is _____________ and ________________ .
The adjective ends in –os when the noun is ____________ and ________________ .
The adjective ends in –as when the noun is ____________ and ________________ .

Fill in the correct form of the word "reservado" according to the phrase. Change the ending according to the above chart when necessary (2):
La chica _____________________
Un amigo ___________________
Los chicos ____________________
Las amigas ___________________

Do adjectives in Spanish typically come before the noun or after the noun they modify? (3)
What are some things that you can do in Punta del Este, Uruguay? What can you do outdoors, for artistic interests, and at night? (4)
What is important about María Nsué Angüe as an author? What is culturally important about her novel Ekomo?
What country brought the game of dominoes to the New World? (4)
Describe the cultural relationship between playing dominoes, the past, and the present. (4)
What type of animals do the Cadejos look like? (5)
What is the purpose of the white cadejo? (5)
What does the legend of the white and black cadejos represent? (5)
Source: https://docs.com/danielle-poppell/1057/unit-3-study-guide
Thanks to user-friendly distributions like Ubuntu, more people are running Linux than ever before. But many users stick to the GUI and point and click their way through tasks, missing out on one of the key advantages of Linux: the command line. The command line interface is the most efficient and powerful way to interact with Linux; by typing commands, users can quickly move files, install new packages, and make complex tasks easy.

The Linux Command Line is a complete introduction to the command line. Author William Shotts, a Linux user for over 15 years, guides readers from their first keystrokes to writing full programs in Bash, the most popular Linux shell. The book's extensive coverage tackles file navigation, environment configuration, command chaining, pattern matching with regular expressions, and much more.

"The command line is like a window into Linux," said No Starch Press founder William Pollock. "Strip away the GUI and you're in control of your machine. The difference is kind of like driving a stick versus an automatic. The automatic is great for shepherding the family around town, but the stick puts you in control of that souped up sports car."

Among the command line's many features, readers will learn how to:
- Create and delete files, directories, and symlinks
- Administer their system, manage networking, and control processes
- Use standard input and output, redirection, and pipelines
- Edit files with Vi and write shell scripts to automate tasks
- Slice and dice text files with cut, paste, grep, patch, and sed.
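As a small taste of the pipelines and redirection the book covers, here is an illustrative one-liner; the log file path is just an example and will vary by distribution.

# count the most frequent error messages in the system log and save the top ten to a file
$ grep -i error /var/log/syslog | cut -d' ' -f5- | sort | uniq -c | sort -rn | head > top-errors.txt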
Source: https://www.helpnetsecurity.com/2012/01/11/the-linux-command-line/
If you're one of the 175 million Pandora users, then you have surely experienced the excitement of having the Internet's most popular radio station introduce you to a brand-new artist or song. While it may seem like magic, there is a perfectly logical explanation behind Pandora's ability to seemingly read your mind and know your taste in music. The true magic behind Pandora lies hidden in the numbers and data collected from music analysis, personalization, and the music delivery methods it uses.

The Music Genome Project – A Mind Reader for Music

Starting with the analysis of the raw data, musicologists undergo a lengthy process analyzing the distinct characteristics of each piece of music. Pandora's Music Genome Project looks at more than 450 attributes in order to create a musicological "DNA" for each track, including melody, harmony, instrumentation, rhythm, vocals and lyrics, to name a few. It states on Pandora's website, "the Music Genome Project's database is built using a methodology that includes the use of precisely defined terminology, a consistent frame of reference, redundant analysis, and ongoing quality control to ensure that data integrity remains reliably high. Pandora does not use machine-listening or other forms of automated data extraction."

In 2012, Pandora's library had over one million tracks by more than 100,000 artists. When you consider that this categorization is done manually, the scale of the project becomes almost overwhelming. The Music Genome Project is the largest musical categorization process of its kind. However, what makes Pandora unique and popular is the ability to personalize its music delivery. A user creates a station from a "seed" such as an artist, track, or genre. The Music Genome process then begins finding new songs of the same "DNA" and further personalizes itself as a user starts giving music a "thumbs up" or "thumbs down."

In 2012, users created over 1.6 billion unique stations, each personalized by one of the 175 million registered members. The "thumbs up" and "thumbs down" feedback is invaluable. Beyond the benefit of personalized stations, Pandora is able to take that feedback and use it to enrich the Music Genome Project, allowing Pandora to curate better stations based on its listeners.

Delivering Music Everywhere You Go

Pandora is the largest Internet-based radio station, capturing more than a 70 percent market share in Internet radio listening. In January 2013, Pandora owned an eight percent share of the total U.S. radio market, delivering 1.39 billion hours of music. In 2012, Pandora users listened to 13 billion hours of music. That's the equivalent of 1.5 million years of straight music listening. Of the staggering 13 billion listening hours, 75 percent of the music delivered by Pandora was through mobile and other connected devices. Pandora just recently announced that it has over 1,000 partner integrations – 760 of them being consumer electronic devices such as phones, TVs, Blu-ray players, etc. Pandora is also available in 85 new car models and 175 different aftermarket car radio devices.

In order to maintain the high performance in delivery for each user, Pandora relies heavily on a caching system to help deliver its most popular tracks. Aaron Porter, Pandora's Director of System Administration, explained that the growing popularity of Pandora presented challenges of scalability and reliability with this caching tier. At first, Pandora loaded its servers with RAM to ensure a quick and quality experience for the end user. Scalability, however, became extremely difficult with this approach. Pandora turned to Fusion-io and its ioDrive platform, allowing it to use flash memory as a caching tier. "The ioDrives perform as well as our RAM caches, but offer 10 times the capacity per server," said Aaron. "Our total frequently-accessed music cache now holds 10 times the songs it used to, which both enhances existing user experience and gives us plenty of headroom for future growth."

With the increase in capacity and performance delivered by the flash-based servers, Pandora was able to decrease its overall server footprint by 40 percent, allowing it to slow down its scale-out plans and receive an almost instant ROI from the flash. You can learn more about Pandora's experience with flash memory in this case study by Fusion-io.

Scaling Users and Scaling Performance

It's easy to see how impressive Pandora's technology is when it comes to serving up its music library. But even more astounding is seeing how the company is capable of handling database demands as they continue to add music to their library, refine their personalization algorithms, and grow their user base. Despite these increasing demands, Fusion-io's flash-based memory tier has helped slow Pandora's hardware scale-out. It will be interesting to see how Pandora's continued innovations inside the datacenter, delivering higher performance and reduced energy consumption, will allow the company to enhance its magical customer experience.
Source: https://www.hpcwire.com/2013/02/18/scaled_out_music_scaled_down_infrastructure/
For my security concentration last semester I took an interesting course on the principles of cryptography. My professor, Dr. Shouhuai Xu, is a huge crypto enthusiast and has published many articles and papers on his experiments that I have found very interesting. This particular paper discusses memory disclosure attacks and how easy it is to acquire private keys from allocated as well as unallocated space in memory. Cryptography is based on the assumption that the key should be kept secret, and in this paper he explains how the "secret" keys of OpenSSH and Apache servers are easily compromised through data recovery in memory. Really cool stuff, a worthy read.

Cryptography has become an indispensable mechanism for securing systems, communications and applications. While offering strong protection, cryptography makes the assumption that cryptographic keys are kept absolutely secret. In general this assumption is very difficult to guarantee in real life because computers may be compromised relatively easily. In this paper we investigate a class of attacks, which exploit memory disclosure vulnerabilities to expose cryptographic keys. We demonstrate that the threat is real by formulating an attack that exposed the private key of an OpenSSH server within 1 minute, and exposed the private key of an Apache HTTP server within 5 minutes. We propose a set of techniques to address such attacks. Experimental results show that our techniques are efficient (i.e., imposing no performance penalty) and effective -- unless a large portion of allocated memory is disclosed.

Protecting Cryptographic Keys From Memory Disclosure Attacks
Source: https://www.ibm.com/developerworks/community/blogs/242fafe4-766c-4c93-bb7d-3d2a5ee1cbd6/entry/memory_disclosure_attacks_do_you_know_where_your_keys_are2?lang=en
The device looks like a small piece of carry-on luggage, but it has a more important job than carrying a toothbrush, deodorant and a couple of pairs of underwear. The suitcase-size device is a microwave transmitter designed by two U.S. government agencies to help rescue workers find living victims buried in rubble after disasters such as earthquakes, floods or bombings.

The groundbreaking technology, called FINDER or Finding Individuals for Disaster and Emergency Response, uses microwave signals to identify the breathing patterns and heart beats of disaster victims buried in rubble, and has the potential to be one of the "biggest advances in urban search and rescue in the last 30 years," said John Price, program manager of the First Responders Group at the U.S. Department of Homeland Security's Science and Technology Directorate.

The suitcase device sends a low-power microwave signal into rubble to look for heart beats and breathing patterns, and rescue workers see readouts on a tablet-size Panasonic Toughbook controller. The reflections of the microwave signal can show tiny movements in rubble piles, said Jim Lux, FINDER task manager at the Communications Tracking and Radar Division at NASA's Jet Propulsion Laboratory. The technology is based on NASA tools to measure movements of objects in space and ocean levels, Lux said.

FINDER can find living victims buried under 30 feet of crushed materials or behind 20 feet of solid concrete, and the device can distinguish between humans and animals, based on heart rates and breathing patterns, officials said.

DHS and NASA have been developing FINDER for more than a year, and this week, they tested a prototype at an urban search and rescue training site in Lorton, Virginia, near Washington, D.C. Rescue workers from search-and-rescue team Virginia Task Force 1 and the Fairfax County, Virginia, Fire and Rescue Department were able to find a woman hidden in a pile of concrete rubble within minutes. Finding disaster victims quickly "greatly increases their chances of survival," Price said.

In previous tests of prototypes at the Lorton training center, rescue workers gave NASA and DHS some "painful" but necessary feedback, Lux said. NASA and DHS plan to make FINDER available to search and rescue teams worldwide when it's fully tested, Lux said. The agencies are already getting suggestions from the public on other ways that the technology can be used, with a 9-year-old from India emailing Lux some suggestions recently, he said.

Grant Gross covers technology and telecom policy in the U.S. government for The IDG News Service. Follow Grant on Twitter at GrantGross. Grant's email address is [email protected].
Source: http://www.cio.com/article/2382220/government-use-of-it/suitcase-size-device-may-help-save-lifes-of-disaster-victims.html
A few days ago, a critical bug was found in the common OpenSSL library. OpenSSL is the library that implements the common SSL and TLS security protocols. These protocols facilitate the encrypted tunnel feature that secure services -- over the web and otherwise -- utilize to encrypt the traffic between the client (user) and the server.

The discovery of such a security bug is a big deal. Not only is OpenSSL very common, but the bug that was found is one that can be readily exploited remotely without any privilege on the attacker's side. Also, the outcome of the attack that is made possible is devastating. Exploiting the bug allows an attacker to obtain internal information, in the form of memory contents, from the attacked server or client. This memory space that the attacker can obtain a copy of can contain just about everything. Almost.

There are many essays and posts about the "everything" that could be lost, so I will take the optimistic side and dedicate this post to the "almost". As opposed to other serious attacks, at least the leak is not complete and can be quantified, and the attack is not persistent. I will focus on the server as the target of the attack.

Say an attacker exploits the newly discovered bug, and starts dumping out contents of memory addresses from your server. The bug allows the attacker to exfiltrate 64K at a time, but multiple iterations are possible to exfiltrate as much data as needed. This dump can contain anything that is stored in memory. The memory involved is the process space of the application or web server that happened to call OpenSSL. What is there as loot for the attacker? In essence, there is all the state information of the application, including that of the web server process, if the application is web-based. The actual state information depends on what the application or web server is doing, but at a minimum it contains:

This is not to be taken lightly. This is a lot. Notwithstanding, let us see what was not put at risk. There are three resources that are clearly left out of scope for the attacker:

First and foremost, install the necessary updates so as to close the tap.

Second, comprehend the scope of the leakage that might have occurred. Unfortunately, there is no way to tell if your server was hacked and to what extent, so assume it was and enumerate the data that might have been compromised. This data, which we refer to as "session data", consists of all data that is served, processed or obtained by the web (or other application) process that calls OpenSSL. This includes all its inputs that come over the web (or other bearer), all data that goes out, and all data that may be processed in between by the same process that calls OpenSSL (e.g., the web server). What is not in the scope of leaked data is all data that may be processed but is not served, or otherwise made available to the process that uses OpenSSL. For example, if the application is a web application, then data that is neither sent nor received over the web, and which is not processed by the web server, would never find itself in the process memory space of the web server, and is thus safe. Also, other data on the server, such as files in home directories, is safe.

The private key of the web server is also at immediate risk, but in most cases it accounts for a change in quantity, not in quality, of the leaked data. In other words, this key will allow an attacker to decipher more sessions, so the attacker can leak not only session data during the attack, but other session data as well, yet it is still session data by the definition above, which we already considered to be entirely lost.

Passwords may be another issue. In most cases, however, stolen passwords only account for yet more session data that can be accessed by impersonating the user, so it is still in the sense of "more of the same". Obviously, if the passwords that your application uses are also used for granting access to other assets -- those may be at risk as well. Private keys of users, if used by the application, are not at risk, because they are never made available to the server in the first place.

To summarize, in the usual case, the maximum leakage that could have occurred consists of all data served or processed by the process calling OpenSSL. Data of other applications and back-end data are safe.

Third, return to secure state. We got lucky with the "heartbleed bug" in that it is passive and cannot cause your system to be "owned", or to be contaminated in a way that calls for a complete re-install or serious scrubbing. After installing the patch to OpenSSL, you need to generate a new key-pair for the OpenSSL deployment, get it certified if your previous key was, revoke the previous key, and change application passwords that might have been leaked. Once this is done, aside from the data that might have been leaked forever, you can consider the incident to be behind you.

If you run an application server utilizing OpenSSL which was subject to attack, a lot of data might have been stolen, both in terms of application data and in terms of credentials. However, the only bright side is that, as opposed to with other serious attacks:

Comments

"The integrity of the system. This is probably the most important point. The attack is passive in the sense that it gets data out, but cannot change anything in the system." What if the exfiltrated data allows a hacker to then log in to the server and install a rootkit, or exploit some other installed software to do this? In this case, all bets are off…

The sentence that follows the one you quoted reads: "An obvious exception would be if a password that was captured happens to open the door to other attack venues."

Hello, thank you for this article. I would like to draw your attention to something else. The sad fact is that I understand some of this and still would not know how to change keys. Worse is that in my neighborhood I am the Nurde .. So I would like to ask if anyone would be willing and able to give a "How to" instruction to all the oblivious users that really do not know how to help themselves with this?

Replacing the SSL keys (yes, you need to replace the keys and the certificates, not just the certificate) is done by repeating the same process of installing SSL in the first place. The technicality depends only on your web server and OS. Search for "set up ssl apache" or "set up ssl iis" to get hundreds of useful guides. A good one for SSL on Linux/Apache is at: http://www.htmlgoodies.com/beyond/security/article.php/3774876

I was talking to a colleague last week who said that their IT security co-workers were examining their logs and traffic over the last couple of weeks and have not seen any attempts to exploit the OpenSSL issue. Is the bug too new, such that black hats have not yet had the time to write the code to exploit it? If what I've been told is true, does anyone have suggestions why we're not hearing about OpenSSL attacks yet?

The bug is not new. It has been there for more than two years. The only question is whether it was known to black hats before it was "officially" discovered or not. It is not trivial to determine if it was exploited or not on a given system, because the typical HTTP logs Apache keeps do not show heartbeat packets. Here you can find possible evidence of past exploitation:
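To make the key-replacement step from the reply above a little more concrete, the core OpenSSL commands typically look something like the sketch below. The file names are placeholders, the certificate request still has to go to your CA, and your web server's SSL configuration will differ.

$ openssl version          # confirm the patched OpenSSL build is the one in use
$ openssl req -new -newkey rsa:2048 -nodes -keyout server-new.key -out server-new.csr
# submit server-new.csr to your certification authority and install the issued certificate,
# reconfigure the web server to use the new key and certificate, then revoke the old certificate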
Source: https://www.hbarel.com/analysis/itsec/openssl-heartbleed-bug
Technology is meant to make life easier for those who take advantage of it, but an advantage for some is a disadvantage for others. No one can deny that the Internet has made life easier, but a recent global United Nations report found that only 3 percent of Web sites are accessible to persons with disabilities. According to the 2002 U.S. census, there are approximately 51 million people with disabilities in America.

The International Technology and Persons with Disabilities conference held by California State University, Northridge brought people with varying degrees and types of disabilities together to discuss the issues affecting people with disabilities -- whether it be preparing for an emergency or ensuring accessible voting -- and what part technology plays in helping and/or hindering these issues.

When there is an emergency, people need immediate access to vital information. Accounting for people with disabilities in emergency situations means closed captioning of information broadcasts, audio descriptions of visual images such as maps, and special considerations for those with mobility issues.

In 2004, the Federal Communications Commission (FCC) held hearings to determine the effectiveness of the Emergency Alert System. Part of these hearings, which were held in response to the Sept. 11th attacks, was to verify if people with disabilities were able to receive emergency information. The Rehabilitation Engineering Research Center for Wireless Technologies (Wireless RERC), which researches and works to improve access to municipal and other wireless issues, recommended that the FCC improve access to Emergency Alert systems by upgrading technology.

"The RERC emphasized to the FCC the importance of providing parity of service with respect to emergency communications and expand[ed] TRS [Telecommunications Relay Service] requirements so as to allow text messages to become a regular part of emergency communication services," explained Nathan Moon, a research specialist with the Wireless RERC. Advantage should be taken of assistive technologies such as TRS and STS (speech-to-speech services for those with speech disabilities). Pairing them with wireless will bring them to the public, Moon said.

Wireless's capability to reach millions of people was taken into account when the RERC made recommendations to the FCC again in 2004 regarding the future of the Emergency Alert System. Expanding rules to cover new digital technologies and devices "essential for providing emergency information to people with disabilities;" encouraging wireless manufacturers to build TTY capabilities into products; and "more comprehensive planning and coordination among state and federal agencies and focused on the benefits of digital and alternative technologies for people with disabilities" were some of the recommendations made.

According to the results of a policy Delphi conducted by Wireless RERC between October 2004 and March 2006 regarding "Use of and Access to Wireless Technologies by People with Disabilities," device incompatibility or poor interoperability was cited as the most important technology issue. "A little bit of a push will make a big difference," remarked Paul M. A. Baker, also of the Wireless RERC.

Help America Vote Act

Although not a life-threatening problem, voting accessibility is a right for all citizens in America. Ensuring that those with disabilities are given the opportunity to cast a ballot on Election Day makes certain that all people have a voice in their government. The November 2006 mid-term elections were the first federal elections to employ voting system improvements mandated by the Help America Vote Act (HAVA).

"The primary purpose of the Help America Vote Act is to provide funding to replace punch card voting systems," explained Dr. Sarah J. Swierenga, professor at Michigan State University and director of the Usability and Accessibility Center. "Part two of HAVA, however, set aside funding for local governments to assure access for individuals with disabilities."

It is believed that the move to electronic voting machines, including touchscreen, will improve access for all voters, including those with disabilities. But some debate has arisen around this issue of electronic voting machines. One such controversy came in the November 2006 Sarasota County, Florida, 13th Congressional district race, which had an extreme undervote of 12 to 15 percent compared to the rest of the ballot. This district used a touchscreen voting machine. Swierenga presented the findings of a study into the reasons for the undervote, as conducted by the Usability and Accessibility Center.

On the first page of the ballot, obvious color bars (red and blue) are used to designate titles of sections, and pale grey lines are used to separate the individual races. When moving on to page two, the eye is automatically drawn to the title in blue, which was the gubernatorial race. Above this section, barely noticeable, is the 13th Congressional district race. "In this case," explained Swierenga, "usability testing revealed that the heading for the [13th Congressional district] race probably was not prominent enough." It was in the same place that the ballot heading was on the first page, which "may cause voters to ignore it, assuming subconsciously that it's the same thing." The ballot was not consistent from page to page, making it difficult for voters, including those who have visual impairments.

Swierenga did make recommendations for improving voting accessibility for all voters:
- Don't assume that the voters are familiar with technology
- Headings and instructions should be active voice, in simple, declarative sentences, using plain language
- There should be one race per page
- Make sure there is contrast between backgrounds and text

With the many issues affecting the daily lives of those with disabilities, adding technology to the gamut may not amount to much. But as technology becomes even more integral to life, it is vital to make reasonable accommodations.
Source: http://www.govtech.com/policy-management/Technology-Can-Improve-Lives.html?page=2
A group of researchers from the University of California, Berkeley, claims to have achieved 99 percent accuracy when using brainwave signals instead of passwords for user authentication. The timing was right, they say, because while EEG data was in the past captured with invasive probes, this data can now be collected using "consumer-grade non-invasive dry-contact sensors built into audio headsets and other consumer electronics."

"We briefed subjects on the objective of the study, fitted them with a Neurosky MindSet headset, and provided instructions for completing each of seven tasks. As the subjects performed each task we monitored and recorded their brainwave signals," the researchers explained in their report.

The tasks that the fifteen subjects were instructed to do were to focus on breathing, imagine moving a finger up and down in sync with breathing, imagine that they are singing a song, count (in their mind) the number of boxes in a grid that were of a specific color, imagine moving their body to perform a motion related to a sport, choose and think about a pass-thought (a concrete mental thought), and so on.

After repeating the seven tasks five times per session, the researchers had recorded 1050 brainwave data samples after only two sessions. The data was then repeatedly compressed in order to end up with a "one-dimensional column vector with one entry for each measured frequency" against which later authentication attempts would be compared.

The testing led them to conclude that using brainwaves for authentication is both feasible and extremely accurate, but that tracing a brainwave signal back to a specific person would be much too difficult.

By asking questions about the enjoyability of the specific tasks and by taking stock of the difficulties that the subjects had remembering some of the things they chose to think about during the tests, the researchers also discovered that users tend to better remember secrets that they come up with themselves (song, sport, pass-thought) instead of secrets they are forced to select from a menu.

"In comparing the results of the usability analysis with the results of the authentication testing, we observe that there is no need to sacrifice usability for accuracy. It is possible to achieve accurate authentication with easy and enjoyable tasks," they pointed out.

Still, there are many questions to be answered: can an attacker fool the authentication system by performing the same customized task the user has chosen for himself, is the solution scalable, and so on. But the researchers believe that there could be a future for using EEG signals for all kinds of things in a number of industries, including computing.
Source: https://www.helpnetsecurity.com/2013/04/16/pass-thoughts-as-a-solution-to-the-password-problem/
Secure storage of data has always been essential for any organisation, of whatever size. In the past this involved accurate filing of paper records, and then keeping the physical archive secure -- whether it was simply locking a filing cabinet, or guarding an entire building. Modern business technology may have virtualised much of this function, but the principle remains the same: preserving an accurate record of business activity, and ensuring that it is readily accessible to those who require it.

What has changed, however, is the regulatory environment within which many organisations now operate. Corporate governance legislation demands that certain information is retained securely, particularly when it relates to the financial management of the company and the manner in which it interacts with customers. Furthermore, companies are required to manage their operational risks effectively through business continuity, which also relies on essential information being securely stored. As a result of recent high-profile cases of infringements, the regulators have become more vigilant, focusing on preventing any breaches rather than post facto investigations. Secure storage and the protection of stored data have therefore zoomed up the corporate agenda, and organisations need an effective policy for managing them.

There are three elements to any policy: people, processes and technology. It is tempting to focus almost exclusively on the IT, at the expense of everything else, and it is easy to see why. There are numerous technologies available for securing storage that operate at several levels. The data that is being stored can itself be secured through the use of encryption; digital certificates and watermarks; file splitting; or even highly locked-down PDFs that prevent records being tampered with once they have been created and saved.

In addition, the storage systems themselves can be protected. A new generation of wide area and caching systems can be used in conjunction with encryption technologies to preserve data when at rest, in transit or at presentation. Record management systems and storage-specific WORM (Write Once Read Many) products are also available to enhance archiving and storage security.

But, no matter how intelligent and sophisticated the technology, it is still subject to the whims of users. It's much harder to change human behaviour than it is to install systems. Ignoring the other two elements of the policy -- the people and the processes -- will inevitably compromise the capability of the technology to protect stored documents, databases and other information.

Any policy must therefore take into account the way that employees currently work and should not constrict their ability to carry out their day-to-day tasks by introducing overly complicated procedures and unnecessary red tape. People will simply find the easiest route to carrying out their job, and if that means bypassing the security policy then that is what the majority will do. If major behavioural changes are required, then these need to be carefully planned and gradually introduced.

Consider this scenario: a busy senior executive gives his PA his password to check his email, and with it all his access privileges to stored data. It's not an uncommon event, but it does present a potential security risk. Even if a policy forbids this, the chances are it will still happen, simply because it is the most convenient way for the senior executive to fulfil his role.

When it comes to writing the policy and considering the procedures required, the business needs to answer several questions. First of all: what gets stored? Clearly it is impractical to store everything -- indeed it runs the risk of breaching either the Data Protection or the Human Rights Acts. So choices need to be made.

Organisations also need to ask themselves where the information will be held. If only the essential documents are stored, the implication is that they will need to be retrieved at some point. Accessing them in the future is going to be much more time consuming and inefficient if their whereabouts isn't planned and recorded -- not knowing where corporate knowledge is held is just as dangerous as not having good data security policies.

Which leads to the next question: what happens to the data once it has been stored? Who is going to look at it? And, equally important, who is not? Security is all about maintaining the confidentiality, integrity and availability of information and proving non-repudiation. All the security technology in the world comes to nothing if there is no way of controlling who can access the archives. And, with the increased need for reliable audit trails in mind, the enterprise also needs to prove who has, and hasn't, been viewing saved records and, indeed, who has made copies.

Organisations need to address this issue from two angles: classifying the information, and identifying the user. Document management and identity management technologies are therefore two of the most crucial elements for any storage security policy.

Most businesses underestimate how much data they produce: technology, especially email, has enabled unprecedented levels of duplication and filing anarchy. Unless a company has been exceptionally meticulous in its IT use, there is usually little or no knowledge of what information has been created. Document management procedures will identify which records, files and data need to be secured, and how long they need to be saved for. Identifying and classifying the information involved is the first step to ensuring that only authorised personnel have access to it.

The next is to allocate access privileges to individuals, based on who they are and the role they fulfil. User authentication, based on comprehensive identity management, therefore plays an essential role in keeping storage secure and will be able to provide the three As of any security measures: authentication, authorisation and audit. Furthermore, by making it easier to integrate data storage with desktop access, identity management assists the organisation to fulfil the first criterion of its security policy: making it user-friendly.

The final consideration for the storage policy is that it must be communicated to the user group. There's no point in having a carefully drafted plan of action if no one knows about it. Education is essential, and is the responsibility of not just the IT or risk management team, but also business managers and HR. But with everyone involved, and an effective programme of communication in place, an appropriate policy for secure storage will ensure that investments made in data encryption and the like will be maximised, and that an organisation need not fear a visit from the regulators.

Electronic data is now essential for modern business, and information management and security policies form the instruction set by which it will be used. This in turn forms one of the key foundations for best practice business operations.
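The file-level encryption of stored records mentioned earlier can be as simple as the sketch below, which bundles an archive set and encrypts it before it reaches the storage tier. The paths are hypothetical, and the -pbkdf2 option assumes a reasonably recent OpenSSL build.

$ tar -czf records-2005.tar.gz /data/records/2005/        # bundle the records to be archived
$ openssl enc -aes-256-cbc -salt -pbkdf2 -in records-2005.tar.gz -out records-2005.tar.gz.enc
$ shred -u records-2005.tar.gz                            # remove the cleartext copy once encrypted

The passphrase prompted for by the second command then has to be managed under the same people-and-process controls the article describes, which is exactly why the policy matters as much as the tooling.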
<urn:uuid:af1ca134-d2ce-40aa-9a15-52966f770563>
CC-MAIN-2017-09
https://www.helpnetsecurity.com/2005/09/08/popular-policies-keeping-storage-secure/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171463.89/warc/CC-MAIN-20170219104611-00207-ip-10-171-10-108.ec2.internal.warc.gz
en
0.953813
1,293
2.703125
3
Wendy Harman, social strategy director for the American Red Cross, monitors social media using the tech platform Radian6 to gain real-time insight during disasters. Photo by David Kidd.
During the Deepwater Horizon oil spill, citizens used social media to share health effects, odor, smoke, whether wildlife was affected, and whether there was oil onshore or, as shown here, in the water. Photo by Kris Krug/Flickr.
During the spill, the open source platform Ushahidi harvested that social media information and mapped it.
<urn:uuid:f46cf995-a706-41e7-9a92-4dc8f69da5a4>
CC-MAIN-2017-09
http://www.govtech.com/magazines/gt/Government-Technology-January-2013.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171775.73/warc/CC-MAIN-20170219104611-00383-ip-10-171-10-108.ec2.internal.warc.gz
en
0.945176
109
2.609375
3
Power plants deliver usable energy to the world and are among the top-rated, high-risk facilities. Conservative and regimented in tried and proven SOPs, these energy generators operate on strict day-to-day practices that ensure the security and safety of people and assets. As the threat of terrorist attacks becomes more real, governments, energy and electric organizations, and plant owners must review and increase security measures. Depending on the plant and its location, threats can include syndicated theft and extreme environmental activism. "You must understand your adversary, to define, design and plan your security system," said Javier Prieto, Security Leader for Spain and Portugal, Honeywell Building Solutions.

Most power plants today follow regulations and best practices for critical infrastructure. However, system guidelines are broad, leaving actual equipment specifications to be agreed upon by plant owners, integrators and the operations team. Planning and installation must be carefully considered at the onset to avoid using equipment unfit for the environment. The type of plant and its location make every solution unique.

Current systems installed at power plants are analog and becoming obsolete. Introducing digital systems gradually brings more hybrid systems to the fore. However, integration is challenged by a lack of support from existing equipment manufacturers, some of whom are no longer in business. Information management and sharing is still at a basic level and often done manually. As most power plants are privately owned, external information is shared through e-mail distribution lists and hotlines. Within the plant, information is distributed from the command and control center. SCADA systems dominate, but their mapping capabilities are rudimentary, even though mapping is crucial for response during an emergency. Fully integrated platforms, such as sophisticated CMS or PSIM, have yet to reach power plants. When existing systems break down, outdated parts are becoming more difficult to find. Users are gradually moving towards IP-based systems, which offer more flexibility, scalability and cohesive information management. The time is ripe for change.

Regulations and Standards

Excluding nuclear plants, no regulations govern power plant security, so best practices and recommendations are followed. However, government organizations such as the US Nuclear Regulatory Commission (NRC), the North American Electric Reliability Corporation (NERC) and the US Department of Homeland Security have been actively involved in standardizing requirements for the energy sector. "The U.S. pioneered nuclear power stations, and many countries around the world, such as Japan, Mexico and Canada, follow American standards," said Hagai Katz, Senior VP of Marketing at Magal S3.

NERC requires power plants to look outside service territories and establish security principles based on the electric grid's reliability, which requires visibility at a higher level. "This means that security technologies applied should be designed with a 'protection-in-depth' philosophy — to deter, detect, assess and respond to an incident," said Dale Zahn, VP of Business Development at Intellibind Technologies. The corporation holds quarterly Critical Infrastructure Protection Committee meetings with representatives from the federal government, as well as industry representatives from the eight NERC operating regions.
"Representatives have IT, operations or physical security backgrounds, and come from the investor-owned municipal and cooperatives," Zahn said. System guidelines are broad, leaving users flexibility to specify their equipment wants and needs, said Darryl Polowaniuk, Manager of Security and Fire Safety Solutions at Johnson Controls. The market is large and relatively untapped. Power companies will need to spend US$1.4 trillion over the next 22 years to meet power demands and modernize the transmission and distribution grid, according to the "Improving Power Plant Performance Through Technology Upgrade" white paper by Honeywell Process Solutions. Most power plants have been using the same security systems installed 10 or 20 years ago, making refurbishment or replacement a priority. "In Europe, power plants are increasingly implementing security systems with 12 to 15 percent growth," Prieto said. Global growth is slightly lower, averaging 6 to 8 percent. In India, there are one or more plant projects underway in each state. Some are old plants, with little surveillance. "There is a big opportunity to install video surveillance systems in these plants," said Anantharam Varayur, Director of Webcom Information Technology. In South Africa, security makes up at least a quarter of the project's budget. "If security systems are found noncompliant, plants can be fined," said Kevin Pearman, AccountManager, Integrated Security and Building Management Systems, Bytes Systems Integration. Designing and Planning Most project tenders are indiscreet — there can be separate tenders for video surveillance, access control and intrusion detection systems. "At this time, orders can be awarded to multiple vendors, which creates a challenge in integrating the systems," Varayur said. "Customers must take the initiative of putting requirements together at the project onset for a comprehensive tender." Driven by the need to meet local requirements, planning and design is usually standards-driven, Zahn said. Involved parties include representatives from plant operations, engineering, safety, the supply chain, IT, security and plant maintenance. Once needs are determined, an experienced system integrator will be hired to ensure consistency across a fleet of generating stations, involving equipment selection, operation, maintenance and repair of applied technologies. As power plants are often located at remote sites, maintenance for faulty equipment requires long waits for repair technicians. "Sometimes customers actually buy spare parts, including cameras, network switches, encoders, additional servers, monitors and power supplies, to lessen the downtime of a system breakdown," Varayur said. Security managers and their corporate security departments have a vested interest in the final design, as they will likely be stewards of the system upon completion, Polowaniuk said. IT managers also play a crucial role in supplying the network, involved in considerations for bandwidth requirements and redundancy. The type of power plant and its environment impacts security requirements. In general, hydroelectric, coal and fossil fuel, solar and wind plants follow best practices. Security systems at nuclear power plants are doubled or tripled compared to other plants, as they should comply with legislation, Prieto said. For example, all systems at nuclear plants must be redundant, including networks, fences, control rooms and servers. 
In comparison, a solar plant might have a single perimeter solution equipped with cameras and fences, but nuclear plants can have up to three layers of perimeter protection. Coal and other fossil-burning plants in the environmental spotlight must follow procedural detection measures to protect against activists, Polowaniuk said.

Hydroelectric plants typically border large bodies of water, exposing them to more complex risks. "If a terrorist was to strike via a boat coming into the dam, it would be disastrous," said Aluisio Figueiredo, COO of Intelligent Security Systems. Armed military personnel usually patrol seaside or water borders at all times. Cameras equipped with video analytics are necessary to track boats coming into secure areas. "This unique requirement is very common for water dams," Figueiredo said.

For seaside plants, noncorrosive solutions need to be implemented. The salty and moist environment often results in equipment replacement after just one or two years. "Even standards such as IP66 or IP67 are sometimes not enough to protect against corrosion, so special anti-corrosive standards and practices must be used," Katz said.

Power plants located in rural areas with limited natural barriers are simpler to protect, Polowaniuk said. Thermal cameras and radars can be used to survey areas beyond the plant's perimeter. Not so in urban environments, where plants must be careful not to disturb neighboring residences or commercial buildings. For example, strobe lighting and audible alarms could be disruptive, Polowaniuk said. The high foot traffic in cities presents unique challenges. "Security incidents related to conventional delinquency, such as theft, increase for plants located in urban areas," Prieto said. "You cannot use long-range perimeter devices to survey areas beyond your perimeter, which means that other perimeter protection systems need to be considered." Paired with a preference for aesthetics, perimeter security in urban areas can opt for noninvasive systems such as buried cables or decorative fences, Katz said.

Most power plants are dated facilities, with traditional analog systems in place. Systems and parts become obsolete, which makes integration with management platforms difficult. In security, the shift toward convergence is an appealing solution for all high-risk critical facilities, but power plants are adopting it slowly. "The maturity of the market is an issue, and often security managers at power plants, who have been trained and are familiar with traditional systems, are reluctant to switch out existing systems," Prieto said. Aging systems are the most pressing issue, and at some point, when a system is no longer scalable or does not provide adequate protection, it should be replaced, Polowaniuk said.

Most security systems in power plants are stand-alone and manually controlled. For example, if an alarm from the perimeter sounds, a security operator will maneuver a joystick to pan a camera toward the detection zone. "This is the common practice," Katz said. A balance must be struck between manpower and technology: as technology develops and becomes more automated, power plants can save on manpower. Experts agreed that the energy sector, once at the forefront of security technology, is now lagging. "Most of these systems are coming to the end of their useful lives, and next generation power plants will be free to go straight into IP-based systems," said Richard Lack, Sales and Marketing Director at ASL Safety and Security.
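The manual alarm-and-joystick workflow described above is exactly the sort of procedure that automation can take over. The sketch below is purely illustrative: the zone-to-preset mapping, the camera class and the notification hook are assumed placeholders rather than any vendor's API, but it shows the basic pattern of driving a PTZ camera to a preset and alerting an operator when a perimeter sensor fires.

# Illustrative alarm-to-camera automation; all names and mappings are hypothetical.
ZONE_TO_CAMERA_PRESET = {
    "fence-north-02": ("cam-07", 3),   # detection zone -> (camera id, PTZ preset)
    "water-gate-01": ("cam-12", 1),
}

class PTZCamera:
    def __init__(self, camera_id):
        self.camera_id = camera_id

    def goto_preset(self, preset):
        # A real integration would call the video management system or an ONVIF PTZ service here.
        print(f"{self.camera_id}: moving to preset {preset}")

def on_perimeter_alarm(zone, notify):
    """Point the mapped camera at the alarmed zone and tell the operator what happened."""
    camera_id, preset = ZONE_TO_CAMERA_PRESET.get(zone, (None, None))
    if camera_id is None:
        notify(f"Alarm in unmapped zone {zone}: manual assessment required")
        return
    PTZCamera(camera_id).goto_preset(preset)
    notify(f"Alarm in {zone}: {camera_id} repositioned for assessment")

on_perimeter_alarm("fence-north-02", notify=print)

In a real plant the same event would also be written to the alarm log, so the operator still assesses and responds but no longer has to find the right camera by hand.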
<urn:uuid:ed782c9c-dc4c-4316-adb4-da519b22673c>
CC-MAIN-2017-09
https://www.asmag.com/showpost/9303.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173866.98/warc/CC-MAIN-20170219104613-00083-ip-10-171-10-108.ec2.internal.warc.gz
en
0.948441
1,978
2.75
3
A small robot says, "Good morning," and with that one phrase, takes a huge step forward in robot-human cooperation in space, as well as robotic companions. Kirobo, a 13.4-in. tall, 2.2-pound humanoid robot, spoke its first words in space while floating onboard the International Space Station last month. A newly released YouTube video shows the Japanese-built robot talking, in Japanese, in space.

"Good morning to everyone on Earth. This is Kirobo. I am the world's first talking robot astronaut. Nice to meet you," the robot said on Aug. 21. "A robot took one small step toward a brighter future for all."

The humanoid robot arrived at the space station last month aboard a Kounotori 4 cargo spacecraft that lifted off from Japan's Tanegashima Space Center. Kirobo is awaiting Japanese astronaut Koichi Wakata, who is scheduled to arrive on the space station in November to take part in a human-robotics experiment. Wakata and the robot are expected to take part in what will be the first experiment on conversation in space between a human and a robot.

While the experiment could go a long way to helping astronauts feel less disconnected while working in space, it also could further efforts to have astronauts and robots work together. The effort also could speed the development of small companion robots that people could carry in their pockets like smartphones.

"This could be huge for robotics and for space research," said Zeus Kerravala, an analyst with ZK Research. "Long term, I think the plan is to have robots co-exist with humans for long voyages so this would be a good proof point for that."

Whether orbiting Earth or in deep space, having a companion to talk with, even a robotic companion, could be essential. "I would think a connection with anything, including a robot, is better than no connection," said Kerravala. "I imagine, over time, these will look more and more humanoid. Maybe not to the level of Data in Star Trek, but certainly they'll be able to act as part of a space team."

Wakata, a Japanese engineer and a veteran of four NASA space shuttle missions and a long-duration stay on the space station, is scheduled to launch onboard the Soyuz TMA-11M in November. During his mission, he will become Japan's first station commander. Kirobo, which can move its head and arms, stand up and balance on one leg, is expected to help keep Wakata company, having conversations with him and possibly relaying information to him from the control room or ground engineers.

In an interview with Agence France-Presse, a French-based news agency, Kirobo's creator, Tomotaka Takahashi, a roboticist and a leader in the Kirobo project, said he wanted to create a tiny robot that users could carry in their pocket like a smartphone. "By bringing a robot into space, the development of a symbiotic robot is expected to move along much faster," Takahashi said.

Though Kirobo is the first robot to talk on the space station, it is not the first robot to "live" and work there. The space station, which uses several robotic arms to lift bulky cargo and maneuver equipment and spacewalking astronauts (http://www.computerworld.com/s/article/9141196/NASA_Astronaut_rides_robotic_arm_in_successful_spacewalk) outside the station, also is home to Robonaut 2, which also is known as R2. Robonaut 2 is a 300-pound robot designed to use its arms and hands to manipulate tools and to perform cleaning and maintenance jobs on the space station.
The humanoid robot, which arrived on the space station in 2011, is expected to one day work outside the orbiter so astronauts won't have to make as many dangerous spacewalks. This article was originally published at Computerworld.com.
<urn:uuid:dddcc318-2f61-4f18-b553-effe6858f943>
CC-MAIN-2017-09
http://www.computerworld.com.au/article/525783/hello_from_space_kirobo_takes_huge_step_robotics/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170914.10/warc/CC-MAIN-20170219104610-00203-ip-10-171-10-108.ec2.internal.warc.gz
en
0.958748
938
2.84375
3
One of the problems with smartphone apps is that one has no control over where often-sensitive permissions and personal content are stored. While we're allowed a certain amount of input when it comes to downloading and installing an app (agree to the permissions or else), we have no control over where or how all the data is stored. We know that it's probably in the cloud somewhere, but it could be anywhere, even on the phone itself. And each app developer has its own idea about how to handle the stuff. That is a problem for security—not the app developers' but ours. And it doesn't stop at phones. Anyone know where the password for an IoT oven is located, and how securely? The answers are no, and maybe not very.

Here's a solution, though: create your own cloud with all your own personal data in it, and then allow the smartphone apps and fitness bands to access it when they need to pull down or write data. The app developer should be out of the equation, some computer scientists think. The personal data is controlled by the user, who has an interest in its security, not the developer, who may not care much.

"This is a rethinking of the web infrastructure," Frank Wang says in a CSAIL press release. Wang is a student at MIT's CSAIL and is one of the concept's planners. "Maybe it's better that one person manages all their data. There's one type of security and not 10 types of security," he says.

MIT calls its project Sieve. The idea is that all of a user's personal data is encrypted in the cloud. When an app needs to use some of the data, it simply requests it. A decryption key is then sent to the app for the relevant chunks of data. Fall out with the developer, and the user can revoke access. Keys can be re-made at any time. It's a simple idea that could improve security: the party who cares about the security controls it.

One slight hiccup is just how to implement the encryption. It's not as simple as the encryption of a file, transaction, or e-mail, say. In this case each piece of data needs an attribute that only allows decryption if the requester has permissions. That's an amusing and ironic turning of the tables. A name and city, but not a Social Security number or street name, could be an example of the kind of bespoke personal information delivery allowed for one app.

The technique is called attribute-based encryption and is not in itself a problem to implement. The trouble arises because it's slow to encrypt and decrypt, CSAIL explains in its press release. The solution is a kind of lumping of data "under a single attribute," the release says. "For instance, a doctor might be interested in data from a patient's fitness-tracking device but probably not in the details of a single afternoon's run. The user might choose to group fitness data by month," it says.

All very well and good, one might think of the plan. But why would the app developers go for it? Partly to differentiate themselves from others as being more au fait with security, the researchers think. And also, the end user might decide to share certain bits of previously unobtainable, unrelated data—data that he or she now owns. Throwing the dog a bone from time to time, as it were.

This article is published as part of the IDG Contributor Network.
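A rough way to picture the grouping and revocation ideas is sketched below. This is a toy illustration of the access pattern only, not the Sieve implementation and not real attribute-based encryption: data is filed under coarse attributes such as a month of fitness readings, an app is handed keys only for the attributes the user chooses to share, and revoking the app simply means re-keying those attributes.

# Toy model of per-attribute keys and revocation (illustrative, no real cryptography).
import secrets

class PersonalCloud:
    def __init__(self):
        self.attribute_keys = {}   # attribute -> symmetric key (stands in for ABE keys)
        self.records = []          # (attribute, stored blob) pairs

    def store(self, attribute, plaintext):
        self.attribute_keys.setdefault(attribute, secrets.token_bytes(32))
        # A real system would encrypt the plaintext under the attribute here.
        self.records.append((attribute, f"<{plaintext!r} encrypted under {attribute}>"))

    def grant(self, attributes):
        """Hand an app the keys for just the attributes the user agrees to share."""
        return {a: self.attribute_keys[a] for a in attributes if a in self.attribute_keys}

    def revoke(self, attribute):
        """Re-key the attribute so previously issued keys no longer work."""
        self.attribute_keys[attribute] = secrets.token_bytes(32)

cloud = PersonalCloud()
cloud.store("fitness-2016-03", "afternoon run, 5 km")
doctor_keys = cloud.grant(["fitness-2016-03"])   # monthly summary only, not raw day-by-day data
cloud.revoke("fitness-2016-03")                  # falling out with the app ends its access

The coarser the attribute, the fewer expensive attribute-based operations are needed, which is the performance trade-off the MIT team describes.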
<urn:uuid:f28603a4-b0c0-480b-b1a1-e7bf12d999d1>
CC-MAIN-2017-09
http://www.networkworld.com/article/3047821/security/user-controlled-private-clouds-could-help-with-security-think-scientists.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172077.66/warc/CC-MAIN-20170219104612-00255-ip-10-171-10-108.ec2.internal.warc.gz
en
0.943705
777
2.578125
3
Alaska's scale puts USGS mapping systems to the test
By Patrick Marshall - Feb 27, 2014

The U.S. Geological Survey is spearheading an effort to collect high-resolution elevation data for the entire United States that will be used to create 3D maps for gauging flood risks, monitoring the health of crops and measuring the biomass of forests. The program for gathering the data, called 3DEP for the 3D Elevation Program, relies on an advanced version of LiDAR (light detection and ranging) that measures elevation accurately and offers greater resolution than earlier technologies.

But while all of the continental United States will be included in 3DEP's LiDAR data set, USGS has to turn to a different set of tools to capture data on the rugged contours of Alaska. Because of its size, remoteness and nearly constant cloud cover, Alaska is being scanned with IfSAR, or interferometric synthetic aperture radar. Where LiDAR can't penetrate cloud cover, IfSAR can. And IfSAR can be used effectively in jets flying at higher altitudes than is optimal for LiDAR scanning. That's critical in reaching remote areas that are far from refueling facilities.

"The IfSAR is less accurate," said Mark DeMulder, chief of USGS's National Geospatial Program. "Generally you get one elevation value for every 5 meters. The vertical accuracy is about 1 meter rather than just over 9 centimeters. But even at that level, that accuracy is so much better than what has been available for Alaska in the past. It's a tremendous improvement."

The current statewide base maps for Alaska were created around 1960 and at a lower resolution than the maps made over the continental United States. Improved data holdings for Alaska are required to meet current safety, planning, research and resource management standards, according to USGS.

Currently, DeMulder said, all public agencies combined are collecting elevation data on about 5 percent of the country each year. "Our goal is to move that up to 11 percent to 12 percent per year," he said. And then begins the job of refreshing the data.

"The 3DEP initiative is designed to establish a national baseline of LiDAR data," said Larry Sugarbaker, a senior adviser at the USGS National Geospatial Program, which manages 3DEP. "For events such as Superstorm Sandy, having a baseline is really important to be able to do change analysis. In areas of coastal impact like that, every time there is an event we may need to acquire new data."

Marie Peppler, flood and hazards program coordinator at the U.S. Geological Survey, agrees. "The coastal system of the highly variable and changing landscape needs to be updated as often as you can afford to," she said. "For environments that are not changing as much and are not as variable, you don't need to collect as often. For Iowa they are using 5-year-old and 8-year-old LiDAR data, and that is perfectly fine."

Patrick Marshall is a freelance technology writer for GCN.
<urn:uuid:21b604d5-2589-4f12-bfc1-4be93dfcc04d>
CC-MAIN-2017-09
https://gcn.com/articles/2014/02/27/mapping-alaska.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172077.66/warc/CC-MAIN-20170219104612-00255-ip-10-171-10-108.ec2.internal.warc.gz
en
0.961299
642
2.90625
3
RAM is a commodity component and therefore relatively inexpensive, yet it possesses outstanding performance characteristics and lacks the shortcomings that are attributed to Flash devices. RAM is an excellent technology to leverage for accelerating I/O performance and delivers a very high benefit-to-cost ratio across the entire storage architecture.

High-Speed Caching is critical for maintaining application performance, since RAM is many orders of magnitude faster than the fastest Flash technologies and it resides as close to the CPU as possible. It is the fastest storage component in the architecture, delivering a 3-5x performance boost to applications and freeing up application servers to perform other tasks. It also extends the life of traditional storage components by minimizing the stress experienced from disk thrashing.
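As a rough illustration of why a RAM tier helps, the sketch below (purely illustrative, not DataCore's implementation) shows a read-through cache: blocks already held in memory are returned immediately, and only misses pay the cost of the far slower backing store.

# Illustrative read-through block cache with least-recently-used eviction.
from collections import OrderedDict

class ReadCache:
    def __init__(self, backing_read, capacity_blocks=1024):
        self.backing_read = backing_read          # function: block_id -> bytes (slow path)
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()               # ordering tracks recency of use
        self.hits = self.misses = 0

    def read(self, block_id):
        if block_id in self.blocks:
            self.hits += 1
            self.blocks.move_to_end(block_id)     # mark as recently used
            return self.blocks[block_id]
        self.misses += 1
        data = self.backing_read(block_id)        # expensive disk or flash read
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)       # evict the least recently used block
        return data

The higher the hit rate, the more reads are served at RAM speed, which is also why caching reduces the thrashing load on the disks behind it.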
<urn:uuid:7d93ba01-9a86-42ed-94d7-e4d755a65adf>
CC-MAIN-2017-09
https://www.datacore.com/products/features/High-Speed-Caching.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171171.24/warc/CC-MAIN-20170219104611-00075-ip-10-171-10-108.ec2.internal.warc.gz
en
0.940665
142
2.515625
3
Scientists are working on creating a new biometric bracelet that could also "talk" to devices on a person's body, allowing data collected by blood pressure cuffs and heart monitoring devices to be matched to the correct electronic records. The devices could prevent mix-ups of health records at military and veteran hospitals.

The researchers, led by Dartmouth College computer scientist Cory Cornelius, have developed technology that matches a person's bioimpedance -- their physiological response to the flow of electric current passing through tissues -- to a unique identity. Bioimpedance can be used to pinpoint specific people because everyone has a different structure of bone, flesh and blood vessels. "Significant impedance differences exist between the varying tissue types, anatomic configurations, and tissue state, each of which may provide a unique mechanism for distinguishing between people," according to the research paper.

The devices could be configured to discover the presence of other health monitoring devices on a patient's body, recognize that they are on the same body and share information securely. The researchers demonstrated the technology at the Usenix Advanced Computing Systems Association workshop in Bellevue, Wash., this week. The biometric system has been demonstrated to recognize people in a household with 85 percent accuracy.
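Conceptually, matching a new measurement to an enrolled wearer can be as simple as a nearest-profile comparison. The sketch below only illustrates that idea with invented feature values; it is not the researchers' algorithm, and real bioimpedance matching involves far richer signal processing.

# Toy nearest-profile matcher over made-up bioimpedance feature vectors.
import math

ENROLLED = {
    "alice": [412.0, 388.5, 120.3],   # hypothetical impedance features per person
    "bob":   [365.2, 401.7, 141.9],
}

def match(measurement, threshold=25.0):
    best_name, best_dist = None, float("inf")
    for name, profile in ENROLLED.items():
        dist = math.dist(measurement, profile)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None   # None means unknown wearer

print(match([410.1, 390.0, 122.0]))   # -> "alice"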
<urn:uuid:603e1ddc-91dc-44c2-8141-9ed71d43ebbb>
CC-MAIN-2017-09
http://www.nextgov.com/health/2012/08/biometric-wristband-could-match-health-monitoring-devices-electronic-records/57296/?oref=ng-HPriver
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174159.38/warc/CC-MAIN-20170219104614-00171-ip-10-171-10-108.ec2.internal.warc.gz
en
0.94065
258
2.96875
3
Any new technology deployment presents challenges, and long-term evolution (LTE) is no exception. Moreover, as LTE is not an evolution of existing deployments, but rather a replacement that is backward-compatible, the challenges this technology brings with it seem more pronounced and include the following: distance, spectrum, backhaul, interoperability, coverage and cellular offload. Below is a brief description of these challenges. Chart 1.1 represents the technical challenges associated with the adoption of LTE.

Distance – Robust LTE signals do not travel as far as signals that use existing radio techniques. As a result, infrastructure designs include more towers, more base stations and more expense for land, permits, equipment, backhaul, monitoring, management and testing.

Spectrum – Spectrum tends to be in short supply with LTE, as it requires more radio spectrum. New or repurposed spectrum can take a decade or more to become available; however, customers require mobile data services now and are prepared to switch to a different service provider if their needs are not met.

Backhaul – Most cellular sites are still serviced by a T-1 for backhaul, which is insufficient for mobile data and LTE because of current and future capacity demands. Cable operators may have an advantage, as many have high-bandwidth services available in areas where cell sites are or will be located. The movement toward Ethernet and IP mobile backhaul is expected to continue throughout the next few years.

Interoperability – Although LTE is intended to be compatible with most existing wireless transmission methods, cellular service providers (CSPs) have been known to customize their networks. This can create problems with handoffs, roaming, packet loss, jitter/delay for voice traffic and a number of other assurance issues.

Coverage – LTE requires a large number of high-bandwidth mobile data users to justify the investments of service providers (SPs) and network equipment manufacturers (NEMs). It is therefore most likely that LTE will not be deployed in locations outside major metropolitan areas right away. As customers roam, they will experience numerous handoffs and service degradations. The exception may be the rollout of LTE for fixed-broadband substitution in areas where it is more practical to use wireless than wireline.

Cellular Offload – Carriers are constantly looking for ways to offload network traffic to Wi-Fi networks. However, new challenges arise from cellular offload, and they extend beyond traditional Wi-Fi.

Test Equipment is the Answer

All of the LTE technical challenges listed above have a direct impact on the quality of service (QoS) and quality of end-user experience (QoE), both crucial for SPs. Today's consumers are used to doing things on the go. If a smartphone user continually experiences difficulties downloading e-mails or videos on his or her smart devices, a prompt switch of SPs will occur. The connection between user experience and customer churn has been clearly established through various studies.

According to Frost & Sullivan's latest research, the global LTE test equipment market generated revenue of $760.8 million in 2011. In 2018, revenue is expected to reach $2,845.6 million, at a CAGR of 20.7 percent. The market is driven by the increasing deployment of LTE and the explosion of wireless data.
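The forecast figures quoted above are internally consistent; a quick compound-growth check reproduces the 2018 estimate from the 2011 revenue and the stated CAGR (this is just the standard CAGR formula applied to the numbers in the article, not new data).

# Sanity-check the forecast: 2011 revenue compounded at the stated CAGR for seven years.
revenue_2011 = 760.8           # USD million
cagr = 0.207                   # 20.7 percent
years = 2018 - 2011

revenue_2018 = revenue_2011 * (1 + cagr) ** years
print(round(revenue_2018, 1))  # ~2839.4, within rounding of the reported $2,845.6 million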
The growth of mobile data traffic is due partially to the availability of high-speed networks; the increased penetration of smartphones, as well as connected devices such as laptops, netbooks, notebooks and tablets; and the use of higher bandwidth-consuming applications and services. LTE is expected to create numerous opportunities for wireless test equipment vendors during the forecast period. Chart 1.2 represents the revenue forecast for the global LTE test equipment market from 2011 to 2018.

Testing for Interoperability

LTE operates within a multi-vendor environment, so SPs and NEMs must ensure that all the vendors implement the newest standards and that compliance is properly met. Test equipment solutions are important to ensure multivendor interoperability. The earlier interoperability testing is implemented during the deployment process, the better it is overall for SPs, NEMs and consumers.

Mobile data traffic is on the rise primarily because of consumers' desire to access the Internet and video on the go. Increased usage of smartphones, laptops, netbooks, notebooks and tablets – coupled with bandwidth-hungry applications and services – has contributed to the rapid growth of mobile data. According to Facebook, for example, more than 350 million active users currently access the social network through their mobile devices. Additionally, more than 475 mobile operators work globally to deploy and promote Facebook mobile products. All the factors listed above drive mobile data traffic growth; however, not all data traffic is generated by consumers. Actual data performance becomes extremely important, and SPs need to ensure that they receive good throughput numbers in accordance with the technology in order to understand how it affects real-world conditions.

Mobile Backhaul Testing

Due to the deployment of LTE, the whole backhaul is moving to new architectures, such as Internet protocol (IP) and Ethernet, creating growth opportunities for testing. The new architectures are trying to provide the same level of quality required in time-division multiplexing (TDM) networks. In addition, smartphone applications, wireless data and video drive the need for increasing numbers of Ethernet 1G/10G test equipment to perform turn-up and maintenance on emerging LTE backhaul networks. High-volume data usage puts tremendous strain on networks; thus, the demand for mobile backhaul testing is on the rise.

Wi-Fi Offload Testing

Historically, Wi-Fi access has been seen as "nice to have" and not put to significant use. Wi-Fi operators have therefore optimized their networks to provide coverage rather than capacity and QoS. In current offload networks, service providers use Wi-Fi as a continuation of their 3G/4G networks. Cellular offload requires Wi-Fi to deliver a positive customer experience, especially when there are large numbers of users in high-density locations. As a result, Wi-Fi networks have to become carrier-grade, as they are now a critical infrastructure element for the cellular network. Otherwise, the result can be a negative user experience, which can lead to the loss of revenue from high-profit-margin smartphone consumers. Consumers are constantly becoming more sophisticated and now have a choice of service providers that offer the best overall network for their smartphone needs. Today's subscribers are no longer willing to accept poor Wi-Fi performance, as this leads to an unsatisfactory smartphone experience. Consumers expect the same quality of service on Wi-Fi networks as they have on 3G or 4G networks.
It is worth mentioning that one of the central themes of the Mobile World Congress in 2012 was Wi-Fi offload. Concurrently, cellular offload was a main focus for SPs and NEMs. Even though Wi-Fi has been around for many years, remarkable increases in mobile users' demands for data, triggered by the popularity of smartphone adoption, are driving a renewed interest in the idea of offloading cellular traffic onto Wi-Fi networks. There are a number of test equipment vendors that offer solutions to ensure that mobile traffic is offloaded onto Wi-Fi networks successfully.

Ixia, through its acquisition of VeriWave, offers wireless SPs an effective way of testing the performance of Wi-Fi offload. Ixia's new IxVeriWave test system is designed to go beyond measuring network availability and signal strength to test a wider range of capabilities in order to demonstrate how a Wi-Fi network is delivering various applications such as voice, data, and unicast or multicast video.

Spirent Communications is another key test equipment company. It introduced the addition of a Wi-Fi Offload Gateway testing capability to its Spirent Landslide solution. Landslide is capable of testing the performance of Wi-Fi Offload Gateways that handle offloading of data from sources such as over-the-top (OTT) video, as well as traditional services such as voice calling and SMS, from a 3G/4G/LTE cellular network to a Wi-Fi network.

Even though LTE brings various technical challenges, there are numerous benefits associated with adoption of the technology. Communications test equipment vendors such as Anritsu, Rohde & Schwarz, Agilent, Aeroflex, Anite, Ascom, Spirent, Ixia, JDSU, EXFO and many others offer a number of effective testing solutions to address every challenge involved in LTE adoption.
<urn:uuid:ba7dcc4b-4b01-402a-af13-72aeaa64e6a2>
CC-MAIN-2017-09
http://www.mobilitytechzone.com/topics/4g-wirelessevolution/articles/2012/06/27/296712-increasing-mobile-data-traffic-drives-demand-performance-testing.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170794.46/warc/CC-MAIN-20170219104610-00115-ip-10-171-10-108.ec2.internal.warc.gz
en
0.93761
1,904
2.890625
3
In the age of the digitally innovative classroom, nearly all school districts have some kind of Internet blocking and filtering mechanism to keep kids safe from inappropriate online content. It is also likely that almost all students have figured out ways to get around these blocks and filters to get to content that they want. This poses the question, "Is blocking the only answer?" We propose a new way of thinking: monitoring online behavior in addition to blocking and filtering content is more beneficial to schools, teachers, and students. Here's why.

monitoring software aids in qualifying for federal funding

In order to qualify for federal E-Rate funding for technology, school districts must comply with the Children's Internet Protection Act (CIPA). The act states that protection measures must include blocking or filtering obscene, illegal, and harmful content. In addition, schools subject to CIPA have two certification requirements: their Internet safety policies must include monitoring the online activities of minors, and they must educate minors about appropriate online behavior. This includes education on interacting with other individuals on social networking websites and in chat rooms, cyberbullying awareness, and response to online bullying or abuse.

This is where Impero Education Pro monitoring software steps in. In addition to blocks and filters that prevent access to indecent content, the software's comprehensive view of all student screens enables teachers to manage online behavior in real time. Our advanced monitoring software will also allow specific websites to be blocked or allowed when required, so students can be provided with access to websites, such as YouTube (which can be great for educational purposes), in a controlled environment. This all-inclusive software allows schools to easily comply with CIPA rules.

monitoring software prevents and addresses cyberbullying

Monitoring software works by using categories, such as lists of words or phrases, to capture and identify inappropriate activity on desktop and laptop computers and other digital devices. Once a match is captured, an automatic screenshot or video recording is logged; this allows school staff to identify the context of any potentially concerning activity, such as a screenshot showing a concerning word or phrase, a logged-in user, or an IP address. When students use certain keywords, the software alerts the teacher. This can identify cyberbullying and present a way to confront the situation. As new slang terms emerge, keyword lists can be updated on a regular basis.

In addition to keyword detection, Impero monitoring software provides students with a confidential way of reporting questionable online activities through its Confide function. Student submissions are anonymously sent to authorities, and this allows the safe exposure of a predator without fear of further harassment to the victim.

monitoring software saves time for teachers

Impero's student monitoring software acts like a digital classroom assistant by saving teachers time and helping them manage their students online. The software's tools allow teachers to prevent access to inappropriate websites and to monitor usage patterns to identify popular sites and applications. The software prevents unauthorized use of proxy sites, enforces acceptable usage policies, and restricts Internet, application, and hardware usage. All of this is monitored through a single interface that provides live thumbnails of all network computers.
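As a rough illustration of the keyword-capture pattern described in the cyberbullying section above, a monitoring agent essentially scans captured activity against curated phrase lists and logs a timestamped, reviewable event on every match. The categories, phrases and screenshot hook below are made up for the example and are not taken from Impero's product.

# Generic sketch of keyword-based capture for staff review; all lists are illustrative.
import re
from datetime import datetime

KEYWORD_CATEGORIES = {
    "bullying": ["loser", "nobody likes you"],   # phrase lists would be updated regularly
    "self-harm": ["hurt myself"],
}

def scan_activity(username, captured_text, take_screenshot):
    events = []
    for category, phrases in KEYWORD_CATEGORIES.items():
        for phrase in phrases:
            if re.search(re.escape(phrase), captured_text, re.IGNORECASE):
                events.append({
                    "time": datetime.now().isoformat(timespec="seconds"),
                    "user": username,
                    "category": category,
                    "phrase": phrase,
                    "screenshot": take_screenshot(),   # context for whoever reviews the alert
                })
    return events   # forwarded to the teacher console or alert queue

The context captured alongside the match (user, time, screenshot) is what lets staff judge whether an alert is banter, a false positive, or something that needs follow-up.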
Additionally, if a student disrupts the class, teachers can turn off screens and lock keyboards, disable Internet access, USB ports, sound and printers, and broadcast the educator's screen to all or selected students. Talk about a time-saving powerhouse!

monitoring software teaches students responsible online behavior

According to the cognitive domain of Bloom's Taxonomy, there are six levels of thinking, the highest of which is evaluation. Student behaviors that demonstrate the evaluation level of thinking include assessing the effectiveness of whole concepts in relation to values, outputs, efficacy and viability; critical thinking; strategic comparison and review; and judgment related to external criteria. When Internet sites are blocked, the student is not given the opportunity to evaluate and create strategy – other than how to strategically hack through to banned sites. By monitoring students' online activity, combined with providing procedures and communicating about problems, the teacher is providing opportunities for the highest level of thinking.

the best way to learn Internet usage

The Impero software team believes that monitoring online usage is the best way to help students learn to use the Internet safely. Research has shown that blocking measures have little impact when students are determined to access content. Now is the time to adopt a different approach and add monitoring of online behavior instead of only blocking. This allows schools to be proactive and react appropriately in the event of protocol breaches. In addition, this approach affords teachers more time, promotes higher-level thinking in students, and provides schools with better tools to comply with CIPA.

Impero Education Pro software provides schools with the ability to proactively monitor the online activities of digital devices while they are being used in classrooms. To find out more about this solution, go to the product features page. Impero offers free trial product downloads, webinars, and consultations. Call us at 877.883.4370 or email us at [email protected] today for more information.
<urn:uuid:339e1d4e-0d83-429b-b020-31d8db392de7>
CC-MAIN-2017-09
https://www.imperosoftware.com/the-key-benefits-of-adding-monitoring-software-to-your-school-network/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169769.33/warc/CC-MAIN-20170219104609-00112-ip-10-171-10-108.ec2.internal.warc.gz
en
0.920509
1,042
2.609375
3
Google has revealed it plans to build its own self-driving cars from the ground up, per an announcement from founder Sergey Brin at the Code Conference Tuesday. The company revealed one such car to Recode, a highly compact two-seater without a steering wheel. Google had previously been retrofitting Toyota Priuses and Lexus SUVs with its self-driving technology.

The cars were approved last week for use on public roads in California, and Google demonstrated the technology's ability to navigate complex traffic situations in cities at the end of April. The prototype Google revealed differs from the Priuses and Lexuses in that it can't let humans take over the job of piloting; it is completely controlled by the onboard computer. In addition to lacking a steering wheel, the Google-built car also has no accelerator, no brake, no mirrors, no glove compartment, and no sound system (your tiny smartphone speaker will have to do). The cars are capped at a modest 25 mph and are started and stopped by a button.

In a Q&A with Recode, head of the self-driving car project Chris Urmson stated that the car uses "fault-tolerant architecture" to minimize damage "should something happen." Urmson says that the front end of the car is "compressible foam" and the windshield is flexible, which "should do a much better job of protecting people if an accident should occur."

Google gave no hints as to where the car was manufactured or a timeframe for an official launch, so the project remains experimental. "We're going to learn a lot from this experience, and if the technology develops as we hope, we'll work with partners to bring this technology into the world safely," states the official Google blog.
<urn:uuid:562f576a-9515-4e3b-8cfe-4e645a654d24>
CC-MAIN-2017-09
https://arstechnica.com/gadgets/2014/05/google-builds-a-prototype-self-driving-car-sans-steering-wheel/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171053.19/warc/CC-MAIN-20170219104611-00640-ip-10-171-10-108.ec2.internal.warc.gz
en
0.967101
361
2.5625
3
User Account Control – What Penetration Testers Should Know
March 20, 2014

UAC is User Account Control. Introduced in Windows Vista, UAC is a collection of technologies that make it possible to use Windows without administrator privileges and elevate your rights when needed. UAC has a lot of moving parts and encompasses a lot of things. This post focuses on Windows integrity levels and UAC elevation prompts. I will first explain some UAC concepts and then dive into three attacks to get past UAC.

Process Integrity Levels

In Windows Vista and later, processes run at three different levels of integrity: high, medium, and low. A high integrity process has administrator rights. A medium integrity process is one that runs with standard user rights. A low integrity process is very restricted.

A low integrity process cannot write to the registry and is limited from writing to most locations in the current user's profile. Protected Mode Internet Explorer runs with low integrity. The idea is to limit the amount of damage an attacker may do if they exploit the browser.

Most desktop applications run in a medium integrity process, even if the current user is a local administrator. Use Process Explorer to see which integrity level your programs are running at.

To perform a privileged action, a program must run another program and request the high integrity level at that time. If the user is an administrator, what happens next will depend on their UAC settings. There are four UAC settings:

Always Notify. This setting is the highest UAC setting. It will prompt the user when any program, including a built-in Windows program, wants higher privileges.

Notify me only when programs try to make changes to my computer. This is the default UAC setting. This setting does not prompt the user when some built-in Windows programs want higher privileges. It will prompt the user when any other program wants higher privileges. This distinction is important, and it plays into the UAC bypass attack that we will cover in a moment.

Notify me only when programs try to make changes to my computer (do not dim my desktop). This is the same as the default setting, except the user's desktop does not dim when the UAC elevation prompt comes up. This setting exists for computers that lack the computing power to dim the desktop and show a dialog on top of it.

Never notify. This option takes us back to life before Windows Vista. On Windows 7, if a user is an administrator, all of their programs will run with high integrity. On Windows 8, programs run at the medium integrity level, but anything run by an administrator that requests elevated rights gets them without a prompt.

If the user is not an administrator, they will see a prompt that asks for the username and password of a privileged user when a program tries to elevate. Microsoft calls this "over the shoulder" elevation, as someone is, presumably, standing over the shoulder of the user and typing in their password. If the UAC settings are set to Never Notify, the system will automatically deny any requests to elevate.

Who Am I?

When I get a foothold from a client-side attack, I have a few questions I like to answer right away. First, I like to know which user I'm currently executing code as. Second, I like to know which rights I have. With UAC this becomes especially complicated. One way I like to sort myself out is with the Windows command: whoami /groups. This command will print which groups my current user belongs to.
This command will also print which integrity level my command ran with. If my command ran in a high integrity context, I will see the group Mandatory Label\High Mandatory Level. This means I have administrator rights. If my command ran in a medium integrity context, I will see the group Mandatory Label\Medium Mandatory Level. This means I have standard user rights.

If I find myself in a medium integrity process run by a user in an administrators group, there is potential to elevate from standard user rights to administrator rights. One option is to use the ShellExecute function with the runas verb. This will run a program and request elevated rights. If UAC is set to anything other than Never Notify, the user will see a prompt that asks them if they would like to allow the action to happen. This is not completely implausible. Oracle's Java Updater randomly prompts me all of the time. If the user accepts the prompt, the system will run my program in a high integrity context.

Remember, medium integrity is standard user rights. High integrity is administrator rights, and this is what we're after. The RunAs option prompts the user, and that's an opportunity to get caught. We want a way to spawn a high integrity process from a medium integrity process without a prompt. Fortunately, there is a way to do this: the bypass UAC attack. This attack comes from Leo Davidson, who made a proof-of-concept for it in 2009. David Kennedy and Kevin Mitnick popularized this attack in a 2011 DerbyCon talk. They also released the exploit/windows/local/bypassuac Metasploit Framework module that uses Leo's proof-of-concept for the heavy lifting.

The bypass UAC attack requires that UAC is set to the default Notify me only when programs try to make changes to my computer. If UAC is set to Always Notify, this attack will not work. This attack also requires that our current user is in an administrators group.

Bypass UAC: How It Works

This is a fascinating attack whose inner workings are taken for granted. Please allow me the blog space to describe it in depth.

Our story starts with COM, the Component Object Model in Windows. COM is a way of writing components that other programs may use and re-use. One of the benefits of COM is that it's language neutral. I find it extremely complicated and unappealing to work with. I suspect others share my feelings.

Some COM objects automatically elevate themselves to a high integrity context when run from a program signed with Microsoft's code signing certificate. If the same COM object is instantiated from a program that was not signed by Microsoft, it runs with the same integrity as the current process.

The COM distinction between Microsoft and non-Microsoft programs has little meaning, though. I can't create a COM object in a high integrity context because my programs are not signed with Microsoft's certificate. I can spawn a Microsoft-signed program (e.g., notepad.exe) and inject a DLL into it, though. From this DLL, I may instantiate a self-elevating COM object of my choice. When this COM object performs an action, it will do so from a high integrity context.

Leo's Bypass UAC attack creates an instance of the IFileOperation COM object. This object has methods to copy and delete files on the system. Run from a high integrity context, this object allows us to perform a privileged file copy to any location on the system. We're not done yet! We need to go from a privileged file copy to code execution in a high integrity process.
Before we can make this leap, I need to discuss another Windows 7 fun fact. Earlier, we went over the different UAC settings. The default UAC setting will not prompt the user when some built-in Windows programs try to elevate themselves. More practically, this means that some built-in Windows programs always run in a high integrity context.

These programs that automatically elevate have a few properties. They are signed with Microsoft's code signing certificate. They are located in a "secure" folder (e.g., c:\windows\system32). And, they request the right to autoElevate in their manifest. We can find which programs autoElevate themselves with a little strings magic:

cd c:\windows\
strings -s *.exe | findstr /i autoelevate

Now, we know which programs automatically run in a high integrity context AND we have the ability to perform an arbitrary copy on the file system. How do we get code execution? We get code execution through DLL search order hijacking. The public versions of the bypass UAC attack copy a CRYPTBASE.dll file to c:\windows\system32\sysprep and run c:\windows\system32\sysprep.exe. When sysprep.exe runs, it will search for CRYPTBASE.dll and find the malicious one first. Because sysprep.exe automatically runs in a high integrity context (when UAC is set to default), the code in the attacker-controlled CRYPTBASE.dll will execute in this high integrity context too. From there, we're free to do whatever we like. We have our administrator privileges.

Holy Forensic Artifacts, Batman!

I mentioned earlier that the Metasploit Framework's bypassuac module uses Leo Davidson's proof-of-concept. This module drops several files to disk. It uses Leo's bypassuac-x86.exe (and bypassuac-x64.exe) to perform the privileged file copy from a medium integrity context. It also drops a CRYPTBASE.dll file to disk and the executable we want to run. This module, when run, also creates a tior.exe and several w7e_*.tmp files in the user's temp folder. I have no idea what the purpose of these files is. When you use this module, you control the executable to run through the EXE::Custom option. The other artifacts are put on disk without obfuscation.

For a long time, these other artifacts were caught by anti-virus products. A recent commit to the Metasploit Framework strips several debug and logging messages from these artifacts. This helps them get past the ire of anti-virus, for now. A better approach is to use a module that has as little on-disk footprint as possible. Fortunately, Metasploit contributor Ben Campbell (aka Meatballs) is here to save the day. A recent addition to the Metasploit Framework is the exploit/windows/local/bypassuac_inject module. This module compiles the UAC bypass logic into a reflective DLL. It spawns a Microsoft-signed program and injects the UAC bypass logic directly into it. The only thing that needs to touch disk is the CRYPTBASE.dll file.

Bypass UAC on Windows 8.1

In this post, I've focused heavily on Windows 7. Leo's proof-of-concept and the bypassuac modules in the Metasploit Framework do not work on Windows 8.1. This is because the DLL hijacking opportunity against sysprep.exe does not work in Windows 8.1. The Bypass UAC attack is still possible though. A few releases ago, I added bypassuac to Cobalt Strike's Beacon. I do not invest in short-term features, so I had to convince myself that this attack had a viable future. I audited all of the autoElevate programs on a stock Windows 8.1 to find another DLL hijacking opportunity.
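Auditing the autoElevate candidates can also be scripted. If strings and findstr are not at hand, the naive Python byte scan below approximates the one-liner shown earlier; it is not a proper manifest parser, so expect a few false positives, but it reproduces the spirit of the check.

# Rough Python equivalent of the strings | findstr check: flag executables whose
# bytes contain "autoElevate" (naive scan, not real manifest parsing).
import pathlib

def find_autoelevate(directory=r"C:\Windows\System32"):
    hits = []
    for exe in pathlib.Path(directory).glob("*.exe"):
        try:
            if b"autoElevate" in exe.read_bytes():
                hits.append(exe.name)
        except OSError:
            continue   # some binaries will not be readable; skip them
    return sorted(hits)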
I had to find a program that would load my DLL before displaying anything to the user. There were quite a few false starts. In the end, I found my candidate. Beacon's Bypass UAC command is similar to Ben Campbell's: it performs all of the UAC bypass logic in memory. Beacon's UAC bypass also generates an anti-virus-safe DLL from Cobalt Strike's Artifact Kit. Beacon's UAC bypass checks the system it's running on too. If it's Windows 7, Beacon uses sysprep.exe to get code execution in a high integrity context. If it's Windows 8, it uses another opportunity. If you're having trouble with the alternatives, Beacon's version of this attack is an option.

Bypass UAC on Windows Vista

The Bypass UAC attack does not work on Windows Vista. In Windows Vista, the user has to acknowledge every privileged action. This is the same as the Always Notify option in Windows 7 and later. The UAC settings in Windows 7 came about because UAC became a symbol of what was "wrong" with Windows Vista. Microsoft created UAC settings and made some of their built-in programs auto-elevate by default to prompt the user less often. These changes for user convenience created the loophole described in this post.

Lateral Movement and UAC

The concept of process integrity level only applies to the current system. When you interact with a network resource, your access token is all that matters. If your current user is a domain user and your domain user is a local administrator on another system, you can get past UAC. Here's how this works: you may use your token to interact with another system as an administrator. This means you may copy an executable to that other system and schedule it to run. If you get access to another system this way, you may repeat the same process to regain access to your current system with full rights. You may use the Metasploit Framework's exploit/windows/local/current_user_psexec module to do this.

These UAC bypass attacks are among my favorite hacker techniques. They're a favorite because they take advantage of a design loophole rather than a fixed-with-the-next-update memory corruption flaw. In theory, we will have these attacks for a long time.
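As a closing practical note, the whoami /groups check from the "Who Am I?" section is easy to script. The helper below is an illustrative addition, not part of the original tooling; it just parses the Mandatory Level group names described earlier to answer the "which rights do I have?" question before deciding whether a UAC bypass is even needed.

# Illustrative helper: determine the current integrity level from whoami /groups output.
import subprocess

def integrity_level():
    out = subprocess.run(["whoami", "/groups"], capture_output=True, text=True).stdout
    for level in ("High", "Medium", "Low"):
        if f"Mandatory Label\\{level} Mandatory Level" in out:
            return level
    return "Unknown"

level = integrity_level()
print(f"Running at {level} integrity" + (" (administrator rights)" if level == "High" else ""))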
<urn:uuid:234abd4a-45df-4cb1-8d9d-dd14337d6f13>
CC-MAIN-2017-09
https://blog.cobaltstrike.com/2014/03/20/user-account-control-what-penetration-testers-should-know/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171053.19/warc/CC-MAIN-20170219104611-00640-ip-10-171-10-108.ec2.internal.warc.gz
en
0.912051
2,883
2.84375
3
Adam B.,Institute For Angewandte Okologie | Faller M.,Aussenstelle Des Institute For Angewandte Okologie | Gischkat S.,Aussenstelle Des Institute For Angewandte Okologie | Hufgard H.,Aussenstelle Des Institute For Angewandte Okologie | And 2 more authors. WasserWirtschaft | Year: 2012

Since the new double slot pass went into operation at the Geesthacht dam, an extensive, long-term fish-ecological monitoring programme has been carried out, which includes the old bypass facility on the southern river bank. Daily counts show that the double slot pass is used by about eight times as many fish as the bypass. In addition, the range of species that use the double slot pass is broader than that of the bypass, with 43 and 37 species, respectively. The double slot pass is also used by fish species that are typical of the River Elbe, such as zander and smelt, but that do not appear, or are underrepresented, in the species range present in the bypass. By implementing transponder technology, important insights are gained into how the traceability of the two fish passes differs, the results often being highly dependent on the respective species. These findings demonstrate that some species prefer ascending the double slot pass, located on the point bank, whilst more powerful swimmers may choose to use the bypass on the cut bank.
<urn:uuid:12d5da49-02ca-47bd-ba6b-5309b334b311>
CC-MAIN-2017-09
https://www.linknovate.com/affiliation/aussenstelle-des-institute-for-angewandte-okologie-1491675/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171053.19/warc/CC-MAIN-20170219104611-00640-ip-10-171-10-108.ec2.internal.warc.gz
en
0.916964
306
2.6875
3
Identifying a Skills Gap in the Workforce

The gulf between the capabilities of a collective workforce and the level of aptitude an employer demands is known as a "skills gap" in the labor market. Serious problems arise when a workforce's proficiency cannot keep pace with economic development. In an increasingly interconnected and global business environment, one lagging sector can drag down several others. So, what does this have to do with credentialing programs?

The role of certification—as with education and training in general—is to prepare people for the challenges of the world. Certifications don't exist for their own sake. They exist to give employees and employers a reliable and effective way to supply people with a standard set of skills. This usually entails identifying a skills gap within a discipline. In the certification universe, there are a myriad of offerings for skills, job roles and disciplines, from school teachers to accountants to information technology.

The National Commission for Certifying Agencies (NCCA), which is part of the National Organization for Competency Assurance (NOCA), promotes 21 standards for administering premium certifications. "The NCCA standards are a blueprint—almost a business plan in some ways—for how to build a quality certification program," said Wade Delk, executive director of NOCA. "Whether you intend to be accredited by the NCCA or not, if you follow the standards to the best of your ability, you're going to create a very high-quality certification program."

Delk said employers should identify deficiencies in the workforce's skills to determine if the certification is necessary. "First, determine there is a need for the certification," Delk explained. "If you're sitting around a table and saying, 'You know, it might be fun to have a certification in this area,' that's certainly not relevant and valid enough to start it."

Ideally, once the skills gap is identified, a certification will be developed and rolled out promptly. However, this is seldom the case, as job roles and requisite expertise change rapidly and program managers face resource limitations. For Scott Grams, director of the GIS Certification Institute (GISCI), forming a credentialing program in geographical information systems (GIS) took more than a decade.

"Certification was an idea that had been discussed in the geographic information systems community for some time—probably for 10 or 15 years in backroom discussions at various conferences," he said. "As GIS continued to grow and got integrated into disciplines like planning, emergency management, crime analysis, health and environmental sciences, etc., this profession sort of emerged out of it. In order to have a true profession, a number of GIS professionals felt that there needed to be some kind of credentialing program and a code of ethics."

Roughly five years ago the Urban and Regional Information Systems Association researched the need for GIS certification to see if it was viable. "What it did was create a committee of 40 individuals from a wide variety of disciplines—academia, non-profit organizations and private and public sectors," Grams said. "All those individuals started to investigate how such a program would work. Would it be examination based? Portfolio based? Would there be different tiers of certification? Would it be a binary certification—you're in or you're out?"

Once the ball gets rolling on the certification, determine how the levels of proficiency will be evaluated.
To find a method to identify GIS professionals’ needs, GISCI ran a pilot program for the first few months of the certification’s existence. It used the applications the candidates submitted to the organization to identify their abilities. “The first versions of the program were very open,” Grams said. “The documentation requirements weren’t as strong. While the pilot program was going on, the committee kept meeting, and they were given updates of the program and saw all of the portfolios. The application wasn’t changed dramatically but significantly enough. They really wanted to do a certification program based on an application, and the only thing that’s going to give that approach some teeth is by having strict documentation requirements.” When the pilot phase was complete, GISCI decided that it would keep this approach, evaluating applicants through a points system based on education, professional experience and industry contributions. It eschewed exam-based evaluations because of the diversity of GIS solutions. “There are different GIS platforms, and they felt creating an examination that involved all of these various activities would be something that numerous groups in the profession would be debating about until the end of time,” Grams said. –Brian Summerfield, [email protected]
<urn:uuid:377aff0c-882b-4d4f-9c3e-ab4816276443>
CC-MAIN-2017-09
http://certmag.com/identifying-a-skills-gap-in-the-workforce/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171281.53/warc/CC-MAIN-20170219104611-00164-ip-10-171-10-108.ec2.internal.warc.gz
en
0.954563
1,009
2.578125
3
OTTAWA, ONTARIO--(Marketwired - May 22, 2014) - The Royal Canadian Legion acknowledged Aboriginal Awareness Week as well as the many Aboriginal cultures in Canada, including First Nations, the Inuit and the Métis earlier today.

"This week, designed to honour the Canadian Mosaic is a welcomed part of our Canadian Heritage," says Dominion President of The Royal Canadian Legion, Gordon Moore. "We are pleased not only to acknowledge but also participate in this event and recognize many Aboriginals who served and continue to do so in the Canadian Armed Forces and the Royal Canadian Mounted Police," adds Moore.

First Nations, the Inuit and the Métis have an important military history. For example, during the First World War, more than 4,000 Aboriginal Canadians volunteered to join the military. During the Second World War, more than 3,000 Aboriginal Canadians served in our military overseas. A few years later, hundreds volunteered to help the United Nations defend South Korea during the Korean War. This proud history of support to defend this country continues to this day.

Likewise, in the early days of the Legion, many Aboriginal Veterans were some of the first members to join this organization and play a key role in the direction the Legion would take in support of all Veterans. Today, many Aboriginals are members of the Legion where they are still engaged in shaping the future of Canada's largest Veterans' not-for-profit organization.

The Legion participated in the AAW by having a kiosk on the main concourse in the MGen. George R. Pearkes Building, 101 Colonel By Drive, Ottawa today. This event is part of a larger Aboriginal Affairs Secretariat (AAS) initiative, in conjunction with Parks Canada, the Department of National Defence and the Canadian Armed Forces to recognize Aboriginals in the public service - including military and Royal Canadian Mounted Police service.

ABOUT THE LEGION

Established in 1926, the Legion is the largest Veterans' and community support organization in Canada with more than 320,000 members. Its mission is to serve all Veterans including serving Canadian Armed Forces and Royal Canadian Mounted Police members as well as their families, to promote Remembrance and to serve our communities and our country. The Legion's Service Bureau Network provides assistance and representation to all Veterans regarding their disability claims, benefits and services from Veterans Affairs Canada and the Veterans Review and Appeal Board. In communities across Canada it is the Legion that perpetuates Remembrance through the Poppy Campaign and Remembrance Day ceremonies. With more than 1,460 branches, the Legion supports programs for seniors, Veterans' housing, outreach and visitation, youth leadership, education, sports, Cadets, Guides and Scouts.

We Will Remember Them.
<urn:uuid:a98227b5-ca53-483a-9949-ee2c3750ea7d>
CC-MAIN-2017-09
http://www.marketwired.com/press-release/legion-supports-aboriginal-awareness-week-1913057.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171463.89/warc/CC-MAIN-20170219104611-00208-ip-10-171-10-108.ec2.internal.warc.gz
en
0.960576
558
2.546875
3
PostgreSQL is a first-rate, enterprise-worthy open source RDBMS (relational database management system) that compares very favorably to high-priced closed-source commercial databases. Databases are complex, tricksy beasts full of pitfalls. In this two-part crash course, we'll get a new PostgreSQL database up and running with elegant ease, and learn important fundamentals. If you're a database novice, then give yourself plenty of time to learn your way around. PostgreSQL is a great database for beginners because it's well documented and aims to adhere to standards. Even better, everything is discoverable -- nothing is hidden, not even the source code, so you can develop as complete an understanding of it as you want.

The most important part of administering any database is preparation, in planning and design, and in learning best practices. A good requirements analysis will help you decide what data to store, how to organize it, and what business rules to incorporate. You'll need to figure out where your business logic goes -- in the database, in middleware, or in applications? You may not have the luxury of a clean, fresh new installation, but must instead grapple with a migration from a different database. These are giant topics for another day; fortunately there are plenty of good resources online, starting with the excellent PostgreSQL manuals and wiki.

We'll use three things in this crash course: PostgreSQL, its built-in interactive command shell psql, and the excellent pgAdmin3 graphical administration and development tool. Linux users will find PostgreSQL and pgAdmin3 in the repositories of their favorite Linux distributions, and there are downloads on PostgreSQL.org for Linux, FreeBSD, Mac OS X, Solaris, and Windows. There are one-click installers for OS X and Windows, and they include pgAdmin3. Any of these operating systems is fine for testing and learning. For production use, I recommend a Linux or Unix server, because they're reliable, efficient, and secure.

Linux and FreeBSD split PostgreSQL into multiple packages. You want both the server and client. For example, on Debian the metapackage postgresql installs all of these packages:

# apt-get install postgresql
postgresql postgresql-9.0 postgresql-client-9.0 postgresql-client-common postgresql-common

See the detailed installation guides on the PostgreSQL wiki for more information for all platforms. The downloads page also includes some live CDs which make it dead easy to set up a test server; simply boot the CD and go to work. For this article, I used a Debian Wheezy (Testing) system running PostgreSQL 9.0.4, the current stable release.
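The crash course itself sticks to psql and pgAdmin3, but if you want a quick scripted sanity check that the new server accepts connections, a short Python snippet using the psycopg2 driver works too. This is my own addition, not part of the article, and the connection details below are placeholders you would swap for a role you have actually created:

import psycopg2  # third-party PostgreSQL driver; not covered in the article

# Placeholder credentials -- substitute a real role and password.
conn = psycopg2.connect(host="localhost", dbname="postgres",
                        user="postgres", password="changeme")
with conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])  # e.g. "PostgreSQL 9.0.4 on x86_64 ..."
conn.close()

If the connection is refused, the usual suspects are the listen_addresses setting in postgresql.conf and the client rules in pg_hba.conf.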
<urn:uuid:03c218dc-3f9f-4b7e-96e4-b3fda2129348>
CC-MAIN-2017-09
http://www.itworld.com/article/2738507/data-center/crash-course-in-postgresql--part-1.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170624.14/warc/CC-MAIN-20170219104610-00028-ip-10-171-10-108.ec2.internal.warc.gz
en
0.88088
572
2.703125
3
How Google Works: The Google File System
By David F. Carr | Posted 2006-07-06

For all the razzle-dazzle surrounding Google, the company must still work through common business problems such as reporting revenue and tracking projects. But it sometimes addresses those needs in unconventional—yet highly efficient—ways.

The Google File System

In 2003, Google's research arm, Google Labs, published a paper on the Google File System (GFS), which appears to be a successor to the BigFiles system Page and Brin wrote about back at Stanford, as revamped by the systems engineers they hired after forming Google. The new document covered the requirements of Google's distributed file system in more detail, while also outlining other aspects of the company's systems such as the scheduling of batch processes and recovery from subsystem failures.

The idea is to "store data reliably even in the presence of unreliable machines," says Google Labs distinguished engineer Jeffrey Dean, who discussed the system in a 2004 presentation available by Webcast from the University of Washington. For example, the GFS ensures that for every file, at least three copies are stored on different computers in a given server cluster. That means if a computer program tries to read a file from one of those computers, and it fails to respond within a few milliseconds, at least two others will be able to fulfill the request. Such redundancy is important because Google's search system regularly experiences "application bugs, operating system bugs, human errors, and the failures of disks, memory, connectors, networking and power supplies," according to the paper.

The files managed by the system typically range from 100 megabytes to several gigabytes. So, to manage disk space efficiently, the GFS organizes data into 64-megabyte "chunks," which are roughly analogous to the "blocks" on a conventional file system—the smallest unit of data the system is designed to support. For comparison, a typical Linux block size is 4,096 bytes. It's the difference between making each block big enough to store a few pages of text, versus several fat shelves full of books. To store a 128-megabyte file, the GFS would use two chunks. On the other hand, a 1-megabyte file would use one 64-megabyte chunk, leaving most of it empty, because such "small" files are so rare in Google's world that they're not worth worrying about (files more commonly consume multiple 64-megabyte chunks).

A GFS cluster consists of a master server and hundreds or thousands of "chunkservers," the computers that actually store the data. The master server contains all the metadata, including file names, sizes and locations. When an application requests a given file, the master server provides the addresses of the relevant chunkservers. The master also listens for a "heartbeat" from the chunkservers it manages—if the heartbeat stops, the master assigns another server to pick up the slack. In technical presentations, Google talks about running more than 50 GFS clusters, with thousands of servers per cluster, managing petabytes of data.

More recently, Google has enhanced its software infrastructure with BigTable, a super-sized database management system it developed, which Dean described in an October presentation at the University of Washington. BigTable stores structured data used by applications such as Google Maps, Google Earth and My Search History.
Although Google does use standard relational databases, such as MySQL, the volume and variety of data Google manages drove it to create its own database engine. BigTable database tables are broken into smaller pieces called tablets that can be stored on different computers in a GFS cluster, allowing the system to manage tables that are too big to fit on a single server.
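To make the chunk arithmetic and triple replication described above concrete, here is a small Python sketch. It is purely illustrative—GFS itself is not written in Python, and its real placement policy weighs factors such as disk utilization and rack location that this toy round-robin ignores:

import itertools
import math

CHUNK_SIZE = 64 * 1024 * 1024  # 64-megabyte chunks, as described above
REPLICAS = 3                   # at least three copies per chunk

def chunk_count(file_size_bytes):
    """Number of chunks needed to hold a file (always at least one)."""
    return max(1, math.ceil(file_size_bytes / CHUNK_SIZE))

def place_chunks(file_size_bytes, chunkservers):
    """Toy placement: hand each chunk to three distinct chunkservers."""
    ring = itertools.cycle(chunkservers)
    return [(i, [next(ring) for _ in range(REPLICAS)])
            for i in range(chunk_count(file_size_bytes))]

servers = ["chunkserver-%02d" % i for i in range(5)]
print(chunk_count(128 * 1024 * 1024))  # a 128 MB file needs 2 chunks
print(chunk_count(1 * 1024 * 1024))    # a 1 MB file still occupies 1 chunk
for index, replicas in place_chunks(200 * 1024 * 1024, servers):
    print(index, replicas)

With five servers in the rotation, each chunk lands on three different machines, which mirrors the failure-tolerance argument the article makes for the real system.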
<urn:uuid:1e49cc13-a11b-4181-a30f-e14eba74c3ad>
CC-MAIN-2017-09
http://www.baselinemag.com/c/a/Infrastructure/How-Google-Works-1/4
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172077.66/warc/CC-MAIN-20170219104612-00256-ip-10-171-10-108.ec2.internal.warc.gz
en
0.931283
769
3.078125
3
Digital video, using standards such as those from the Moving Picture Experts Group (MPEG) for encoding video to compress, transport, uncompress, and display it, has led to a revolution in computing ranging from social networking media and amateur digital cinema to improved training and education. Tools for decoding and consuming digital video are widely used by all every day, but tools to encode and analyze uncompressed video frames, such as Open Source Computer Vision (OpenCV), are needed for video analytics. One of the readily available and quite capable tools for encoding and decoding of digital video is FFmpeg; for still images, the GNU Image Manipulation Program (GIMP) is quite useful (see Resources for links). With these three basic tools, an open source developer is fully equipped to start exploring computer vision (CV) and video analytics. Before exploring these tools and development methods, however, let's first define these terms better and consider applications.

The first article in this series, Cloud scaling, Part 1: Build your own and scale with HPC on demand, provided a simple example using OpenCV that implements a Canny edge transformation on continuous real-time video from a Linux® web cam. This is an example of a CV application that you could use as a first step in segmenting an image. In general, CV applications involve acquisition, digital image formats for pixels (picture elements that represent points of illumination), images and sequences of them (movies), processing and transformation, segmentation, recognition, and ultimately scene descriptions. The best way to understand what CV encompasses is to look at examples.

Figure 1 shows face and facial feature detection analysis using OpenCV. Note that in this simple example, using the Haar Cascade method (a machine learning algorithm) for detection analysis, the algorithm best detects faces and eyes that are not occluded (for example, my youngest son's face is turned to the side) or shadowed and when the subject is not squinting. This is perhaps one of the most important observations that can be made regarding CV: It's not a trivial problem. Researchers in this field often note that although much progress has been made since its advent more than 50 years ago, most applications still can't match the scene segmentation and recognition performance of a 2-year-old child, especially when the ability to generalize and perform recognition in a wide range of conditions (lighting, size variation, orientation and context) is considered.

Figure 1. Using OpenCV for facial recognition

To help you understand the analytical methods used in CV, I have created a small test set of images from the Anchorage, Alaska area that is available for download. The images have been processed using GIMP and OpenCV. I developed code to use the OpenCV application programming interface with a Linux web cam, precaptured images, or MPEG movies. The use of CV to understand video content (sequences of images), either in real time or from precaptured databases of image sequences, is typically referred to as video analytics.

Defining video analytics

Video analytics is broadly defined as analysis of digital video content from cameras (typically visible light, but it could be from other parts of the spectrum, such as infrared) or stored sequences of images. Video analytics involves several disciplines but at least includes:

- Image acquisition and encoding. As a sequence of images or groups of compressed images.
  This stage of video analytics can be complex, including photometer (camera) technology, analog decoding, digital formats for arrays of light samples (pixels) in frames and sequences, and methods of compressing and decompressing this data.
- CV. The inverse of graphical rendering, where acquired scenes are converted into descriptions compared to rendering a scene from a description. Most often, CV assumes that this process of using a computer to "see" should operate wherever humans do, which often distinguishes it from machine vision. The goal of seeing like a human does most often means that CV solutions employ machine learning.
- Machine vision. Again, the inverse of rendering but most often in a well-controlled environment for the purpose of process control—for example, inspecting printed circuit boards or fabricated parts to make sure they are geometrically correct within tolerances.
- Image processing. A broad application of digital signal processing methods to samples from photometers and radiometers (detectors that measure electromagnetic radiation) to understand the properties of an observation target.
- Machine learning. Algorithms developed based on the refinement of the algorithm through training data, whereby the algorithm improves performance and generalizes when tested with new data.
- Real-time and interactive systems. Systems that require response by a deadline relative to a request for service or at least a quality of service that meets SLAs with customers or users of the services.
- Storage, networking, database, and computing. All required to process digital data used in video analytics, but a subtle, yet important distinction is that this is an inherently data-centric compute problem, as was discussed in Part 2 of this series.

Video analytics, therefore, is broader in scope than CV and is a system design problem that might include mobile elements like a smart phone (for example, Google Goggles) and cloud-based services for the CV aspects of the overall system. For example, IBM has developed a video analytics system known as the video correlation and analysis suite (VCAS), for which the IBM Travel and Transportation Solution Brief Smarter Safety and Security Solution for Rail [PDF] is available; it is a good example of a system design concept. Detailed focus on each system design discipline involved in a video analytics solution is beyond the scope of this article, but many pointers to more information for system designers are available in Resources. The rest of this article focuses on CV processing examples and applications.

Basic structure of video analytics applications

You can break the architecture of cloud-based video analytics systems down into two major segments: embedded intelligent sensors (such as smart phones, tablets with a camera, or customized smart cameras) and cloud-based processing for analytics that can't be directly computed on the embedded device. Why break the architecture into two segments compared to fully solving in the smart embedded device? Embedding CV in transportation, smart phones, and products is not always practical. Even when embedding a smart camera is smart, so often, the compressed video or scene description may be back-hauled to a cloud-based video analytics system, just to offload the resource-limited embedded device.
Perhaps more important, though, than resource limitations is that video transported to the cloud for analysis allows for correlation with larger data sets and annotation with up-to-date global information for augmented reality (AR) returned to the devices. The smart camera devices for applications like gesture and facial expression recognition must be embedded. However, more intelligent inference to identify people and objects and fully parse scenes is likely to require scalable data-centric systems that can be more efficiently scaled in a data center. Furthermore, data processing acceleration at scale, ranging from the Khronos OpenVX CV acceleration standards to the latest MPEG standards and feature-recognition databases, is key to moving forward with improved video analytics, and two-segment cloud plus smart camera solutions allow for rapid upgrades. With sufficient data-centric computing capability leveraging the cloud and smart cameras, the dream of inverse rendering can perhaps be realized where, in the ultimate "Turing-like" test that can be demonstrated for CV, scene parsing and re-rendered display and direct video would be indistinguishable for a remote viewer. This is essentially done now in digital cinema with photorealistic rendering, but this rendering is nowhere close to real time or interactive.

Video analytics apps: Individual scenarios

Killer applications for video analytics are being thought of every day for CV and video analytics, some perhaps years from realization because of computing requirements or implementation cost. Nevertheless, here is a list of interesting applications:

- AR views of scenes for improved understanding. If you have ever looked at, for example, a landing plane and thought, I wish I could see the cockpit view with instrumentation, this is perhaps possible. I worked in Space Shuttle mission control long ago, where a large development team meticulously re-created a view of the avionics for ground controllers that shadowed what astronauts could see—all graphical, but imaging fusion of both video and graphics to annotate and re-create scenes with meta-data. A much simplified example is presented here in concept to show how an aircraft observed via a tablet computer camera could be annotated with attitude and altitude estimation data (see the example in this article).
- Skeletal transformations to track the movement and estimate the intent and trajectory of an animal that might jump onto a highway. See the example in this article.
- Fully autonomous or mostly autonomous vehicles with human supervisory control only. Think of the steps between today's cruise control and tomorrow's full autonomous car. Cars that can parallel park themselves today are a great example of this stepwise development.
- Beyond face detection to reliable recognition and, perhaps more importantly, for expression feedback. Is the driver of a semiautonomous vehicle aggravated, worried, surprised? (A minimal face-detection sketch follows this list.)
- Virtual shopping (AR to try products). Shoppers can see themselves in that new suit.
- Signage that interacts with viewers. This is based on expressions, likes and dislikes, and data that the individual has made public.
- Two-way television and interactive digital cinema. Entertainment for which viewers can influence the experience, almost as if they were actors in the content.
- Interactive telemedicine. This is available any time with experts from anywhere in the world.
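As promised in the face-detection item above, here is a minimal Python sketch of the kind of Haar Cascade detection shown in Figure 1. It is my own illustration rather than the article's downloadable code; the cascade XML files ship in OpenCV's data directory, and the file locations and image name below are placeholders:

import cv2

# Placeholder paths: copy the Haar cascade XML files from the OpenCV data
# directory and point at a real test image.
face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier("haarcascade_eye.xml")

img = cv2.imread("family.jpg")
assert img is not None, "update the placeholder image path"
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # cascades operate on grayscale

faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
    roi = gray[y:y + h, x:x + w]              # look for eyes inside each face
    for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
        cv2.rectangle(img, (x + ex, y + ey), (x + ex + ew, y + ey + eh), (0, 255, 0), 2)

cv2.imwrite("detected.jpg", img)

As with Figure 1, expect misses on occluded, shadowed, or strongly turned faces; Haar cascades are fast, but they are far from a solved recognition problem.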
I make no attempt in this article to provide an exhaustive list of applications, but I explore more by looking closely at both AR (annotated views of the world through a camera and display—think heads-up displays such as fighter pilots have) and skeletal transformations for interactive tracking. To learn more beyond these two case studies and for more in-depth application-specific uses of CV and video analytics in medicine, transportation safety, security and surveillance, mapping and remote sensing, and an ever-increasing list of system automation that includes video content analysis, consult the many entries in Resources. The tools available can help anyone with computer engineering skills get started. You can also download a larger set of test images as well as all OpenCV code I developed for this article.

Example: Augmented reality

Real-time video analytics can change the face of reality by augmenting the view a consumer has with a smart phone held up to products or our view of the world (for example, while driving a vehicle) and can allow for a much more interactive experience for users for everything from movies to television, shopping, and travel to how we work. In AR, the ideal solution provides seamless transition from scenes captured with digital video to scenes generated by rendering for a user in real time, mixing both digital video and graphics in an AR view for the user. Poorly designed AR systems distract a user from normal visual cues, but a well-designed AR system can increase overall situation awareness, fusing metrics with visual cues (think fighter pilot heads-up displays).

The use of CV and video analytics in intelligent transportation systems has significant value for safety improvement, and perhaps eventually CV may be the key technology for self-driving vehicles. This appears to be the case based on the U.S. Defense Advanced Research Projects Agency challenge and the Google car, although use of the full spectrum with forward-looking infrared and instrumentation in addition to CV has made autonomous vehicles possible. Another potentially significant application is air traffic safety, especially for airports to detect and prevent runway incursion scenarios. The imagined AR view of an aircraft on final approach at Ted Stevens airport in Anchorage shows a Hough linear transform that might be used to segment and estimate aircraft attitude and altitude visually, as shown in Figure 2. Runway incursion safety is of high interest to the U.S. Federal Aviation Administration (FAA), and statistics for these events can be found in Resources.

Figure 2. AR display example

For intelligent transportation, drivers will most likely want to participate even as systems become more intelligent, so a balance of automation and human participation and intervention should be kept in mind (for autonomous or semiautonomous vehicles).

Skeletal transformation examples: Tracking movement for interactive systems

Skeletal transformations are useful for applications like gesture recognition or gait analysis of humans or animals—any application where the motion of a body's skeleton (rigid members) must be tracked can benefit from a skeletal transformation. Most often, this transformation is applied to bodies or limbs in motion, which further enables the use of background elimination for foreground tracking.
However, it can still be applied to a single snapshot, as shown in Figure 3, where a picture of a moose is first converted to a gray map, then a threshold binary image, and finally the medial distance is found for each contiguous region and thinned to a single pixel, leaving just the skeletal structure of each object. Notice that the ears on the moose are back—an indication of the animal's intent (higher-resolution skeletal transformation might be able to detect this as well as the gait of the animal).

Figure 3. Skeletal transformation of a moose

Skeletal transformations can certainly be useful in tracking animals that might cross highways or charge a hiker, but the transformation has also become of high interest for gesture recognition in entertainment, such as in the Microsoft® Kinect® software developer kit (SDK). Gesture recognition can be used for entertainment but also has many practical purposes, such as automatic sign language recognition—not yet available as a product but a concept in research. Certainly skeletal transformation CV can analyze the human gait for diagnostic or therapeutic purposes in medicine or to capture human movement for animation in digital cinema.

Skeletal transformations are widely used in gesture-recognition systems for entertainment. Creative and Intel have teamed up to create an SDK for Windows® called the Creative* Interactive Gesture Camera Developer Kit (see Resources for a link) that uses a time-of-flight light detection and ranging sensor, camera, and stereo microphone. This SDK is similar to the Kinect SDK but intended for early access for developers to build gesture-recognition applications for the device. The SDK is amazingly affordable and could become the basis for some breakthrough consumer devices now that it is in the hands of a broad development community. To get started, you can purchase the device from Intel, and then download the Intel® Perceptual Computing SDK. The demo images are included as an example along with numerous additional SDK examples to help developers understand what the device can do. You can use the finger tracking example shown in Figure 4 right away just by installing the SDK for Microsoft Visual Studio® and running the Gesture Viewer sample.

Figure 4. Skeletal transformation using the Intel Perceptual Computing SDK and Creative Interactive Gesture Camera Developer Kit

The future of video analytics

This article makes an argument for the use of video analytics primarily to improve public safety; for entertainment purposes, social networking, telemedicine, and medical augmented diagnostics; and to envision products and services as a consumer. Machine vision has quietly helped automate industry and process control for years, but CV and video analytics in the cloud now show promise for providing vision-based automation in the everyday world, where the environment is not well controlled. This will be a challenge both in terms of algorithms for image processing and machine learning as well as data-centric computer architectures discussed in this series. The challenges for high-performance video analytics (in terms of receiver operating characteristics and throughput) should not be underestimated, but with careful development, this rapidly growing technology promises a wide range of new products and even human vision system prosthetics for those with sight impairments or loss of vision. Based on the value of vision to humans, no doubt this is also fundamental to intelligent computing systems.
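To make the processing chain behind the moose example in Figure 3 concrete (gray map, binary threshold, thinning to a one-pixel-wide skeleton), here is a minimal Python/OpenCV sketch. It is my own approximation—an iterative morphological skeleton rather than the exact medial-axis thinning used for the figure—and the image filename is a placeholder:

import cv2
import numpy as np

gray = cv2.imread("moose.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder image
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

skeleton = np.zeros_like(binary)
kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))
working = binary.copy()

# Peel the shape away layer by layer, keeping what each opening would erase.
while cv2.countNonZero(working) > 0:
    eroded = cv2.erode(working, kernel)
    opened = cv2.dilate(eroded, kernel)
    skeleton = cv2.bitwise_or(skeleton, cv2.subtract(working, opened))
    working = eroded

cv2.imwrite("moose-skeleton.png", skeleton)

Depending on lighting and background, you may need to invert the threshold or choose it interactively—the Resources note about using GIMP's histogram view for threshold selection applies here—before the skeleton is meaningful.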
Downloads
- OpenCV Video Analytics Examples: va-opencv-examples.zip (600KB)
- Simple images for use with OpenCV: example-images.zip (6474KB)

Resources
- The IBM Smarter Planet initiative includes Smarter Public Safety to define uses for digital video and video analytics to keep cities and public places safe. The moose detection example is a real safety issue in Alaska, where hundreds of moose are hit on Anchorage and Kenai peninsula highways every year. If drivers don't see moose, could intelligent transportation systems see them, instead, and provide warning signage? Likewise, FAA is concerned with runway incursion at airports and keeps statistics on this dangerous scenario, often involving airport maintenance vehicles and aircraft on taxi, takeoff, and landing.
- Read the IBM Travel and Transportation Solution Brief, Smarter Safety and Security Solution for Rail.
- Learning OpenCV by Gary Bradski and Adrian Kaehler (O'Reilly, 2008) is probably the best place to start learning about CV. Numerous excellent academic textbooks with algorithm details and fundamental theory are:
  - Computer Vision: Models, Learning, and Inference by Simon J.D. Prince (Cambridge UP, 2012)
  - Computer and Machine Vision: Theory, Algorithms, Practicalities by E.R. Davies (Academic Press, 2012)
  - Computer Vision: Algorithms and Applications by Richard Szeliski (Springer, 2011)
  - Computer Vision by Linda Shapiro and George Stockman (Prentice Hall, 2001)
- Courses at universities on CV, video analytics, and interactive or real-time systems are becoming more widely available at both the graduate and undergraduate levels.
- Universities such as Carnegie Mellon and the Computer Vision Group, Stanford and the Stanford Vision Lab in the Stanford AI Lab, and the Massachusetts Institute of Technology (MIT) and the CSAIL Computer Vision Research Group have large research and teaching programs.
- I work at two state universities that have significant coursework, including undergraduate courses and research at University of Alaska Anchorage in the Computer Prototype and Assembly Lab for classes such as Computer and Machine Vision and the University of Colorado at Boulder in the Embedded Certificate Program as an adjunct professor.
- The courses at CU-Boulder in Real-time embedded systems are offered by the Electrical Computer and Energy Engineering department on campus and via distance for summer courses, including Real-Time Digital Media and a summer version of Real-Time Embedded Systems taught via the Center for Advanced Engineering and Technology Education.
- It is also possible to learn more about these topics through Udacity, such as this great course, Introduction to Artificial Intelligence, which covers machine learning and artificial intelligence-related image processing and computer vision, and Introduction to Parallel Programming, which covers the use of GP-GPUs that can be used to speed up graphics and CV processing.
- Research by IBM and partners in CV and video analytics includes IBM Exploratory Computer Vision, IBM Smart Surveillance Research, IBM Augmented Reality, Microsoft Research Cambridge in CV, and Intel's Tomorrow Project: Computer Vision.
- Medical uses for video analytics and CV range from the Artificial Retina Project to smart microscopes and radiology equipment, most often not to fully replace medical clinicians but rather to assist them or extend their reach to rural areas through telemedicine. The Medical Vision Group at MIT is a good place to start.
- Learn more about VCAS.
- Video analytics requires image analysis as well as encode/decode tools for digital video. A great place to start is with open systems software and hardware, including OpenCV, the OpenVX hardware acceleration standard, and FFmpeg tools for encoding/decoding digital video. Finally, GIMP tools are great for interactive work—for example, choosing thresholds based on histogram analysis or taking a quick look at a Sobel edge transformation.
- CV methods can be used for search, such as the Google Image search services, to detect faces for social networking such as Facebook face detection, used to assist with tagging friends in photos, which has not been without some controversy, as described and explored in detail at this 2011 FTC Face Facts Forum. The Google image search works well finding identical matches but not so well for true recognition—for example, my picture of cows returned no other images of cows. Either way, facial recognition, which might include automatic identification of individuals rather than just segmentation of the face, involves public policy controversy. FTC has put out this Best Practices Guide for use of facial recognition. More recently, Facebook acquired Face.com, and a host of interesting features could come from it, including mood and age estimation.
- Purchase the Creative Interactive Gesture Camera Developer Kit from Intel.
- Learn more about cloud computing technologies at cloud at developerWorks.
- Access IBM SmartCloud Enterprise.
- Follow developerWorks on Twitter.
- Watch developerWorks demos ranging from product installation and setup demos for beginners, to advanced functionality for experienced developers.

Get products and technologies
- Of course, you need to download and install OpenCV. I found this OpenCV installation procedure for Ubuntu easy to follow, and it includes a great facial-recognition example from OpenCV for Haar Cascade detection. More on this method for face detection in video can be found in the OpenCV documentation.
- Download Microsoft Kinect SDK.
- Download the Intel Perceptual Computing SDK.
- Evaluate IBM products in the way that suits you best: Download a product trial, try a product online, use a product in a cloud environment, or spend a few hours in the SOA Sandbox learning how to implement Service Oriented Architecture efficiently.
- Get involved in the developerWorks community. Connect with other developerWorks users while exploring the developer-driven blogs, forums, groups, and wikis.
<urn:uuid:942786d3-8817-46e1-b608-e55b384be67b>
CC-MAIN-2017-09
http://www.ibm.com/developerworks/cloud/library/cl-cloudscaling3-videoanalytics/index.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171171.24/warc/CC-MAIN-20170219104611-00076-ip-10-171-10-108.ec2.internal.warc.gz
en
0.91707
4,497
2.953125
3
It seems incredible to believe that of all the people on the planet, Stephen Hawking's communication system has never used the sort of predictive typing found in modern smartphones, or even a backspace key. Now it does, via a partnership between Intel and SwiftKey.

Hawking, who suffers from an advanced stage of amyotrophic lateral sclerosis (also known as ALS or Lou Gehrig's disease), communicates entirely through a text-based communication system controlled by his facial muscles. Up until now, the means of input has been a continually scrolling cursor that cycles through the letters of the alphabet, which Hawking "selected" by twitching a muscle in his cheek. That equated to about one or two words per minute, according to Intel chief technical officer Justin Rattner, in 2013.

Intel and Hawking have worked together for more than a decade, and Intel has said that it's tried to improve the communications system that Hawking and other ALS patients used. On Tuesday, Intel released the ACAT (Assistive Context Aware Toolkit) designed to do just that.

In 2013, Hawking's communication system consisted of a tablet PC with a forward-facing Webcam that he can use to place Skype calls, according to Scientific American. A black box beneath his wheelchair contains an audio amplifier and voltage regulators. It also has a USB hardware key that receives the input from an infrared sensor on Hawking's eyeglasses, which detects changes in light as he twitches his cheek. A hardware voice synthesizer sits in another black box on the back of the chair and receives commands from the computer via a USB-based serial port.

Software improvements, not hardware

ACAT doesn't improve the hardware, but rather the software used to interpret Hawking's facial movements into computer commands. ACAT has doubled Hawking's typing speed. It's also achieved a 10-times improvement in common tasks, such as moving a mouse and opening email—true challenges for someone who can't push a mouse around. Intel prepared a video showing off the system in action.

WiredUK reports that initially, Intel thought about EEG sensors, gestures, or other complex ways of communicating that could actually convey more data in less time. That didn't work, however: EEG sensors weren't able to pick up the signals they needed from Hawking's brain, and Hawking—a man who has never operated an iPhone—wasn't quite sure initially how to use predictive text, as he preferred his slow, but precise, typing method. Gaze sensors failed, too, blocked by Hawking's drooping eyelids. Intel moved to a combination of predictive text input, as well as algorithms that suggest "hole" after "black," for example. A backspace key can also delete text and back out of operations.

As ALS is a progressively degenerative disease, Hawking's ability to communicate is decreasing. Predictive input will become more of a necessity as the disease advances. Before 2008, Hawking could type 15 words a minute, using a thumb clicker. Now, at 72, he is too weak to use it. What Intel and Hawking hope, however, is that the new system will help Hawking communicate for as long as he can.

This story, "Intel, SwiftKey Upgrade Stephen Hawking's Communication Technology" was originally published by PCWorld.
<urn:uuid:5dcc8adc-7126-4971-8143-900217fce09e>
CC-MAIN-2017-09
http://www.cio.com/article/2854453/consumer-technology/intel-swiftkey-upgrade-stephen-hawkings-communication-technology.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.84/warc/CC-MAIN-20170219104611-00252-ip-10-171-10-108.ec2.internal.warc.gz
en
0.95936
705
2.609375
3
The technology and computerization increasingly being installed in courtrooms across the country is changing the job of court reporters, and is, in some cases, replacing them. Overloaded and underfunded courts are increasingly looking at audio and video recording as a way to cut costs, which is achieved mainly by eliminating the salaries and related costs of court reporters. The California Judicial Council, in a 1992 report to the Legislature on several pilot projects, estimated that each video-recorded courtroom could save about $41,000 per year, and each audio-recorded courtroom -- which requires an employee to monitor the equipment -- could save about $28,000 annually. Courts around the nation find these and similar figures attractive and are increasingly adopting the technology. This trend would seem to mean that court reporting is a dying profession about to be replaced in many cases by electronic recording devices. But reporters have adapted to changes over the centuries, and remain confident that they will not meet the same fate as buggy whip manufacturers. "As technology changes in the future, we will adapt," said Gary Cramer, spokesman for the California Court Reporters Association. "I am confident that the court reporting field will stay healthy." Reporters have been in this country's courts since the early 1800s. Their tools have changed over the centuries from inkwells to stenograph machines, and more recently to today's computer-aided transcription, or CAT. Fredric I. Lederer of the law school at the College of William and Mary and an administrator at the National Center for State Courts' Courtroom of the Future, said rather than simply recording proceedings, the court reporter's job may expand to running technology in the courtroom. Reporters "who keep pace with technology will become court technologists," running recording equipment and computers to ensure an accurate record of the hearing, Lederer said. But as courts around the country increasingly use or consider audio or video recording, some reporters are being put out of work. This movement is causing some strain between court reporters and the courts. In California, an attempt to allow courts to use audio-visual recording has led to a court battle between reporters and the state Judicial Council. The council, which sets court rules, wants to allow local courts the discretion to use recording in place of reporters. Reporters are challenging the council's authority to make these rules. The case was appealed to the state Supreme Court, which as of press time hadn't decided if it would hear it. A key argument by reporters against audio recording is that costs are shifted from the courts to attorneys and ultimately their clients. "There are huge amounts of hidden costs in this," said Cramer. "It costs the litigant more and the court less." While getting a copy of a tape from the court doesn't cost parties very much, he explained, having it transcribed can be more expensive than buying an official transcript from a reporter. Reporters' associations aren't against all recording of court proceedings. When there is a relatively low volume and low likelihood of appeal in a court -- said Paula Laws, president of the National Association of Court Reporters -- the group is not against using tapes to create the official record. Traffic or bankruptcy courts are examples when reporters may not object to recording. "If it is a low-volume court and you see attrition taking place, it's not an issue," Laws said. 
"If they are just replacing them, it could be an issue. Where you see concern is when [the court] says they want to use tape or video in high-volume courts." An example of how video used in the courtroom has affected reporters and others is the Kentucky Circuit Courts, which have been recording proceedings on video for about 10 years. "Reporters are seldom used anymore here," said Donald Taylor, court administrator of the Fayette County Circuit Court, which includes Lexington. Reporters in Kentucky Circuit Courts were mainly phased out through attrition and reassignment over a period of years. No figures were available on the savings, but the court doesn't have to pay reporter salary and benefits. And records, usually owned and sold to the court by the reporter who created them, are now owned outright by the court. "It could cost thousands of dollars for a transcript" made by a reporter in a courtroom, Taylor said, depending on its length. "Now we just send a copy of a tape to the court of appeals." With Kentucky's videotape record, attorneys can go across the street and have a copy made for about $10, then have someone make a paper transcript from the tape. Tapes sent to appeals by trial courts are usually transcribed, depending on the judge's preference of tape or paper. Lawyers filing legal briefs on appeals cases reference the tape rather than page numbers of a transcription. Reporters no longer employed by the Kentucky courts found other work, mainly doing depositions and other freelance work. They also get work creating a paper record of trials for attorneys who don't want to use a videotape to review a case. Generally, it costs about $1,000 to transcribe a day's worth of court proceedings from videotape, said Ann Le Roy, president of An-Dor Reporting Service Inc. in Lexington, Ky. A transcript sold to lawyers after a hearing when the reporter is in the courtroom for a day costs around $600, she said. Attorneys, meanwhile, generally prefer to work with a transcript rather than a video recording when preparing briefs, said Joe Savage, a plaintiff's attorney in Lexington. "Most attorneys like a transcript before trial," he said. "I can read a transcript in 30 minutes while it would take two hours with video." What all this means is that clients have to pay more for legal representation when a tape is the official transcript because counsel will likely have it transcribed. Meanwhile, the public saves money because the courts don't have to hire reporters for trials. WHAT REPORTERS WANT Reporters, Cramer said, see opportunity as courtrooms are increasingly computerized. Computer-integrated courtrooms need people to run them, he said, and reporters using real-time transcription or CAT could produce quick transcripts for the court. The optimum situation, Cramer said, is for a court reporter with real-time readouts to be used in conjunction with videotape. CAT machines cost between $15,000 and $20,000 excluding a stenograph, depending on the sophistication desired by the reporter who usually must buy his or her own equipment. Stenographs, which now come with connected notebook computers, run about $3,000 to $4,000. A reporter using CAT equipment types the proceedings into a steno machine as they occur, and the language is immediately translated to English and displayed on monitors. Judges and attorneys can get a record on a floppy disk at the end of each court day, rather than waiting for an overnight translation from stenography to English. 
And because the proceedings are displayed immediately, judges and counsel can mark testimony or make notes to themselves for later use in the case. "It makes the reporter more productive," said Laws. With CAT, the reporter can get a rough draft to attorneys at the end of a court day, and have an official transcript available soon after because there is less editing required than with traditional stenograph machines. More common are steno machines with a laptop computer attached. Unofficial transcripts are translated by software and made available to attorneys and judges at the end of the day on floppy disks. It is still not unusual, however, for reporters to have to provide transcripts on paper, sometimes because the court or attorneys don't have the right equipment to read them electronically.

Reporters want the various courts to computerize and standardize so the record can be stored electronically. Ultimately, a trial can be recorded and stored on a database, then transferred by disk or even modem to an appeals court. "But the trial court computers can't communicate with superior court," said Cramer. "We would like to see court reporter notes stored electronically to improve our ability to save those records" in case of an appeal, he said.

It is unlikely court reporters will disappear altogether. In high-volume courts, cases likely to be appealed, and capital crime cases, reporters will likely be used. Even with the advent of audio and video recording, the profession doesn't seem threatened with extinction. Yet reporter capabilities are evolving with the arrival of computer-integrated courtrooms and CAT. "We suffer some of the same fear you see in other professions," Cramer said. "But there are a lot of younger people moving into reporting, and they have more of an ability to use more sophisticated machinery." For more information, contact Linda Walker, technology specialist at the National Center for State Courts at 804/253-2000.

A tape of a secret hearing related to the Oklahoma City bombing was later found to be blank last spring when it was unsealed. The tape was the only record of the hearing. James Nichols' hearing before a U.S. magistrate judge was closed to the press and public, and an audio tape was sealed by the court when the hearing finished. When the Detroit Free Press won an order to have the record unsealed, the tape was blank. Court officials said they assume the tape recorder was not properly activated or monitored during the hearing. "This is very embarrassing," Court Administrator John Mayer told the Free Press. "It has happened once or twice before in the past 15 years, but never in a case of this magnitude." Nichols, who was held in Michigan as a material witness soon after the Oklahoma City bombing, has since been released for reasons unrelated to the non-recorded hearing.
<urn:uuid:4be71d90-bf02-40b6-9a77-189bb5982149>
CC-MAIN-2017-09
http://www.govtech.com/magazines/gt/Court-Reporting-From-Stenography-to-Technology.html?page=3
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171176.3/warc/CC-MAIN-20170219104611-00424-ip-10-171-10-108.ec2.internal.warc.gz
en
0.965203
1,989
2.5625
3
Global investment in the renewable energy (RE) market exceeded $300 billion in 2014, at a 16% growth rate over 2013. Biomass and waste-to-energy (WTE) projects saw more than $8 billion worth of investment in 2014. With rising primary energy needs, Southeast Asia is experiencing power turbulence across most of its economically backward regions, indicating a heightened need to implement RE projects for better energy security. The urgent need for effective waste management is spurring the growth of biomass and waste-to-power projects in Southeast Asia.

•Understand the potential of biomass in the region and the countries that would take the lead.
•Government support in the form of regulations and a policy framework will provide the much-needed push.
•What are the best practices and key success factors for increasing biomass power projects in the region?
•How will the market change with the application of new technologies to treat biomass and waste?

There are many items to include when considering the sustainability of biomass for cofiring, and some of them are hard to quantify. The focus of this webinar is on the greenhouse gas emission aspects of sustainability. The reduction of greenhouse gas emissions achieved by substituting biomass for coal depends on a number of factors such as the nature of the fossil fuel reference system, the source of the biomass, and how it is produced. Relevant issues in biomass production include the energy balance, the greenhouse gas balance, land use change, non-CO2 greenhouse gas emission from soils, changes to soil organic carbon, and the timing of emissions and removal of CO2, which relates to the scale of biomass production. Certification of sustainable biomass is slow to emerge at the national and international level, so various organisations are developing and using their own standards for sustainable production. The EU does not yet have sustainability standards for solid biomass, but the UK and Belgium have developed their own.

Dr Rohan Fernando presents the findings of his latest report on biomass.

Ian Barnes presents the findings of his latest report.
Recent developments in process waste recycling and biomass utilisation have driven the use of these so-called 'low value fuels' for energy generation on a stand-alone basis, and in combination with coal. One technology stands out as particularly well suited to utilising these low-value fuels: circulating fluidised bed combustion (CFBC). The upcoming webinar sets out examples of the range of low-value fuels, their reserves and properties, with particular emphasis on coal-derived materials, the issues for CFB plants in utilising these fuels, and selected examples of manufacturer and operator experience with purpose-built or modified CFB plants.

Microalgal removal of CO2 from flue gas

Various methods have been developed to remove CO2 from the flue gas of coal-fired power plants. Biological post-combustion capture is one of these. Microalgae may be used for bio-fixation of CO2 because of their capacity for photosynthesis and rapid growth. The ability of microalgae to withstand the high concentrations of CO2 in flue gas, as well as the potentially toxic accompanying SOx and NOx, has been researched. Microalgal strains that are particularly suitable for this application have been isolated. Most of the research on algal bio-fixation has been concerned with carbon fixation strategies, photobioreactor designs, conversion technology from microalgal biomass to bioenergy, and economic evaluations of microalgal energy. This webinar considers current progress in algal technology and product utilisation, together with an analysis of the advantages and challenges of the technologies. It opens with a brief introduction to the theory of algal bio-fixation and the factors that influence its efficiency, especially in terms of flue gas characteristics, and then discusses culturing, processing technologies and the applications of bio-fixation by-products. Current algae-based CO2 capture demonstration projects at coal-fired power stations around the world are described.

District heating in the UK is growing in popularity as it is ideally linked with renewable energy sources such as biomass or biogas. Producing heat at a local level brings dramatic improvements in energy efficiency, as well as maintenance benefits from having a single plant. This 1-hour CPD seminar will cover:
1. An Introduction to REHAU
2. What is District Heating?
3. Potential Heating Sources
4. Biogas/Anaerobic Digestion
5. Pipe Materials & Properties (steel vs. polymer)
6. Installation & Design
7. Case Studies

Morrison & Foerster's Cleantech practice group and Silicon Valley Bank presented the Annual Cleantech Roadshow Seminar in Palo Alto on June 17, 2010. Financing is a crucial component of any successful renewable energy project, especially during difficult economic times. For many capital-intensive technologies in the wind, solar, biomass, and geothermal sectors, innovative project finance techniques make large-scale deployment possible. The program discussion focuses on the various financing structures available and the current and future financing trends. Renewable energy projects rely on traditional financing methods, such as debt and private equity sources, but they often incorporate innovative new approaches to these transactions. In addition, renewable energy financing is increasingly drawn from government resources, through Department of Energy grants, loan guarantees, and tax incentives.
The panel explains the existing renewable finance options, discusses the benefits and disadvantages of various financing methods, and provides guidance on the efficient use and monetization of tax and other government incentives. In addition, the panel explores how renewable energy finance has been impacted by the economic recession and makes predictions about future financing trends that are expected to accompany the economic rebound. The program consists of a moderated panel of finance experts who provide insight from the legal, investor, and company perspectives.
Tim Walsh, Head of Structured Products, Silicon Valley Bank
Bill Baker, Director, GCA Savvian
Jill Feldman, Partner, Morrison & Foerster LLP (Finance)
Robert Cudd, Partner, Morrison & Foerster LLP (Tax)
<urn:uuid:5c028fe9-ec3a-44fa-a104-85015e78b412>
CC-MAIN-2017-09
https://www.brighttalk.com/search?q=biomass
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.92/warc/CC-MAIN-20170219104611-00600-ip-10-171-10-108.ec2.internal.warc.gz
en
0.926921
1,501
2.78125
3
As seminal punk band NOFX once sang, “Electricity / All we need to live today / A gift for man to throw away.” The data center industry has a love-hate relationship with electricity. It's obviously a crucial resource that enables the productivity and innovation gains of cloud and large-scale computing, but it comes from polluting power plants, it's expensive, and it's delivered from an increasingly unreliable power grid in the United States. Data centers are also using more and more electricity every day.

New developments in electric generation and delivery as well as data center design innovations could help develop the much-hyped smart grid, bringing cost savings, increased reliability, and cleaner power generation. How can data centers and the electric grid work together to create the future of electricity?

The Electric Catch-22

The transformers hooking data centers to the grid might as well be chains. Servers, cooling, storage—infrastructure can't run without access to grid power. That's not necessarily a bad thing. Electricity has enabled countless innovations. But the state of the power grid, especially in the United States, is worrisome. We've noted on the blog before that blackouts can be caused by squirrels (a favorite fact of mine). Many components of the grid have reached the end of their lifespan, with original pieces built in the early 1900s and many structures still running after 70 years.

Data centers are on the front lines when it comes to negative business impact from the aging grid. Recent natural disasters like Hurricane Sandy may have made this dramatically apparent, but even common brownouts lead to server downtime after UPS batteries die and generators run out of fuel.

In addition to unreliability, grid electricity comes overwhelmingly from polluting power plants that run on coal or natural gas. Renewable energy contributed only about 13% of total United States power in 2012. A 10 MW data center emits 33,000 – 91,000 metric tons of CO2 even at the relatively low PUE of 1.2. This electricity use is attracting attention from the media and activist groups, and while many data centers are striving for efficiency, there's no escaping the grid.

Finally, energy is simply expensive. Demand spikes, hot days, and the cost of fuel at generation plants can all lead to dramatic increases in costs.

Instead, data center managers and researchers at universities across the world are starting to look at ways data centers can turn their energy use into a benefit rather than a necessary evil. Here are three of the ways future data centers can help build more reliable, more efficient "smart grids".

1) Increasing on-site generation and cleaning up the grid

This is one area where data centers have already taken action. Rooftop solar panels and large solar arrays are relatively common and some lucky data centers are located near hydroelectric or geothermal power generators, allowing them to use entirely renewable energy. If a facility has some on-site generation, there are two primary models to use both that energy and the grid: grid ties and transfer switches.

Grid ties combine on-site electric generation from rooftop solar, hydroelectric turbines, or wind turbines with grid sources. Electricity produced on-site reduces the net draw from the grid, which fills in the blanks when on-site generation can't cover the entire server and equipment load. When there is excess energy generated, it feeds back into the grid for a net profit.
Transfer switches keep the on-site generated energy separate from the grid entirely. The equipment only receives energy from one source at a time. Without a very steady source like geothermal or hydroelectric, this approach falls victim to the same reliability issues as the grid, if not worse, because there is not always a steady supply of solar or wind power.

When using renewable energy, data centers must choose between performance and green power. Battery technology is improving but still cannot store enough renewable energy to power a facility during extended periods of low generation. Alternatively, performance adjustments can be made to lower the power draw, but in an enterprise data center this is unlikely to be a real solution due to SLAs and the requirement for constant uptime.

Alternatively, companies can support increased renewable generation by the power companies operating the grid itself. Google and others have made large scale investments into wind farms. Renewable Energy Credits also support the development of renewable generation on the grid.

2) Migrating data center loads cross-country to avoid peak demand

Here is where things start to get crazy with smart grids and data center infrastructure management (DCIM). These new tools can help avoid brownouts or blackouts as well as peak demand, when energy rates increase dramatically. Hardware, software, sensors, and controls can be tightly integrated into data center operations and tied to the electric grid with programming and data center automation software. With real-time pricing and energy information from grid providers, this software can migrate entire data center loads geographically according to increasing and decreasing grid loads.

These tools still need development, but experiments have been performed as proof-of-concept. Studied data centers took about eight minutes to move their loads and reduced their energy use by 10%. Another study found energy decreases of up to 46% by moving data center loads. By dynamically moving workloads, overall demand is lower and energy cost is less. This improves grid reliability for everyone, not just data centers. The opportunity is greatest for non-critical loads, since critical loads are risky to move. Rescheduling routine backup and storage for off-peak hours is one method of demand reduction that can already be implemented in data centers.

3) Energy storage and micro-grids

Data center technology can give the grid a boost in other ways besides feeding it excess renewable energy or reducing peak demand. Energy storage is developing rapidly, enabling self-healing smart grids within a data center facility itself. These systems store large amounts of energy and constantly monitor the flow of electricity throughout the facility, allowing power to be rerouted during emergencies. Generators are still necessary but their use can be minimized.

Micro-grids and backup systems (even current UPS systems) can be combined with DCIM for frequency regulation or boosting power during peak demand. One can even imagine a scenario where data centers are sitting at a low internal load with fully charged battery systems, selling power back to the grid to meet peak demand. Many power utilities pay hourly for frequency regulation, enabling a new (though minor) revenue stream for data centers whose UPS systems and batteries are sitting idle the majority of the time. Of course, this has to be balanced with SLAs to ensure that unexpected outages don't cause downtime.
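To put rough numbers on the off-peak rescheduling idea, here is a small back-of-the-envelope sketch in Python. The workload size and the peak and off-peak tariffs are invented for illustration (they are not figures from the studies mentioned above), but the arithmetic shows why shifting deferrable work is attractive.

# Illustrative only: the tariff figures and workload numbers below are
# invented for the example, not taken from the studies cited above.

def energy_cost(load_kw: float, hours: float, rate_per_kwh: float) -> float:
    """Energy cost of running a constant load for a number of hours."""
    return load_kw * hours * rate_per_kwh

# A deferrable 500 kW batch workload (e.g. nightly backups) that runs for 4 hours.
LOAD_KW = 500.0
HOURS = 4.0

PEAK_RATE = 0.18      # $/kWh during peak demand (assumed)
OFF_PEAK_RATE = 0.07  # $/kWh overnight (assumed)

peak_cost = energy_cost(LOAD_KW, HOURS, PEAK_RATE)
off_peak_cost = energy_cost(LOAD_KW, HOURS, OFF_PEAK_RATE)
savings = peak_cost - off_peak_cost

print(f"Running at peak:  ${peak_cost:,.2f}")
print(f"Running off-peak: ${off_peak_cost:,.2f}")
print(f"Savings per run:  ${savings:,.2f} ({savings / peak_cost:.0%})")

Multiply a per-run saving like this across every deferrable job in a facility, every day of the year, and the appeal of demand-aware scheduling becomes obvious.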
These solutions may be a ways off from wide scale implementation, but it’s exciting to see how such a major consumer of electricity can actually help improve the generation methods and reliability of the larger grid. Instead of being power hogs, the data center industry can aim to help save electric infrastructure in the United States through innovative power management, onsite generation, and new storage technology. Posted By: Joe Kozlowicz
<urn:uuid:8c1b9e3b-4597-4153-9602-4eceb05bf12b>
CC-MAIN-2017-09
https://www.greenhousedata.com/blog/three-ways-data-centers-can-build-the-future-of-electricity-and-smart-grids
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170434.7/warc/CC-MAIN-20170219104610-00244-ip-10-171-10-108.ec2.internal.warc.gz
en
0.937201
1,453
2.90625
3
PHP is a general purpose scripting language widely used in World Wide Web (WWW) pages and applications. Two buffer overflows have been reported in the PHP session extension; the str_replace() function; and the imap_mail_compose() function. An attacker could pass a very long string to the str_replace() function and create an integer overflow in memory allocation. An attacker could use a script to create a new MIME message from an untrusted source with the imap_mail_compose() to cause a heap overflow. An attacker that exploited either of these vulnerabilities could possibly execute arbitrary code as the 'apache' user. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the name CVE-2007-0906 to these issues. An issue has been discovered where unserializing untrusted data on 64-bit platforms, the zend_hash_init() could potentially be forced into an infinite loop, causing consumption of CPU resources until the script timeout alarm halted the script. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the name CVE-2007-0988 to this issue. An issue has been identified in WDDX which could expose a random portion of heap memory. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the name CVE-2007-0908 to this issue. An issue has been identified with the odbc_result_all() function. An attacker with control of the database table contents could use a specially crafted string to execute arbitrary code. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the name CVE-2007-0909 to this issue. A one byte memory read always occurs before the beginning of a buffer. This could be triggered, for example, by any use of the header() function in a script. However it is unlikely that this would have any effect. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the name CVE-2007-0907 to this issue. Several flaws in PHP could allow attackers to "clobber" certain super-global variables via unspecified vectors. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the name CVE-2007-0910 to this issue. More information about these vulnerabilities can be found in the security advisories issued by RedHat Linux: |Product:||Affected Version(s):||Risk Level:||Actions:| |Avaya Communication Manager||CM 2.x, 4.0||None||Avaya Communication Manager uses versions of PHP which are not vulnerable to these issues.| |Avaya Messaging Storage Server||MSS 3.0||Low||Upgrade to MSS 3.1 to resolve this issue.| |Avaya CCS/SES||All||Low||Upgrade to SES 5.0 to resolve this issue.| |Avaya AES||AES 4.0||None||Avaya AES uses versions of PHP which are not vulnerable to these issues.| For all system products which use vulnerable versions of php, Avaya recommends that customers restrict local and network access to the server. This restriction should be enforced through the use of physical security, firewalls, ACLs, VPNs, and other generally-accepted networking practices until such time as an update becomes available and can be installed. Avaya software-only products operate on general-purpose operating systems. Occasionally vulnerabilities may be discovered in the underlying operating system or applications that come with the operating system. These vulnerabilities often do not impact the software-only product directly but may threaten the integrity of the underlying platform. 
In the case of this advisory Avaya software-only products are not affected by the vulnerability directly but the underlying Linux platform may be. Customers should determine on which Linux operating system the product was installed and then follow that vendor's guidance. |Product:||Affected Version(s):||Risk Level:||Actions:| |CVLAN||All||None||Depending on the Operating System provided by customers, the affected package may be installed on the underlying Operating System supporting the CVLAN application. The CVLAN application does not require the software described in this advisory.| |Avaya Integrated Management Suite(IMS)||All||None||Depending on the Operating System provided by customers, the affected package may be installed on the underlying Operating System supporting the IMS application. The IMS application does not require the software described in this advisory.| Avaya recommends that customers follow recommended actions supplied by RedHat Linux or remove the affected package. Additional information may also be available via the Avaya support website and through your Avaya account representative. Please contact your Avaya product support representative, or dial 1-800-242-2121, with any questions. ALL INFORMATION IS BELIEVED TO BE CORRECT AT THE TIME OF PUBLICATION AND IS PROVIDED "AS IS". AVAYA INC., ON BEHALF ITSELF AND ITS SUBSIDIARIES AND AFFILIATES (HEREINAFTER COLLECTIVELY REFERRED TO AS "AVAYA"), DISCLAIMS ALL WARRANTIES, EITHER EXPRESS OR IMPLIED, INCLUDING THE WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE AND FURTHERMORE, AVAYA MAKES NO REPRESENTATIONS OR WARRANTIES THAT THE STEPS RECOMMENDED WILL ELIMINATE SECURITY OR VIRUS THREATS TO CUSTOMERS' SYSTEMS. IN NO EVENT SHALL AVAYA BE LIABLE FOR ANY DAMAGES WHATSOEVER ARISING OUT OF OR IN CONNECTION WITH THE INFORMATION OR RECOMMENDED ACTIONS PROVIDED HEREIN, INCLUDING DIRECT, INDIRECT, CONSEQUENTIAL DAMAGES, LOSS OF BUSINESS PROFITS OR SPECIAL DAMAGES, EVEN IF AVAYA HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. THE INFORMATION PROVIDED HERE DOES NOT AFFECT THE SUPPORT AGREEMENTS IN PLACE FOR AVAYA PRODUCTS. SUPPORT FOR AVAYA PRODUCTS CONTINUES TO BE EXECUTED AS PER EXISTING AGREEMENTS WITH AVAYA. V 1.0 - March 26, 2007 - Initial Statement issued. V 2.0 - January 8, 2008 - Updated actions for several products and changed Advisory Status to "Final". Send information regarding any discovered security problems with Avaya products to either the contact noted in the product's documentation or [email protected]. © 2007 Avaya Inc. All Rights Reserved. All trademarks identified by the ® or ™ are registered trademarks or trademarks, respectively, of Avaya Inc. All other trademarks are the property of their respective owners.
<urn:uuid:13bb470f-95f5-4682-9536-29b1e35c17c9>
CC-MAIN-2017-09
https://downloads.avaya.com/elmodocs2/security/ASA-2007-136.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170708.51/warc/CC-MAIN-20170219104610-00420-ip-10-171-10-108.ec2.internal.warc.gz
en
0.853255
1,469
2.640625
3
During a murder investigation, sometimes all that surfaces are human bones. Without proper identification to verify who the deceased person is, it's up to forensic artists to re-create an image of what the person may have looked like while alive to help law enforcement identify the individual.

At the National Center for Missing and Exploited Children (NCMEC), forensic imaging specialists are tasked with re-creating an image of what a child may have looked like based on skull and bone remains that surface during police investigations. Since bones and, in some cases, a few articles of clothing are the only evidence that remain, the NCMEC assists law enforcement investigations by using the bones and skulls as a template for re-creating the image of the deceased's face. The hope is that by creating a 3-D picture of what the deceased may have looked like, someone will be able to recognize and identify the individual to authorities.

Joe Mullins, a forensic imaging specialist for NCMEC who teaches classes on facial reconstruction, performs this kind of computerized facial reconstruction based on skeletal remains. With the help of special 3-D imaging software, forensic artists factor in information from a skull's forensic anthropology report, like ancestry and age range, to start recreating what Mullins considers an ambiguous picture of the victim's face. "Art and science have to work together to come up with the correct face based on what that skull is telling you," Mullins said.

He said it's crucial that forensic artists leave room for ambiguity because any image reconstructed based on skeletal remains is not going to be 100 percent accurate. What is important is creating the image based strictly on facts. And some features are more obvious indicators, like gapped teeth or a crooked nose, he explained. When skeletal remains are broken or have pieces missing, it's more challenging for forensic artists to recreate an accurate image. But no matter what, Mullins said, forensic artists should have no artistic license and should base the image only on the information they have.

Mullins said that, unlike television shows like CSI, the images forensic artists create from skeletal remains are just projections based on the bones themselves. The public should keep in mind that they are not exact images of the deceased individuals.

Cold Case Help for the New Hampshire State Police

Recently the New Hampshire State Police began working with the NCMEC on a murder investigation involving four female victims. According to local media, two victims were found 15 years prior to the latter two. To speed up the facial reconstruction process, the NCMEC reaches out to hospitals to perform CT scans on the skulls and bones waiting to be identified. Mullins said after the CT scans are complete, images from the scan are sent to the NCMEC, where they are opened up in the software so that the facial reconstruction process can begin in a digital environment.

Prior to using a computerized method, the NCMEC used to create 3-D facial reconstructions by applying clay directly to the skull to recreate the face. Mullins said utilizing the technology can help complete the imaging process in as little as four days from the time the remains are scanned at the hospital.

Mullins said that so far, the hospitals the NCMEC has worked with have been very accommodating in completing the CT scans, and it's more logical to send bones to a hospital near an investigation rather than shipping them directly to the NCMEC in Alexandria, Va. – something that could cause the bones to get broken or lost in the mail. His hope is to build a bigger network of hospitals that will work with the NCMEC in the future. "It's like building a Rolodex of hospitals that can offer this service," Mullins said.
<urn:uuid:d08efe6c-e3e1-4bab-90fb-4cf309e827b3>
CC-MAIN-2017-09
http://www.govtech.com/technology/Facial-Reconstruction-Tech-Helps-Identify-Skeletal-Remains.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171632.91/warc/CC-MAIN-20170219104611-00296-ip-10-171-10-108.ec2.internal.warc.gz
en
0.954965
780
2.875
3
Online tracking is on the rise, but efforts to create a practical Do Not Track policy have slowed to a crawl. Meanwhile, users and browser companies are taking matters into their own hands.

How do you feel about your Web-browsing activity being tracked? During a visit to any given website -- including this one -- the average user's browser may execute a dozen or more tracking scripts, each with its own associated tracking cookie, stored on the user's computer. This enables website publishers and ad distribution networks to record a visitor's online activity and then serve up "interest-based" or "behaviorally targeted" ads -- customized messaging based on that activity.

The benefit to website producers is that targeted ads can be sold to advertisers at higher rates because, presumably, they will be more effective than the traditional banner ads that have long been used on websites. Ad networks generally do the tracking by placing a cookie on consumers' computers when they visit a participating publisher's website. The industry refers to these as "third-party cookies" because the ad network is a third party to the relationship between the user and website publisher. Users are typically unaware that they're being tracked -- and that has made the practice controversial.

There are disagreements even among those who depend on website advertising. While digital ad networks and many website publishers push forward with the practice, some publishers remain cautious. "They get more money from more targeted ads, but they also have brand [reputation] considerations," says Justin Brookman, director of consumer privacy at the Center for Democracy and Technology. He's also co-chair of the World Wide Web Consortium's (W3C) Tracking Protection Working Group, which is developing a Do Not Track (DNT) standard for the industry. "Do they want to be seen as enabling third party tracking?" Brookman asks. "They're a little more cautious around perceptions than are the third-party ad networks."

Here's how the practice of tracking affects both consumers and website publishers -- and what each side of the equation is doing to try to fix matters.

Whys and wherefores of Do Not Track

In 2011, Do Not Track (DNT) technology was introduced as a method to ensure user privacy. DNT is an optional browser feature that signals advertisers to not track the user's Web activity. It does this by sending an HTTP header with the syntax DNT:1 to every website the browser visits. The W3C working group was supposed to develop a standard to define what DNT means and how ad networks should respond, but made little progress for the first two years. So while the DNT signal was eventually adopted by most major browsers, many Web publishers and advertisers have been ignoring any privacy requests sent by the signal.

That has left consumers who don't want to be tracked with a more drastic option: turn on the third-party cookie blocking setting in the browser and install special browser add-on software that prevents tracking scripts from running (because not all tracking is cookie-based). It's not a complete solution, however.
Anti-tracking tools defend against tracking only by third-party advertising networks that deliver ads through the content publisher's website -- although the tools do block all third-party requests, whether from ad networks, social media or analytics companies. The tools don't prevent any tracking by a "first party" -- the publisher of the site or any affiliated advertising networks it owns.

Replacing the cookie

While cookies assign a unique identifier to a user's browser, they can't easily be used to track the user's activity across different devices or even across different browsers running on the same computer. New techniques, such as those recently disclosed by Facebook, Google and Microsoft, will assign a unique identifier to each type of device the user has and link those together to track activity across all of the devices the person uses. These new tracking mechanisms, if they catch on, could be used across each vendor's ecosystem -- and beyond. Other advertising networks have also been working with statistical identification methods -- browser and device "fingerprinting" techniques -- that don't require the presence of a cookie file.

Meanwhile, as user awareness has increased, so has the level of discomfort with the idea of having all of one's online browsing activity recorded -- particularly by third-party advertising networks that consumers don't know and with whom they have no relationship. And as the number of tracking scripts has increased, so has the bandwidth consumed when the user attempts to load the page. "Up to 26% of bandwidth goes to loading trackers," says Sarah Downey, privacy advisor at Abine, the distributor of a free anti-tracking add-on program called DoNotTrackMe. According to Downey, the percentage comes from a 2012 Web crawling exercise conducted by Abine.

"As the industry moves toward stealthier methods of tracking [such as device and browser fingerprinting], the only way we can reliably prevent tracking is to block entire requests," says Brian Kennish, co-CEO of Disconnect. Tools like Disconnect take the draconian step of blocking requests to third-party ad networks to deliver an ad when the user visits the site -- which means even a non-targeted ad can't be delivered to the user. In contrast, a universally accepted Do Not Track mechanism would still allow third-party advertising networks to substitute a contextually appropriate ad for a behaviorally targeted one (e.g., a game ad for users on a gaming site) rather than cutting off the request entirely. "We'd prefer a more subtle solution where we don't have to throw out the entire request," Kennish says.

"It's a very blunt tool. That's why we're trying to find a middle ground with Do Not Track," says the Center for Democracy and Technology's Brookman.

The DNT controversy

W3C formed the Tracking Protection Working Group in 2011. Its mission is "to improve user privacy and user control by defining mechanisms for expressing user preferences around Web tracking and for blocking or allowing Web tracking elements." But debate among the members of the organization -- which include privacy advocates, Web publishers, advertising networks and many others -- has been contentious, culminating last year with some well-publicized resignations on both the consumer and advertiser sides of the debate.
More recently, the group has been making slow progress on its Tracking Preference Expression standard, which determines the syntax and meaning of the DNT signal. This specification should be ready to be released this spring, according to Brookman. But that may turn out to be the easy part. The group still needs to agree on the Tracking Compliance and Scope specification, which deals with what actions ad networks must take to comply with the DNT request -- and that is still controversial, he says.

For the third-party advertising networks in particular, the DNT discussions represent a potential crisis. Eliminating all tracking is unfair, says Mike Zaneis, senior vice president of public policy at the Interactive Advertising Bureau (IAB), a trade organization for website publishers and online ad sellers; Zaneis is also the IAB representative to the W3C Tracking Protection Working Group. Advertisers increasingly pay based not on whether users view an ad but whether they respond to it. "You need a way to track user interactions, both on the publisher page and throughout the purchase process. This represents basic accounting and measurement practices for digital advertising," he says.

Not unexpectedly, privacy advocates disagree. "We don't want to break the Web," Abine's Downey says, but adds that users should have a choice as to whether to share -- and with whom. "The industry has created a default where you're followed wherever you go by hundreds of companies." And the information gathered isn't used to just deliver behaviorally targeted ads, she says, but can be used in other ways, resulting in lower credit scores, price discrimination on e-commerce sites based on your tracking profile or higher insurance premiums. (Downey keeps a running list of examples of such abuses.) "You don't have a say in any of this," she says. Users, she explains, should have a choice when it comes to tracking.

But they do have a choice, argues Zaneis. While no global Do Not Track program is available yet, many publishers and advertising networks allow users to opt out of interest-based advertising for individual sites and services. In addition, the Digital Advertising Alliance's Ad Choices program lets consumers opt out of receiving interest-based advertising from the trade group's 118 members, which include third-party ad networks. And when users opt out, he says, members also agree to stop tracking their online activity.

Is the W3C working group working?

What the W3C's working group was supposed to deliver is that global option -- a choice for users in the form of a universally recognized Do Not Track option that, when turned on, would enable the browser to communicate a Do Not Track signal to publishers and ad distribution networks. The browser vendors were to offer the feature and the working group was to develop the standards dictating what Do Not Track means and how advertisers should respond. All organizations would then be obligated to honor the user's request, following the specifications laid out by the working group. For instance, Brookman says, "you can't [manually] opt out of every single tracking company. You need a global opt out."

But the effort has bogged down. Since its founding, the working group's membership has ballooned to more than 100 voting participants that represent a wide range of competing constituencies -- including consumers, Web publishers, ad networks, browser vendors, ISPs, cable companies and others.
Until recently, the group hadn't even been able to agree on the basic definitions behind Do Not Track, says group member Mark Groman, president and CEO of the Network Advertising Initiative, a self-regulatory industry association that counts 95 advertising companies as members. "What does it mean to track -- or not track? What is a first party versus a third party?" And, he adds, does Do Not Track mean "don't gather any information on the user at all," or "don't deliver behaviorally targeted advertising based on that data"? Last fall, Groman says, they were still having discussions over how to define the words "collection" and "sharing." "That presents a real problem when you're trying to develop a standard," he says. "Instead of defining what we wanted to control, we delved right into the minutiae," says the IAB's Zaneis. But Brookman, who joined the group in 2011 and became co-chair in September, says the group finally has agreed upon definitions, including the terms "tracking," "collect" and "share." The group has "only a couple unresolved issues that we're working out in the technical document, and then we'll proceed to last call," which is the last opportunity for public input before the standard is approved, he says. "Perhaps those should have been nailed down earlier, but they are the first things we are settling under the new plan to move forward," he says. The gathering of some tracking data, such as screen resolution, IP address and referring URL, is required for the basic operation of the Web. But how much information is acceptable to users, and needed or just wanted by the advertisers who are funding commercial websites? "We're trying to walk through what is the least amount you can collect and retain while still allowing the third-party ad ecosystem to work," Brookman says. "We don't need to tell the Web server nearly so much as we do right now," says Jonathan Mayer, a Stanford University grad student and former working group member. "We can limit it to the bare bones required for the Internet to do its thing." Mayer has a strong bias against the retention of tracking data by third-party ad networks and has been at the center of some of the more contentious exchanges within the working group. "I don't want companies I've never heard of keeping track of where I go on the Web," he says flatly. "One side wants the cessation of data collection for any purpose. The other side wants the status quo. It's difficult to rectify those positions, particularly when those tend to be the loudest voices in the room," says Alan Chapell, president of Chapell & Associates, a consumer privacy law firm serving the advertising industry, and working group member.
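For readers curious what the DNT signal described earlier actually looks like on the wire, here is a minimal sketch of a server that reads it. It uses only the Python standard library; the port and response text are made up for the example, and honoring the header in a real ad stack would of course involve far more than this.

# A minimal sketch (not production code) of how a site could read the DNT
# signal. Header handling beyond DNT is deliberately simplified.
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # HTTP request headers arrive in WSGI as HTTP_<NAME>; "DNT: 1" becomes HTTP_DNT.
    dnt = environ.get("HTTP_DNT")
    if dnt == "1":
        body = b"DNT: 1 received - this server should not set tracking cookies."
    else:
        body = b"No DNT signal received - default tracking policy applies."
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]

if __name__ == "__main__":
    with make_server("127.0.0.1", 8000, app) as server:
        print("Listening on http://127.0.0.1:8000 ...")
        server.serve_forever()

A browser with its do-not-track preference enabled (or any client that adds the DNT: 1 request header) will hit the first branch; everything else falls through to the default. What a compliant ad network is then obliged to do with that signal is exactly the question the working group is still arguing over.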
<urn:uuid:f4f482e5-eb8f-4078-a1f5-d3797142432b>
CC-MAIN-2017-09
http://www.networkworld.com/article/2175737/byod/ad-tracking--is-anything-being-done-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174159.38/warc/CC-MAIN-20170219104614-00172-ip-10-171-10-108.ec2.internal.warc.gz
en
0.959298
2,632
2.53125
3
802.11ad Will Vastly Enhance Wi-Fi: The Importance of the 60 GHz Band to Wi-Fi's Continued Evolution
05 Apr 2016
Wi-Fi continues to evolve in the 2.4 GHz and 5 GHz bands, but as those bands get more crowded, the industry will increasingly look to IEEE 802.11ad, also known as WiGig, in the unlicensed 60 GHz band. 802.11ad provides multi-gigabit-per-second data rates and solves congestion issues by:
- Using ultra-wideband channel widths of 2.16 GHz
- Using the 60 GHz band instead of the 2.4 GHz or 5 GHz band
- Using beamforming to form narrow beams in 60 GHz spectrum, allowing other products to use even the same channel at the same time in many cases
- Being a part of the Wi-Fi ecosystem with tri-band solutions that can do handoffs between 60 GHz and the other Wi-Fi bands
The goal of 802.11ad is to address these congestion and capacity issues, resulting in an improved user experience.
<urn:uuid:260f3ca9-0aa4-43e8-916a-6da54f8709ce>
CC-MAIN-2017-09
https://www.abiresearch.com/whitepapers/80211ad-will-vastly-enhance-wi-fi/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174159.38/warc/CC-MAIN-20170219104614-00172-ip-10-171-10-108.ec2.internal.warc.gz
en
0.905686
224
2.625
3
A network interface card or NIC (also referred to as a network interface controller, network adapter, LAN adapter or LAN card) is how your computer connects to the wired network in your home or office. It is a physical and data link layer device that uses MAC addresses. The NIC is either plugged into a PCI slot inside the computer or built into the motherboard. Without it, there would be no interaction between the cord in the wall and your computer!

The network interface card is installed in an expansion slot of the computer. This card connects the computer to a network and contains the addressing information that identifies the computer on the network, along with the logic for sending and receiving data over the network. It adds a network port to the computer, and this port connects directly to the network. The NIC converts the computer's low-power internal signals into signals that can be transmitted over the network cable. A NIC's speed is measured in megabits per second (Mbps).

External LAN cards are larger and are placed in any PCI slot on the motherboard, except the PCI Express slot. Internal LAN cards, on the other hand, come integrated with your chipset. Internal LAN cards are integrated into the motherboard of your computer and generally provide higher transfer speeds on a network. Internal LAN cards require drivers to function, so if your internal LAN card stops working for any reason, you should first reinstall its drivers from a fresh download. If, however, that still doesn't fix the problem, you will need to buy an external network interface card. Make sure to buy a network interface card that offers the same transfer speeds as your internal LAN card. LAN cards usually support network transfer rates of 10, 100 or 1000 megabits per second. Depending on your requirements and network, you should choose a LAN card that will provide the optimal transfer rate for your network.

LAN cards are used mostly in Ethernet networks, and the connection they provide is assigned an IP address. This IP address is what identifies your computer's connection on the Internet or your local network. Every NIC has a unique MAC address, and no two NICs, even from different vendors, should have the same MAC address. A NIC may have twisted-pair, BNC or AUI sockets. One end of the network cable connects to the NIC and the other end connects to the hub or switch. The NIC provides full-time connectivity for data transmission. Sometimes computers fail to communicate with each other because of a malfunctioning NIC. A twisted-pair UTP/STP cable with an RJ45 connector is used to connect the computer to the hub or switch. Fiber optic cable or fiber patch cables can also be used to connect the computer to the hub or switch. A NIC can be wired or wireless, and it contains digital circuitry and a microprocessor. Before buying and installing a network interface card, make sure that it is compatible with the other network devices. There are many NIC vendors, such as D-Link, 3Com, Intel, Realtek, Baylan and FiberStore.
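If you want to see these NIC details on your own machine, the sketch below uses the third-party psutil library (one convenient option among many, not something required by the hardware) to list each adapter's MAC address, link speed in Mbps and link state.

# Requires: pip install psutil
import psutil

addrs = psutil.net_if_addrs()    # per-interface address list (incl. MAC)
stats = psutil.net_if_stats()    # per-interface link state, speed, MTU

for nic_name, addr_list in addrs.items():
    mac = next((a.address for a in addr_list if a.family == psutil.AF_LINK), "n/a")
    st = stats.get(nic_name)
    speed = f"{st.speed} Mbps" if st and st.speed else "unknown"
    state = "up" if st and st.isup else "down"
    print(f"{nic_name:<12} MAC={mac:<18} speed={speed:<10} link={state}")

On a gigabit internal LAN card you would typically expect to see a speed of 1000 Mbps reported while the link is up.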
<urn:uuid:442ab2b3-a4d2-4ce4-af4a-919e832cead5>
CC-MAIN-2017-09
http://www.fs.com/blog/performance-of-network-interface-cards.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171043.28/warc/CC-MAIN-20170219104611-00292-ip-10-171-10-108.ec2.internal.warc.gz
en
0.926498
608
3.328125
3
To continue on the topic of passwords: not only should you use a proper iteration count when implementing password hashing in code; the same thing also applies to password safe software such as KeePass. As strong passwords are a pain to remember, many people opt to use KeePass or other password managers, and then copy the password database to one sync service or another. Passwords are then available on all devices, whether a desktop, laptop, phone or tablet. However, this brings a potential problem. The password file is more likely to end up in the wrong hands if one of the devices is compromised or stolen, or if the sync service is hacked.

An obvious defense for this is to use a strong password on the password database file. But strong passwords are a pain to enter on a mobile phone, and so many people use shorter passwords than is wise. A password or passphrase of more than 14 characters is the proper way of doing things, but we all know that most people just won't do it.

One can mitigate the problem of a short password in mobile use by adjusting the key iteration count in the password manager configuration. Common wisdom is to set the iteration count so that it takes about one second to verify the password on the slowest device you are using. For example, in KeePass the default key derivation iteration count is 6,000. On a typical mobile phone you can get about 200,000 iterations per second. So by setting a proper key iteration count you make password cracking ~33 times more expensive for the attacker. Of course, adding one character to your password gives about the same protection, and adding two characters gives about 1,024 times better protection. But that is no reason to leave the key iteration count at a ridiculously low default value. (Here's KeePass on a Windows laptop, set to a value of 4,279,296.)

And a free tip to anyone who is developing a mobile password manager: the low CPU power of mobile devices seriously limits the key iteration count, which should properly be around 4-6 million instead of hundreds of thousands. So how about using the phone's GPU for password derivation? Using that, you could have a proper iteration count for key derivation, and you would have a more level playing field against password crackers that use GPU acceleration.
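To make the "about one second" rule of thumb concrete, here is a small sketch in Python. Note that KeePass's own key transformation is AES-based rather than PBKDF2, so this is not the same algorithm, only an illustration of the calibration idea: keep doubling the iteration count until a single derivation takes roughly a second on the device you are testing.

# Standard-library sketch: calibrate an iteration count so that one key
# derivation takes about a second on this machine.
import hashlib
import os
import time

def time_derivation(iterations, password=b"correct horse", salt=None):
    salt = salt or os.urandom(16)
    start = time.perf_counter()
    hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
    return time.perf_counter() - start

target_seconds = 1.0
iterations = 100_000
while time_derivation(iterations) < target_seconds:
    iterations *= 2    # double until a single derivation is slow enough

print(f"Use roughly {iterations:,} PBKDF2-SHA256 iterations "
      f"(~{time_derivation(iterations):.2f} s on this machine)")

Run the same script on your slowest phone or tablet and use that result, not the laptop figure, as the setting for the database.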
<urn:uuid:60f321d3-56f0-4985-bd69-fc637d97c5bb>
CC-MAIN-2017-09
https://www.f-secure.com/weblog/archives/00002382.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172783.57/warc/CC-MAIN-20170219104612-00344-ip-10-171-10-108.ec2.internal.warc.gz
en
0.91103
462
2.5625
3
Chang E.R.,University of Groningen | Veeneklaas R.M.,University of Groningen | Veeneklaas R.M.,Bosgroep Noord Oost Nederland Forest Support Group | Bakker J.P.,University of Groningen | And 3 more authors. Applied Vegetation Science | Year: 2016 Questions: How successful was the restoration of a salt marsh at a former summer polder on the mainland coast of the Dutch Wadden Sea 10 yr after de-embankment? What were the most important factors determining the level of restoration success? Location: Noard-Fryslân Bûtendyks, northwest Netherlands. Methods: The frequencies of target plant species were recorded before de-embankment and monitored thereafter (1, 2, 3, 4, 6 and 10 yr later) using permanent transects. Vegetation change was monitored using repeated mapping 14 yr before and 1, 7 and 10 yr after de-embankment. A large-scale factorial experiment with 72 sampling plots was set up to determine the effects of distance to a breach point, distance to a creek and grazing treatment on species composition. Abiotic data were also collected from the permanent transects and sampling plots on elevation, soil salinity and redox potential. Results: Ten years after de-embankment, permanent transect data showed that 78% to 96% of the target species were found at the restoration site. Vegetation mapping, however, showed that the diversity of salt marsh communities was low, with 50% of the site covered by the secondary pioneer marsh community. A multivariate analogue of ANOVA indicated that the most important experimental factor determining species composition was the interaction between distance to the nearest creek and livestock grazing. The combination of proximity to a creek and exclusion from livestock grazing always resulted in development of the high marsh community. In contrast, the combination of being located far from a creek, grazed and situated at low elevation with accompanying high salinity resulted in development of the secondary pioneer marsh community. Conclusions: Using target species as criteria, restoration success could be claimed 10 yr after de-embankment. However, the diversity of communities in the salt marsh was lower than desired. Variable grazing regimes should be applied to high-elevation areas to prevent dominance by single species of tall grasses and to promote formation of vegetation mosaics. Low-elevation areas need lower grazing pressure. Also, an adequate soil drainage network should be preserved or constructed in low-elevation areas before de-embankment. © 2016 International Association for Vegetation Science. Source Veenklaas R.M.,University of Groningen | Veenklaas R.M.,Bosgroep Noord Oost Nederland Forest Support Group | Koppenaal E.C.,University of Groningen | Bakker J.P.,University of Groningen | And 2 more authors. Journal of Coastal Conservation | Year: 2015 Salt marshes provide an important and unique habitat for plants and animals. To restore salt marshes, numerous coastal realignment projects have been carried out, but restored marshes often show persistent ecological differences from natural marshes. We evaluate the effects of elevation and marsh topography, which are in turn affected by drainage and livestock grazing, on soil salinity after de-embankment. Salinity in the topsoil was monitored during the first 10 years after de-embankment and compared with salinity in an adjacent reference marsh. 
Additionally, salinity at greater depths (down to 1.2 m below the marsh surface) was monitored during the first 4 years by measuring the electrical conductivity of the groundwater. Chloride concentration in the top soil strongly decreased with increasing elevation; however, it was not affected by marsh topography, i.e. distance to creek or breach. Chloride concentrations higher than 2 g Cl−/litre were found at elevations below 0.6 m + MHT. Salinization of the groundwater, however, took several years. At low marsh elevations, the salinity of the deep groundwater (at 1.2 m depth) increased slowly throughout the full 4-year period of monitoring but did not reach the level of seawater. Compared to the ungrazed treatment, the grazed treatment led to lower accretion rates, lower soil-moisture content and higher chloride content of soil moisture. The de-embankment of the agricultural grasslands resulted in a rapid increase of soil salinity, although deeper ground-water levels showed a much slower response. Elevation accounted for most of the variation in the salinization of the soil. Grazing may enhance salinity of the top soil. © 2015 The Author(s) Source Bos D.,Altenburg and Wymenga Ecological Consultants | Bos D.,University of Groningen | Boersma S.,Fryske Feriening foar Fjildbiology FFF | Engelmoer M.,Fryske Feriening foar Fjildbiology FFF Op Dijksman | And 5 more authors. Journal of Coastal Conservation | Year: 2014 In this study we evaluate the effect of coastal re-alignment on the utilisation of coastal grasslands by staging geese. We assessed vegetation change and utilisation by geese using repeated mapping and regular dropping counts in both the restored marsh and adjacent reference sites. All measurements were started well before the actual re-alignment. In addition, we studied the effects of livestock grazing on vegetation and geese, using exclosures. The vegetation transformed from fresh grassland into salt-marsh vegetation. A relatively large proportion of the de-embanked area became covered with secondary pioneer vegetation, and the overall cover of potential food plants for geese declined. Goose utilisation had initially dropped to low levels, both in autumn and in spring, but it recovered to a level comparable to the reference marsh after ten years. Exclosure experiments revealed that livestock grazing prevented the establishment of closed swards of grass in the poorly drained lower area of the restored marsh, and thereby negatively affected goose utilisation of these areas during spring staging. Goose grazing in the restored marsh during spring showed a positive numerical response to grass cover found during the preceding growing season. (1) The value of restored salt marsh as foraging habitat for geese initially decreased after managed re-alignment but recovered after ten years. (2) Our findings support the idea that the value of foraging habitats depends largely on the cover of forage plants and that this can be manipulated by adjusting both grazing and drainage. © 2014 Springer Science+Business Media Dordrecht. Source
<urn:uuid:9115fdc1-41c4-4601-b2a7-72b4841d92bb>
CC-MAIN-2017-09
https://www.linknovate.com/affiliation/bosgroep-noord-oost-nederland-forest-support-group-1567802/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172783.57/warc/CC-MAIN-20170219104612-00344-ip-10-171-10-108.ec2.internal.warc.gz
en
0.931511
1,397
2.609375
3
Twitter and Flickr, along with remote sensor data, can be used to identify flooded areas, a team of university researchers say. It's faster than using publicly available satellite images on their own. That imaging can sometimes take days to become available, the researchers say. It's also easier to identify the flooded streets.

Algorithms are the key to making all the data work together, the scientists reckon. A computer can learn what is and what isn't water in a flood, for example. It does this by analyzing publicly posted images and thousands of public tweets and posts generated during urban flooding incidents. Satellite analysis, the former method, becomes secondary.

As an experiment, the team of scientists from Penn State, the University of Wisconsin, and other groups analyzed 2013 flooding in Colorado and found 150,000 tweets from people affected. They then processed those tweets with an existing tool called CarbonScanner and found "clusters of posts," they say in their press release on Penn State's website. CarbonScanner analyzes tweet hashtags and matches their locations onto a map. Those clusters implied damage.

The team then looked at over 22,000 images from around the area with another tool. This was one that they had developed themselves. It uses a "machine learning algorithm that automatically analyzes several thousand images," the website says. "It allowed them to quickly identify individual pixels in images that contained water," the report continues. The raw imagery that they used was "obtained through satellites, Twitter, Flickr, the Civil Air Patrol, unmanned aerial vehicles and other sources," they say.

The computer successfully figured out where there was water. "We looked at a set of images and manually selected areas that we knew had water and areas that had no water" in writing the algorithm, says Elena Sava, one of the graduate students. "Then, we fed that information to the algorithm we had developed, and it allowed the computer to 'learn' what was and wasn't water," she added.

The names of rivers and streets in the tweets, along with remarks about how the individual tweeter couldn't get home, and so on, were giveaways of flooding in the social media data. That was combined with the patches of water discovered with the machine learning algorithm. The result was better than simple satellite data, the researchers think.

Satellite didn't show floods

But it's not just that the results could be produced in a more timely manner. "If you look at satellite imagery, downtown Boulder showed very little flooding," said one of the professors quoted on the website. "However, by analyzing Flickr and Twitter data, we could find several cues that many areas were underwater," he says. The combination produced the results. Weather in 2013 produced 17 inches of rain over nine days in parts of Boulder, almost a year's worth, the press release says.

The Penn State et al. studies may be just the beginning of using everyday Internet activity, such as Twitter data, in future real-time analysis of disasters. Interestingly, during monster snowstorm Jonas, while most traffic remained the same, FaceTime traffic throughout last Saturday was double what it was on the previous, non-storm weekend, according to Sandvine, an Internet traffic analyst. There's intelligence in that traffic spike alone.
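The pixel-labeling approach the researchers describe can be sketched with off-the-shelf tools. The example below is not their software; it is a toy illustration using scikit-learn and NumPy, with made-up RGB values standing in for the manually selected water and non-water regions.

# Toy sketch of "learn what is and isn't water" from labeled pixels.
# Requires: pip install numpy scikit-learn
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Training data: rows are (red, green, blue) pixel values scaled to [0, 1];
# labels are 1 for water, 0 for not-water. In practice these would come from
# the manually selected image regions described above.
X_train = np.array([
    [0.10, 0.25, 0.45], [0.15, 0.30, 0.55], [0.20, 0.35, 0.60],  # water-ish
    [0.40, 0.55, 0.20], [0.60, 0.50, 0.35], [0.75, 0.70, 0.65],  # land-ish
])
y_train = np.array([1, 1, 1, 0, 0, 0])

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

# Classify every pixel of a (height, width, 3) image array.
image = np.random.rand(4, 4, 3)          # stand-in for a real aerial photo
pixels = image.reshape(-1, 3)
water_mask = clf.predict(pixels).reshape(image.shape[:2])
print(f"Estimated flooded fraction: {water_mask.mean():.0%}")

A real system would use many more labeled pixels, additional spectral bands and spatial features, but the basic train-then-classify loop is the same idea the team describes.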
<urn:uuid:54dae6ab-643d-4d02-906e-b1ca109a3997>
CC-MAIN-2017-09
http://www.networkworld.com/article/3026968/internet/machine-learning-social-media-data-help-spot-flooded-areas.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169769.33/warc/CC-MAIN-20170219104609-00113-ip-10-171-10-108.ec2.internal.warc.gz
en
0.965548
710
3.6875
4
IP Version 6 Transition Mechanisms
Today businesses are in the midst of transitioning from IP version 4 to IP version 6. This is not a process that happens overnight - there is almost always a period of migration and/or coexistence when installing new components into an infrastructure. This has led to the development of mechanisms designed to phase the protocol into existing networks. What are these various mechanisms? How does each of them work? What makes some more desirable than others? Find the answers to these questions by discovering the advantages and disadvantages of the following three mechanisms:
- Dual stack: one of the very first approaches, which allows IPv4 and IPv6 to run simultaneously on network devices (illustrated in the sketch below)
- Network Address Translation (NAT) Protocol: uses traditional NAT dynamics to translate between the two protocols
- Tunneling: makes use of a configured tunnel to transport IPv6 over a native IPv4 network, which may consist of two or more sites
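As a small, hedged illustration of the dual-stack idea on a single host (not a substitute for the network-wide designs such a resource would cover), the following Python sketch opens one listening socket that accepts both IPv6 and IPv4 clients. The port number is arbitrary and the IPV6_V6ONLY socket option is platform-dependent.

# Dual-stack listener: IPv4 clients appear as ::ffff:a.b.c.d mapped addresses.
import socket

server = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
# 0 = also accept IPv4 connections on this IPv6 socket (where the OS allows it).
server.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
server.bind(("::", 8080))
server.listen(5)
print("Listening on [::]:8080 for IPv6 and IPv4-mapped clients")

conn, addr = server.accept()
print("Connection from", addr)   # e.g. ('::ffff:192.0.2.10', 54321, 0, 0)
conn.close()
server.close()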
<urn:uuid:34a8f9a3-1678-467b-8ade-83e5f0542b8f>
CC-MAIN-2017-09
http://www.bitpipe.com/detail/RES/1357306703_135.html?asrc=RSS_BP_TERM
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171053.19/warc/CC-MAIN-20170219104611-00641-ip-10-171-10-108.ec2.internal.warc.gz
en
0.928159
194
2.546875
3
Packet sniffing is a technique of monitoring network traffic. It is effective on both switched and non-switched networks. In a non-switched network environment packet sniffing is an easy thing to do. This is because network traffic is sent to a hub which broadcasts it to everyone. Switched networks are completely different in the way they operate. Switches work by sending traffic to the destination host only. This happens because switches have CAM tables. These tables store information like MAC addresses, switch ports, and VLAN information.

Before a host sends traffic to another host on the same local area network, it first checks its ARP cache. The ARP cache is a table that stores both Layer 2 (MAC) addresses and Layer 3 (IP) addresses of hosts on the local network. If the destination host isn't in the ARP cache, the source host sends a broadcast ARP request looking for the host. When the host replies, the traffic can be sent to it. The traffic goes from the source host to the switch, and then directly to the destination host. This description shows that traffic isn't broadcast out to every host, but only to the destination host, so it's harder to sniff traffic.

This paper discusses several methods that result in packet sniffing on Layer 2 switched networks. Each of the sniffing methods will be explained in detail. The purpose of the paper is to show how sniffing can be accomplished on switched networks, and to understand how it can be prevented. Download the paper in PDF format here.
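As a small, hands-on illustration of the ARP mechanics described above (not something taken from the paper itself), the sketch below uses the third-party Scapy library, run with administrator or root privileges, to watch ARP requests and replies on the local segment.

# Requires: pip install scapy, and privileges to open a raw socket.
from scapy.all import ARP, sniff

def show_arp(packet):
    if packet.haslayer(ARP):
        op = "request" if packet[ARP].op == 1 else "reply"
        print(f"ARP {op}: {packet[ARP].psrc} ({packet[ARP].hwsrc}) "
              f"-> {packet[ARP].pdst}")

# Capture 10 ARP packets from the local segment and print who is asking for whom.
sniff(filter="arp", prn=show_arp, count=10)

Watching this output on a quiet switched LAN shows exactly the request/reply exchange that the sniffing methods in the paper manipulate.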
<urn:uuid:6309fb89-f4c3-4367-a816-0d875834171f>
CC-MAIN-2017-09
https://www.helpnetsecurity.com/2003/12/15/packet-sniffing-on-layer-2-switched-local-area-networks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171281.53/warc/CC-MAIN-20170219104611-00165-ip-10-171-10-108.ec2.internal.warc.gz
en
0.927313
319
3.40625
3
I think it’s fair to say that Google has done some pretty innovative things. They seem to be a smart and focused bunch of people. Not surprisingly, companies with talented, focused people can be quite innovative. When you throw billions of dollars of funding behind them, the results can be pretty impressive. Recently, Google’s use of recycled water as a part of their data center operations created a lot of press. A blog post from their Facilities Manager, Jim Brown, revealed that Google was using recycled water to cool 100% of their data center here in Georgia. The use of recycled water for cooling is pretty smart. Data centers are huge consumers of resources. Unless they are built and managed intelligently, they can have significant environmental impacts. Using recycled water allows a data center provider or colocation facility to avoid placing additional strain on the local water supply because recycled water has already been used once and has not yet been returned to the environment. According to the blog, Google intercepts recycled water from the local water authority, treats it further and uses it for heat exchange in its data center cooling. A portion of the water that is used evaporates. The rest of the water is treated again and returned to the environment in a clean, clear and safe form. Google has done something truly impressive here, both by providing a clever solution that addresses an environmental concern associated with its data centers and by increasing awareness of intelligent options to lessen the environmental impact of data center operations. And of course, it’s nice to see another group of smart, focused data center individuals coming to the same conclusion we did around recycled water use. Internap has been using recycled water to cool its data center in Santa Clara, CA since it opened in 2010. While we knew it was making a difference for us, the third-party validation from Google sure feels good. What other green practices are you interested in? Visit our SlideShare page and download our presentation from the recent Green Data Center Conference in Dallas to learn more.
<urn:uuid:6bc8eb60-ac00-4cf9-99a5-b8889bff4129>
CC-MAIN-2017-09
http://www.internap.com/2012/04/12/recycled-water-good-for-google-too/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171670.52/warc/CC-MAIN-20170219104611-00341-ip-10-171-10-108.ec2.internal.warc.gz
en
0.961337
406
2.828125
3
A malicious program that secretly integrates itself into program or data files. It spreads by integrating itself into more files each time the host program is run. Once detected, the F-Secure security product will automatically disinfect the suspect file by either deleting it or renaming it. Detailed instructions for F-Secure security products are available in the documentation found in the Downloads section of our Home - Global site. You may also refer to the Knowledge Base on the F-Secure Community site for further assistance. Duts is a parasitic file infector virus. It is the first known virus for the PocketPC platform. Duts affects ARM-based devices only. Duts is a 1520-byte program, hand-written in assembly for the ARM processor. When an infected file is executed the virus asks for permission to infect: WinCE4.Dust by Ratter/29A Dear User, am I allowed to spread? When granted the permission, Duts attempts to infect all EXE files in the current directory. Duts only infects files that are bigger than 4096 bytes and have not been infected yet. As an infection marker the virus writes the string 'atar' to the Windows Version field of the EXE header. The infection routine is fairly simple. The virus body is appended to the file and the last section is made readable and executable. The entry point of the file is set to the beginning of the virus code. Duts contains two messages that are not displayed: - This is proof of concept code. Also, i wanted to make avers happy. - The situation when Pocket PC antiviruses detect only EICAR file had to end ... Another string, also not displayed, is a reference to the science-fiction book Permutation City by Greg Egan, which is where the virus got its intended name: - This code arose from the dust of Permutation City
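A speculative detection sketch follows. The description says infected files are larger than 4096 bytes and carry the marker 'atar' in the Windows Version field of the EXE header; the sketch assumes that field corresponds to the two operating-system-version words of the PE optional header (an assumption on my part, not something stated in the advisory) and uses the third-party pefile module.

import os
import struct
import pefile

def looks_infected(path):
    # Duts is said to skip files of 4096 bytes or less.
    if os.path.getsize(path) <= 4096:
        return False
    pe = pefile.PE(path, fast_load=True)
    # Assumed mapping: "Windows Version field" -> OS version words of the optional header.
    marker = struct.pack("<HH",
                         pe.OPTIONAL_HEADER.MajorOperatingSystemVersion,
                         pe.OPTIONAL_HEADER.MinorOperatingSystemVersion)
    return marker == b"atar"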
<urn:uuid:a6e3a564-06b6-4f60-ae1c-4279bab3a2ff>
CC-MAIN-2017-09
https://www.f-secure.com/v-descs/dtus.shtml
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171971.82/warc/CC-MAIN-20170219104611-00517-ip-10-171-10-108.ec2.internal.warc.gz
en
0.911846
385
2.578125
3
The U.S. Environmental Protection Agency (EPA) announced Oct. 11 that it awarded $30 million for clean diesel projects across the country. The funding is part of the Diesel Emission Reduction Program (DERA) and State Clean Diesel Grant Program, which are intended to replace or retrofit old diesel-powered engines like those used by marine vessels, locomotives, trucks and buses. "We are pleased EPA is supporting clean diesel projects with this important funding," said Allen Schaeffer, executive director of the Diesel Technology Forum. "DERA has been one of the most bipartisan and successful clean air programs in the past decade. The combination of new clean diesel technology and ultra-low sulfur diesel fuel has helped to reduce diesel emissions to near zero levels for new buses, trucks and off-road equipment. Now the older engines that continue to power our economy will also benefit from the upgraded engines and filters provided by DERA." One of the reasons for the success of clean diesel programs is the support they receive from both political parties, Schaeffer said. “EPA has found that $1 in government investment returns $13 worth of health and environmental benefits to the American people,” he said. However, according to Wikipedia, low-sulfur diesel has a lower energy content than standard diesel fuel, giving it a lower fuel economy and the manufacturing process requires a more costly grade of oil. Regardless, low-sulfur diesel allows lower emissions and adoption of such fuel has steadily increased in Europe over the past few years and now its use continues to spread in the U.S. "There are an estimated 11 million existing older diesel engines and equipment that do not have the most recent clean diesel technology, which has reduced emissions by 97 percent. The U.S. needs a two-fold approach based on a solid economic plan that gets the nation's contractors and truckers to invest in the new generation of the cleanest and most fuel efficient diesel engines ever made,” Schaeffer said.
<urn:uuid:b4228aa5-e6f1-42c1-8a25-0112cd2fd758>
CC-MAIN-2017-09
http://www.govtech.com/transportation/EPA-Promotes-Clean-Diesel-With-30-Million.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170864.16/warc/CC-MAIN-20170219104610-00161-ip-10-171-10-108.ec2.internal.warc.gz
en
0.965681
410
2.765625
3
You have probably heard the term ‘Adware’. This term describes advertising-supported software, which seeks to make a profit out of commercial advertisements. There is a fine line between legitimate advertising, illegal advertising and annoying advertising. Nonetheless, the fact is that the majority of free programs and apps are supported by advertising. That is how the developers of such programs make their revenue. However, there is also another term that describes an application for smartphones and tablets that aggressively promotes various products and generates intrusive advertisements. Such programs are labeled ‘Madware’ (adware that was developed entirely for smartphones/tablets). Juniper Networks (Network Security and Performance) has recently released an interesting report about mobile threats. It looks like smartphones and tablets are becoming the main target for cyber criminals. A drastic increase has been observed (approximately 615%, from 2012 to 2013) in cyber threats to smartphones/mobile phones and tablets. Juniper Networks managed to examine around 1.85 million mobile apps and spotted more than 276,200 malicious or hazardous apps. It was estimated that almost every single person in the world who has a mobile phone has, at some point, received a so-called SMS Trojan. This trojan is a very primitive form of online scam. Usually, you may get an SMS/MMS from some questionable phone number. If you reply or call the given number, you may get charged an enormous amount of money. That is how this scam works. As more and more people start to use smartphones and tablets, cyber criminals, scammers and hackers are not standing still, and they are also trying to keep up with changing technologies. Thus, keeping that in mind, these security tips should be helpful for people who are using smartphones, mobile phones or tablets: - Avoid questionable SMS/MMS This tip should be useful for all mobile phone users, even if you are still using an old phone. Do not reply to questionable messages from unknown numbers. You should also avoid calling such numbers. If you do so, there is a high possibility that by the end of the month, you will receive an enormous phone bill. - Avoid opening spam emails It is possible to get a virus, trojan, potentially unwanted program or malware when opening corrupted spam email attachments. That is why you should be very careful when opening unfamiliar emails. - Lock your phone Use a 4-digit PIN code in order to protect your SIM card. On top of that, protect your phone with a different code, voice unlock, fingerprint or a similar protection measure. - Keep your OS updated Whether you are using an Android, Apple or a different device, you should keep your OS updated. Older versions of software may have vulnerabilities and flaws that new and updated versions should cover. - Carefully choose what apps to install As we have mentioned above, there are many applications that may try to initiate unwanted activities behind your back, such as tracking your online browsing habits, your location, etc. You shouldn’t blindly allow unfamiliar apps to track your location, access your personal information or access your photos. - Online shopping Avoid using questionable free apps for online shopping. There is no telling what information may be recorded if you use an unsafe app.
If you are using your smartphone for online shopping, it is better to use a basic internet browser (Internet Explorer, Google Chrome, Apple Safari, Opera, Mozilla Firefox, etc.). - Social networks The same rule applies to social networks as to online shopping. Avoid using unfamiliar apps in order to browse social networks. Your private information may be recorded and even used for various scams. - Valuable files and documents You should avoid keeping your credit card numbers, pictures of your scanned passport, or passwords (in text or as a photo file) on your phone. It is highly recommended not to keep such information on your phone, no matter whether you keep your private/valuable information in text files or have taken a picture of your banking codes. Keep in mind that you may lose your phone, your phone may be stolen in the street by some burglar, or your phone may get hacked by cyber crooks. - Wi-Fi connection If the Wi-Fi option is turned on, smartphones, tablets and even laptops are always scanning nearby areas and looking for new connections. You may accidentally connect to an unsafe Wi-Fi network and expose your device to cyber criminals. If you are not using Wi-Fi and are not connected to a network, you should switch it off. - Bluetooth connection The same rule applies to the Bluetooth connection as to the Wi-Fi connection. - Browsing history It is recommended to clean your browsing history (at least once a month) in order to remove various cookies, tracking beacons and similar files. - Questionable ads Ads can be as dangerous as madware, potentially unwanted programs, dubious apps and even malware. Some cyber criminals may use ads in order to get your attention and make you click them. Right after that, you may end up on an unsafe website and expose your device to various cyber threats. Such tricky ads may include promotions, notifications about prizes that you have allegedly won and even fake updates. - Security tool It is recommended to use a legitimate security tool that will protect your smartphone/tablet. Do some research and find the best tool that suits your needs. However, be careful and try not to download spyware instead of a legitimate security program. As we have mentioned before, free programs are usually not the best solution if you are looking for a reliable program.
<urn:uuid:4d731731-8e54-442d-987b-aa7e5f8dceee>
CC-MAIN-2017-09
http://www.2-spyware.com/news/post4435.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171416.74/warc/CC-MAIN-20170219104611-00513-ip-10-171-10-108.ec2.internal.warc.gz
en
0.938561
1,160
2.796875
3
Strong Pulse By Wendy Wolfson | Posted 09-01-2004 Foodborne diseases kill more and more people every year, yet they do not get the attention or funding showered on more exotic contagious diseases such as smallpox and anthrax, which strike only a few people per year, but do have the potential to be used as bioterror agents. In 1997, a year the Centers for Disease Control and Prevention uses as an indicator, an estimated 76 million Americans contracted a foodborne infection. One in four got sick enough to miss work; 325,000 were hospitalized; 5,000 died. Food poisoning is like a stealthy serial killer. An outbreak that lasts over several months and across different states can be very difficult to detect. But if epidemiologists and public health labs could use computers to identify, compare and track bacterial strains back to their source quickly, the chances of bringing effective remediation efforts to bear on the outbreak would be much improved. So in 1996, a group of entrepreneurial CDC public health doctors began an effort called PulseNet, on a shoestring $150,000 budget. PulseNet is now the CDC's primary surveillance system for foodborne disease. Each main foodborne pathogen (such as salmonella, shigella, E. coli, listeria and campylobacter) has thousands of variants. The probability that two people will get sick from the same bug but contract it from different sources is very low. In forensic terms, it is like two bullets with the exact same markings coming out of two different gun barrels. PulseNet "fingerprints" bacterial DNA, comparing it with other samples collected in its central database. If a match is found, the information is shared over a client-server network linked to state and local public health departments across the country, as well as the U.S. Department of Agriculture and the Food and Drug Administration. Outbreak information is also posted to a WebBoard where there is constant dialogue. Thanks to PulseNet, improved detection has been linked to lowered rates of most of the main foodborne diseases in the U.S. Now, PulseNet is being replicated in other countries around the world, including Canada, Europe, South America and Asia, at their request. Yet PulseNet's current operating budget is estimated at under $15 million a year, and that covers all 50 participating state and local labs, the FDA and USDA, as well as the CDC. Not a bad return on investment.
<urn:uuid:f52e18e1-73c0-4ad4-9a18-8b02ac7b4549>
CC-MAIN-2017-09
http://www.cioinsight.com/print/c/a/Past-News/Strong-Pulse
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171416.74/warc/CC-MAIN-20170219104611-00513-ip-10-171-10-108.ec2.internal.warc.gz
en
0.950456
500
3.046875
3
In four short years, we will all experience the unique transition of one millennium to another. On a personal level, no real impact will be felt for most of us. Life will go on as usual. Unless, of course, you earn a living as an information technology (IT) professional. Analyst estimates indicate 20 percent of business applications will fail because of invalid date computations in 1995 (0.6 probability), and that without corrective measures, this number will increase to more than 90 percent by 1999 (0.8 probability). The issue will also not be solved via procrastination. Unlike other vexing IT issues of the 1960s and 1970s, analysts agree no panacea is forthcoming from the vendor community. Therefore, the year 2000 date change creates a major problem for all government enterprises with computerized information systems that want to avoid the specter of government operations lurching to a halt as systems fail when the clock strikes midnight on Dec. 31, 1999. ORIGIN OF THE CRISIS Put simply, the problem is the absence from almost all software of the two-digit century value within a date field that distinguishes dates as either 19xx or 20xx. For example: Birth year: 1954 Age in 1999 is: 99 - 54 = 45 Age in 2000 is: 00 - 54 = -54 Correct calculation should be: 2000 - 1954 = 0046 The problem was created by limitations of earlier technology and the historically higher cost of storing information. In the 1960s, the dominant method of entering data into a system was the 72-character key-punched card (with eight characters reserved for control information). This encouraged a data-entry strategy that economized on characters because of the limited space. Additionally, early databases (e.g., IMS) were designed based on hierarchical structures and optimized for transaction-based processing, so the relative dates of a transaction were stored in the same database record as the other transaction data. This translated into a savings of two bytes per date by not storing the century value when a date was stored, and was a tremendous cost saver at the time. The problem then spread into application code because applications are designed based on data, and the data was stored without the two-digit century value. Even when organizations migrated to relational database management systems, the date data were not always modified to include the century, and due to costs the applications were rarely modified or upgraded. The emphasis was on moving the data to a more responsive storage management system, not on date formats. It is clear that business pressures and horizons drive awareness and allocation of budgets. This "truth" made it virtually impossible for applications development organizations to focus on the year 2000. While the legislature or other governing boards -- along with program areas and/or business units -- were stressing requirements that had to be met within the next six to 12 months for survival of the agency/department, the year 2000 was beyond that horizon and therefore not a concern. Increasingly, however, the year 2000 will fall within critical time horizons as applications begin to fail more frequently during the next few years. SOLUTIONS WILL BE COSTLY AND TIME CONSUMING The stakes are high, and solutions will be costly -- both in time and dollars. Yet the consequences of not responding could be far more damaging. Worse yet, the vast majority of agencies and departments are not geared up to solve the problem. So what is the solution?
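The two-digit date arithmetic above is easy to reproduce; the short sketch below is added here for illustration and is not part of the original column.

def age_two_digit(birth_yy, current_yy):
    # How many legacy systems computed age: both years stored as two digits.
    return current_yy - birth_yy

def age_four_digit(birth_year, current_year):
    return current_year - birth_year

print(age_two_digit(54, 99))        # 45  -> correct in 1999
print(age_two_digit(54, 0))         # -54 -> the year-2000 failure
print(age_four_digit(1954, 2000))   # 46  -> what the calculation should yield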
Let's look at the experiences of one organization trying to prepare for the year 2000. The organization is a multi-national private service company, with annual revenues of $15 billion. Though it may not seem like this organization correlates to government, it does have many similarities: multiple offices spread across geography, multiple lines of business (a.k.a. different departments), and it delivers a service, not a product. I will reference it as Company X. Company X recently completed its analysis of resources required to solve the year 2000 issue. It determined that over the next four years it would spend nearly $400 million, with 75 percent of this going to labor! A few footnotes to this staggering figure are important: This investment is two and a half times more than Company X spends annually on new IT functionality, yet no new functionality will be generated by this project. No interim resources will be available during the project period. Given that this project will be huge in scope, it will require all of IT's best people. Once again, big dollars but no new functionality. The project will replace or reengineer all applications. Half the investment will go toward application work-arounds and bridges that will be thrown away later. Roughly one-third of the money will go toward application packages that will be used as is (no time for customization). Client's analysis indicates no vendor's product or service could meet their needs, i.e. "no salvation through technology." Company X will cut-over to new code one year before deadline (or 12/31/99) to allow for trouble-shooting. This means they have only 36 months to solve the problem! Company X realized the life-threatening nature of this issue (life-threatening to the corporation, that is), and convinced executive management to invest hundreds of millions of dollars on a project that will offer no new functionality. Government IT executives need to gain the same degree of management support within their jurisdictions to move forward with a solution. This challenge will prove particularly onerous in an environment of constant cost-cutting. Yet the costs to government may indeed be greater than the private sector given potential litigation costs the private sector often does not have to contend with. The bottom line is, although good business and technical decisions were made in the past based on the state of business and technology at the time, steps must be taken to ensure that application and data assets are secure. To minimize exposure to the year 2000 crisis, IT organizations must begin immediately to analyze their application portfolios, assess the extent of the problem and begin budgeting, planning for and implementing the potentially extensive corrective measures that will be required. Ian D. Temple is Program Director of GartnerGroup's IT Executive Program for Government.
<urn:uuid:457a4a43-c39a-4d76-a6c3-b89493a7975c>
CC-MAIN-2017-09
http://www.govtech.com/magazines/gt/Managing-Technology-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173405.40/warc/CC-MAIN-20170219104613-00389-ip-10-171-10-108.ec2.internal.warc.gz
en
0.952521
1,272
2.828125
3
Previously, in our continuing series on selling to the government, we looked at contracting methods and the federal government's view of "fair and reasonable pricing." Let's now turn our attention to how the federal government deals with the financial and performance risks in contracting, which it addresses through contract type. There are three main contract types: fixed-price, cost-reimbursable, and time-and-materials. The government can append incentives for good behavior to fixed-price and cost-reimbursable contracts, but not to time-and-materials contracts. Fixed-price is the government's preferred contract type because it puts all the risk on the contractor. Fixed-price contracts are completion contracts, under which payment is contingent on delivery of a defined item or service. Because the price is fixed, the private sector takes full responsibility for all costs and the resulting profit or loss. The government especially favors firm-fixed-price and fixed-price with economic price adjustment (FP-EPA) contracts when procuring commercial items. In theory, this works out great for everybody since commercial items should be the epitome of a well-defined item or service. Contracting officers sometimes attempt to administer firm-fixed-price contracts as if they were level-of-effort contracts, which are meant for situations where vendors are paid for the amount of work they perform, as opposed to delivery of a specific good or well-defined service. The Department of Defense especially trains its acquisition workforce to favor fixed-price incentive and fixed-price redetermination contracts over cost-reimbursement or time-and-materials contracts wherever possible. Cost-reimbursement contracts are level-of-effort contracts, mostly of the cost-plus variety -- that is, the government pays vendors their costs plus a "fee," which is how the government often refers to profit. Under level-of-effort contracts, the vendor is paid for work rendered irrespective of whether the intended goal gets accomplished. The Federal Acquisition Regulation (FAR) bans cost-reimbursement contracts for acquisition of commercial items, so by definition these contracts are to be used when an agency cannot define its requirements with precision, or for when those requirements are out of the ordinary. An inherent danger of even well-managed cost-reimbursement projects is "scope creep" -- an easy trap to fall into when requirements are inexact. And companies have an incentive to rack up costs despite the safeguards baked into the terms and conditions of cost-reimbursement contracts. One such safeguard is a cap on vendor profit of no more than 10 percent of the contract's initial estimated cost, unless the work is for research and evaluation, in which case the margin is 15 percent, or for architect-engineer services, in which case it's 6 percent. It's illegal for the government to calculate a vendor's fee as a percentage of actual costs instead of initial estimated costs. Yet safeguards can't control for some facts of life. Contracting officers extend contracts or raise contract ceilings, and rare is the company witnessing scope creep that does anything except go along or even encourage it. NEXT: Cost Accounting
<urn:uuid:c18b1498-e3da-423a-9cf7-5dad4748c47b>
CC-MAIN-2017-09
http://www.crn.com/news/channel-programs/240157726/selling-to-the-government-contract-types-in-federal-procurement.htm?cid=rss_tax_Channel-programs
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174276.22/warc/CC-MAIN-20170219104614-00565-ip-10-171-10-108.ec2.internal.warc.gz
en
0.93855
652
2.5625
3
[This is my second post in a series of articles covering different tools and techniques for performing file system forensics on a Windows system. The first article was about acquiring a disk image in Expert Witness Format and then mounting it using the SIFT workstation. This one is about processing the disk image and creating a timeline from the NTFS metadata. LR] After evidence acquisition, you normally start your forensic analysis and investigation by doing a timeline analysis. This is a crucial and very useful step because it includes information on when files were modified, accessed, changed and created, in a human-readable format known as MAC time evidence. This activity helps establish when a particular event took place and in which order. Different techniques and tools exist to create timelines. In recent years an approach known as the super timeline has become very popular due to its ability to bring together different sources of data. However, in this article we will focus on creating a timeline from a single source: the Master File Table. Before we move to the hands-on exercise let’s review some concepts behind the Master File Table. The Master File Table is a special system file that resides in the root of every NTFS partition. This file contains a wealth of forensic evidence. The file is named $MFT and is not accessible via user-mode APIs, but it can be seen when you have raw access to the disk, e.g., a forensic image. This special file contains entries for every file and directory, including itself. As Brian Carrier has written, the MFT is the heart of NTFS. Each entry of the $MFT contains a series of attributes about a file or directory and indicates where it resides on the physical disk and whether it is active or inactive. The active/inactive attribute is the flag that tracks deleted files. If a file gets deleted, its MFT record becomes inactive and is ready for reuse. The size of these entries is usually 1 KB. Because a record is not always filled, each entry contains an attribute stating whether it contains resident data or not. Due to file system optimization, NTFS might store small files directly in MFT records. A good example of this is Internet cookie files. Microsoft reserves the first 16 MFT entries for special metadata files. These entries point to special files that begin with $. The $Bitmap and $LogFile are examples of such files. A list of the first MFT entries is shown in the picture below, which also shows how to read an MFT record of a disk image on the SIFT workstation using istat. The 0 at the end of the command is the record number you want to read for this partition, which starts at offset 206848. Record 0 is the $MFT file itself. Each record contains a set of attributes. Some of the most important attributes in an MFT entry are $STANDARD_INFORMATION, $FILENAME and $DATA. The first two are particularly important because, among other things, they contain the file time stamps. Each MFT entry for a given file or directory contains 8 time stamps: 4 in the $STANDARD_INFORMATION attribute and another 4 in the $FILENAME attribute. These time stamps are known as MACE. - M – Modified : When the contents of a file were last changed. - A – Accessed : When the contents of a file were accessed/read. - C – Created : When the file was created. - E – Entry Modified : When the MFT record associated with the file changed. For our exercise, this small introduction will suffice. Please see the references for great books on NTFS.
Now that we have reviewed some initial concepts of the MFT, let’s move to our hands-on exercise. For this exercise we will need the SIFT workstation with our evidence mounted – this was done in the previous article. Then we need a Windows machine from which we will access the mounted evidence on the SIFT workstation using a network drive. Finally, we will need the Mft2Csv tool from Joakim Schicht on the Windows machine to read, parse and produce the MFT timeline. To start, we share the mounted evidence on our SIFT workstation. In this case it is /mnt/windows1, which was mounted in the previous article. To do this we edit smb.conf and add the lines shown in the figure below. Then we restart the SMB daemon. Next, from your Windows machine, which needs to be in the same network segment as your SIFT workstation, you can view the shares by using the net view command. Then, using the net use command, you can map a drive letter. With this step, our Windows machine has access to the mounted evidence over the Z: drive. The next step is to run the Mft2Csv tool. Mft2Csv is a powerful and granular tool developed by Joakim Schicht. For those who are not familiar with Joakim Schicht, he is a brilliant engineer who has contributed enormously to the forensics community with many powerful tools. The tool has the ability to read the $MFT from a variety of sources, including live system acquisition. It runs on Windows, has both GUI and CLI capabilities, and needs admin rights. The tool can be downloaded from the author's project page; use the latest available release. In this case, we will launch it from our Windows machine. The command line parameters define where the $MFT file is read from and the time zone. By default the output is saved in CSV format, but it can also be saved in log2timeline or bodyfile format. If you are familiar with the log2timeline format you can use /OutputFormat:l2t. The picture below illustrates this step. The command executed is Mft2Csv.exe /MftFile:Z:\$MFT /TimeZone:0.00 /OutputFormat:l2t When the command is finished you can open the timeline in Excel or copy it to the SIFT workstation and use grep, awk and sed to review the entries. Another approach to creating a timeline of the MFT metadata is to use an old version of log2timeline which is still available on the SIFT workstation. This old version has an MFT parser, and you can use log2timeline directly on the mounted evidence. First we capture the time zone information from the mounted evidence using Registry Ripper – which we will cover in another post. Then we run log2timeline with the -f MFT option to read and parse the $MFT file. The -z option defines the time zone and -m is a marker that will be prepended to the filenames in the output. Alternatively, if you don’t have the evidence mounted, you can export the $MFT using icat from The Sleuth Kit. The picture below illustrates the output of both tools using the l2t format. In this case cache.txt is an executable file that is part of a system compromised with the W32.Morto worm. That’s it! In this article we reviewed some introductory concepts about the Master File Table and we used Mft2Csv and log2timeline to read, parse and create a timeline from it. The techniques and tools are not new; however, they are relevant and used in today’s digital forensic analysis. Next step: review more NTFS metadata. Windows Internals, Sixth Edition, Part 2 By: Mark E. Russinovich, David A. Solomon, and Alex Ionescu File System Forensic Analysis By: Brian Carrier SANS 508 – Advanced Computer Forensics and Incident Response
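Once the timeline is in l2t CSV form, it can also be filtered with a few lines of script instead of grep/awk. The sketch below is an illustration added here, not part of the original post; the column names ("date", "time", "MACB", "filename") follow the common log2timeline CSV header, but the exact layout produced by your Mft2Csv or log2timeline version may differ, so adjust accordingly.

import csv

with open("mft_timeline.csv", newline="", encoding="utf-8", errors="replace") as fh:
    for row in csv.DictReader(fh):
        # Print every timeline entry that references the suspicious file.
        if "cache.txt" in (row.get("filename") or ""):
            print(row.get("date"), row.get("time"), row.get("MACB"), row.get("filename"))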
<urn:uuid:f8072797-6497-4eba-93ac-91404089301f>
CC-MAIN-2017-09
https://countuponsecurity.com/2015/11/10/digital-forensics-ntfs-metadata-timeline-creation/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174276.22/warc/CC-MAIN-20170219104614-00565-ip-10-171-10-108.ec2.internal.warc.gz
en
0.926327
1,589
2.546875
3
Cloud Computing Definition As with any burgeoning technology, cloud computing remains in flux in terms of its definition. Cloud steadily rises in popularity and familiarity with both tech nuts and the general public, thanks to an increase in advertising and more widely distributed education on just how useful it proves for a variety of sectors: major conglomerates, small businesses, enterprising individuals, and casual genre devotees. Yet despite the uptick in cloud’s presence, the cloud community has yet to agree in consensus on its best denotation. Dictionary.com provides a serviceable and fairly clear explanation about what the term “cloud computing” entails: “Internet-based computing in which large groups of remote servers are networked so as to allow sharing of data-processing tasks, centralized data storage, and online access to computer services or resources.” Yet Merriam Webster’s dictionary currently pegs cloud as: ” The word you’ve entered isn’t in the dictionary.” Indeed, the most trusted word volume has yet to recognize cloud computing as an official technology, or an official anything. It would appear as though one of cloud computing’s own resident experts stands in firm agreement with the well-known definition tome. PCWorld’s David Linthicum has essentially declared outright war against any semblance of a definition for cloud. “These days, when somebody wants me to define ‘cloud computing,’ I fight the urge to eject them from the conference room,” he rages. The National Institutes of Standards and Technology, arguably a major force in technology whose acceptance institutionalizes new concepts, provides a definition that also dissatisfies Linthicum, probably as a direct result of NIST’s rep for institutionalization: “Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.” I wholeheartedly agree with Linthicum and his rejection of attempts to pin down cloud. From what I can make out, such phrasing as “sharing of data-processing tasks” and “minimal management effort” fails to convey cloud’s propensity to inspire creativity and resourcefulness within a community — ideals that surpass cloud as strictly technology and redefine it as a synergistic medium. As Linthicum states, “Perhaps the best definition is around how cloud computing, or whatever you want to call it, will redefine how we consider and use technology to make us better at doing whatever we do. The key word there? “How,” not “what” or “who.” If anything, cloud computing (as it stands in 2012) channels and synthesizes creative and intellectual output so that it does more. There’s a reason why clouds float in the sky. Attempts at fencing them in, however romantic, inevitably prove futile. Nevertheless, defining is what we do as a species. We quantify so that we can better understand and utilize. And so I put it to you, dear CloudTweaks reader. Odds are that you’re quite cloud proficient. I’d love to get your take on this: how would you define the cloud, version 2012? By Jeff Norman
<urn:uuid:1443c0a0-082b-4586-b053-548d7a023e95>
CC-MAIN-2017-09
https://cloudtweaks.com/2012/03/coining-the-cloud-an-assessment-of-cloud-computings-shifty-definition/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171463.89/warc/CC-MAIN-20170219104611-00209-ip-10-171-10-108.ec2.internal.warc.gz
en
0.925767
709
2.625
3
Some time ago I was working on an IPv6 implementation, and in that period I wrote an article about NDP (you can read it here). After a while I received some comments that it was not written very well, so I reviewed a huge part of it. It seems my English was far worse two years ago than I was really aware of 🙂 In the reviewing process I realised that NDP's usage of Solicited-Node multicast addresses was not clearly explained. This is the follow-up article which should explain how and why Solicited-Node multicast addresses are used in NDP. After all, this kind of multicast address is there to enable the IPv6 neighbor discovery function of NDP to work properly. A solicited-node multicast address is an IPv6 multicast address used on the local L2 subnet by NDP, the Neighbor Discovery Protocol. NDP uses that multicast address to find out the link-layer (L2) addresses of other nodes present on that subnet. NDP replaces ARP As we know, NDP in IPv6 networks replaced the ARP function from IPv4 networks. In the IPv4 world, ARP used broadcast to send this kind of discovery message and find out about neighbours' addresses on the subnet. With IPv6 and NDP, broadcast is not really a good solution, so we use a special type of multicast group address to which all nodes join to enable NDP communication. Why is broadcast not a good solution? ARP sends its requests to the broadcast MAC address ff:ff:ff:ff:ff:ff. That kind of message will be received by everyone on the L2 segment, although only one neighbour needs to respond with an answer. The others still need to receive the message, process it and discard the request afterwards. This can cause network congestion if the amount of broadcast traffic becomes excessive. And all this is on an IPv4 network. Imagine if we implemented the same ARP in IPv6. An average IPv4 L2 segment is a subnet with, let's say, a 192.168.1.0/24 subnet, which gives us 254 IPv4 addresses (254 hosts) on the L2 segment. In IPv6 a "normal" L2 network segment will usually use a /64 subnet, which gives us 2^64 addresses. Broadcast between so many possible devices would kill our network segment. That's the main reason broadcast does not even exist in the IPv6 protocol, and that is the reason NDP needs to use something better, like multicast, to get to all nodes on that segment. Just a quick reminder: there is no broadcast address type in IPv6, there are only: - Unicast addresses. A packet is delivered to one host. - Multicast addresses. A packet is delivered to multiple hosts. - Anycast addresses. A packet is delivered to the nearest of multiple hosts with the same IPv6 address. The solicited-node multicast address is our answer. A solicited-node multicast address is generated from the last 24 bits of an IPv6 unicast (or anycast) address of an interface. The number of devices on an L2 segment that are subscribed to each solicited-node multicast address is very small, typically only one device. This lets us reduce almost to none the "wrong" host interruptions caused by neighbour solicitation requests, compared to ARP in IPv4. There is an issue here with the switches to which our IPv6 L2 segment devices are connected. Those switches need to be multicast aware and implement MLD snooping. MLD snooping enables the switch to send traffic that is addressed to a solicited-node multicast address only on the ports that lead to devices subscribed to receive that multicast traffic.
If we do not take MLD into account, Ethernet switches will probably tend to flood the multicast frames out of all switch ports, converting our nice multicast setup into a broadcast mess. How the Solicited-Node multicast address is created We take the last 24 bits of our interface's unicast or anycast address and append them to the 104-bit prefix FF02::1:FF00:0/104. The interface's unicast or anycast address may be EUI-64 SLAAC generated or DHCPv6 configured. NDP does its thing, calculates the Solicited-Node multicast address for that interface, and joins that multicast group. In other words, the 104-bit prefix covers everything up to and including the FF of the penultimate field, so the last byte of that field (00) is not part of the prefix, and the last 24 bits of the multicast address begin right after that FF. In the process of generating Solicited-Node multicast addresses we will get addresses from the multicast range FF02:0:0:0:0:1:FF00:0000 to FF02:0:0:0:0:1:FFFF:FFFF. A host joins a Solicited-Node multicast group for each of its unicast or anycast addresses on all its interfaces, which is what enables normal NDP operation. Let's say we have one interface with the address fe80::2bb:fa:ae11:1152; the associated Solicited-Node multicast address is ff02::1:ff11:1152. So in this example our host must join the multicast group represented by this address.
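The derivation above can be checked with a few lines of Python (my illustration, not from the original post): take the low 24 bits of the unicast address and OR them into ff02::1:ff00:0.

import ipaddress

def solicited_node(addr):
    low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF   # last 24 bits of the unicast address
    base = int(ipaddress.IPv6Address("ff02::1:ff00:0"))   # the /104 prefix
    return ipaddress.IPv6Address(base | low24)

print(solicited_node("fe80::2bb:fa:ae11:1152"))  # ff02::1:ff11:1152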
<urn:uuid:34f5cda8-43ee-4a0b-a8f0-0d26c5078d08>
CC-MAIN-2017-09
https://howdoesinternetwork.com/2015/solicited-node-multicast-address
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172018.95/warc/CC-MAIN-20170219104612-00561-ip-10-171-10-108.ec2.internal.warc.gz
en
0.930748
1,120
2.765625
3
In a world where reliance on electronic data transmission and processing becomes more prevalent every day, it is of critical importance for organizations to guarantee the integrity and confidentiality of mission-critical information exchanged over communication networks. Contrary to a false perception, intercepting information transmitted over an optical fibre cable – an optical fibre is a thin glass fiber which transmits light to carry information – is not only possible but also not very difficult in practice. “Tapping a fibre-optic cable without being detected, and making sense of the information you collect isn’t trivial but has certainly been done by intelligence agencies for the past seven or eight years” explains John Pescatore, VP of Security at the Gartner Group and a former US National Security Agency analyst. “These days, it is within the range of a well-funded attacker, probably even a really curious college physics major with access to a fiber-optics lab and lots of time on his hands” adds Pescatore. Bending an optical fibre is indeed sufficient to extract light from it. Optical taps are readily available from a variety of manufacturers and are inexpensive. Optical fiber cables have replaced copper cables for all high-bandwidth links and they become more prevalent in the telecommunication networks worldwide every day. Organizations almost certainly rely on optical fibers to transmit some, if not all, of their information. Because of this vulnerability, optical links carrying critical information must be identified and protected with appropriate countermeasures. As telecommunication links are intrinsically vulnerable to eavesdropping, cryptography is routinely used to protect data transmission. Cryptography is a set of techniques that can be used to guarantee confidentiality and integrity of communications. Prior to its transmission, information is encrypted using a cryptographic algorithm and a key. After the information has been received, the recipient reverses the process and decrypts the information. Even if he intercepted the encrypted information, an eavesdropper would not be able to gain knowledge about it without knowing the cryptographic key. Current cryptographic techniques are based on mathematical theories. In spite of the fact that they are very widespread, they do not offer foolproof security. They are in particular vulnerable to increasing computing power and theoretical advances in mathematics. These techniques are thus inappropriate in applications where long-term confidentiality is of paramount importance (financial services, banking industry, governments, etc.). Quantum cryptography was invented about twenty years ago and complements conventional cryptographic techniques to raise the security of data transmission over optical fibre links to an unprecedented level. It exploits the laws of quantum physics to reveal the interception of the information exchanged between two stations. According to the Heisenberg Uncertainty Principle, it is not possible to observe a quantum object without modifying it. In quantum cryptography, single light particles – also known as photons – which are described by the laws of quantum physics, are used to carry information over an optical fibre cable. By checking for the presence of disturbance, it is possible to verify if a transmission has been intercepted or not.
Because of this, quantum cryptography was identified in 2002 by the MIT Technology Review and by Newsweek magazine as one of the ten technologies that will change the world. This technology can be used to exchange keys between two remote sites connected by an optical fibre cable, and to confirm their secrecy. The keys are then used with secret-key algorithms to securely encrypt information. With such an approach it is possible to guarantee future-proof data confidentiality based on the laws of quantum physics. Deploying it on critical links thus makes it possible to raise the information security level of an organization.
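As a toy model of the disturbance-detection idea (my illustration, not something taken from this article), the simulation below sends bits in randomly chosen bases and shows that an intercept-and-resend eavesdropper pushes the error rate on the sifted bits from roughly zero to roughly 25 percent.

import random

def run(n_bits=2000, eavesdrop=False):
    errors = checked = 0
    for _ in range(n_bits):
        bit, basis = random.randint(0, 1), random.choice("+x")
        photon_bit, photon_basis = bit, basis
        if eavesdrop:
            eve_basis = random.choice("+x")
            if eve_basis != photon_basis:        # wrong basis -> random outcome
                photon_bit = random.randint(0, 1)
            photon_basis = eve_basis             # photon is re-sent in Eve's basis
        bob_basis = random.choice("+x")
        measured = photon_bit if bob_basis == photon_basis else random.randint(0, 1)
        if bob_basis == basis:                   # sifting: keep only matching bases
            checked += 1
            errors += measured != bit
    return errors / checked

print("error rate without eavesdropper:", round(run(), 3))              # ~0.0
print("error rate with eavesdropper:", round(run(eavesdrop=True), 3))   # ~0.25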
<urn:uuid:fc48f15c-4c90-403b-bc03-3104d897a99d>
CC-MAIN-2017-09
https://www.helpnetsecurity.com/2005/05/05/securing-optical-networks-with-quantum-cryptography/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173866.98/warc/CC-MAIN-20170219104613-00085-ip-10-171-10-108.ec2.internal.warc.gz
en
0.938938
715
3.234375
3
The first letter R stands for rapport. A definition of rapport is a harmonious, empathetic, or sympathetic relation or connection to another self, an accord or affinity. In other words, a close relationship in which people understand each other's feelings and ideas and communicate well. As I pointed out earlier, the purpose of the mind is survival, and the survival of whatever the mind considers itself to be. If you Google the top fears human beings have, you will discover that the No. 1 fear is public speaking or, using IT terminology, speaking in the "public domain." When you speak in the public domain you are making yourself open, visible and vulnerable to others. You are putting yourself on display to be judged, evaluated and assessed by the audience. If you say to your mind that we are going to play a game in which you get to be open, visible and vulnerable, the mind will respond by saying, "No way!" However, if you do stand up in front of the audience, how will the mind, which is concerned about survival, protect itself from exposure in the public domain? It protects itself by creating a psychological defense or, in IT terms, a firewall. Building rapport is essential for effective communication because rapport disables the firewalls and allows for maximum throughput of your communication. This will increase your effectiveness. So what specific techniques can you use to build rapport? One of the roles and responsibilities a speaker plays in managing the conversation from the front of the room is being a host or hostess. By treating the audience as guests, building rapport becomes the natural thing to do. There is another rapport-building formula which works just like magic: imagine each person's firewall is made up of bricks, and every time a person communicates, it removes a brick from their firewall. Every time a brick is removed it opens up a hole through their firewall and, therefore, you have greater access to the private domain in the other person. The more bricks you remove, the greater your throughput. As a master communicator you want to be in the brick-pulling business. You start pulling bricks as soon as you walk through the door of the presentation. This is the unofficial connecting and gathering phase of the conversation. Do not wait for the official start of the conversation to build rapport. Two simple techniques that are commonly used to build rapport are walking up to a person, shaking their hand and introducing yourself. By doing this, two major bricks are removed from your firewall, the other person's firewall and the group firewall. What do I mean by the group firewall? Imagine each person's firewall is composed of 25 bricks and say there are 10 people in the room. The total number of bricks in the group firewall would be 250. Every time a communication takes place in the space, a brick is taken out of the group firewall. When this occurs, the flow of energy and communication increases and the space becomes lighter. What do I mean by the space becomes lighter? The analogy to explain this would be a hot air balloon: there are two ways to make a hot air balloon go up. One is to increase the hot air and the other is to drop ballast. Every time you remove a brick it reduces the ballast in the gondola and the space gets lighter. You want the space to be as light as possible because it promotes the free flow of communication, openness, humor and creativity.
In the above connecting and gathering scenario, the first brick removed is in the exchange of names and the second brick is removed through physical touch. A person's name is very important, and being able to remember a person's name is an excellent rapport-building skill. One method you can use to remember a person's name during the introduction is to repeat the person's name three times. For example, "Hello, my name is Richard," you say. "Hi Richard, my name is Bill," says Bill. You say, "Bill, what do you do for the XYZ data storage company? Are you in sales, Bill?" "Have you been with them long?", etc. You get the idea. Repeating the name three times in the first two or three sentences will be enough to retain it in your short-term memory. Shaking hands, i.e., physical touch, is a form of non-verbal communication that creates, among other things, a feeling of safety. Many cultures have a physical social dance they do when meeting each other. For example, in the U.S. shaking hands is common. In Europe there is kissing on the cheek, and I have even seen men in Saudi Arabia greet each other by touching noses. A psychological reason for all this physical touching is that it alleviates fear and reduces the supposed threat from the other person. The shaking of the hand communicates that you are not holding a weapon and are therefore not dangerous. In training presenters I have always encouraged them to shake hands and meet as many people as possible in the room. I suggested that they especially want to meet people they don't know, because any feared attack will usually come from the person with whom you have the least rapport. Before I conclude, I want to share with you one more technique that builds intimacy and rapport. I refer to this as the "sharing of the self." Often when I watch people communicate, they focus on dumping data into the space between them, which is very impersonal. I believe that your stories and experiences about the data are more interesting to the audience than the data itself. Why? Because sharing personal stories and experiences establishes your credibility, reveals your humanness, lowers the firewalls, promotes participation and is a great way to hold the attention of the audience. So, in conclusion, a key element to mastering the outflow of your communication is to reduce the psychological firewalls in the space. You dismantle firewalls by building rapport. You build rapport by being a host or hostess and getting the audience to communicate. That completes the letter R, and next time I will share with you the letter A. Thank you and the best of luck in all your communications. Alan Carroll, author of The Broadband Connection: The Art of Delivering a Winning IT Presentation and the founder of Alan Carroll & Associates, has been a successful public speaker, sales trainer and corporate consultant since 1983. Clients include: Cisco Systems, Synoptics Communications, Symantec Corporation, Digital Equipment Corporation, Unocal Corporation, Covey Leadership Center, BP Chemical, Peak Technologies, Vantive Corporation, Jet Propulsion Laboratories, Lucent Technologies, HP, Symbol Technologies, etc.
<urn:uuid:aa640665-e925-416f-a516-142ab5e163b2>
CC-MAIN-2017-09
http://www.cioupdate.com/career/article.php/3922341/The-7-Habits-of-Highly-Effective-Presenters.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170624.14/warc/CC-MAIN-20170219104610-00029-ip-10-171-10-108.ec2.internal.warc.gz
en
0.941775
1,316
3.390625
3
IP security cameras have become part of our daily lives. As security camera owners, we like to see the feed or a snapshot of live video on our phones, remote computers, and in many other places. In order to accomplish that, we need to do a configuration on our home routers or modems that is called port forwarding. We will get into port forwarding tutorials for many known brands, but in this article we would like to delve into the definition of port forwarding and how it came about. Have you ever wondered what port forwarding actually stands for? Why do we need to configure port forwarding when we want to access our security camera in our house or at our business? We wondered about these questions and put together this guide for those who would like to learn about the computer networks we use every day. If we take one step back and look at the reason we need port forwarding, there is a quick and also a very detailed explanation. We would like to explore this term in detail to give you better insight into networking and port forwarding. When Internet Protocol version 4 (IPv4) was designed back in 1980, the engineers at the IETF had no idea that the internet would become what it has become today. According to their calculations, 4 billion IP addresses were far more than any hypothetical limit mankind would ever reach. In the old days, you could assign each computer on the internet its own IP address and that machine would be reachable by anybody on the internet instantly. Fast forward to today: we have millions of computers, printers, cameras and networks connected to the internet all over the world, so we are limited in how many IP addresses we can be given by our internet service providers (Timewarner, AT&T, Verizon, etc.). These IP addresses are called public IP addresses. Everybody who would like to access the internet needs to have a public IP address. These IP addresses are divided into two categories: static IP addresses and dynamic IP addresses. Since ISPs are limited in the number of IP addresses they can own, they allocate pools of public IP addresses and assign these addresses to their customers, changing them at random intervals; these are called dynamic IP addresses. If you would like a static IP address that is solely assigned to you and does not change within a day or a week, that option is likely available to you by making a phone call to your ISP. Due to the allocation limits of IPv4, our home and business routers get assigned one IP address, and all the computers behind that router use this IP address to access the internet. Magical Network Device known as 'Router' Although we are given one public IP address, we have many devices that we want to connect to the internet. There comes the magic device called a router, or as many home users call it, a modem. These devices remove the barrier of one public IP address and allow every device access to the internet. How does the conversation between the router and our computer take place? Say you would like to access www.yahoo.com on the internet. Your computer would put forward a request to your home router saying, "I cannot find any computer called 'www.yahoo.com', can you please look around you and see if you can find me this computer?" and your router would then take this request and hand it over to its neighbor, which is your ISP's router.
This step basically takes you onto the "internet highway," and once the requested data arrives back from your ISP's router to your router, your router then hands the information over to your device. What is a Port and Port Forwarding? With the same approach, when you would like to access a device in your home network from the internet, you need to make a configuration on your home router. This is what we now call port forwarding. Before we get to port forwarding, let's look at the definition of a port. Each device is given a list of virtual addresses called "ports," and each of them can be mapped to a piece of software or a service running on your computer. In other words, if you would like to send or receive data in your network, you need a port number for both outgoing and incoming traffic. Each device that is connected to an IP network has 65,535 ports on it by default. Say you would like to access a computer: you would have to know that device's IP address as well as the port number that your software is running on. If you only know the IP address but don't know the port number, you can only 'ping' the device, but you cannot access any software service on it. We can use the analogy of a street address: 123 Elm Street. In this case, "Elm Street" would refer to the IP address and the house number would refer to the port number. You may know the street name, but if you do not know the door number, you are basically clueless about the final destination. How does Port Forwarding work? When you would like to access a device from the internet, you go to your router and identify the port number that you would like to access from the internet. For example, our router has the IP address of 220.127.116.11. We have a device behind this router that has the IP address 192.168.1.20 and a program listening on port 80. We would then go to our router and make the following configuration: when somebody on the internet requests information on my port 80, I will take this request and hand it over to 192.168.1.20 on port 80. Whatever the response is, I will then send it back to the requesting device. Internet ---> 220.127.116.11:80 ---> 192.168.1.20:80 Internet <--- 220.127.116.11:80 <--- 192.168.1.20:80 Port Address Translation (PAT) In more advanced routers, you can define what's called 'port address translation' or PAT. In this case, the port numbers from outside of your network do not have to match the port number on your device. You can assign whatever port number you would like to define on your router and then map it to your internal device's IP address and port number. Internet ---> 220.127.116.11:8000 ---> 192.168.1.20:80 With this, we conclude the definition of port forwarding. Please send us any questions you might have about port forwarding.
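To make the mapping concrete, here is a toy single-connection TCP forwarder (added here for illustration, reusing the article's example internal address); a real router does this translation in its NAT table rather than in user-space code.

import socket
import threading

LISTEN_PORT = 8000              # port exposed toward the internet
TARGET = ("192.168.1.20", 80)   # internal device behind the router

def pipe(src, dst):
    # Copy bytes from one socket to the other until the sender closes.
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", LISTEN_PORT))
server.listen(1)

client, _ = server.accept()
upstream = socket.create_connection(TARGET)
threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
pipe(upstream, client)          # relay replies back to the requester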
<urn:uuid:bc50306f-f4b1-411f-a2bd-1e35659835b4>
CC-MAIN-2017-09
https://www.a1securitycameras.com/port-forwarding-for-ip-security-cameras-and-networks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170914.10/warc/CC-MAIN-20170219104610-00205-ip-10-171-10-108.ec2.internal.warc.gz
en
0.965733
1,339
3.296875
3
How big of a problem is spyware? It’s big enough that the U.S. House of Representatives voted unanimously to stiffen jail sentences for those who use secret surveillance programs to steal credit card numbers or commit other crimes. Under the bill, known as the Internet Spyware (I-SPY) Prevention Act of 2004, those found guilty of using spyware to commit other crimes would face up to five years in prison on top of their original sentences. Those who use spyware to steal personal information with the intent of misusing it, or use spyware to compromise a computer’s defenses, could face up to two years behind bars. The bill would also apply to those who perpetrate so-called “phishing” attacks — official-looking email messages that aim to trick people into disclosing their bank-account numbers or other personal information. In addition, the I-SPY bill allocates $10 million to the Department of Justice to combat spyware and phishing scams. Two days before the I-SPY vote, House lawmakers approved a separate bill that establishes multimillion-dollar fines for spyware perpetrators. (Some observers predict that the two bills will be combined with a spyware bill that is currently working its way through the Senate.) A pervasive problem Antivirus products allow users to protect themselves from a variety of potential software and Internet threats. These include malicious code such as viruses and Trojans, as well as expanded threats, which include spyware, adware, and dialers. While definitions of spyware vary, it’s generally agreed that these programs have the ability to scan systems or monitor activity and relay information to other computers or locations in cyberspace. Among the information that may be actively or passively gathered and disseminated by spyware: passwords, log-in details, account numbers, personal information, individual files or other personal documents. Spyware may also gather and distribute information related to the user’s computer, applications running on the computer, Internet browser usage, or other computing habits. Many popular file-sharing programs come bundled with spyware. In fact, spyware is embedded in hundreds of programs — including games, utilities, and media players – that can be downloaded for free from the Internet. Spyware is also how many file-sharing vendors make money while not charging for their products. With these programs, it has been said, you pay with your privacy instead of with money. For that reason, the Federal Trade Commission has repeatedly warned consumers as well as businesses about the trade-offs involved in shareware. In an alert issued last year, the FTC was unambiguous: “Before you use any file-sharing program, you may want to buy software that can prevent the downloading of spyware or help detect it on your hard drive.” Just this month the FTC announced it had asked a U.S. District Court in New Hampshire to shut down a spyware operation that hijacks computers, secretly changes their settings, barrages them with pop-up ads, and installs adware and other software programs that spy on consumers’ Web surfing. The FTC alleges the spyware operation – a network of sites operated by former “spam king” Sanford Wallace — violates federal law and asks the court to bar the practices permanently. How pervasive is spyware? Internet service provider Earthlink announced earlier this month that a scan of 3 million computer systems over nine months found 83 million instances of spyware. Researcher Gartner Inc. 
has estimated that more than 20 million people have installed adware applications (adware is a type of spyware that reports back on a user’s activities in order to serve up targeted advertising), and this covers only a portion of the spyware that is out there. A dangerous evolution All of this recent attention comes as traditional notions of spyware are evolving. Indeed, Gartner in July noted that spyware has evolved — from simple cookies to a range of sophisticated user-tracking systems. The researcher went so far as to issue a report this summer titled “A Field Guide to Spyware Variations.” In that report, Gartner observed that, midway through 2004, its clients were seeing a “surge in manifestations” of spyware. Moreover, new methods to snare users are appearing all the time, including greater exploitation of multimedia and mobile and wireless systems. Gartner clients reported that cleanup efforts typically take a few hours; however, in no time at all, the same systems will become infected again. Gartner’s research underscores a key finding of the latest Symantec Internet Security Threat Report: namely, that these violations are becoming more problematic. The Threat Report found that six of the top 50 malicious code submissions to Symantec Security Response in the first six months of 2004 were adware. The Threat Report noted that adware packages perform numerous operations, including displaying pop-up ads, dialing to high-cost numbers through the system’s modem if one is present, modifying browser settings such as the default home page, and monitoring the user’s surfing activity to display targeted advertisements. The effects range from mere user annoyance to privacy violations to monetary loss. Reasons to be vigilant While the threats posed by these programs may be difficult to quantify, that doesn’t mean they aren’t a security concern to today’s enterprises. Because spyware and adware programs are unauthorized, surreptitiously installed software, administrators have no knowledge of or control over what the programs may be running. For instance, they could be used to monitor users’ browsing habits, constituting a loss of privacy. Most spyware and adware packages are also capable of dynamically updating themselves, often with new functionality that the user is unaware of. As the Internet Security Threat Report observed, Symantec’s research has shown that there are good technical countermeasures to spyware and adware, such as implementing more restrictive Web browser settings. In addition, many companies have security policies in place that prohibit users from downloading or installing unauthorized software on corporate computers. Despite this, users often knowingly engage in activities that risk exposure of confidential information. For this reason, it is important for users to read and understand the End User License Agreement (EULA) and other notification methods before installing any software. Spyware EULAs typically contain ambiguous language designed to mislead users about the information-gathering functionality of the software. At the same time, it is equally important that software publishers provide users with clear and unambiguous notifications of the actions that their software performs. For its part, Gartner recommends that IT organizations promote cooperation between end-user groups, technical support, and security teams to ensure that a company’s response to spyware keeps pace with this growing threat to privacy. 
As the spate of recent legislative and FTC activity attests, public intolerance of spyware has reached a new plateau. In the enterprise environment, spyware is rapidly becoming a serious security concern, particularly as most corporate networks allow HTTP traffic, the means by which spyware is propagated. Symantec continues to view spyware as a significant threat and recommends that enterprise users be vigilant about updating their antivirus software. Security administrators should take extra measures to maintain a strong security posture on client systems. They should also ensure that client system patch levels are up-to-date and that acceptable usage policies are in place and enforced.
<urn:uuid:d434c52e-cae2-4f6b-a247-56a9ad48b9ca>
CC-MAIN-2017-09
https://www.helpnetsecurity.com/2005/01/21/spyware-an-update/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171163.39/warc/CC-MAIN-20170219104611-00381-ip-10-171-10-108.ec2.internal.warc.gz
en
0.94227
1,542
2.828125
3
When your hard drive fails, it can start to smoke, trapping your data (and possibly setting off your smoke detector). If you’ve lost data due to a smoking hard drive, the professional hard drive repair experts at Gillware can recover your data and help you get back on your feet. People don’t often think about the heat produced by their hard drives. When a hard drive is running, after all, it’s usually doing so from inside your computer (or on your desk, connected via USB), not cradled in your arms like a newborn infant. But rest assured, if you were to pop open your PC’s side panel while your computer is running and lay your hand on the hard drive inside, you would find it to be quite hot. There are several factors creating this heat. The speed of the spinning platters and the spindle motor inside your hard drive creates friction with the air, which makes things hotter inside your hard drive. But most of the heat you would feel from your drive comes from the circuit board on the back of the hard drive. When you power on your computer, electricity flows through the circuit board and into the spindle motor, setting the hard drive’s internal components in motion. When you put your hard drive to work, this component can become the hottest part of the drive. Too much heat—and too much electricity—can burn out parts of a hard drive’s control board. This can cause the smoking hard drive to exhibit its inflammatory behavior. Often, the culprit behind a smoking hard drive is a power surge. These power surges happen most frequently in the summer, when thunderstorms are more common. Forcing more electricity through a hard drive’s control board than it was designed to handle is a bit like trying to fill a water balloon with a firehose. Even if a power surge only lasts a few nanoseconds, in that short time frame it can cram enough power through your hard drive to scorch the circuit board. External hard drives, many of which receive their power straight from a wall outlet, can be especially vulnerable to a power surge burning their circuit boards. Many external hard drives are especially vulnerable because they have two circuit boards, in fact, and one is not always as robustly designed as the other. Attached to the drive is a SATA-USB bridging dongle with a SATA plug on one end and a USB port on the other. It is actually far easier for this dongle to burn out than the control board on the hard drive itself. This renders the hard drive inaccessible not just because the drive is now trapped inside its casing, but because the dongle can contain encryption metadata. For example, even if a Western Digital My Book external drive isn’t password-protected by the user, it still has its hardware-level SmartWare encryption, and the USB dongle handles data encryption and decryption. Without the dongle, the hard drive will show up as blank, even if the drive itself is perfectly healthy. Fortunately, under most circumstances our engineers can circumvent these issues. Where There’s Smoke, There’s Fire… and Data Loss Smoking hard drives are dangerous things. Not only are they a fire hazard, but they can also cause other electronic devices to short out and fail. Plugging a hard drive with a smoked PCB into a power supply unit, for example, can fry the unit and render it inoperable. At the risk of sounding like an anti-smoking PSA, when your hard drive starts to smoke, everything around it feels the consequences. 
And, of course, the most pressing problem associated with a smoking hard drive is that your data is trapped on it. All of that data is lost. But with the help of professional data recovery experts in a world-class data recovery lab, what once was lost can still be found. There was a time, in the days of yore, when this wasn’t so. When a hard drive’s PCB died, you could just go out and find the same model of drive, remove its control board, and attach it to the failed drive, with a reasonably good chance of recovering your data. What happened? Hard drives grew more complex. As the areal density of the hard disk platters inside hard drives grew and manufacturers found new ways to pack ever-increasing amounts of data into the same space, margins for error grew razor-thin. Every hard drive today needs to be individually calibrated in the factory. The unique calibration settings for every hard drive must be stored in a ROM chip on the control board. Nowadays, if you simply replace the control board of a hard drive, the drive can’t access its unique ROM chip. Without the proper calibration data to guide it, your hard drive won’t work. It may even cause further damage to its internal components if you try to run it! And so, to properly replace a burned and smoking circuit board, a professional hard drive engineer must carefully replace the ROM chip as well. This delicate operation must only be attempted by a professional data recovery expert. In some cases, a smoking hard drive may have suffered more damage than just its control board, and other parts of it may need to be replaced. These types of hard drive surgeries can only be successfully performed in a cleanroom data recovery lab. Reasons to Choose Gillware for Smoking Hard Drive Repair When your hard drive starts smoking, Gillware Data Recovery is the data recovery company you want by your side. Our data recovery experts are seasoned hard drive repair veterans with years of experience and thousands of successful data recovery cases under their belts. With world-class expertise and state-of-the-art data recovery tools, Gillware can successfully recover the data from your smoking hard drive. Gillware’s services are recommended by Western Digital and Dell, as well as computer repair and IT professionals across the United States. Gillware’s data recovery lab uses ISO-5 Class 100 rated cleanroom workstations to make sure failed hard drives are repaired in clean and contaminant-free environments. With our SOC II Type 2 audited facilities, your data is as secure as it can be. Our data recovery evaluations are free, and we only charge you for our data recovery efforts when we’ve successfully recovered your data and met your goals. We can even cover the cost of inbound shipping for you. With prices lower than the industry-standard rates charged by other data recovery labs, Gillware’s data recovery services are both affordable and financially risk-free. Ready to Have Gillware Help with Your Smoking Hard Drive? Best-in-class engineering and software development staff Gillware employs a full time staff of electrical engineers, mechanical engineers, computer scientists and software developers to handle the most complex data recovery situations and data solutions Strategic partnerships with leading technology companies Gillware is proud to be a recommended provider for Dell, Western Digital and other major hardware and software vendors. These partnerships allow us to gain unique insight into recovering from these devices. 
RAID Array / NAS / SAN data recovery Using advanced engineering techniques, we can recover data from large capacity, enterprise grade storage devices such as RAID arrays, network attached storage (NAS) devices and storage area network (SAN) devices. Virtual machine data recovery Thanks to special engineering and programming efforts, Gillware is able to recover data from virtualized environments with a high degree of success. SOC 2 Type II audited Gillware has been security audited to ensure data safety, meaning all our facilities, networks, policies and practices have been independently reviewed and determined as completely secure. Facility and staff Gillware’s facilities meet the SOC 2 Type II audit requirements for security to prevent entry by unauthorized personnel. All staff are pre-screened, background checked and fully instructed in the security protocol of the company. We are a GSA contract holder. We meet the criteria to be approved for use by government agencies GSA Contract No.: GS-35F-0547W Our entire data recovery process can be handled to meet HIPAA requirements for encryption, transfer and protection of e-PHI. No obligation, no up-front fees, free inbound shipping and no-cost evaluations. Gillware’s data recovery process is 100% financially risk free. We only charge if the data you want is successfully recovered. Our pricing is 40-50% less than our competition. By using cutting edge engineering techniques, we are able to control costs and keep data recovery prices low. Instant online estimates. By providing us with some basic information about your case, we can give you an idea of how much it will cost before you proceed with the recovery. We only charge for successful data recovery efforts. We work with you to define clear data recovery goals for our technicians, and only charge you upon successfully meeting these goals and recovering the data that is most important to you. Gillware is trusted, reviewed and certified Gillware has the seal of approval from a number of different independent review organizations, including SOC 2 Type II audit status, so our customers can be sure they’re getting the best data recovery service possible. Gillware is a proud member of IDEMA and the Apple Consultants Network.
<urn:uuid:84379e08-6a51-436e-a0b1-8d78d27f4dfb>
CC-MAIN-2017-09
https://www.gillware.com/smoking-hard-drive-data-recovery/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171807.25/warc/CC-MAIN-20170219104611-00081-ip-10-171-10-108.ec2.internal.warc.gz
en
0.939773
1,937
2.859375
3
Sensors in your phone that collect seemingly harmless data could leave you vulnerable to cyber attack, according to new research. And saying no to apps that ask for your location is not enough to prevent the tracking of your device. A new study has found evidence that accelerometers—which sense motion in your smartphone and are used for applications from pedometers to gaming—leave “unique, trackable fingerprints” that can be used to identify you and monitor your phone. Here’s how it works, according to University of Illinois electrical and computer engineering professor Romit Roy Choudhury and his team: Tiny imperfections during the manufacturing process make a unique fingerprint on your accelerometer data. The researchers compared it to cutting out sugar cookies with a cookie cutter—they may look the same, but each one is slightly, imperceptibly different. When that data is sent to the cloud for processing, your phone’s particular signal can be used to identify you. In other words, the same data that helps you control Flappy Bird can be used to pinpoint your location. Choudhury’s team was able to identify individual phones with 96% accuracy. “Even if you erase the app in the phone, or even erase and reinstall all software,” Choudhury said in a press release, “the fingerprint still stays inherent. That’s a serious threat.” Moreover, Choudhury suggested that other sensors might be just as vulnerable: Cameras, microphones, and gyroscopes could be leaving their smudgy prints all over the cloud as well, making it even easier for crooks to identify a phone. “Imagine that your right hand fingerprint, by some chance, matches with mine,” Choudhury said. “But your left-hand fingerprint also matching with mine is extremely unlikely. So even if accelerometers don’t have unique fingerprints across millions of devices, we believe that by combining with other sensors such as the gyroscope, it might still be possible to track a particular device over time and space.” There’s not much that can be done to address this issue at this point, Choudhury said. It’s basically impossible to manufacture millions of cellphone components without each one being the tiniest bit unique, and there’s no good way to mask these signals to attackers. One way of maintaining privacy would be to cut off the flow of data from smartphones to the cloud—so, giving apps processed information instead of raw data to send to the cloud for processing would do the trick. But today’s mobile devices lack the processing power (and battery capacity) to do so. So for now, this just serves as yet another reminder that even innocuous, seemingly anonymous data is information that can be exploited.
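As a rough, purely illustrative sketch (not the researchers' actual method, and with made-up numbers), the Python snippet below shows the basic idea: each simulated accelerometer picks up a tiny, permanent bias and gain error at "manufacturing" time, and simple statistics of its readings are enough to tell otherwise identical devices apart.

import numpy as np

rng = np.random.default_rng(0)

def make_sensor():
    # Each simulated chip gets its own small, permanent bias and gain error.
    return {"bias": rng.normal(0.0, 0.02, size=3),        # m/s^2, per axis
            "gain": 1.0 + rng.normal(0.0, 0.005, size=3)}

def read(sensor, n=2000):
    # Simulate n readings of a phone lying flat (gravity on the z axis).
    true = np.tile([0.0, 0.0, 9.81], (n, 1))
    noise = rng.normal(0.0, 0.05, size=(n, 3))
    return (true + noise) * sensor["gain"] + sensor["bias"]

def fingerprint(samples):
    # Per-axis mean and standard deviation as a crude device signature.
    return np.concatenate([samples.mean(axis=0), samples.std(axis=0)])

phones = [make_sensor() for _ in range(3)]
for i, phone in enumerate(phones):
    print(f"phone {i}: {np.round(fingerprint(read(phone)), 3)}")
# The same device produces nearly the same signature on a fresh reading,
# while different devices stay consistently apart.
print("phone 0 again:", np.round(fingerprint(read(phones[0])), 3))

The published work presumably relies on far richer features measured from real hardware, but the principle is the same: because the imperfections are stable, the signature follows the device.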
<urn:uuid:5210643e-18c6-480f-b3c0-b59ca70ecc5b>
CC-MAIN-2017-09
http://www.nextgov.com/mobile/2014/04/phones-are-giving-away-your-location-regardless-your-privacy-settings/83302/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170651.78/warc/CC-MAIN-20170219104610-00377-ip-10-171-10-108.ec2.internal.warc.gz
en
0.94059
584
2.765625
3
A history of the Internet was created using Facebook's timeline feature by Internet education company Grovo. The timeline, which goes back to the year 1536, includes important dates like 1983, when the Internet was born, and 1978, when the first spam email was sent over the ARPANET. And in 1900, deep sea divers discovered the Antikythera mechanism, an analog computer dating back to around 1 BC. The purpose of the timeline is to continue Grovo's mission of providing “high-quality Internet education.” The project is an example of how social media can be used in unexpected ways. Similar uses of social media, such as the real-time WW2 twitter project, provide insight into the future of the Internet landscape by exploring innovative uses of technology. “Many can still recall their first professionally-questionable AOL email addresses, while others can date the first time they watched a YouTube video,” Grovo's first timeline post reads. “As we’ve grown, so too has the Internet - and that’s exactly what we hope to share with this project, calling out some of our favorite moments from the Internet’s storied, complex, fun and memorable history. ” To find out what happened in 1536 that was so darned important, visit Grovo's Internet History on Facebook.
<urn:uuid:04b63854-03f5-4bb6-aff7-7f72524444a7>
CC-MAIN-2017-09
http://www.govtech.com/e-government/Colorful-Internet-History-Chronicled-on-Facebook.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.92/warc/CC-MAIN-20170219104611-00601-ip-10-171-10-108.ec2.internal.warc.gz
en
0.925198
279
3.1875
3
Quantum computing explained: D-Wave on NASA's system - By John Breeden II - Sep 10, 2013 If one of the 35 million books in the Library of Congress has a big red X on the inside cover, how long would it take a man to find it? Opening each book might take the searcher hundreds of years, unless he was very lucky. But what if the searcher could replicate himself into 35 million different people, each one existing in a parallel universe? All 35 million people would head to the library and look inside a different book. If no X were discovered in an individual book, that searcher would simply disappear, until the last man standing would be holding the correct book. The same impossible problem could be solved in a few minutes. That’s one way to illustrate the difference between traditional and quantum computing, and it explains why the government is so interested in machines that can virtually try all possible solutions at once and find the best answer more quickly. That’s also why NASA, Google and the Universities Space Research Association have formed the Quantum Artificial Intelligence Lab, to explore quantum computers’ potential to tackle problems that are too difficult or perhaps impossible for supercomputers to handle. In fact, some problems can never be solved by traditional computers, according to Eric Ladizinsky, co-founder and chief scientist for D-Wave Systems, which built a quantum computer for NASA and is working on even more powerful versions that could one day soon crack open the mysteries of the universe. With traditional computers, the circuits are either on or off, and the binary code is represented by ones and zeros. Adding more processors increases the computer’s power linearly. By contrast, a quantum computer uses quantum bits, or qubits, the quantum equivalent of a traditional bit. Its circuits exist in all possible states at the same time, – a one, a zero and whatever is in between – and this superposition vastly increases the potential processing power. The National Science Foundation recently posted an animation in which theoretical physicists John Preskill and Spiros Michalakis explain the principles of quantum computing. Superposition becomes useful when quantum bits work together, multiplying the ways they can be correlated. The correlations are richer, and that richness increases markedly as even a few hundred qubits are added — so much so that these correlations couldn’t be described with classical bits. “You’d have to write down more numbers than the number of atoms in the visible universe,” the scientists said. But that random richness requires that the calculations be run in a stable environment completely isolated from the outside world because observation would destroy the delicate random superpositions. Likewise, there can’t be any leakage of information from the quantum computer to the outside world. That decoherence, or what the scientists call “the big enemy,” would destroy the quantum calculations as well. Only now, they conclude, “are we developing the technological capability to scale-up quantum systems.” That’s where D-Wave comes in. The D-Wave quantum computer takes a ring of metal and cools it down close to absolute zero. Then other factors are eliminated to combat the decoherence that can destroy the quantum calculations. Light is removed by sitting the machine inside a black box. Radiation is shielded, and sound is reduced as much as possible. All air is also removed from the enclosure. 
The result is that when a current is applied to the ring, scientists can measure the superposition – 100 percent of the current is going clockwise at the same time that 100 percent of the current is going counterclockwise. That dual state is harnessed to solve problems. The secret to D-Wave's approach versus other quantum computing companies is that it has been able to achieve quantum phenomena using concrete parts. That means D-Wave can build its quantum computers in a more traditional way, as opposed to trying to work with atoms and electrons directly. Ladizinsky says much of the government's interest in quantum computing has to do with code breaking. To break 128-bit RSA encryption the traditional way would take 2,000 workstations and supercomputers about eight months. For 256-bit encryption, it's a million years. And for 600-bit encryption, it would take the age of the universe. But with quantum computing, the size of the problem doesn't matter so much, because a powerful enough quantum machine could look at all the possibilities at once. Although Ladizinsky says the D-Wave machine is not specifically designed to break encryption, he knows others are experimenting heavily in that field. Ladizinsky would like to use his company's quantum technology to solve climate change, fight diseases and further biological research. NASA is also interested in using its 512-qubit machine to study machine learning and artificial intelligence, he said. Researchers are working to improve communication among qubits. Right now, each qubit can talk with the six others sitting around it, but in the future Ladizinsky said he would like to see the entire matrix communicate. And unlike traditional computers whose processing power grows linearly, the quantum machines' power grows exponentially, blowing Moore's Law out of the water. Moving from a 128-qubit computer to today's 512-qubit machine increased processing power by 300,000 times, Ladizinsky said. As even more power is added to D-Wave's quantum computers, those seemingly impossible questions of yesteryear may begin to fall within range of today's scientists. "In a sense, we are harvesting the parallel worlds to solve problems in this one," Ladizinsky said. John Breeden II is a freelance technology writer for GCN.
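One way to make the scaling argument concrete (a back-of-the-envelope sketch of our own, not something from D-Wave or NASA): fully describing n qubits on a classical machine takes 2^n complex amplitudes, which is why the "more numbers than atoms in the visible universe" comparison above kicks in so quickly.

import numpy as np

def amplitudes_needed(n_qubits):
    # A full classical description of an n-qubit state needs 2**n complex numbers.
    return 2 ** n_qubits

for n in (1, 10, 50, 300):
    print(f"{n:>3} qubits -> {amplitudes_needed(n):.3e} complex amplitudes")

# A tiny concrete case: an equal superposition of 3 qubits holds all 8
# basis states at once, each with the same probability of being observed.
n = 3
state = np.ones(2 ** n, dtype=complex) / np.sqrt(2 ** n)
print("3-qubit equal superposition:", state)
print("measurement probabilities:", np.abs(state) ** 2)

At 300 qubits the count already dwarfs the roughly 10^80 atoms estimated for the observable universe, which is the point Preskill and Michalakis make above.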
<urn:uuid:30279a3d-dd6b-46a7-9bd9-f93ba7d3ebb4>
CC-MAIN-2017-09
https://gcn.com/articles/2013/09/10/dwave-quantum-computing.aspx?admgarea=TC_BigData
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172447.23/warc/CC-MAIN-20170219104612-00301-ip-10-171-10-108.ec2.internal.warc.gz
en
0.948465
1,180
3.484375
3
Subnetting is definitely one of the things you need to know inside and out to pass your CCENT 100-101 or CCNA 200-120 exam. You will see pretty straightforward subnetting questions, and you will also see scenario-based questions where you need to employ your subnetting skills to determine where the problem resides. A classic example of such a subnetting question on the CCENT or CCNA exam is one where you have hosts on different subnets and the question states that Host A and Host B cannot communicate. Since the topology shown uses variable length subnet masks, you will need to identify the different subnet ranges, and you will find that one of the hosts is configured with a gateway router address that is not within its subnet range even though they are physically connected. That is just one of many examples of why you need to know subnetting inside and out. Additionally, the better you know subnetting, the faster you can get through the questions; you do not want to be struggling to figure out subnet ranges, because the exam gives you only a limited amount of time and most students have just a few minutes left at the end. A quick tip: as the proctor shows you to your testing seat, they will usually hand you two laminated dry-erase sheets with a dry-erase marker to take notes and do your subnetting in place of scrap paper. The proctor will then start the exam session for you, and you have about 15 minutes to answer various marketing questions that do not impact your exam. During that time, write down your subnetting charts on the laminated sheets. This way you can quickly refer back to them during the test. It might only save a few minutes, but every minute counts on this exam! So below we have another classic subnetting question you may see on the exam. Take a look at the network topology below. One of our CCNA-certified network administrators has added a new subnet with 17 hosts to the network. Which subnet address/mask should this network use to provide enough usable addresses while wasting the fewest addresses?

Variable Length Subnet Mask (VLSM) Subnetting

Answer A would provide 254 usable hosts per subnet, so that is far too many. The number of hosts needed is 17; therefore the subnet mask should be /27 (255.255.255.224), which gives 32 addresses and 30 usable hosts. In this scenario there are only two answers that could fit, which are B and C. B is wrong because the network 192.168.0.64/27 is already used by another subnet on the first router. Therefore it cannot be used, and accordingly the correct answer is C. Answer D is incorrect because a /29 (255.255.255.248) provides only 6 usable hosts per subnet and does not meet the requirement of at least 17 hosts. Answer E is incorrect because a /26 (255.255.255.192) provides 62 usable hosts per subnet, which, like answer A, is wasteful because it provides too many. A quick way to double-check these host counts is sketched below. What is really cool is that when you have your own CCNA lab, you will be able to actually configure the routers to match the topology and cycle through the different options to see what really works. This way it is not simply theory; it becomes real to you when you see these concepts in action!
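If you want to verify those host counts yourself, Python's standard ipaddress module does the arithmetic. This is just a study aid, not part of the exam question; the 192.168.0.64 network is borrowed from answer B, and the prefix lengths are the answer choices being compared.

import ipaddress

for prefix in (24, 26, 27, 29):
    net = ipaddress.ip_network(f"192.168.0.64/{prefix}", strict=False)
    usable = net.num_addresses - 2   # drop the network and broadcast addresses
    print(f"/{prefix}  mask {net.netmask}  "
          f"{net.num_addresses} addresses, {usable} usable hosts")

# Smallest prefix that still leaves room for 17 usable hosts:
need = 17
best = next(p for p in range(30, 0, -1)
            if ipaddress.ip_network(f"0.0.0.0/{p}").num_addresses - 2 >= need)
print(f"Tightest subnet for {need} hosts: /{best}")   # prints /27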
<urn:uuid:3e871333-85dc-4574-9cfb-cced17e01002>
CC-MAIN-2017-09
https://www.certificationkits.com/ccent-ccna-subnetting-exam-question/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174154.34/warc/CC-MAIN-20170219104614-00477-ip-10-171-10-108.ec2.internal.warc.gz
en
0.960179
714
3.578125
4
An ongoing multibillion-dollar overhaul of the nation's air traffic control (ATC) system is designed to make commercial aviation more efficient, more environmentally friendly and safer by 2025. But some white-hat hackers are questioning the safety part. The Next Generation Air Transportation System (NextGen) will rely on Global Positioning Systems (GPS) instead of radar. And so far, several hackers have said they were able to demonstrate the capability to hijack aircraft by spoofing their GPS components. The Federal Aviation Administration (FAA) has declared that it already has multiple measures to detect fake signals. But it has so far not allowed any independent testing of the system. The hacking exploits are not new. National Public Radio's "All Tech Considered" reported last August that Brad Haines, a Canadian computer consultant known online as "RenderMan," noted that the radio signals aircraft will send out to mark their identity and location under NextGen, called automatic dependent surveillance-broadcast (ADS-B), were both unencrypted and unauthenticated. By spoofing those signals, Haines said he could create fake "ghost planes." "If I can inject 50 extra flights onto an air traffic controller's screen, they are not going to know what is going on," he told NPR. "If you could introduce enough chaos into the system - for even an hour - that hour will ripple though the entire world's air traffic control." Haines presented his findings at the Defcon hacking conference in Las Vegas last summer [http://www.csoonline.com/article/713233/the-black-hat-bsideslv-and-defcon-post-mortem]. Then there is the group of researchers from the University of Texas that successfully hijacked a civilian drone at the White Sands Missile Range in New Mexico during a test organized by the Department of Homeland Security (DHS) last summer. The system used to hijack the drone cost about $1,000. The NextGen program is expected to cost taxpayers $27 billion, plus another $10 billion spent by the commercial aviation industry. In a third case, NPR reported that Andrei Costin, a Romanian graduate student in France, was able to build a software-defined radio hooked to a computer that created fake ADS-B signals in a lab. It cost him about $2,000. Costin made a presentation at last summer's Black Hat conference. Paul Rosenzweig, founder of Red Branch Law & Consulting and a former deputy assistant secretary for policy at DHS, wrote in a post last week on Lawfare that this amounts to the FAA continuing to dig itself into a deeper hole. One problem, he wrote, is that the eventual goal is to eliminate radar, which is inefficient because it requires planes to fly on designated radar routes. "But the hardware for radar broadcasting and reception can't (that I know of) be spoofed," he wrote. "Today, when planes fly using GPS they 'double check' their location with radar. [But] the entire plan behind NextGen is to eventually get rid of the radar system -- an expensive 20th century relic, I guess. But then we are completely dependent on GPS for control." The FAA told NPR that besides confirming ADS-B signals with radar, the NextGen system will automatically check to make sure the correct receivers are picking up the correct signals. If a "ghost plane" is sending a signal to the wrong receiver, it would be spotted as fake. Third, it will use a technique called "multilateration" to determine exactly where every ADS-B signal is sent from. Nick Foster, a partner of Brad Haines, praised the use of multilateration. 
"But I still wonder if it would be possible to fool the system on the edges," he told NPR. "I think the FAA should open it up and let us test it." The risks of GPS hacks extend beyond aviation. Logan Scott, a GPS industry consultant, told Wired magazine last year [http://www.wired.com/dangerroom/2012/07/drone-hijacking/2/] that GPS is also used to control the power grid, to power banking operations including ATMs and to keep oil platforms in position. The world's cellular networks also rely on it. And given that it is free, unauthenticated and unencrypted makes it vulnerable. "The core problem is that we've got a GPS infrastructure which is based on a security architecture out of the 1970s," Scott said. Not everybody sees the GPS vulnerability as a major safety problem, however. Martin Fisher is now director of information security at Wellstar Health System, but worked previously in commercial aviation for 14 years. He said radar will still be around, even when the transition to NextGen is complete. "Don't for a moment believe there won't be radar anymore," he said. "Commercial aircraft will still have anti-collision radar and proximity alarms." Beyond that, he said, "do not make the assumption that the pilots flying your aircraft simply follow the instructions of ATC like automatons. These are very highly trained men and women with years of experience flying day, night, good weather, bad weather." Paul Rosenzweig said he would still be much more comfortable if the FAA would allow the system to be "stress tested." Whatever bugs are in the system, there may be more than 12 years available to fix them. The Washington Post reported in September that Calvin L. Scovel III, inspector general for the Department of Transportation, told a House subcommittee that the program was "four years behind schedule and $330 million over budget."
<urn:uuid:36ceefca-32cb-47a2-ae5f-746008bfd9f4>
CC-MAIN-2017-09
http://www.csoonline.com/article/2132793/access-control/hackers-say-coming-air-traffic-control-system-lets-them-hijack-planes.html?source=rss_news
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170434.7/warc/CC-MAIN-20170219104610-00245-ip-10-171-10-108.ec2.internal.warc.gz
en
0.960449
1,167
2.90625
3
Why Do I Have to Tighten Security on My System? (Why Can't I Just Patch?) - Page 2

The Lifecycle of the Modern Security Vulnerability

I. Bug Discovery
If we think about the security vulnerabilities that crackers exploit, whether locally or remotely, we realize that they're caused by one thing: a "bug" in either the design or the implementation of a system program. The lifecycle of this vulnerability starts when someone discovers this bug, through whatever method. They may be reading code or reverse-engineering the program, but they might as well be reading Internet RFCs describing a given protocol. In any case, the problem becomes a real possibility at the moment someone discovers this bug. It becomes a little worse if and when this person shares that knowledge with another.

II. Vulnerability Discovery
Now at some point, possibly seconds later, someone realizes that this "bug" actually leaves a security hole in the program. If this program has privilege, the vulnerability may be exploitable to gain that privilege. Again, the discoverer doesn't necessarily share this knowledge with anyone!

III. Exploit Coding
Next, someone writes an exploit: a program or procedure that uses the bug to, for example:
- Run arbitrary commands.
- Dump a section of memory containing passwords or other privileged information to a file.
- Write well-crafted data to the end of a specific file.
At this point, the vulnerability has become our problem. The exploit writer now has the capability to break into our machine - and we usually don't even know about the vulnerability. This is not good.

IV. Exploit Sharing
The exploit coder may share his exploit at this point. He can distribute it privately, among friends and acquaintances. Our problem just got worse, as there are now more people who can break into our machine, and we may still not even know about the vulnerability.

V. Public Release!
Finally, one of the exploit owners may choose to release the exploit publicly, on BugTraq or other security mailing lists and possibly on security web sites. Our problem just got worse, in that now every script kiddie has access to a working exploit. Remember, there are tons of them and they're scanning the net indiscriminately, so we could be a target. But our situation can finally be improved, in that someone might fix the vulnerability now! Remember, there's no guarantee that the vulnerability/exploit will ever reach this stage! Many exploits are circulated quite privately among cracker groups and thus don't become well-known for some time, if ever.

VI. Source Code Patch
Once the vulnerability is well-known, someone can code a patch. Often, the patch will be released on BugTraq and/or the vendor web site. This can happen very quickly in the Open Source community, but still often takes 1 hour to 4 days. Further, these source-code level patches are applied only by some sysadmins, who have the time and expertise to patch in this manner. Most admins wait for a vendor-supplied patch or update package. Finally, realize that even for this first group, there has already been a sizable window of opportunity in which their system could have been cracked. To see this, consider all the time between step III and steps IV, V and VI! In all this time, some number of crackers has had a working method of cracking our machine, usually before we've even heard about it!

VII. Vendor Patch
Now some number of days, weeks or months later, the vendor will release a patch. At this point our troubles, with this particular vulnerability, are usually over!
Remember, though, there has been a sizable window of opportunity between initial coding of the exploit and the vendor patch. In these days, weeks or months, your machine has been rather vulnerable. Given the indiscriminate nature of the script kiddie, there's a very real chance that you could get hit! Let's recapitulate the dangers here: first, many exploits are privately used, but not publicly announced for some time. Second, there's a delay between availability of exploit code and a source code patch. Third, vendors take quite a while to release that patch/update, leaving a large window of vulnerability in which you can be attacked. Fourth, there's a boatload of script kiddies out there, which means that while the exploit is publicly available, a number of people are firing it indiscriminately against many random machines on the Internet. The only real way that we can stop the script kiddie is to actually take some proactive action.

Really Stopping the Kiddie!

Now that you realize that you've got to do something proactive to stop the script kiddie, let's consider what you can do. First, if you're on a Linux system, run Bastille Linux (shameless plug!). Bastille can harden a system for you very effectively with a minimum of hassle - it'll also teach you a fair deal in the process! You can also harden a system by hand, though it's likely to be less comprehensive than a Bastille run, unless you're using a very well-written checklist. If you do this all by hand, keep in mind these minimum important steps:
- Firewall the box - if possible, do this both on the box and on your border router to the Internet.
- Patch, patch, patch and patch some more. Automate this process, if possible, to warn you of new patches as soon as they're released. Please remember that the window of vulnerability is large enough without a sysadmin waiting two to four weeks to apply patches...
- Perform a Set-UID root audit of the system, to clear up as many (local) paths to root as possible. I show how to do this and perform one for Red Hat 6.x in my previous SecurityPortal.com article; a small scripted example of such an audit also appears below.
- Deactivate all unnecessary network services/daemons, minimizing the possibility of remote exploits!
- Tighten the configurations of all remaining network services/daemons to better constrain remote exploits.
- Harden the core O/S itself, through PAM settings, boot security settings and so on...
- Educate the sysadmin and end users!
As I said, Bastille does this stuff very well. Here's a real-world example of how hardening a box can be so much more effective than only patching: Red Hat 6.0 shipped with a BIND named daemon that was vulnerable to a remote root exploit. This vulnerability was unknown at the time, so no patch existed for a little while. If you had run Bastille, it would already have minimized the risk from any BIND exploit, known or unknown, by setting BIND to run as a non-root user in a "chroot" prison. When the exploit came out, people who hadn't hardened BIND were vulnerable to a remote root grab. Thousands of machines, at least, were rooted before patches were released and applied. If you had hardened ahead of time, by running Bastille or otherwise, the root grab failed. This example is just one of several - there were a few ways to root a Red Hat 6.0 box, all of which could be minimized by judicious hardening.
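A minimal sketch of the Set-UID root audit step, assuming a typical Linux filesystem layout (the directory list below is only an example, and the classic shell equivalent is find / -type f -user root -perm -4000 -print):

import os
import stat

SEARCH_ROOTS = ["/usr/bin", "/usr/sbin", "/bin", "/sbin"]   # adjust to taste

def setuid_root_files(roots):
    # Walk the given directories and yield regular files that are
    # owned by root and carry the set-UID bit.
    for root in roots:
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    st = os.lstat(path)
                except OSError:
                    continue            # unreadable or vanished file
                mode = st.st_mode
                if stat.S_ISREG(mode) and (mode & stat.S_ISUID) and st.st_uid == 0:
                    yield path

if __name__ == "__main__":
    for path in sorted(set(setuid_root_files(SEARCH_ROOTS))):
        print(path)

Each binary on the resulting list is a potential local path to root, so anything that does not genuinely need the privilege is a candidate for chmod u-s.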
1 Actually, there are also hybrid exploits, where he gets an unprivileged shell on the system from some network daemon, without ever logging in, but these are a hybrid type. Our script kiddie generally doesn't use this stuff, though he may if he's bright or has a good text file instructing him.

Jay Beale is the Lead Developer of the Bastille Linux Project (http://www.bastille-linux.org). He is the author of several articles on Unix/Linux security, along with the upcoming book Securing Linux the Bastille Way, to be published by Addison Wesley. At his day job, Jay is a security admin working on Solaris and Linux boxes. You can learn more about his articles, talks and favorite security links via http://www.bastille-linux.org/jay.

SecurityPortal is the world's foremost on-line resource and services provider for companies and individuals concerned about protecting their information systems and networks. The Focal Point for Security on the Net (tm)
<urn:uuid:3a4f4ba8-74bd-4de1-9ff7-757a98ce69d1>
CC-MAIN-2017-09
http://www.enterprisenetworkingplanet.com/netsecur/article.php/10952_624511_2/Why-Do-I-Have-to-Tighten-Security-on-My-System-Why-Cant-I-Just-Patch.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171251.27/warc/CC-MAIN-20170219104611-00469-ip-10-171-10-108.ec2.internal.warc.gz
en
0.954187
1,699
2.96875
3
With much fanfare, NASA announced this week that its Kepler Space Telescope had discovered the first Earth-size planet that orbits a parent star in the so-called habitable zone, where temperatures could allow liquid water to collect on the surface. The planet has been named Kepler-186f, and the discovery itself is a first. Of the 1,700 confirmed planets Kepler has detected, the telescope found many that were either similar in size to Earth or that orbited in a star's habitable zone, but Kepler-186f is the first to achieve both. Certainly, it's an exciting time for NASA, but in a particularly interesting twist, the space agency and its team of scientists at Ames Research Center in Moffett Field, Calif., didn't find this planet in new data from Kepler; it actually came from an archived trove Kepler generated before the telescope malfunctioned in May 2013. Even without Kepler hunting for planets in the same way since, scientists have continued to pore over old data collected by the $600 million telescope since it was launched in 2009, using new techniques to find planetary diamonds in the rough. In fact, NASA announced the confirmation of 715 new planets in February, all unveiled through a new method called the "verification by multiplicity technique," which examined possible multi-planet systems Kepler detected in its first two years. Much like those newly confirmed planets, evidence for Kepler-186f already existed -- not light-years away in space, but stored away in a NASA facility. Scientists believe there may be hundreds or more planets waiting to be found in existing data. Even more reason for optimism for planetary scientists: Two full years of Kepler data remain to be explored. NASA officials have even suggested the agency might use its D-Wave 2, a quantum computer it operates in partnership with Google and the Universities Space Research Association, to pore over existing Kepler data. All in all, it's a very exciting time for NASA. "The discovery of Kepler-186f is a significant step toward finding worlds like our planet Earth," said Paul Hertz, NASA's Astrophysics Division director at the agency's headquarters in Washington. "Future NASA missions, like the Transiting Exoplanet Survey Satellite and the James Webb Space Telescope, will discover the nearest rocky exoplanets and determine their composition and atmospheric conditions, continuing humankind's quest to find truly Earth-like worlds." Meanwhile, NASA is in the latter stages of determining whether to approve a new mission called "K2" for Kepler. Since the second of its four gyroscope-like reaction wheels failed in May – it requires three to navigate precisely enough to gaze at stars – NASA has been exploring alternative ways for Kepler to effectively carry out science. If approved, K2 would attempt to use the pressure of sunlight on the spacecraft to stabilize the telescope in conjunction with its two operational reaction wheels over intervals of several months. Whether Kepler is repurposed or not, scientists have more than enough data to keep searching for Earth-like planets through information that is already at their fingertips.
<urn:uuid:4d68d1ce-a490-47c3-bd2f-3a30981412db>
CC-MAIN-2017-09
http://www.nextgov.com/big-data/2014/04/nasa-found-new-earth-planet-using-old-data/82819/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171646.15/warc/CC-MAIN-20170219104611-00645-ip-10-171-10-108.ec2.internal.warc.gz
en
0.933626
636
3.546875
4
Genius People in History - Top 10 Genius People

First off, you may be surprised to find that Albert Einstein is not included on this list of genius people. It is difficult to explain exactly what makes a person a genius, but it is a fact that some people are geniuses by nature. The reason is that I have used a table of IQ estimates for historical geniuses to determine the members and order of this list, and Einstein's IQ (around 160) did not make the grade. Despite that, he is still the first person to pop into most people's minds when thinking of a genius. Having said that, here is a list of the ten greatest geniuses in history.

10. Madame de Staël (IQ: 180)
In full - Anne-Louise-Germaine Necker, Baronne (baroness) de Staël-Holstein, byname Madame de Staël. Madame de Staël was a French-Swiss woman of letters, political propagandist, and conversationalist, who epitomized the European culture of her time, bridging the history of ideas from Neoclassicism to Romanticism. She also gained fame by maintaining a salon for leading intellectuals. Her writings include novels, plays, moral and political essays, literary criticism, history, autobiographical memoirs, and even a number of poems. Her most important literary contribution was as a theorist of Romanticism. Madame de Staël is on an equal level with René Descartes, but I chose to include her rather than him in order to put at least one woman on this list.

9. Galileo Galilei (IQ: 185)
Galileo was an Italian natural philosopher, astronomer, and mathematician who made fundamental contributions to the sciences of motion, astronomy, and strength of materials and to the development of the scientific method. His formulation of (circular) inertia, the law of falling bodies, and parabolic trajectories marked the beginning of a fundamental change in the study of motion. His insistence that the book of nature was written in the language of mathematics changed natural philosophy from a verbal, qualitative account to a mathematical one in which experimentation became a recognized method for discovering the facts of nature. Finally, his discoveries with the telescope revolutionized astronomy and paved the way for the acceptance of the Copernican heliocentric system, but his advocacy of that system in support of his view that the Bible contained errors eventually resulted in an Inquisition process against him.

8. Bobby Fischer (IQ: 187)
Bobby is the byname of Robert James Fischer, an American chess master who became the youngest grandmaster in history when he received the title in 1958. His youthful intemperance and brilliant playing drew the attention of the American public to the game of chess, particularly when he won the world championship in 1972. Fischer learned the moves of chess at age 6 and at 16 dropped out of high school to devote himself fully to the game. In 1958 he won the first of many American championships. In world championship candidate matches during 1970-71, Fischer won 20 consecutive games before losing once and drawing three times to former world champion Tigran Petrosyan of the Soviet Union in a final match won by Fischer. In 1972 Fischer became the first native-born American to hold the title of world champion when he defeated Boris Spassky of the Soviet Union in a highly publicized match held in Reykjavík, Iceland. In doing so, Fischer won the $156,000 victor's share of the $250,000 purse.
7. Ludwig Wittgenstein (IQ: 190)
In full Ludwig Josef Johann Wittgenstein, he was an Austrian-born English philosopher, regarded by many as the greatest philosopher of the 20th century. Wittgenstein's two major works, Logisch-philosophische Abhandlung (1921; Tractatus Logico-Philosophicus, 1922) and Philosophische Untersuchungen (published posthumously in 1953; Philosophical Investigations), have inspired a vast secondary literature and have done much to shape subsequent developments in philosophy, especially within the analytic tradition. His charismatic personality has, in addition, exerted a powerful fascination upon artists, playwrights, poets, novelists, musicians, and even filmmakers, so that his fame has spread far beyond the confines of academic life.

6. Blaise Pascal (IQ: 195)
Blaise Pascal was a French mathematician, physicist, religious philosopher, and master of prose. He laid the foundation for the modern theory of probabilities, formulated what came to be known as Pascal's law of pressure, and propagated a religious doctrine that taught the experience of God through the heart rather than through reason. The establishment of his principle of intuitionism had an impact on such later philosophers as Jean-Jacques Rousseau and Henri Bergson and also on the Existentialists.

5. John Stuart Mill (IQ: 200)
John Stuart Mill was an English philosopher, economist, and exponent of Utilitarianism. He was prominent as a publicist in the reforming age of the 19th century, and remains of lasting interest as a logician and an ethical theorist. Mill was a man of extreme simplicity in his mode of life. The influence that his works exercised upon contemporary English thought can scarcely be overestimated, nor can there be any doubt about the value of the liberal and inquiring spirit with which he handled the great questions of his time. Beyond that, however, there has been considerable difference of opinion about the enduring merits of his philosophy.

4. Gottfried Wilhelm von Leibniz (IQ: 205)
Gottfried Wilhelm Leibniz (also Leibnitz or von Leibniz; July 1 (June 21, Old Style) 1646 - November 14, 1716) was a German philosopher of Sorbian origin who wrote primarily in Latin and French. Educated in law and philosophy, and serving as factotum to two major German noble houses (one becoming the British royal family while he served it), Leibniz played a major role in the European politics and diplomacy of his day. He occupies an equally large place in both the history of philosophy and the history of mathematics. He discovered calculus independently of Newton, and his notation is the one in general use since. He also discovered the binary system, the foundation of virtually all modern computer architectures. In philosophy, he is most remembered for optimism, i.e., his conclusion that our universe is, in a restricted sense, the best possible one God could have made.

3. Emanuel Swedenborg (IQ: 205)
Emanuel Swedenborg was a Swedish scientist, Christian mystic, philosopher, and theologian who wrote voluminously in interpreting the Scriptures as the immediate word of God. Soon after his death, devoted followers created Swedenborgian societies dedicated to the study of his thought. These societies formed the nucleus of the Church of the New Jerusalem, or New Church, also called the Swedenborgians.

2. Leonardo Da Vinci (IQ: 205)
Leonardo Da Vinci was an Italian painter, draftsman, sculptor, architect, and engineer whose genius, perhaps more than that of any other figure, epitomized the Renaissance humanist ideal.
His Last Supper (1495-98) and Mona Lisa (c. 1503-06) are among the most widely popular and influential paintings of the Renaissance. His notebooks reveal a spirit of scientific inquiry and a mechanical inventiveness that were centuries ahead of their time. The unique fame that Leonardo enjoyed in his lifetime and that, filtered by historical criticism, has remained undimmed to the present day rests largely on his unlimited desire for knowledge, which guided all his thinking and behaviour.

1. Johann Wolfgang von Goethe (IQ: 210)
Goethe, German poet, playwright, novelist, scientist, statesman, theatre director, critic, and amateur artist, is considered the greatest German literary figure of the modern era. Goethe is the only German literary figure whose range and international standing equal those of Germany's supreme philosophers (who have often drawn on his works and ideas) and composers (who have often set his works to music). In the literary culture of the German-speaking countries, he has had so dominant a position that, since the end of the 18th century, his writings have been described as "classical". In a European perspective he appears as the central and unsurpassed representative of the Romantic movement, broadly understood.
<urn:uuid:1becfea5-4756-4038-ad2e-e7d9b25c71a3>
CC-MAIN-2017-09
http://www.knowledgepublisher.com/article-862.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169769.33/warc/CC-MAIN-20170219104609-00114-ip-10-171-10-108.ec2.internal.warc.gz
en
0.967306
1,830
2.890625
3
Smart-phone battery life could double -- without better batteries

Batteries are the most unappreciated technology in use today. They almost never make the list of important innovations that changed modern society, though many of the technologies that do, like smart phones, rely heavily on them. For public-sector officials, staying connected means staying powered up. And for first responders and military personnel, battery life can be a lifeline whose importance can't be overstated.

Of course, battery-makers haven't done much to attract attention -- the technology hasn't evolved very much or very quickly over the past 20 or even 50 years. But battery life for mobile devices could nevertheless change in a big way if the firm Eta Devices has anything to say about it. In February 2013, its team will present a paper at the Mobile World Congress in Barcelona showing ways to vastly increase battery life.

Interestingly enough, the MIT spinoff isn't trying to build a better battery. There are plenty of teams attempting that, from Northwestern University, which has found a way, in theory, to put more lithium ions onto an anode, to IBM, which seems to have a whole division constantly trying to come up with new battery designs. Instead, the Eta Devices team is trying to make the devices that depend on batteries, especially cell phones, more efficient.

The reason Eta is staying out of the battery-making game is that many factors go into a modern battery, and improvements in one area can counteract gains in others. For example, users want batteries with high energy density, which determines how much charge a battery can hold for its size and weight. They also want to recharge batteries as quickly as possible, which depends on a factor called power density. Then durability comes into play: how many times a battery can be charged and drained before it no longer works. Increased energy density often leads to reduced power density, and durability almost always suffers when the other areas are improved. Short of a whole new hardware technology, we may have reached the upper limit of battery efficiency in all of these areas.

The Eta team is staying out of that altogether, which I think is a smart move. Instead, they have invented a process they say will make radios and phones more efficient. It's a complicated process full of equations and test graphs, but it can be summed up by saying that cell phones waste a lot of power trying to push out a signal. This is due both to the logic inside phones and to the inefficient way packets on cellular networks are handled.

Phone inefficiency is easy to see when users are outside their normal coverage area, deep inside a building or simply far from a tower. In those cases, a phone will ramp up the power of its transmitter while searching for a signal. The phone gets hotter and uses more power, sometimes draining the battery twice as fast as normal. The phone is basically in an enhanced talk mode when it should be in standby. But even with a good signal, a phone or other mobile device is probably using more power than it should, because it's programmed to do so to overcome any inefficiency in the network.

To some extent, we have tested this phenomenon in the GCN lab. When we reviewed the Wilson Sleek cell phone booster, we sent a tester into the hinterlands between cities and away from cell phone towers.
Not only was the Sleek able to remain connected in more places, but battery life also lasted longer because the phone wasn't constantly searching for a signal.

The Eta phone design, called Asymmetric Multilevel Outphasing Architecture for Multi-standard Transmitters, is a lot smarter at determining how much power is needed for a call or data burst. It allows phones or other mobile devices to power up only to the point required, and no further. Unfortunately, this has to be coupled on the other end, at the tower, with a new, more efficient packet-encoding scheme called Network Coding. The two work together to dynamically determine the power needed for any phone.

It might be a lot to ask cellular providers to install new software throughout their networks, but this is probably easier than trying to develop a new battery technology from scratch, which often involves working at the molecular level and years of research and development. And there is incentive for companies: the ones that implement such a technology could rightfully claim longer battery life and faster transmission rates without changing much on the hardware side. Agency employees in the field would certainly be interested in phones like that.

For government, this might mean that those wireless lifelines would last a lot longer and be more efficient and more robust, without increasing the weight or size of the physical gear. And that would be a win for everyone involved, if this new technology works as intended.

Posted by John Breeden II on Nov 14, 2012 at 9:39 AM
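To get a feel for why capping transmit power at what the link actually needs can stretch battery life, here is a minimal toy model in Python. It is only a sketch: the battery capacity, the power figures and the simple linear relationship between signal quality and required power are all invented for illustration and are not taken from Eta Devices' design.

# Toy model: adaptive transmit power vs. "ramp up when in doubt".
# Every number below is invented purely for illustration.

BATTERY_MWH = 5000        # hypothetical battery capacity, milliwatt-hours
IDLE_MW = 20              # hypothetical baseline drain of the handset, milliwatts
MAX_TX_MW = 250           # hypothetical worst-case transmitter drain, milliwatts

def naive_tx_power(signal_quality):
    # Older behaviour: a weak signal pushes the transmitter toward maximum power.
    return MAX_TX_MW if signal_quality < 0.5 else 120

def adaptive_tx_power(signal_quality):
    # Eta-style idea: estimate what the link needs and go no further.
    needed = MAX_TX_MW * (1.0 - signal_quality)   # crude stand-in for path loss
    return max(40, min(MAX_TX_MW, needed))

def hours_of_life(tx_policy, signal_quality=0.3):
    # Very rough estimate: assume constant drain at one fixed signal quality.
    drain_mw = IDLE_MW + tx_policy(signal_quality)
    return BATTERY_MWH / drain_mw

print("naive:    %.1f hours" % hours_of_life(naive_tx_power))
print("adaptive: %.1f hours" % hours_of_life(adaptive_tx_power))

With a weak signal (quality 0.3 in this toy model), the adaptive policy adds roughly a third more battery life; any real-world gain would depend on the radio hardware and on the network-side Network Coding support described above.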
<urn:uuid:933c7331-9f20-4cad-95b9-8d205ddbcd08>
CC-MAIN-2017-09
https://gcn.com/blogs/emerging-tech/2012/11/smart-phone-battery-life-could-double-without-better-batteries.aspx?admgarea=TC_EmergingTech
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171971.82/warc/CC-MAIN-20170219104611-00518-ip-10-171-10-108.ec2.internal.warc.gz
en
0.964463
1,032
2.9375
3