Pain management devices help reduce the suffering caused by chronic pain. The primary goal of hospice pain management is to enhance quality of life, and managing pain well can also improve a person's physical and mental functioning. Chronic pain is persistent and is commonly subdivided into cancer-related pain and non-malignant pain such as arthritis, low back pain, and peripheral neuropathy. Inadequately managed pain can lead to adverse physical and psychological outcomes for patients and their families. Continuous, unrelieved pain stimulates the pituitary-adrenal axis, which can suppress the immune system and contribute to postoperative infection and slow wound healing.

What are pain management devices?

Pain management devices are medical devices used to relieve different forms of pain, such as neuropathic pain, cancer pain, nociceptive pain, and musculoskeletal pain. Spinal cord stimulators, transcutaneous electrical nerve stimulators, analgesic infusion pumps, and ablation devices are the main types of pain management devices available to patients. Pain management by nerve stimulation is an alternative to surgery and medication, or a way to improve the results of an operation or a drug regimen; it is among the latest approaches to non-invasive pain treatment. Electrical stimulation can be used to treat and reduce pain in many parts of the body, both at home and in therapy.

Why are pain management devices important?

Effective pain management devices can keep discomfort in check and put the patient at ease. The patient will be asked to resume a steady pace of different activities, which may cause pain while the individual is healing. With proper pain management, the patient may notice the difference in pain levels and can continue exercising without feeling uncomfortable. Control is important for many patients: they want to take charge of their lives, which means playing an active role in how they regain their activity levels and how they manage their pain.

Applications of pain management devices:

Musculoskeletal pain refers to pain in the muscles, bones, ligaments, tendons, and nerves. You can feel the pain in only one part of your body, like your back, or throughout your body if you have a widespread condition like fibromyalgia. The pain might vary from mild to severe enough to interfere with your daily life. It can begin all of a sudden and be short-lived, which is called acute pain; pain that lasts for more than 3 to 6 months is called chronic pain. Such conditions affect the bones, muscles, joints, and ligaments directly. An injury to the bones, joints, muscles, tendons, or ligaments is the most common cause of musculoskeletal pain; falls, sports injuries, and car collisions are only a few of the events that can cause it.

Neuropathic pain is often described as a burning or shooting pain. It may resolve on its own but is frequently chronic: at times it is relentless and severe, and at times it comes and goes. It is often the result of damage to nerves or a malfunctioning nervous system, and the impact of nerve damage is a change in nerve function both at the injury site and in the surrounding area. The prevalence of neuropathic pain after certain injuries is high, reaching up to 95 percent of cases when there has been a cervical root avulsion. Neuropathic pain is the result of damage to the somatosensory system, and its development into a chronic condition depends on disturbances in both the peripheral and central nervous systems.
Managing these painful conditions is complicated and needs to be handled by a multidisciplinary team, beginning with first-line pharmacological therapies such as tricyclic antidepressants and calcium channel ligands, combined with physical and occupational therapy, transcutaneous electrical nerve stimulation, and psychological support.

Top pain management industry trends:

Stem Cell Procedures

Biological approaches, such as the use of stem cells and platelet-rich plasma (PRP), are some of the most exciting trends in pain control. Doctors can harvest a patient's own cells and then reinject them directly into the injured or painful area. These cells then activate certain tissues, helping disks, nerves, and other parts of the body heal from injury. Because they are minimally invasive, both stem cell and PRP therapies are attractive treatment choices for patients who may not be surgical candidates or who prefer a conservative, low-risk approach.

Mind-Body Therapies

Doctors and patients have become increasingly interested in the mind-body relationship in pain control. For example, patients suffering from chronic pain conditions such as fibromyalgia or rheumatoid arthritis find that exercise does not just boost fitness; it improves mental well-being, and the endorphins released by physical exercise have the added advantage of reducing daily pain. Patients are advised to take yoga or Pilates classes and to learn how to meditate. Many primary care facilities also have on-site chiropractors and physical therapists who provide therapies designed to align and relax the body. The belief is that, in certain cases, proper alignment of the spine and relaxation of the muscles will reduce pain.

Electrical Stimulation Devices

Electrical stimulation therapy is now one of the most common methods of relieving muscle pain. These devices are non-addictive alternatives to narcotic painkillers and can reduce discomfort in most patients, often by as much as 80 percent. For example, the Neuro-Stim System (NSS) is an FDA-cleared peripheral nerve field stimulator designed to deliver low-frequency electrical impulses to the associated nerves. Low-frequency impulses decrease activation of the sympathetic nervous system, minimize inflammation, and increase blood flow and tissue oxygenation; the result is long-term pain reduction without narcotic side effects.

Spinal cord stimulation offers relief for chronic back, leg, foot, or knee pain that has not responded to other treatments such as medication, physical therapy, or injections. The spinal cord stimulator is a very small battery-powered device, similar to a pacemaker, that is implanted under the skin. A comparatively less invasive surgical technique lets patients control their own pain relief by means of electrodes placed in the epidural space to stimulate nerves in the spinal cord.

The bottom line

In recent years, technological advances in pain management devices have played an immense role in the overall growth of the pain management market. Several firms have launched products with improved feasibility and high operational effectiveness. In January 2019, Boston Scientific announced the launch of an innovative chronic pain treatment system, described as one of the very few products on the market able to combine paresthesia-based and sub-perception therapy. The Spectra WaveWriter SCS System was launched to offer a non-drug treatment option for chronic pain patients.
The increasing incidence of diseases such as cancer, cardiovascular disease, neurovascular disease, and musculoskeletal disease is expected to drive growth in the market for pain management devices, which is projected to reach USD 6.3 billion by 2026. These diseases have a critical impact on a person's health, weakening immunity and causing chronic pain. They occur most frequently in adults and geriatric populations, while chronic pain is also common among athletes, sportspeople, and people living with past injuries. Chronic pain increases a person's dependence on others for various everyday activities. Pain management is therefore necessary if a regular daily routine is to be maintained. Pain management devices provide comfort and also improve patient health over longer durations. A wearable pain management device is easy to use and operate, which decreases a person's reliance on others; hence, demand among patients with chronic diseases is growing.
What is the National Institute of Standards and Technology (NIST)?

The National Institute of Standards and Technology (NIST), founded in 1901, is part of the U.S. Department of Commerce. One of the nation's oldest physical science laboratories, NIST is a non-regulatory agency providing standards, measurement, and technology across a broad range of fields. Numerous products and services rely on the technology, measurement, and standards provided by NIST, including smart electric power grids, electronic health records, atomic clocks, advanced nanomaterials, computer chips, and global communication networks.

The mission of NIST is to promote U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve the quality of life of U.S. citizens and residents. The core competencies of NIST are measurement science, rigorous traceability, and the development and use of standards.
NIST Multi-Factor Authentication (MFA) Guide for E-commerce Sites Arrives

The National Institute of Standards and Technology's (NIST) National Cybersecurity Center of Excellence (NCCoE) has published a guide to help online retailers implement multi-factor protections to reduce fraudulent purchases. The effort arrives as technology companies call on MSPs to embrace two-factor and multi-factor authentication (2FA and MFA) to safeguard both internal systems and customer systems. The NCCoE is a public-private partnership working on cybersecurity solutions for specific industries.

The 166-page document, entitled Multifactor Authentication for E-Commerce, is intended as a primer to show online retailers that it is possible to implement open, standards-based technologies to enable Universal Second Factor (U2F) authentication. "As retailers in the United States have adopted chip-and-signature and chip-and-PIN (personal identification number) point-of-sale security measures, there have been increases in fraudulent online card-not-present electronic commerce transactions," the document's six authors wrote.

MFA Security Explained

According to the NCCoE report, multi-factor authentication (MFA) is a "security enhancement that allows a user to present several pieces of evidence when logging into an account." The "evidence" is drawn from three categories: something you know, such as a password; something you have, such as a smart card; or something you are, such as a fingerprint. To enhance security, evidence from at least two categories must be present, the center said.

To test various methods of deploying MFA, the NCCoE built a laboratory environment exploring ways for both consumers and e-commerce platforms to deploy stronger identity and access controls in online retail environments. The examples are meant to encourage retailers to adopt MFA using standard, commercially available components and open-source applications. The NCCoE made it clear that it is not endorsing any particular MFA products.

MFA Security Benefits

Here are the MFA benefits for online retailers, according to the NCCoE:
- Help your organization reduce online fraudulent purchases, including those resulting from the use of credential stuffing to take over accounts.
- Show customers that the organization is committed to its security.
- Protect your e-commerce systems.
- Provide greater situational awareness.
- Avoid system-administrator-account takeover through phishing.
- Implement the example solutions by using the step-by-step guide.
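To make the "something you have" factor concrete, here is a minimal sketch of time-based one-time password (TOTP) verification along the lines of RFC 6238, the mechanism behind most authenticator apps. It is an illustration only, not code from the NCCoE guide, and the secret shown is a made-up demo value:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Compute an RFC 6238 TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)
    # HOTP (RFC 4226): HMAC-SHA1 over the 8-byte big-endian counter.
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret_b32, submitted, window=1):
    """Accept codes from adjacent 30-second steps to tolerate clock drift."""
    now = time.time()
    return any(
        hmac.compare_digest(totp(secret_b32, at=now + drift * 30), submitted)
        for drift in range(-window, window + 1)
    )

if __name__ == "__main__":
    secret = "JBSWY3DPEHPK3PXP"  # demo secret only; never hard-code real secrets
    code = totp(secret)
    print(code, verify(secret, code))  # the code an authenticator app would show
```

The second factor helps because the server and the authenticator app share the secret and the clock: a phished password alone is useless without the rotating code.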
Short-distance wireless communication technologies generally connect wireless devices within about 100 meters, making everyday communication more convenient. Today, LINKVIL will introduce short-distance wireless communication in more detail.

1. ZigBee

ZigBee is a short-distance wireless communication technology applied in home and building control, industrial auto-control, agricultural information collection and control, information detection and control in public places, and more.
· ZigBee supports short-distance connections of 10 ~ 100 meters
· ZigBee devices can work for 6 ~ 24 months on low power consumption in standby mode
· With no protocol licensing fee and cheap chips, ZigBee is a cost-effective choice for short-distance wireless communication
· ZigBee usually works at a low rate of 20 ~ 250 kbps
· ZigBee achieves quick response thanks to its short time delay

2. Bluetooth

Bluetooth provides point-to-point and point-to-multipoint wireless data and audio transmission within a radius of 10 meters, using electromagnetic waves from 2.402 GHz to 2.480 GHz. Its data transmission bandwidth can reach up to 1 Mbps. Bluetooth is widely used in all kinds of data and audio devices in a LAN, such as PCs, laptops, printers, fax machines, digital cameras, mobile phones, and high-quality headsets, enabling connections between different devices anytime and anywhere.

3. Wi-Fi

Wi-Fi covers a wider LAN area of up to 100 meters. Its transmission speed can reach 11 Mbps (802.11b) or 54 Mbps (802.11a), and up to 9.6 Gbps for Wi-Fi 6. People can get online via Wi-Fi hot-spots in crowded places such as train stations, bus stations, shopping malls, airports, libraries, and campuses.

4. Ultra Wide Band (UWB)

UWB is a carrier-free communication technology that uses non-sinusoidal narrow pulses on the nanosecond-to-microsecond scale to transfer data within 10 meters. UWB takes advantage of up to 1 GHz of bandwidth to reach communication speeds of more than hundreds of megabits per second. UWB operates in a band from 3.1 GHz to 10.6 GHz with a minimum operating bandwidth of 500 MHz.

5. Near Field Communication (NFC)

NFC is a newer short-distance wireless communication technology based on Radio Frequency Identification (RFID), operating at 13.56 MHz. It uses the same frequency as the widely deployed contactless smart card standard ISO 14443, providing a convenient communication method for all kinds of consumer electronic products. NFC adopts Amplitude Shift Keying (ASK) modulation with data transmission rates of 106 kbit/s and 424 kbit/s. With its short transmission distance, high bandwidth, and low power consumption, NFC is well suited to access control, public transit, and mobile payment applications where compatibility with contactless smart cards is required.
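To put these headline data rates in perspective, here is a quick back-of-the-envelope calculation of how long a 10 MB file transfer would take at each technology's peak rate. The rates are the ones quoted above; real-world throughput is lower because of protocol overhead, interference, and distance, so treat the numbers as ideal-case illustrations:

```python
# Ideal-case transfer time for a 10 MB file at each technology's quoted peak rate.
FILE_BITS = 10 * 10**6 * 8  # 10 megabytes expressed in bits

peak_rates_bps = {
    "ZigBee (250 kbps)": 250e3,
    "NFC (424 kbit/s)": 424e3,
    "Bluetooth (1 Mbps)": 1e6,
    "Wi-Fi 802.11b (11 Mbps)": 11e6,
    "Wi-Fi 802.11a (54 Mbps)": 54e6,
    "Wi-Fi 6 (9.6 Gbps)": 9.6e9,
}  # UWB omitted: the text only gives "hundreds of megabits per second"

for name, rate in peak_rates_bps.items():
    print(f"{name:26s} {FILE_BITS / rate:12.3f} s")
```

The spread is striking: roughly five minutes over ZigBee versus well under a second over Wi-Fi 6, which is why each technology targets a different class of application.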
When using Private DNS, the ISP (Internet Service Provider) and other observers will only see that you are connecting to a private DNS service; they cannot tell which websites or services you're accessing. Private DNS is a service that allows you to access blocked websites or content, similar to how VPNs work. Private DNS is free to use, and there are many options available, such as:

OpenDNS – "Protects your whole home network while keeping access to all the web content you love, like your favorite streaming TV and music services."

Google Public DNS – "Gives you faster browsing on both Google and non-Google websites."

By protecting your domains with private DNS, you will also receive some other benefits, such as:
- Encryption of your domain name information.
- Protection from malicious online ads and malware that try to change the DNS settings on your computer or phone.
- The ability to use custom subdomains rather than only standard top-level domains (e.g., apple.com).
- The ability to use internationalized domain names, which helps with things like setting up email addresses in foreign languages.

With the benefits offered by private DNS services for your domains, you can easily keep track of all changes across your online presence and stay secure at the same time.

How Does Private DNS Work?

Private DNS works by using your current public Internet connection to transfer information, but then moves that transferred data into a private network. Using the same technology as Virtual Private Networks (VPNs), this process ensures that no outside sources can access or alter your domain name information without permission from you first. By using private DNS with your domain names, you can protect yourself from outside sources and stay secure on the web at all times.

Private DNS also works to encrypt your information. Some ISPs can collect and sell data about you, including where you go on the web, what websites you visit while connected to that ISP's network, and other personal details like passwords or credit card numbers entered into sites while browsing. Private DNS services help keep this data outside of those networks by encrypting it.

Security Against Malicious Online Ads

Private DNS security also helps protect against malicious online ads and malware that try to change the DNS settings on your computer or phone and redirect you when visiting a website.

Help Protect Against Interruption By Third Parties

Private DNS services protect you by using private DNS networks instead of shared, public ones that can leave you vulnerable to these threats. Using a personal DNS service keeps the data encrypted and free from interruption by malicious third parties.

Help Protect Against International Phishing

Private DNS networks also help protect against international phishing and spoofing attacks by making sure you are directed to the correct website, no matter what country it is in or the language used on that site.

How To Add A Private DNS On Your Phone
- Go into Android settings and choose "Wi-Fi" from the menu on the left side of the screen.
- Choose "more" when you are in the Wi-Fi menu. Tap on "Private DNS" and type in the hostname given by your private DNS provider.
- Check the "Use Private DNS" option and then tap on the name of your current connection to select it if you have multiple options available.
- Choose "Save" when you are finished adding the private DNS connection, and make sure it is saved for future use on your phone. You can now connect to this network as normal, and all web requests will go through the encrypted DNS channels set up by your service provider instead of a public domain name server.

How To Add A Private DNS On Your Computer
- Go to your control panel and choose "Network Connections." This will bring up a list of all network connections currently being used on your computer.
- Right-click on the connection you are using to connect to the internet, which should be "Local Area Connection" or "Ethernet."
- Choose "Properties" and then select the TCP/IP Properties tab.
- Click on the "Use the following DNS server addresses" radio button and enter your private DNS IP address.
- Check the "Validate settings upon exit" box to save your changes.
- Choose "OK" to leave the network properties box and then "Close" in the connection properties menu to finish adding the private DNS connection.

Restart your computer to apply the changes, then connect to your private DNS network as usual and start browsing the web securely. All data sent from this point forward will be automatically encrypted and decrypted by your service provider.

Public vs. Private DNS

When using your ISP to connect to the internet, you are automatically connected through their public DNS servers. The information sent and received during this process can be seen by outside sources, including ISPs or third-party agencies like law enforcement. Public DNS connections do not encrypt any data transferred back and forth from your device, which leaves you vulnerable to ISP snooping or other third-party monitoring. Public DNS connections also leave you open to malware that tries to redirect your computer when visiting websites.

Private DNS security offers a much more secure way to connect without paying for other services or relying on your ISP. It provides the same connection speeds as public DNS but with added security and privacy features that protect you from outside agencies trying to monitor your data or attack your computer through malware installed by third parties. Private DNS networks encrypt all information being sent back and forth on their private networks.

Using private DNS for your domain names is crucial to protect yourself from outside sources and stay secure on the web. With private DNS you can keep all of your data encrypted and isolated, without interruption from malicious third parties like ISPs or hackers who try to access it.
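To see the encrypted DNS channel in action, here is a minimal sketch of a DNS-over-TLS lookup using the third-party dnspython library (installed with `pip install dnspython`). Cloudflare's public 1.1.1.1 resolver is used purely as an example; substitute the address and TLS hostname of your own private DNS provider:

```python
import dns.message
import dns.query

# Build a standard A-record query for example.com.
query = dns.message.make_query("example.com", "A")

# Send it over DNS-over-TLS (TCP port 853, TLS-encrypted) instead of plain
# UDP port 53, so on-path observers see only an encrypted connection.
response = dns.query.tls(
    query,
    where="1.1.1.1",
    port=853,
    timeout=5,
    server_hostname="cloudflare-dns.com",  # used to validate the TLS certificate
)

for answer in response.answer:
    print(answer)
```

This is exactly the kind of request the Android "Private DNS" setting makes on your behalf once a provider hostname is configured.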
Have you been trembling a little more than normal? If you've already laid off the caffeine, you might want to consider a social media diet as well. A study by Anxiety UK linked social media usage with increased anxiety.

That isn't to say social media is all bad, the organization stressed (pardon the pun). It has led to some significant advantages, like connecting people with each other in ways that wouldn't have been possible before.

"For many, many people, the rise of technology has been a big help," said Anxiety UK CEO Nicky Lidbetter. "Technology, particularly social networks, allows people who are housebound, due to conditions such as agoraphobia, the chance to interact with others far more easily than they were able to in the past. That is a really positive development."

Increased connectivity does come with a few drawbacks, though. The study found that more than 50 percent of individuals who regularly use social networking websites said their behavior had changed negatively. Researchers cited issues such as a higher likelihood for social media users to compare themselves negatively to others, and trouble pulling themselves away from social media sites.

Before you go off Facebook forever, keep in mind the survey only identified a correlation, which means social networking may not be causing anxiety. Maybe people predisposed to anxiety are just more likely to escape into Twitter or Facebook for relief. Besides, it can't be all bad, as researchers are looking into ways of using social media to treat medical conditions!

The University of California-San Diego is working on a social media website that will provide self-assessment tools for type 1 and type 2 diabetes. Researchers hope that using a social media platform will also encourage more patient-doctor interaction and information sharing, leading to greater awareness of treatments and conditions.

"Social networking provides a common way for patients with chronic disease to learn about their condition while interacting with others in similar situations," said Dr. Jason Bronner, associate clinical professor at UC-San Diego School of Medicine. "As opposed to open networks, the use of this tool allows us to ensure that the medical information they receive and share is accurate, safe and absent of advertising."

How often do you log on to social networking sites? Do you think it has negatively impacted your lifestyle?
What is Data Leakage?

Data leakage is when sensitive data is unintentionally exposed to the public. Data can be exposed in transit, at rest, or in use. Data exposed in transit includes data sent in emails, chat rooms, API calls, and so on. Data exposed at rest may be the result of a misconfigured cloud storage facility, an unprotected database, or lost or unattended devices. Data exposed in use may come from screenshots, printers, USB drives, or clipboards.

A data leak is not the same as a data breach, although a data leak can sometimes result in a data breach. The key difference is that a data leak is not the result of a hacking attempt but of employee negligence.

How Can Data Leaks Be Exploited?

What makes data leakage so problematic is that it is practically impossible to know who has access to the data once it has been exposed. Were a cyber-criminal to gain access to the leaked data, they could use it for a variety of purposes. Firstly, they might use it to launch a targeted social engineering attack (spear phishing). Naturally, the more confidential data they have access to, the easier it will be to impersonate an employee or executive. This is especially true if the leaked data contains psychographic data, such as a data subject's values, opinions, attitudes, interests, and lifestyle choices. Likewise, behavioral data, such as the data subject's search history, pages visited, and apps and devices used, can also be used to customize phishing emails.

Attackers can also use leaked data for marketing, doxxing, extortion, surveillance and intelligence, or simply to cause disruption to the organization whose data was leaked. Even though, in most cases, data leaks don't directly lead to a breach, they are still treated in much the same way. After all, any company that operates in a regulated industry will be required to notify the supervisory authorities about any personal data that was leaked to the public, regardless of whether or not the data was used for nefarious purposes. As such, companies must take data leaks very seriously in order to avoid the reputational and financial damage they can cause.

10 Ways to Prevent Data Leaks

The techniques and technologies used to prevent data leaks are mostly the same as those used to prevent data breaches. Most data loss prevention strategies start with carrying out risk assessments (including third-party risk assessments) and defining policies and procedures based on those assessments. However, in order to carry out a risk assessment, you must first understand what data you have and where it is located.

1. Data discovery and classification

Use a solution that can automatically discover and classify your sensitive data (a toy pattern-scanning sketch appears after this list). Once you have done this, carefully remove any ROT (Redundant, Obsolete, and Trivial) data to help streamline your data protection strategy. Classifying your data will make it easier to assign the appropriate controls and keep track of how users interact with your sensitive data.

2. Restrict access rights

As always, it is a good idea to limit the number of users who have access to sensitive data, as this will reduce the risk of data leakage.

3. Email content filtering

Use a content filtering solution that uses deep content inspection technology to find sensitive data in text, images, and attachments in emails. If sensitive data is found, the solution will send an alert to the administrator, who can verify the legitimacy of the transfer.
4. Controlling print

Sensitive files can be stored on printers and may be accessed by an unauthorized party. Ask users to sign in to access the printer, limit the printer's functionality based on their role, and ensure that documents containing sensitive data can only be printed once. You will also need to make sure that users don't leave printed documents containing sensitive data in the printer tray.

5. Encryption

It is always a good idea to encrypt sensitive data both at rest and in transit. This is especially relevant when storing sensitive data in the cloud.

6. Endpoint protection

A Data Loss Prevention (DLP) solution can be used to prevent endpoints (desktops, laptops, mobiles, servers) from leaking sensitive data. Some DLP solutions can automatically block, quarantine, or encrypt sensitive data as it leaves an endpoint. A DLP solution can also be used to restrict certain functions, such as copying, printing, or transferring data to a USB drive or cloud storage platform.

7. Device control

It is common for users to store sensitive documents on their smartphones and tablets. In addition to device management policies, you will need a solution that monitors and controls which devices are being used, and by whom. You will also need Mobile Device Management (MDM) software, as this makes it easier for security teams to enforce complex passwords, service devices remotely, and control which applications can be installed on a device. Most MDM solutions can also track the location of a device and even wipe its contents if it gets lost or stolen.

8. Cloud storage configuration

Data leaks caused by misconfigured storage repositories are common. For example, many data breaches have reportedly been caused by Amazon S3 buckets being exposed to the public by default. Likewise, GitHub repositories and Azure file shares have also been known to expose data when they are not configured correctly. As such, it is crucially important to have a formalized process for validating the configuration of any cloud storage repositories you use.

9. Real-time auditing and reporting

Arguably one of the most effective ways to prevent data leakage is to keep track of changes made to your sensitive data. Administrators should have an immutable record of who has access to what data, what actions were performed, and when. Administrators should also be informed in real time when sensitive data is accessed, moved, shared, modified, or removed in a suspicious manner or by an unauthorized party. This can be especially useful for monitoring access to sensitive data stored in the cloud. If an alert is raised, the administrator can launch an investigation, perhaps starting by verifying the permissions of the storage container.

10. Security awareness training

As mentioned previously, data leaks are caused by negligent employees, and the reality is that people make mistakes. Such mistakes might include emailing sensitive data to the wrong recipient, losing a USB drive, or leaving a printed document containing sensitive data in the printer tray. The most effective way to reduce the number of mistakes employees make is to ensure that they are well informed about data security best practices. Having an intuitive classification schema, such as public, internal, and restricted, will help employees determine how certain types of data should be handled.
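As promised in step 1, here is a toy sketch of pattern-based data discovery and classification. Commercial DLP tools use far more robust detection (exact data matching, fingerprinting, machine learning, Luhn validation for card numbers), so this is only meant to show the idea; the regex patterns and the public/internal/restricted mapping are illustrative assumptions:

```python
import re
from pathlib import Path

# Toy patterns for common sensitive data types; real scanners validate matches
# (e.g., Luhn checks on card numbers) to reduce false positives.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_file(path):
    """Return {pattern_name: match_count} for one text file."""
    text = Path(path).read_text(errors="ignore")
    return {name: len(rx.findall(text)) for name, rx in PATTERNS.items()}

def classify(hits):
    """Map raw hits onto a simple schema: restricted / internal / public."""
    if hits.get("us_ssn") or hits.get("card_number"):
        return "restricted"
    if hits.get("email"):
        return "internal"
    return "public"

if __name__ == "__main__":
    for f in Path(".").rglob("*.txt"):
        hits = scan_file(f)
        if any(hits.values()):
            print(f, classify(hits), hits)
```

Once files carry a classification like this, the downstream controls in steps 2 through 9 (access restrictions, DLP blocking, alerting) have something concrete to act on.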
Ensuring Data Protection and Privacy in Botswana

Botswana's Data Protection Act 2018, or the Data Protection Act for short, is a data privacy law passed in Botswana in 2018. Prior to its passing, Botswana had no legislation strictly pertaining to data rights or personal privacy, so the Data Protection Act established the regulations and requirements for data processing within the country. The Act also established the Data Protection Commission, or the Commission for short, to enforce the law by imposing a variety of punishments on individuals and organizations who fail to maintain compliance.

How is personal data defined under the Data Protection Act?

Under Botswana's Data Protection Act 2018, personal data is defined broadly to include "information relating to an identified or identifiable individual, which individual can be identified directly or indirectly, in particular by reference to an identification number, or to one or more factors specific to the individual's physical, physiological, mental, economic, cultural, or social identity". The law defines a data controller as "a person who alone or jointly with others determines the purposes and means by which personal data is to be processed, regardless of whether or not such data is processed by such person or agent on that person's behalf", and a data processor as "a person who processes data on behalf of the data controller".

The personal scope of the Data Protection Act applies to all individuals who collect or process personal data within Botswana, as the law makes no specific distinction between organizations and individuals. The territorial scope of the law covers personal data collected and processed within Botswana, as well as personal data processed outside the country, provided the processing uses automated or non-automated means situated within the country. The material scope of the law applies to the processing of personal data, with certain exceptions, such as instances in which data processing involves a matter of public safety or national security.

What are the obligations of data controllers and processors under the law?

Botswana's Data Protection Act 2018 mandates that data controllers and processors within the country observe the following principles when collecting and processing personal data:
- Personal data must be processed in a manner that is lawful, transparent, and fair.
- Personal data may only be collected for specific and legitimate purposes, and must be limited to what is accurate, relevant, and necessary for the purposes for which it is processed.
- Personal data must be kept up to date and stored with an appropriate level of security for no longer than is necessary to fulfill the purposes for which it was collected.
- Personal data must be protected at all times, through reasonable safeguards, against risks such as unauthorized use or access, loss, destruction, or disclosure.
In addition to these four data protection principles, data controllers and processors within Botswana are responsible for ensuring that personal data is not retained longer than necessary to complete the function for which it was collected, and for providing a further level of security and confidentiality when collecting the sensitive personal data of data subjects. Under the law, sensitive personal data can include any of the following:
- Data related to race or ethnic origin.
- Data related to political opinions.
- Genetic and biometric data.
- Data related to religious or philosophical beliefs.
- Data related to trade union membership.
- Data related to physical or mental health.
- Data related to an individual's sexual life.
- Personal financial data.

What are the rights of data subjects under Botswana's Data Protection Act 2018?

Under the Data Protection Act, data subjects have the right to be informed of data processing involving their personal data, the right to access personal data in the possession of a data controller or processor, and the right to object to processing or withdraw consent. The law also provides data subjects with the right to rectification, the right to erasure, and the right not to be subject to decisions made solely on the basis of automated processing. It does not provide data subjects with the right to data portability.

In terms of penalties for non-compliance, the Commission has the authority to impose a variety of monetary fines and criminal punishments. These include a fine of up to BWP 300,000 ($31,279) for any "person who processes personal data in contravention of the Act", while data controllers who fail to comply with the law are subject to a monetary penalty of up to BWP 500,000 ($43,909), as well as a prison term of up to nine years. Moreover, data controllers who fail to inform data subjects of their rights under the Act before collecting or processing their personal data are subject to a fine of up to BWP 1 million ($87,823), as well as a term of imprisonment of up to twelve years.

Through the passing of the Data Protection Act 2018, data subjects residing within Botswana were guaranteed personal data protection through legislation for the first time in their country's history. Botswana thus joins the multitude of countries around the world that have passed comprehensive privacy laws along the lines of the EU's GDPR and the California Consumer Privacy Act (CCPA). The Data Protection Act 2018 sets forth severe penalties for individuals and organizations who fail to comply, helping ensure that data subjects' rights are protected and upheld at all times.
How Are New Technologies Like AI and Machine Learning Playing a Role in Cybersecurity?

Artificial intelligence is more than a buzzword; it's a key weapon in the fight against breaches.

Artificial intelligence is often dismissed as an overused buzzword across the tech industry, and it certainly can be in cybersecurity as well. But a number of companies, Idaptive included, have found ways to practically implement AI and machine learning. The most proven uses for artificial intelligence and machine learning in cybersecurity today involve tasks that humans can't perform, such as sifting through vast amounts of data. And by that, I mean millions of different signals from different sources: login attempts, geolocation data, and so on. Algorithms have already been created that can identify unusual behaviors or irregular patterns when it comes to something such as malware.

Idaptive has added a layer of intelligence to help companies verify and validate users, devices, and services while continuously learning from, and adapting to, millions of logins and risk factors. We apply AI and big data analytics to sift through these millions of data points and constantly learn, evolve, and improve. That makes it easier to identify anomalies, which represent suspicious behavior. Our platform automatically assesses risk based on behavior patterns and makes decisions regarding access based on that risk. All of that is done with our proprietary machine learning tools. Risk-aware security products powered by machine learning and artificial intelligence will continue to evolve as the prevailing weapons in the fight against security breaches.

This post originally appeared in a Quora Q&A session hosted in May 2019. Our CEO Danny Kibel was asked to give his opinion on the state of cybersecurity, Zero Trust, working in the security field and entrepreneurship, among other things. For more of his answers visit Quora.
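As an illustration of this kind of anomaly detection (flagging unusual behavior among millions of login signals), here is a minimal sketch using scikit-learn's IsolationForest on synthetic data. It is not Idaptive's proprietary model; the features and numbers are invented for the example:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one login event: [hour_of_day, failed_attempts, km_from_usual_location].
# Synthetic data stands in for the millions of real signals a platform would ingest.
rng = np.random.default_rng(0)
normal_logins = np.column_stack([
    rng.normal(10, 2, 500),   # logins cluster around mid-morning
    rng.poisson(0.2, 500),    # almost no failed attempts
    rng.normal(5, 3, 500),    # close to the usual location
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_logins)

print(model.predict([[3, 7, 4200]]))  # -1 = anomaly: 3 a.m., 7 failures, 4200 km away
print(model.predict([[10, 0, 2]]))    # +1 = consistent with normal behavior
```

A risk-aware product would typically respond to an anomaly score by stepping up authentication (for example, requiring a second factor) rather than blocking outright.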
[ Source – Featured Photo from Pexels ]

In this tech-savvy world, keeping sensitive information private has become quite challenging. From the in-app permissions of mobile apps to shared organizational files and databases, you never know when you are being watched. You might have come across the terms 'information security' and 'cybersecurity', or might even have used them interchangeably. What is the difference between cybersecurity and information security? Information security is more like an umbrella that houses cybersecurity as one of its subsets. As per the US Bureau of Labor Statistics, these sectors have seen a 28% rise in demand, owing to their broad range of use cases and an increase in cyberattacks worldwide. These terms might sound synonymous but have some key differences too!

Table of Contents

The Key Differences

What are the key differences between Information Security and Cybersecurity?

Information Security:
• Deals with both online and offline versions of data in decrypted form.
• Provides data protection from all forms of threat, be it online or offline.
• Focuses on confidentiality, availability, and integrity (the CIA triad).
• Deals with unauthorized access involving the disclosure of highly sensitive data.
• Professionals in this domain have organizational roles that involve field work as well as safeguarding certain government interests and policies.
• Is questioned whenever a security breach occurs.

Cybersecurity:
• Deals with encrypted data that is cloud-based or live.
• Its primary use case is to safeguard people who use the internet from malware and virus attacks.
• Focuses on keeping one's data within the organization one chooses to share it with.
• Deals with cybercrime, fraud, and online phishing attacks.
• Officials in this role help keep potential hacking threats at bay.
• Acts as the first line of defense in the event of an attack.

Information Security: At a Glance

Information security, also known as infosec, covers all kinds of information that may be valuable to a person or an organization. Common examples include credit card numbers, personal passwords, dates of birth, and security PINs. This type of information can be stored both locally and online, depending on what the user wants. The domain is evolving quite rapidly and has various sub-categories ranging from network security to software auditing. This type of protection gives you the assurance that any piece of information you hold cannot be disclosed publicly. The three main pillars of any robust infosec infrastructure are:

Governance

Governance includes the Information Security Governance Framework (ISGF), which is essentially a set of rules on how to better manage sensitive information in an organization. The ISGF can be modified to suit the requirements of each given business. It also includes advice on how to react to safety breaches and bounce back from data catastrophes.

Integrity

This is a moral aspect that motivates individuals to act honestly and respect data integrity. Integrity ensures that data is correct and complete. For this, organizations hold routine monitoring sessions to ensure that the data is safe. From its creation to its dissemination, data integrity should not be compromised at any point.

Confidentiality

As the name suggests, this is a generalized norm implying that any piece of information belonging to a particular person or company can only be seen by another entity if the owner approves it. This helps businesses gain a competitive edge over their rivals.
Common examples include tip sheets and trade secrets. Another key factor, closely related to confidentiality, is availability, which guarantees that the data can be accessed at all times by its owner.

Cybersecurity: At a Glance

Cybersecurity comes into play whenever we need to secure an online network connected to any device. Given the increasing number of cybersecurity threats out there, every device, ranging from smart TVs and home security systems to IoT devices, needs protection. Owing to the increasing complexity of geopolitics and the proliferation of attack vectors, governments are starting to see cybersecurity as an important issue. Unlike information security, where the owner has full knowledge of the data being protected, cybersecurity officials often deal with data that is in encrypted form but holds a certain non-monetary value. A hacker might attempt to enter a system to steal money or to alter mass opinion by hacking media channels. Done on a large scale, this can even harm a country's image and fuel terrorism and mass outbreaks. To counter this, one must install several layers of firewalls and use software that updates constantly. Common attacks include phishing, man-in-the-middle attacks and phishing kits, pretexting, and quid pro quo attacks.

Did you know? According to one study, there is a hacker attack every 39 seconds on average, and one in three Americans has been affected by a cybersecurity scam at least once in their life.

Common Ground Between InfoSec and CyberSecurity

Infosec and cybersecurity are similar in two primary respects. First, they both depend on the presence of a secure physical infrastructure: for example, paper documents that you want to safeguard, or the digital drives where information is stored. Second, both rank information by order of priority first and then go about safeguarding it. The value of both digital and non-digital data is the primary concern of both disciplines. Knowing the value of data can help managers impose the necessary cyber risk management and monitoring measures to prevent unauthorized electronic access.

Fun fact: Cybersecurity professionals are paid quite well, with some high-paying positions offering as much as $140k annually. This is because more than 500,000 cybersecurity jobs in the US are unfilled; infosec skills are like water in the southwest US, hard to find.

With cutting-edge technology and rapid advancements, information security and cybersecurity are now fused quite closely. Despite the overlap, knowing the correct meaning behind each term can help you assess and implement security measures for any organization in a better way. Owing to the shortage of information security professionals, most companies rely on their cybersecurity team to cover infosec tasks as well. This is what has led to the fusion of the two terms.

> Learn more on how to become a Certified Information Security Manager (CISM).
> Learn more on how to become a Certified Cloud Security Professional (CCSP).

Thank you for reading my blog. If you have any questions or feedback, please leave a comment.
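As a small illustration of the integrity pillar described above, the sketch below records SHA-256 digests of files and later reports anything that changed or disappeared. It is a toy example, not a full file-integrity monitoring product; the baseline filename and the "docs" directory are placeholders:

```python
import hashlib
import json
from pathlib import Path

BASELINE = Path("baseline.json")  # placeholder location for the stored digests

def digest(path):
    """SHA-256 of a file, read in chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def snapshot(root):
    """Record a digest for every file under root."""
    digests = {str(p): digest(p) for p in Path(root).rglob("*") if p.is_file()}
    BASELINE.write_text(json.dumps(digests, indent=2))

def check():
    """Compare current digests against the baseline and report drift."""
    for path, old in json.loads(BASELINE.read_text()).items():
        if not Path(path).exists():
            print(f"MISSING  {path}")
        elif digest(path) != old:
            print(f"CHANGED  {path}")

if __name__ == "__main__":
    snapshot("docs")  # take the baseline once ("docs" is a placeholder directory)
    check()           # then run periodically to detect tampering
```

This is the same principle behind the "routine monitoring sessions" mentioned earlier: any unexpected change to a digest is a signal that integrity may have been compromised.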
In 2002, the United States Congress passed the Sarbanes-Oxley (SOX) Act in response to the infamous Enron and WorldCom accounting frauds, in which investors and shareholders collectively lost billions of dollars. The act aims to bring transparency to the financial matters and corporate governance of all public companies in the United States and the firms representing them. The SOX Act particularly affects how technology businesses in the US manage data, and it has had a significant impact on how public tech companies manage their finances. In this article, I'll give you a brief introduction to the SOX Act and what it requires of businesses in the U.S.

What Is SOX Compliance?

The SOX Act has 11 titles that pertain to the finances, accounting practices, data management, investor relations, and corporate governance of publicly listed companies. SOX compliance refers to the minimum standards a public company in the U.S. is required to maintain under the SOX Act. Every publicly listed company in the US, and the management and accounting firms representing them, must ensure SOX compliance. SOX compliance is determined through annual company audits by independent audit firms, and the compliance report must be easily accessible to all company shareholders. A company's failure to comply with the SOX Act can result in various penalties, ranging from stock exchange delisting and invalidation of D&O insurance policies to fines of $5 million and up to 20 years in jail for its CEO and CFO.

The main goal of SOX compliance is to ensure that publicly listed organizations:
- Are more transparent in their financial practices.
- Develop safeguards and controls to ensure accurate financial data reporting.
- Publish audit reports of their management and accounting practices that are available to all shareholders.
- Protect in-house whistleblowers who assist in identifying corporate and financial frauds.
- Take measures to increase investor confidence and remove gray areas in financial reporting.

The overall impact of SOX compliance is positive, since it brings more transparency to the financial matters of public companies and makes critical information accessible to shareholders. However, it also significantly increases the cost of managing, regulating, and securing data through various checks and performance standards for the information systems where the data is stored.

A key outcome of the SOX Act was the formation of an independent auditing process through the Public Company Accounting Oversight Board (PCAOB). This practically ended self-regulation of audits and instead empowered the PCAOB to develop, establish, enforce, and audit industry standards. The PCAOB also has the power to investigate fraud allegations and regulate third-party audit firms. As a result, only PCAOB-approved audit firms have permission to conduct SOX compliance audits.

How SOX Compliance Works

SOX compliance is based on the 11 titles described in the SOX Act. To ensure compliance, an organization needs to satisfy the requirements of all the act's articles. However, the following four sections of the SOX Act are generally considered the most important for SOX compliance.

Section 302 – Corporate Responsibility for Financial Reports

Companies must submit periodic (usually annual) financial reports reviewed and signed by their CEO and CFO.
By signing the report, the officers certify that its contents are 100% correct and were validated through internal inspections and quality control within 90 days before publication. This means the signing officers can be held responsible for inaccuracies or data manipulation in the reporting. The report should also include a list of all the gaps in the organization's internal controls and any fraud involving employees responsible for the internal controls.

Section 404 – Management Assessment of Internal Controls

The annual reports of all public companies should include an Internal Control Report that lists all internal controls in place, their scope, procedural details, and reporting structure. The report should also list any compliance issues with SOX standards in the existing internal controls. The report's independent auditor should review and certify the company's information in the same report.

Section 409 – Real-Time Issuer Disclosures

All public companies must update the public, their shareholders, and investors in a timely manner about any material changes in their financial condition or operations.

Section 802 – Criminal Penalties for Altering Documents

This section of the SOX Act outlines the penalties (fines and imprisonment) for anyone who intentionally alters, destroys, mutilates, conceals, or falsifies records, documents, or tangible objects to influence a legal investigation. The suggested penalties are fines of up to $5 million and imprisonment of up to 20 years. This section also imposes fines and/or imprisonment of up to 10 years on any accountant who knowingly and willfully violates the requirement to retain all audit or review work papers for five years.

The crux of all these sections is that public companies should develop internal systems and mechanisms to ensure that their financial reporting is accurate and does not mislead their investors and stakeholders in any way. Failure to do so will result in heavy penalties and legal proceedings.

How To Get Started With SOX Compliance

Developing the procedures, controls, and standards to enforce SOX compliance within an organization is a long process. Broadly speaking, we can break it down into the following six steps.

Step 1: Defining the Scope

The first step of SOX compliance is to define the scope of the internal controls, the hierarchical positions responsible for ensuring the implementation of the controls, the sign-off authority, and the approval chain. In this step, you should also clearly identify the areas that need to comply with the SOX Act by citing the specific sections that apply to them. Overall, this step should offer a bird's-eye view of the SOX compliance procedure and everyone involved in it. It should help the auditor identify any potential risks, how they might impact your business, and whether the controls in place are sufficient to prevent compliance issues.

Step 2: Determining Materiality and Risks

In this step, you need to develop a clear definition of what constitutes "material." Financial statement items are considered material if they could influence the economic decisions of users. Next, analyze the financials of all the locations where your business physically operates, and determine any transactions in those locations that cause a financial statement account to increase or decrease. Finally, identify the risks that could prevent those transactions from being correctly recorded.
Step 3: Applying SOX Controls

Once you've identified the risk areas that could prevent transactions from being correctly recorded, you need to apply internal controls that counter those possibilities. This includes checks and balances in the financial reporting process to ensure that transactions are correctly recorded and accounts are accurately updated. You must also determine whether to use manual or automated controls (or both) at different stages of the internal control process to ensure the transparency and accuracy of your records.

Step 4: Developing a Fraud Prevention Mechanism

One of the primary objectives of applying internal controls and complying with the SOX Act is to prevent fraudulent activity in analyzing, reporting, and managing data. There are no fixed steps or guidelines in this regard, but the following fraud prevention measures are generally sufficient (a small reconciliation-check sketch appears at the end of this article):
- Segregation of duties: No single employee should have complete control over your data reporting and management. Instead, the different segments of the process should be assigned to various team members to prevent manipulation and over-dependency.
- Employee expense reimbursements: Any reimbursements to employees for official tasks or approved benefits should be processed through a transparent mechanism to ensure no false claims are entertained.
- Whistleblower safety: Create processes to ensure whistleblower safety and anonymity, so employees can highlight procedural problems or fraudulent activities without fear.
- Regular reconciliation of bank accounts: This helps highlight any differences between the actual bank figures and your internal financial records.

Step 5: Process and Control Documentation

The complete procedures for your internal controls, the responsible individuals, the frequency and nature of tests, their associated risks, and so on should all be well documented and managed in a central repository.

Step 6: Periodic Testing of Controls To Identify Deficiencies

Testing your internal controls helps you verify that the proper procedures are in place and are being looked after by the right people in your organization, that you are successfully preventing fraudulent reporting, and that you are ensuring transparency in your financial structure.

The Benefits of SOX Compliance

Organizations should not view SOX compliance as an additional cost or a legal obligation with no value for their business. When you apply adequate internal controls, ensure financial transparency, and comply with SOX regulations, your organization experiences several immediate and long-term benefits. Here are some of them:
- SOX compliance has dramatically improved corporate governance across publicly listed companies in the United States. As a result of the SOX Act, all publicly listed companies now have independent audit committees that ensure accurate financial reporting.
- SOX recommends an organization-wide documentation standard for every financial process and reporting mechanism. This has long-term benefits for companies, helping them streamline their processes and improve efficiency.
- A direct result of documentation and SOX compliance is the standardization of processes. Process standardization is the first step of process optimization, which ultimately leads to higher ROI for every process.
- SOX compliance encourages process documentation, standardization, and automation, significantly reducing human error and thus increasing an organization's financial reporting accuracy.
- SOX internal controls ensure that no single employee has the power or authority to manipulate company financials. This not only prevents fraud but also maintains the balance required for effective financial management.

Common Challenges In SOX Compliance
Organizations face various challenges in complying with the SOX Act. Here are some of the most common ones.
- SOX compliance requires additional infrastructure and resource costs for monitoring, reviewing, and analyzing financial transactions. This can become a significant challenge for organizations operating at lower margins.
- SOX compliance focuses on fraud reduction through the distribution of responsibility and succession planning for every critical role in an organization. Experienced and influential employees often resist these changes.
- For many companies, SOX compliance means a complete overhaul of their financial reporting structure, which can be a significant challenge.
- SOX compliance encourages automation of processes, reducing dependency on manual work, and creating a central repository of all organizational processes. For many organizations, these are fundamental changes to the way they work and can be challenging to execute.
<urn:uuid:66a7b221-eec3-4a34-b4b2-cb9c6c156782>
CC-MAIN-2022-40
https://nira.com/sox-compliance/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00417.warc.gz
en
0.926532
2,183
2.59375
3
BSD is the software behind the world’s most popular Web site and the world’s most popular FTP site — but unless you’re a geek, you’ve never heard of it. An open-source operating system like Linux, BSD was developed in the 1970s at the University of California-Berkeley, well before Linus Torvalds ever took a computer course. So why was it Linux that captured mindshare and public imagination? BSD’s obscurity is just part of the reason it is now considered cooler than Linux among the geekiest geeks. But the software some say is the most secure operating system in the world may be poised to make a Linux-like leap to the forefront.

The list of big-name companies and Web sites that use BSD is impressive. Yahoo, UUNet, Mindspring and Compuserve are on the list – in fact, perhaps 70 percent of all Internet service providers use BSD. Also on the list – Walnut Creek CDROM Inc. and its CD-ROM FTP download site, which the company says delivers more than 1 terabyte of data to visitors every day. Microsoft’s free e-mail service Hotmail began its life on BSD servers, and Apple announced in June its next operating system will be based on BSD. (Microsoft is a partner in MSNBC.)

Enamored with Linux
So why is Linux on everyone’s lips, and why are there about 10 times as many Linux users as BSD users? After all, they are both free operating systems that offer free source code – and BSD had quite a head start. Legal troubles tell part of the story. Right as the Internet began to reach critical mass, in 1993, the BSD movement was hit by a copyright lawsuit from AT&T, which still owned the rights to Unix. At the same time, Torvalds was welcoming help from all comers, mainly young computer science students enamored with the coming information explosion.

There are other reasons – much effort has been put into making Linux user-friendly enough for use as a desktop operating system. BSD groups have focused on servers, never putting much work into appealing to a mass market. But that doesn’t mean there’s not some obvious jealousy that the new Unix on the block has gotten all the attention.

“In late 1991 there were 100 programmers on UseNet producing improvements for (BSD),” said Wes Peters, a BSD user from the beginning. “If not for the AT&T lawsuit at the worst moment…. Because of that, people said, ‘I don’t want to go with BSD now.’ That was the time Linux was gaining functionality.”

Talk to BSD users, and a quiet but clear sense of superiority comes through. BSD users, they say, tend to have computer science degrees, hold management positions and have 10 years or more experience in the field. Linux users, on the other hand, are young hackers doing impressive work but motivated in part by having too much free time.

“BSD has been where it’s happening in computer science research for 20 years,” Peters said. “It still hasn’t lost that cachet.”

Do you doubt that this has all the makings of a good old-fashioned computer science religious war? Ask Peters, who wrote an article for online magazine daemonnews.com earlier this month. His even-tempered prose spurred a thread 600 messages long on geek news site Slashdot.org. When the best, brightest and most suspicious minds from the computer industry gathered in Las Vegas for the DEF CON trade show earlier this month, Linux-taunting by BSD sophisticates wasn’t at all subtle. And when one speaker announced that BSD CD-ROMs were being given away at the show, but Red Hat had declined to give away Linux CDs, there was outright jeering.
Has Linux become too mainstream and lost its appeal among “Ubergeeks”? “That stuff will always be out there,” said Red Hat spokeswoman Melissa London. “I like the old U2 albums, and after some of their newer stuff came out, I liked U2 less.” She was surprised to hear Red Hat declined the DEF CON opportunity, saying her company regularly distributes free CD-ROMs.

BSD was already a mature operating system with four different flavors when Linus Torvalds wrote the first line of Linux code. A direct descendant of the Unix operating system, BSD (which stands for Berkeley Software Distribution) dates back to work done by Sun Microsystems co-founder Bill Joy to create the first free version of Unix when he was at Berkeley in the late 1970s. Later a group of Berkeley computer scientists added to his work, eventually beginning a project called 386BSD designed to rewrite Unix so it could be used on a PC with Intel chips. After Berkeley stopped funding the effort, BSD split off in several directions.

* The NetBSD group, which focused on creating an OS that could run on any hardware – PCs, Macs, HP servers, Ataris, etc.
* The FreeBSD group, which optimizes BSD for Intel chips.
* The OpenBSD group, which did a line-by-line security audit of BSD code, and now has what is widely regarded as the most secure OS available.
* And BSDi, the Red Hat of BSD. It’s a commercial venture started by some of the original Berkeley crowd that sells BSD and supports the product.

Requirements for success
Despite its dominance in the niche ISP market and its attractiveness as a server product, BSD remains a silent member of the Internet’s moving forces. Major PC vendors such as Dell will sell you a laptop with Linux; they won’t sell you any PC with BSD. There are also precious few applications for BSD. All that will soon change, some say.

“Your readers will hear about it,” said Stephen Diercouff, who publishes BSD.org. “The emphasis has been on servers, but BSDi is moving into desktops…. And if one of the database vendors released a database that ran on BSD, you’d see a huge market share jump. I know there have been discussions with Oracle, Informix and Sybase.”

Oracle, for the moment, isn’t interested. “We have not had sufficient demand,” said Jeremy Burton, Oracle’s vice president of server marketing. No matter, says Diercouff. Soon, the various BSD distributions will be able to run Linux applications, including office productivity suites such as StarOffice. Rose says BSD could make an even larger impact in so-called “Internet appliances” – function-specific devices such as TV set-top boxes or Internet routers, where simple, streamlined operating systems are required.

Better than Linux?
There is one significant difference between Linux and the flavors of BSD, according to BSDi spokesman Kevin Rose. Linux development is restrained by the so-called “copyleft” General Public License (GPL). Any programmer who modifies the Linux kernel must make the source code available to the Linux community. BSD is not bound by the agreement – therefore, entrepreneurial-minded developers will stay away from Linux, he predicts. “You have to give up your intellectual property to your competitors,” he said. “The OS itself is not going to see a great deal of innovation because there’s just no economic incentive to do so.” Other BSD supporters make a quite different argument – it’s the frenetic pace of innovation by Linux developers that makes the OS hard to pin down and hard for companies to use on mission-critical hardware.
BSD is a much more mature OS with far fewer updates, they say. All that makes FreeBSD user Matthew Fuller shrug at the religious argument. “There’s a lot of things that Linux is ‘better’ at, and a lot of things FreeBSD is ‘better’ at, and a lot of those things can easily fluctuate on a daily or weekly basis,” said Fuller, who maintains a Linux vs. BSD Web page. “Thus, any definitive narrow statement that can be made is usually obsolete before anyone hears it.”
<urn:uuid:0946e084-9f1f-414c-a306-55e150ddb857>
CC-MAIN-2022-40
https://www.datamation.com/erp/zdnn-bsd-a-better-os-than-linux/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00417.warc.gz
en
0.952898
1,776
2.765625
3
Lunar New Year is celebrated in many Asian cultures and countries. It is a time for families and friends to come together to celebrate the start of a new year and hope for a year of prosperity. The New Year starts on the first new moon of the lunar year and ends on the first full moon of the lunar year. Lunar New Year marks a new year of the zodiac, and 2022 is the Year of the Tiger. Children born after the Lunar New Year will be known as Tigers, and Tiger personalities are said to be brave, confident, competitive, natural leaders, and well-liked.

Our associates around the world celebrate Lunar New Year by taking time off to spend with loved ones, making and eating delicious food, and partaking in other customs and traditions such as decorating their doors and windows with paper cuts and couplets.

Sheau Ling, Regional VP, shares: “Another custom practiced during Chinese New Year is giving Ang Paos (Red Packet with money) to your elders and young children in the name of good fortune, luck, health and prosperity. In giving the envelopes, we also receive good luck and fortune.”

Read on to learn more about Lunar New Year from Fred Yeo, Area VP. “Lunar New Year is a festival celebrated throughout Asia. This festival celebrates the arrival of spring and the start of a new year according to the lunar calendar. In regions and countries that have a Chinese community, it is known as the Chinese New Year. The significance of celebrating Lunar New Year is deeply rooted in the Asian culture. It is one of the most important occasions for generations of families to come together to celebrate the arrival of spring and to ensure good harvest, good fortune and good health in the coming year. For me, I have always looked forward to the big dinner celebration on the eve of Lunar New Year, having a great feast, a good laugh, paying respects to our elders and enjoying the moments quite simply as a family.”
<urn:uuid:cd77e557-73cd-4c8b-973e-0512283ee572>
CC-MAIN-2022-40
https://jobs.gartner.com/life-at-gartner/diversity-equity-and-inclusion/gartner-associates-celebrate-the-lunar-new-year/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00417.warc.gz
en
0.957459
408
2.65625
3
Did you know that not all bias in machine learning (ML) is bad? In fact, the concept of bias was first introduced into ML by Tom Mitchell in his 1980 paper, “The Need for Biases in Learning Generalizations.” He defines learning as the ability to generalize from past experience in order to deal with new situations that are related to this experience, but not identical to it. Applying what we’ve learned from past experiences to new situations is called an inductive leap, and it seems to be possible only if we apply certain biases to choose one generalization about a situation over another. By inserting some types of bias into ML architectures, we give algorithms the capacity to make similar inductive leaps.

The first AI Chair of UNESCO, John Shawe-Taylor, said, “Humans don’t realize how biased they are until AI reproduces the same bias.” He is referring to the most famous type of bias in ML: human cognitive bias that slips into the training data and skews results. Cognitive bias is a systematic error in thinking that affects the decisions and judgments that people make. Melody, Nikhil, and Saurabh discuss several examples of how cognitive bias has negatively affected models and our society, from upside-down YouTube videos to failures of facial recognition to Amazon’s AI recruitment tool.

Enterprises know it is not what problem they solve that makes them successful, but how they solve it. They develop specific approaches which become the secret sauce that sets them apart in their industry. There is no question that AI and machine learning (ML) are the new frontier for innovation. The future of every enterprise company will hinge on its ability to apply its unique “secret sauce” approaches to create and adapt to the emerging AI world.
<urn:uuid:7bf09b20-e9ac-47e9-9e3c-51af789aaece>
CC-MAIN-2022-40
https://content.alegion.com/podcast/bias-in-machine-learning
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00617.warc.gz
en
0.966984
361
3.171875
3
Data centers are power hogs. In the US, these facilities alone are responsible for about 2% of the total energy usage in the country (according to research by Villanova University). If all the data centers of the world were put together, their power consumption would exceed that of all but four sovereign nations.

The lion’s share of this energy consumption is used for cooling the servers and other computing equipment, using fans, air coolers, air conditioning, and other methods. Seattle’s Office of Sustainability and Environment estimates that about 50% of all power consumption in an average data center is for cooling purposes. This cooling equipment adds considerably to the size of the data center as well, contributing to waste and inefficiency. It is important to improve energy efficiency, not just to reduce operational costs, but also as part of your corporate social responsibility policy.

There are two ways to do something about it. The first approach is to use the heat generated for other purposes. Cities like Munich and Vancouver already divert the heat for other purposes, and the city of Seattle plans to follow suit. Seattle aims to take the water that cools two local data centers and pipe it to warm 10 million square feet of building space in the surrounding areas. However, deploying such a diversion system can be very expensive. Furthermore, a majority of data centers in the world today are situated in the heat-ridden tropics, where trying to sell more heat would be like carrying coals to Newcastle. Even in the cooler climates of the UK and USA, many data centers are situated in isolated countryside locations without any buildings nearby that could use the generated heat.

The other option, when there is too much heat around for it to be of any use, is to improvise and innovate on cooling systems. A data center in Hong Kong recently innovated with a rack-mounted immersion cooling system. This novel implementation of ultra-high-density cooling supports loads of up to 225kW a rack. The set-up uses rows of rack-mounted tanks filled with Novec, a liquid cooling solution created by 3M. Inside each tank, densely packed boards of Application Specific Integrated Circuits (ASICs) run constantly as they crunch data. The Novec boils off as the chips generate heat, removing the heat as it changes from liquid to gas.

The future belongs to data centers that are leaner, smarter, and, therefore, cost and resource efficient. And immersion cooling technology promises an extremely energy-efficient, space-saving, and cost-efficient cooling facility for data centers.
<urn:uuid:60fdeb01-c75e-4338-a74a-4b96a5ba868e>
CC-MAIN-2022-40
https://lifelinedatacenters.com/data-center/new-immersion-technology-promises-energy-efficient-leaner-data-centers/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00617.warc.gz
en
0.941662
531
3.265625
3
Arduino is an open source electronics prototyping platform used by an eclectic mix of people from all walks of life, ranging from cake makers to artists and hobbyists. We spent some time with Brock Craft, author of Arduino Projects For Dummies, to explore why it is becoming increasingly popular.

We've heard recently from John Nussey, author of Arduino For Dummies, about getting started with Arduino and why it's causing such excitement. Can you tell us a bit more about who's using Arduino and for what?

Arduino is being used very widely in the education community. Science and computing teachers in secondary schools are using it to teach kids the principles of programming and computational thinking. Arduino is also used in colleges and universities. They are often found in design programs, particularly in product design, because Arduinos can quickly be used to prototype products that do physical things – like toasters or dispensers or remote controls, for example.

In the corporate world, Arduinos are used by designers, architects and engineers for design prototyping. It's very easy to try out a design by building a prototype so that they can see what solutions work and toss out those that don't. This is much easier to do early in the design process before more money has been spent on bringing an idea to fruition; Arduino can play a key role here. Just a simple example – I know a lighting company that recently used Arduino to control dimmable lighting effects for architectural lighting products they were developing. Using an Arduino helped them try out their ideas in an afternoon, rather than waiting weeks.

What are the basics I need to consider before I start to build my first Arduino project?

It's pretty easy to get going with Arduino. All you need is a computer and a basic Arduino kit containing the board and any extra parts you want to play with, like LEDs, sensors and motors. You can get kits from the Arduino website and from all of the big electronics suppliers. Maybe one day they'll even show up at the supermarket! Most simple projects you can build at home with few or no tools. You can build simple projects on a kitchen table or small desk. Because you work with low power, it's also safe for kids to try out building simple projects like a digital stoplight or a remote-controlled car. Many projects can be completed in an afternoon or over a weekend, but you can scale up from there. Building an Arduino-automated garden could be a project you keep working on over a whole summer.

For people who think that Arduino projects are for hobbyists, can you provide some examples of business-related projects relevant for an IT professional?

Arduinos are really good for sensing the environment and controlling things. You can also hook them up to the internet, to share data. Let's say you needed to monitor temperatures on sensitive hardware and post an alert message or email when critical temperatures are reached. Arduino would be a perfect solution. You could even use it to activate a visual alarm like a light or a sign. You could connect one to a router or server to create a visible indication of usage load that could be read at a glance. I've seen examples of this in server rooms! Another example would be using it as an RFID tag reader to quickly scan RFID asset tags or access key fobs. It would be easy to keep an eye on who's got access to a particular room with an RFID-enabled door latch. There's an Arduino-controlled door latch in my book.
Another idea would be to track your inventory or vehicles using a GPS logger and display the data on a map. I show how to do that in Chapter 13.

Are there any troubleshooting tips you can offer for when projects aren't quite working as planned? If you had one piece of advice for new users to help them get the most from their Arduino, what would it be?

The best advice is to check out the official Arduino.cc website, where you can find the answer to many questions you might have about using your Arduino and building projects with it. There are plenty of examples of how to get the most out of your Arduino there. You should also take some time to peruse and join an online forum, such as the Arduino Playground. Lots of people are inspired to build interesting projects by reading what other people have done. Also, the forums are the first place to go if you get stuck on a technical issue. And the Arduino forums are really friendly to people who are complete beginners. You don't find the kind of techie snobbery that can sometimes scare off newcomers.

What have you used Arduino for? What projects have you built with it?

I've used Arduino to create lots of fun and practical projects. I built an electronic LCD display for my office door that says when I'm in or out. It also shows my office hours and where to reach me. Best of all, I can update it remotely if I'm not nearby to change the message on the display. At home, I've been monitoring the temperature inside and outside our house to see how it changes over time and with the seasons. This means we can see how our utility bills change over time and compare them to the temperature levels in the house.

But my favourite project is for our cat. He comes and goes as he pleases, and we wanted to keep tabs on him to see how much he gets out and about. I used an Arduino and a couple of sensors to detect when he goes through his cat flap. Then I hooked it up to Twitter so that he sends me a message whenever he goes through it. Yes, my cat has a Twitter account! Check out @muonthecat to find out what he's up to. My next project is to add an Arduino cat cam to his collar so he can upload pictures. I'll make sure he sends a tweet when it's up and running!
<urn:uuid:2d165959-0533-4ebd-8d11-db321b88debb>
CC-MAIN-2022-40
https://www.itproportal.com/2013/07/27/interview-with-brock-craft-author-of-arduino-projects-for-dummies-on-how-that-diy-pc-is-being-used-creatively-worldwide/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00617.warc.gz
en
0.960037
1,218
3.296875
3
The Introduction to REXX Programming Language course introduces the REXX programming language and explains how it is run. It also reviews and describes the major elements that comprise a REXX program. These courses instruct operations, systems and application staff in the use and coding of the Restructured Extended Executor (REXX) programming language. They describe the major components and structure of the language, including keyword instructions and built-in functions. While REXX was originally developed for IBM mainframe systems, it is now capable of running on many platforms. Most of the modules in these courses will be of use to REXX programmers on any platform.

Audience
Personnel requiring an introduction to REXX and an understanding of the fundamentals of programming in REXX.

Prerequisites
An understanding of data processing concepts and some basic programming skills.

After completing this course the student should be able to:
- Identify the features of the REXX programming language.
- Describe the major elements of a REXX program.
- Identify the common terms, variables and operators used in REXX.

Course content
Basic Features of the REXX Language
Description of the REXX Language Features
Executing REXX Programs in TSO/E
Executing REXX from TSO/ISPF
Processing REXX Code
REXX Terms, Variables, and Operators
REXX Terms and Variables
<urn:uuid:78878638-c10d-4fc8-8b8d-c436d5c3a24e>
CC-MAIN-2022-40
https://interskill.com/?catalogue_item=introduction-to-the-rexx-programming-language&noredirect=en-US
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00617.warc.gz
en
0.813241
302
3.328125
3
Deep learning can help discover mathematical relations that evade human scientists, a recent paper by researchers at DeepMind shows. Like many things coming from the Alphabet-owned artificial intelligence lab, the paper, which is titled “Advancing mathematics by guiding human intuition with AI,” has received much attention from science and tech media. Some mathematicians and computer scientists have lauded DeepMind’s efforts and the findings in the paper as breakthroughs. Others are more skeptical and believe that the use of deep learning in mathematics might have been overstated in the paper and its coverage in the popular press.

This article is part of “Reviews of AI research papers,” a series of posts that explore the latest findings in artificial intelligence.

The results are nonetheless fascinating and can expand the toolbox of scientists in discovering and proving mathematical theorems.

A framework for mathematical discovery with machine learning
In their paper, the scientists at DeepMind suggest that AI can be used to “assist in the discovery of theorems and conjectures at the forefront of mathematical research.” They propose a “framework for augmenting the standard mathematician’s toolkit with powerful pattern recognition and interpretation methods from machine learning.”

Mathematicians start by making a hypothesis about the relation between two mathematical objects. To verify the hypothesis, they use computer programs to generate data for both types of objects. Next, a supervised machine learning model crunches the numbers and tries to tune its parameters to map one type of object to the other.

“The key contributions of machine learning in this regression process are the broad set of possible nonlinear functions that can be learned given a sufficient amount of data,” the researchers write. […]

Read more: www.bdtechtalks.com
<urn:uuid:df3971ca-f159-4cba-84aa-264f125ad91c>
CC-MAIN-2022-40
https://swisscognitive.ch/2021/12/17/featureddeepminds-ai-can-untangle-knots/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00617.warc.gz
en
0.928771
373
3.5625
4
The human eye and brain are an amazing data-gathering tool: the eye transmits information to the brain at the rate of 10 million bits per second, comparable to an Ethernet connection. It is for this reason that data visualizations can be a powerful tool for conveying data insights. Good visualizations inform observers about relations within massive amounts of data, but poor visualizations can lead to catastrophic enterprise failures. One of the founding fathers of the discipline of information design, Edward Tufte, identified poorly designed PowerPoint slides that hid key engineering information as one of the key data visualization failures that led to the NASA Columbia shuttle disaster in 2003.

The reasons some data visualizations fail are mostly two: one is the inability of the data analyst to design effective, information-dense graphics; the second is the human inability to handle too much visual complexity. Today’s enterprise big data is dense in both quantity and complexity; that is, today’s organizations collect massive amounts of data from vastly disparate sources. Due to the limitations of unassisted data visualization, the emerging trend in the space is Virtual Reality (VR) and AI assistance. The day is not too far off when we will feel our data through haptic feedback and analyze it on the basis of our sensory understanding. Until such time, the limits of data visualization borne by humans could perhaps be overcome with increasingly sophisticated AI tools, which can empower humans to make the end decisions necessary for strategic growth.
<urn:uuid:12c5d774-45e8-4870-9ba7-3b14186d7007>
CC-MAIN-2022-40
https://straighttalk.hcltech.com/blind-sided-limitation-of-data-visualization
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00617.warc.gz
en
0.916244
291
2.8125
3
Theoretically unrestricted by the binary limitations that constrain the speed of conventional computers, quantum computers will be able to do much more in much shorter periods of time. When the quantum revolution begins, tech watchers predict that five industries stand to benefit first:

Research by McKinsey found that quantum computers can precisely simulate molecules, polymers, and solids. This approach will allow researchers to identify the most effective molecular designs or structures and achieve desired results faster when synthesizing molecules. Such technology-supported research on molecular structures will enable a better understanding of chemical properties and spur discoveries that could transform many fields.

With quantum computing's effectiveness and speed, financial service providers can gain a competitive edge, particularly in areas such as risk analysis, dynamic portfolio optimization, and pricing.

In the near future, quantum computers might decrypt today's seemingly unbreakable encrypted data. In such a quantum-driven world, quantum encryption will provide the best protection for data privacy. Also known as quantum key distribution, quantum encryption uses photons that change their states during third-party interceptions, theoretically limiting data interpretation to only the two key parties. The information exchanged between the two key endpoints will thus be virtually impossible to decode, taking cybersecurity a notch higher.

In addition to the benefits that quantum machine learning may offer many sectors, including autonomous vehicles and weather prediction, future quantum computers may advance AI further. The use of quantum computing for AI may revolutionize computer vision, pattern recognition, voice recognition, machine translation and other tools, and maybe even endow bots with human-like intelligence.
<urn:uuid:3f95764e-b9a4-4f8e-816a-6537e8770894>
CC-MAIN-2022-40
https://straighttalk.hcltech.com/listicles/5-industries-that-will-benefit-from-quantum-computing
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00617.warc.gz
en
0.908749
336
3.140625
3
The ALPR matcher uses the "Number of character differences" technique to improve the plate read accuracy rate. This technique allows for a difference in the number of characters between the plate read and the plate number in the hotlist. This allows the system to account for characters in the plate that cannot be read (dirt, bad camera angle, and so on), and for objects on the plate that might be mistaken for legitimate characters (screws, pictures, and so on). You can configure how the ALPR matcher handles the "Number of character differences" by modifying the MatcherSettings.xml file. For more information, see MatcherSettings.xml file.

The ALPR matcher also uses the following related techniques:
- OCR equivalence (see ALPR matcher technique: OCR equivalence)
- Common and contiguous characters (see ALPR matcher technique: Common and contiguous characters)

For example, suppose the hotlist contains six-character plates such as ABC123, A8C123, ABCI23, and ABC1Z3, and you allow one character difference and one OCR-equivalent character:
- There's no exact match possible for plate reads AB123 and ABC0123 because the hotlist contains only six-character plates.
- Both plate reads AB123 and ABC0123 match the plate ABC123 because you allowed for one character difference. It doesn't matter if it is one character more, or one character less, than the matched plate.
- Both plate reads AB123 and ABC0123 match the plates A8C123, ABCI23, and ABC1Z3 because you allowed for one character difference and one OCR-equivalent character.

If you were using a permit list instead of a hotlist, none of the matched plates in the example would raise hits (you get a permit hit when a plate is not on the permit list).
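The following Python sketch illustrates how a bounded character-difference match with OCR-equivalent characters could work. It is not Genetec's actual matcher: the equivalence classes, the allowances, and the simplification of minimizing plain differences before OCR substitutions are assumptions made for the example.

```python
from functools import lru_cache

# Hypothetical OCR equivalence classes; the real set is configuration-dependent.
OCR_EQUIVALENT = {("B", "8"), ("I", "1"), ("O", "0"), ("S", "5"), ("Z", "2")}

def ocr_equal(a, b):
    return (a, b) in OCR_EQUIVALENT or (b, a) in OCR_EQUIVALENT

def matches(read, plate, max_diff=1, max_ocr=1):
    """True if `read` aligns with `plate` within both allowances."""
    @lru_cache(maxsize=None)
    def cost(i, j):
        # (differences, ocr_substitutions) needed to align read[i:] with plate[j:]
        if i == len(read):
            return (len(plate) - j, 0)
        if j == len(plate):
            return (len(read) - i, 0)
        options = []
        if read[i] == plate[j]:
            options.append(cost(i + 1, j + 1))
        elif ocr_equal(read[i], plate[j]):
            d, o = cost(i + 1, j + 1)
            options.append((d, o + 1))
        d, o = cost(i + 1, j + 1)
        options.append((d + 1, o))  # plain substitution
        d, o = cost(i + 1, j)
        options.append((d + 1, o))  # extra character in the plate read
        d, o = cost(i, j + 1)
        options.append((d + 1, o))  # missing character in the plate read
        # Simplification: minimize differences first, then OCR substitutions.
        return min(options)

    d, o = cost(0, 0)
    return d <= max_diff and o <= max_ocr

for read in ("AB123", "ABC0123"):
    for plate in ("ABC123", "A8C123", "ABCI23", "ABC1Z3"):
        print(read, "->", plate, matches(read, plate))
```

Run against the example above, every read/plate pair prints True, matching the behaviour the example describes.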
<urn:uuid:03bf1adf-45f5-44fa-b264-0ba9f0c44755>
CC-MAIN-2022-40
https://techdocs.genetec.com/r/en-US/Security-Center-Administrator-Guide-5.9/ALPR-matcher-technique-Number-of-character-differences
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00617.warc.gz
en
0.878187
336
2.703125
3
A virus is a self-replicating program that produces its own copy by attaching itself to another program, computer boot sector or document. It infects other programs and corrupts files and data.

Different types of viruses:

Boot sector virus: Replaces the boot sector, moving the original boot sector to another location on the hard disk.

File overwriting or cavity virus: Replaces the content of files with other content, leaving the file unusable.

Crypter: Encrypts the contents of the file, which renders the file unusable for the user.

Polymorphic virus: The virus code mutates itself while keeping the underlying algorithm intact.

Tunnelling virus: These viruses trace the steps of interceptor programs that monitor operating system requests so that they can get into the BIOS and DOS to install themselves. To perform this activity, they even tunnel under anti-virus software programs.

Stealth virus: These viruses intercept anti-virus calls to the operating system and return an uninfected version of the requested files, thereby evading anti-virus detection.

Metamorphic virus: As with a polymorphic virus, a metamorphic virus mutates with every infection. The difference is that a metamorphic virus rewrites itself completely at each iteration, reprogramming itself into completely different code, which increases the difficulty of detection. Metamorphic viruses may change their behaviour as well as their appearance.

Macro virus: Infects Microsoft products like Word and Excel. These viruses are usually written in a macro language such as Visual Basic for Applications (VBA).

Cluster virus: Modifies directory entries so that they always direct the user to the virus code instead of the actual program.

Extension virus: Hides the extension of the virus files, deceiving the unsuspecting user into downloading the files.

Add-on virus: Add-on viruses append their code to the host code without making any changes to the latter, or relocate the host code to insert their own code at the beginning.
<urn:uuid:d39c84b7-b60e-477d-93c6-8251be90c0c5>
CC-MAIN-2022-40
https://www.greycampus.com/opencampus/ethical-hacking/virus
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00017.warc.gz
en
0.896399
443
3.265625
3
What is a Trojan Virus?
By David Lukic - Oct 08, 2020

Computers, the internet, and mobile devices make our lives easier and more fun, but they also come with some danger. The digital world is replete with viruses, malware, trojans, and other infections that can ruin your hardware and software as well as destroy your files, lock you out of your computer and take control of your network. You cannot be too careful when protecting yourself against this barrage of online threats.

How Does a Trojan Virus Work?
A Trojan or Trojan horse virus is malware that mimics legitimate software, but when you execute it (download or install it), it infects your computer and wreaks havoc. Trojans are designed to take control of your computer, steal files, delete files, modify data, block things, copy files or data, and disrupt network connections, as well as perform other nefarious actions. Viruses can self-replicate, but Trojans cannot. They also cannot run on their own; someone must initiate/execute the code to start them running.

The term Trojan horse virus comes from the story about the ancient city of Troy and how the Greeks used a wooden horse filled with soldiers to take over the city. The Trojans took the horse as a trophy, but once it was inside the city gates, out flooded the army that won the Trojan War. A Trojan virus works the same way; it hides inside something that looks good but is really a digital infection waiting to happen.

Different Types of Trojan Horse Viruses
There are a few common types of Trojans to be aware of:

Backdoor Trojans
This type of Trojan allows hackers to remotely access and control a computer, often for the purpose of uploading, downloading, or executing files at will.

Exploit Trojans
These Trojans inject a machine with code deliberately designed to take advantage of a weakness inherent to a specific piece of software.

Rootkit Trojans
These Trojans are intended to prevent the discovery of malware already infecting a system so that it can effect maximum damage.

Banker Trojans
This type of Trojan specifically targets personal information used for banking and other online transactions.

Distributed Denial of Service (DDoS) Trojans
These are programmed to execute DDoS attacks, where a network or machine is disabled by a flood of requests originating from many different sources.

Downloader Trojans
These are files written to download additional malware, often including more Trojans, onto a device.

Other Types of Trojan Horse Computer Virus

Fake AV Trojan
This is when you see a pop-up on your computer saying you have been infected by a virus and need to buy a cleaner right now, just click the link. These are designed to trick you into buying something you don't need. The pop-up is the infection, not the cure.

Infostealer Trojan
This one does what it sounds like: it steals your information for financial gain or identity theft.

Ransom Trojan
This nasty one locks your computer, encrypts all your data and demands a fee to unlock it all.

Remote Access Trojan
This one gives the hacker complete control of your computer to spy on you or steal stuff.

Mailfinder Trojan
This one steals all your contacts' information to use in spamming scams.

SMS Trojan
This one works on mobile devices and intercepts your text messages; scary.

Instant Messaging (IM) Trojan
This trojan attacks IM platforms and takes over ownership of your account, sending malicious messages to your contacts.

Spy Trojan
This creepy one allows the cybercriminal to spy on you through your camera and microphone while also copying keystrokes to steal passwords and more!

Trojans are among the most damaging digital infections your device can get.
They don’t only affect computers; they are also injected into legitimate apps, so when the user downloads and installs an app, their mobile device gets infected. That is alarming because most people carry a lot of data around in their phones. An example of a particularly dangerous Trojan on the Android platform is called Switcher Trojan. Once a device is infected, the Trojan attacks the user’s Wi-Fi network and uses it to commit crimes without the user even knowing it!

How to Detect a Trojan Virus
If your computer starts running really slowly, or windows pop up when you browse the internet, you may be infected. If programs start crashing or opening without your interaction, you may have a Trojan horse virus.

How to Remove a Trojan Virus
Although these threats are numerous and challenging, there are things you can do to protect all your equipment. The number one way to combat these programs is using a reliable antivirus program, keeping it updated and running deep scans often. The best ones have built-in protection for ransomware, Trojans, malware, and even phishing attempts. Some other tips are:

Never download and install software unless it is from a trusted source.

Do not click links in email or download attachments. Even if it looks legitimate, if you weren’t expecting it, delete the email.

Keep your devices updated with the latest operating system and security patches.

Be sure to have antivirus software installed and run it frequently.

Use complex passwords on all your accounts. Don’t use the same ones on multiple websites and be sure to change them often.

Use a strong firewall on your home network and even consider installing a VPN to mask your IP address and online activities.

Back up your computer regularly. If you do get infected, you can restore your files from a good backup.

Stay away from sketchy websites and never download or install freeware.

Do not click on pop-ups in your browser or on your computer.

These are some solid tips that, if used regularly, will keep you safe and your devices clean.
<urn:uuid:c623ef30-1f6f-40a8-b887-119a1140a9e7>
CC-MAIN-2022-40
https://www.idstrong.com/sentinel/trojan-virus/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00017.warc.gz
en
0.915373
1,240
3.3125
3
Want to try Keepnet’s phishing software for phishing training?

Cybercriminals do not only target large corporations; they also target small businesses and individuals, employing a variety of attack vectors such as phishing, vishing, BEC, ransomware, crypto-jacking, and SMS phishing. As a result, learning and adhering to effective cyber safety standards will help you protect yourself and your children from these threats.

1- The problem is not technology; it’s how technology is used.
How young people use internet-connected devices can be a cause for concern. Smart devices, for example, feature cameras that can be used to discover and foster creativity, and certain applications may have video chat or live streaming functions. But they can also be used to send inappropriate photos or to exploit security flaws. Teaching young people how to use the technology correctly will aid in managing privacy and security settings, as well as helping everyone better protect themselves online.

2- Create a safe environment for technology talk.
Although young people may not always seek advice about the internet from you, you must be prepared to assist them. Create a secure environment in which your children can confide in you about their personal experiences and concerns, even if they have broken the rules, without fear of punishment or blame. Also, allow your child to talk about their friends’ online experiences and concerns; they may be more comfortable sharing others’ experiences than their own.

3- Support young people in helping their friends.
Strong peer relationships are a crucial part of growing up, and polls show that many young people (40 percent) turn to their peers for help with online challenges. As a result, you can build on your child’s willingness to seek assistance from a friend. Discuss with young people the tools they need to protect themselves, how to increase their knowledge, how to pass on online safety concerns and suggestions to peers, how to prevent users from being exposed on sites, and how to report problems or misuse of sites and apps.

Help your children understand their ability to respond to challenges, and urge them to seek assistance from someone they trust if a difficulty appears to be beyond their or their friends’ capabilities. Set some ground rules for when a friend may harm themselves or others, or when children should seek adult assistance because the law is being broken. Teenagers are unlikely to be able to work out, in the middle of an online incident, what a friend should do, so agree on a plan in advance. Being secure online means not only trying to avoid negative situations but also building resilience.

4- Speak to your children about your common concerns.
Despite their differences, parents and teenagers share many technological worries. According to surveys, parents and young people are eager to learn more about topics such as internet safety, preventing identity theft, keeping devices secure, defending against bogus email, and securing social media messages or text messages. Create opportunities at these points of common interest; for example, work together as a family to protect your most essential personal information, such as images, financial information, and internet accounts.

5- Listen to your children’s concerns.
According to the research, young people are concerned about basic internet security and privacy issues.
In addition to other worries, young people (47 percent) expressed fear about unauthorized access to their accounts, and 43 percent wanted their information to stay private. Help your child gain skills for the online world by asking whether these concerns apply to them and by providing information about account privacy and security.

Editors’ note: This article was last updated on August 9, 2022.
<urn:uuid:6a2c3a44-fd3c-4ed3-a0e9-7aedd00c9385>
CC-MAIN-2022-40
https://keepnetlabs.com/cyber-safety-rules/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00017.warc.gz
en
0.950699
736
3.5
4
What about this course?
This course provides an introduction to programming methods and tools using Java. It introduces core programming concepts, core principles of writing in an object-oriented language, and the software development lifecycle at a very high level. Examples focus on writing applications for dynamically configurable network hardware and include connecting to devices, classifying and filtering traffic, and logging. This course is based on Java 1.7.0.

This course is composed of the following modules:
Why Learn to Program? :: Efficiency, Consistency, Repeatability
Why Learn to Program? :: Documenting Business Practices
Why Learn to Program? :: People Make Mistakes
Why Java? :: Compiled vs. Interpreted Languages
Why Java? :: The JVM and Bytecode Interpreter
Why Java? :: JIT Compilation
Why Java? :: Security
Programming Paradigms :: Imperative
Programming Paradigms :: Functional & Object Oriented
Programming Paradigms :: OO: Abstraction, Polymorphism, Inheritance, Encapsulation
Programming Paradigms :: Type Systems (strong, weak, duck)
The Software Development Lifecycle :: Gathering Requirements
The Software Development Lifecycle :: Design
The Software Development Lifecycle :: Implementation
The Software Development Lifecycle :: Testing
The Software Development Lifecycle :: Maintenance
The Software Development Lifecycle :: Optimization & Source Control
Hello, World :: Conventions & Methods
Hello, World :: Arguments & Return Values
Hello, World :: Compiling
Key Concepts :: Primitives & Objects
Key Concepts :: Classes & Instances
Key Concepts :: Interfaces & Implementations
Key Concepts :: Packages
Key Concepts :: Privacy
Hello, _____! :: Conditions
Hello, _____! :: Flow Control
Hello, _____! :: Exceptions
Data Structures :: Arrays
Data Structures :: Lists
Data Structures :: Maps
Debugging :: Standard Error
Debugging :: Stack Traces
Debugging :: Logging
Debugging :: JDB
Common Tasks :: Handling Equality
Common Tasks :: Converting Types
Common Tasks :: File Input/Output
Common Tasks :: Regular Expressions
Common Tasks :: Formatted Output
Common Tasks :: JAR Archives
Common Tasks :: Third Party Libraries
Common Tasks :: Command Line Switches
Packets, Sockets, and Ports
SNMP with SNMP4j
Cisco onePK for IOS devices :: Intro
Cisco onePK for IOS devices :: Connecting
Cisco onePK for IOS devices :: Polling
Cisco onePK for IOS devices :: Changing Settings
Cisco onePK for IOS devices :: ACL

Common Course Questions
If you have a question you don’t see on this list, please visit our Frequently Asked Questions page by clicking the button below. If you’d prefer getting in touch with one of our experts, we encourage you to call one of the numbers above or fill out our contact form.
Do you offer training for all student levels?
Are the training videos downloadable?
I only want to purchase access to one training course, not all of them, is this possible?
Are there any fees or penalties if I want to cancel my subscription?
<urn:uuid:6780a131-32ca-4f40-90fd-5183df1ed4f9>
CC-MAIN-2022-40
https://ine.com/learning/courses/java-fundamentals-for-network-engineers
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00017.warc.gz
en
0.759531
692
3.046875
3
Category 5, or CAT5, was ratified in 1991. CAT5 has become obsolete in recent years, due to its limitations compared to CAT5e cables. Although the CAT5 cable is a good, solid cable for 10/100 Mbps LANs, the newer versions of CAT cables are significantly faster. Category 5e, also known as Category 5 Enhanced, or CAT5e, is a network cable standard ratified in 1999. CAT5e cable offers significantly improved performance over the old CAT5 cable, including up to 10 times faster speeds and a significantly greater ability to traverse distances without being impacted by crosstalk. CAT5e was specifically designed to have a reduced amount of crosstalk (the interference between cables when they are close to one another) compared to CAT5 cables. Crosstalk can still occur in CAT5e cables, but most of the time it does not result in any serious compromising of data. You'll find existing CAT5 installations everywhere. It is commonly used to carry telephone or video signals in addition to Ethernet. CAT5e is an incremental improvement to CAT5 cable, designed to support full-duplex Fast Ethernet operation and Gigabit Ethernet. If you have a lot of 10-Mbps equipment, CAT5 cabling will serve your needs. It also handles 100-Mbps Fast Ethernet transmissions very well. CAT5e is a 100-MHz standard, though cables are available with up to 350-MHz capabilities. You can expect problem-free, full-duplex, 4-pair Ethernet transmissions over your CAT5e UTP. The maximum distance you can run CAT5 is 100 meters, the same as CAT5e. If you need longer runs, active components such as routers or extenders can be used, provided they are CAT5 or CAT5e compatible. The main differences between CAT5 and CAT5e can be found in the specifications. The performance requirements have been raised slightly in the new standard. CAT5e has stricter specifications for Power Sum Equal-Level Far-End Crosstalk (PS-ELFEXT), Near-End Crosstalk (NEXT), Attenuation and Return Loss (RL). CAT5e has the capacity to handle bandwidth superior to that of CAT5. If you're running up against the performance limitations of a 100-Mbps network, you'll probably want to upgrade at least parts of your system to CAT5e or higher. CAT5e is backwards compatible and can be used in any application that would typically use CAT5.
<urn:uuid:7927ac18-9647-406c-baf3-0227171840cd>
CC-MAIN-2022-40
https://www.blackbox.com/en-nz/insights/blackbox-explains/inner/detail/copper-cable/copper-category-standards/what's-the-difference-between-cat5-and-cat5e
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00017.warc.gz
en
0.927298
520
2.515625
3
It seems like there’s always a new warning about yet another online scam. Not only have Americans dealt with typical scams the last several months, but we’ve also had to contend with COVID-related scams, too. The thing about online scams is they’re not only annoying; they’re expensive. As of mid-September, consumer complaints to the U.S. Federal Trade Commission (FTC) related to coronavirus exceeded 200,000 — and reported losses topped $140 million. Those numbers are only likely to increase as COVID rages on. Tap or click for tips on protecting yourself from these con artists.

Last Friday, yet another scam-related PSA was issued by the FBI and CISA. This time, scammers are using some clever tricks to steal data and spread false information about the upcoming election.

How to safely navigate 2020 election season
If you’ve been looking up news or information about the 2020 presidential election, you need to keep a close eye on the sites you’re visiting. According to the FBI, scammers are using spoofed domains and fake email addresses to target voters who are searching for this type of info.

Unfortunately, it can be tough to spot the spoofed domains scammers are using. These savvy criminals use slightly altered names of legitimate websites to trick people into clicking on the site, which makes it hard to discern a legitimate site from a spoof. For example, scammers may use a misspelled word, like “electon” instead of “election” — or even .com instead of .gov — to trick you into thinking you’re clicking on a legitimate site. These slight changes are easy to miss, especially if you aren’t looking for them.

In many cases, the purpose of these spoofed sites is to give out false information. Even worse, some scammers use them to steal usernames, passwords, and email addresses or to collect personally identifiable information. Others use them to spread malware, leading to compromised information or financial losses.

But spoofed sites aren’t the only issue the FBI is warning about. According to the PSA, cybercriminals are also using seemingly legitimate email accounts to entice people into clicking on malicious files or links. This is done under the guise that these files contain important election-related information. The closer we get to the election, the more likely these issues are to continue. In fact, we’ve already been warned by the FBI about other 2020 election disinformation campaigns over the last few weeks.

You need to take steps to avoid these traps, including:
- Verify the spelling of web addresses, websites and email addresses that look trustworthy. They could be close imitations of legitimate election websites, so you need to do your homework.
- Seek out information from trustworthy sources. You should also verify who produced the content and consider their intent. If you need help discerning legitimate sites from fakes, the Election Assistance Commission (https://www.eac.gov) offers a ton of verified information and resources.
- Keep your device up to date. Your computer is less likely to be compromised if your operating systems and applications are up to date.
- Do not click on links or attachments in emails from unknown individuals. Do not reply to unsolicited e-mail senders.
- Never provide personal information of any sort via email. Emails asking for personal information may appear to be legitimate, but don’t fall for it.

What to do if you’ve been compromised
Think you’ve been compromised by one of these spoof sites or emails? Don’t panic.
Just take the following steps to get things back on track:
- Install antivirus software. You should be doing this anyway, but if you’ve been compromised, you need to install reputable anti-malware and anti-virus software stat. Use it to conduct regular network scans once you have the issue under control. Tap or click here for the best antivirus options for PC and Mac.
- Do not enable macros on documents downloaded from an email.
- Disable or remove unneeded software applications.
- Use strong two-factor authentication if possible, via biometrics, hardware tokens, or authentication apps. Tap or click here to find out how to enable 2FA on your online accounts.
- Be sure to report information concerning suspicious or criminal activity to the local FBI field office (www.fbi.gov/contact-us/field-offices) or to the FBI’s Internet Crime Complaint Center (www.ic3.gov).

If you take the necessary precautions and stay informed on scams to watch for, you’ll make it through election season with ease. We’ll keep you updated on any new schemes making the rounds through the election and beyond.
<urn:uuid:e8742ce7-3522-46ba-8226-eb068a677ae8>
CC-MAIN-2022-40
https://www.komando.com/security-privacy/scam-election-sites/757777/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00217.warc.gz
en
0.909593
1,007
2.625
3
Why were 12 and 1 not included in the solution?

Hi Gerald,

The factoring example shows a multiplication "circuit" with two 3-bit inputs and a 6-bit output. Each 3-bit input can represent unsigned integer factors from 0 through 7 as follows:

000 = 0
001 = 1
010 = 2
011 = 3
100 = 4
101 = 5
110 = 6
111 = 7

A 4-bit input would be required to represent a factor from 8 to 15.

Cheers,
S.D.

To keep things simple, the factoring demo circuit/embedding was designed to solve the problem of what two 3-bit inputs multiplied together give you a 6-bit output of 12, 21, or 49. Since 12 cannot be represented using 3 bits (12 > 2³ − 1 = 7), you'd need a larger circuit/embedding where the two inputs were at least 4 bits each. You can use the Factoring Jupyter Notebook as a guide for how you might do this yourself.
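A quick way to sanity-check that answer is to enumerate the search space the embedding actually covers. The snippet below is just an illustration of the arithmetic, not D-Wave code:

```python
# Enumerate every pair of 3-bit factors (0-7) and keep the pairs whose
# product is one of the demo's target outputs.
targets = {12, 21, 49}
pairs = [(a, b) for a in range(8) for b in range(8) if a * b in targets]
print(pairs)
# [(2, 6), (3, 4), (3, 7), (4, 3), (6, 2), (7, 3), (7, 7)]
# Note there is no (12, 1): 12 does not fit in 3 bits, so that factor
# pair lies outside the circuit's input range.
```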
<urn:uuid:f394ba92-e20d-49e3-8636-689f1aece3f0>
CC-MAIN-2022-40
https://support.dwavesys.com/hc/en-us/community/posts/360017486794-Factoring-12
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00217.warc.gz
en
0.942168
214
3.328125
3
iPhone users have been warned about a recently discovered Apple security flaw that enables hackers to install fake apps and steal personal data. The US government’s Computer Emergency Readiness Team (US-CERT) has issued information regarding the hack, warning iPhone owners to be wary of clicking malicious links.

The vulnerability was initially discovered by security company FireEye and has been dubbed ‘Masque Attack.’ Once users click on a link claiming to offer an application or service, hackers install software that mimics an existing app, allowing them to steal information from the handset. The attack also makes it possible to monitor a device’s background activity, such as web browsing. The FireEye team has released a video explaining how the attack works.

In spite of the warning issued by the US government, Apple insists that there have been no reported victims of the hack. “We designed OS X and iOS with built-in security safeguards to help protect customers and warn them before installing potentially malicious software,” a company spokesperson told the Telegraph. “We’re not aware of any customers that have actually been affected by this attack. We encourage customers to only download from trusted sources like the App Store and to pay attention to any warnings as they download apps. Enterprise users installing custom apps should install apps from their company’s secure website.”

US-CERT has echoed the Cupertino-based company’s advice, telling users that if an app shows an “Untrusted App Developer” alert, they should click “Don’t Trust” and uninstall the software immediately.
Physicists have designed an artificial intelligence that thinks like the astronomer Nicolaus Copernicus by realizing the Sun must be at the centre of the Solar System.

Copyright: www.nature.com

Astronomers took centuries to figure it out. But now, a machine-learning algorithm inspired by the brain has worked out that it should place the Sun at the centre of the Solar System, based on how movements of the Sun and Mars appear from Earth. The feat is one of the first tests of a technique that researchers hope they can use to discover new laws of physics, and perhaps to reformulate quantum mechanics, by finding patterns in large data sets. The results are due to appear in Physical Review Letters.

Physicist Renato Renner at the Swiss Federal Institute of Technology (ETH) in Zurich and his collaborators wanted to design an algorithm that could distill large data sets down into a few basic formulae, mimicking the way that physicists come up with concise equations like E = mc². To do this, the researchers had to design a new type of neural network, a machine-learning system inspired by the structure of the brain.

Conventional neural networks learn to recognize objects — such as images or sounds — by training on huge data sets. They discover general features — for example, 'four legs' and 'pointy ears' might be used to identify cats. They then encode those features in mathematical 'nodes', the artificial equivalent of neurons. But rather than distilling that information into a few, easily interpretable rules, as physicists do, neural networks are something of a black box, spreading their acquired knowledge across thousands or even millions of nodes in ways that are unpredictable and difficult to interpret.

So Renner's team designed a kind of 'lobotomized' neural network: two sub-networks that were connected to each other through only a handful of links. The first sub-network would learn from the data, as in a typical neural network, and the second would use that 'experience' to make and test new predictions. Because few links connected the two sides, the first network was forced to pass information to the other in a condensed format. Renner likens it to how an adviser might pass on their acquired knowledge to a student.

One of the first tests was to give the network simulated data about the movements of Mars and the Sun in the sky, as seen from Earth. From this point of view, Mars's orbit of the Sun appears erratic; for example, it periodically goes 'retrograde', reversing its course. For centuries, astronomers thought that Earth was at the centre of the Universe, and explained Mars's motion by suggesting that planets moved in small circles, called epicycles, in the celestial sphere. But in the 1500s, Nicolaus Copernicus found that the movements could be predicted with a much simpler system of formulas if both Earth and the planets were orbiting the Sun. […]
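The bottleneck idea the article describes can be sketched in a few lines of code. This toy illustration is not the authors' published architecture: the layer sizes, the random untrained weights, and the stand-in data are all assumptions, but it shows how a narrow connection between two sub-networks forces a condensed representation:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    # Random, untrained weight matrices; real training is omitted here.
    return [rng.normal(size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    for w in layers:
        x = np.tanh(x @ w)
    return x

# Sub-network 1 must squeeze 100 observations through a 2-number "summary";
# sub-network 2 sees only that summary when making its prediction.
encoder = mlp([100, 32, 2])     # the "handful of links" is the 2-unit bottleneck
predictor = mlp([2, 32, 1])

observations = rng.normal(size=100)       # stand-in for the angle time series
summary = forward(encoder, observations)  # condensed representation
prediction = forward(predictor, summary)
print(summary, prediction)
```

Trained end to end, the predictor can only succeed if the encoder's two numbers carry useful information, the hope being that they correspond to meaningful physical parameters.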
In cryptography, a certificate authority or certification authority (CA) is an entity that issues digital certificates. A digital certificate certifies the ownership of a public key by the named subject of the certificate. This allows others (relying parties) to rely upon signatures or on assertions made about the private key that corresponds to the certified public key. A CA acts as a trusted third party—trusted both by the subject (owner) of the certificate and by the party relying upon the certificate. The format of these certificates is specified by the X.509 standard.

One particularly common use for certificate authorities is to sign certificates used in HTTPS, the secure browsing protocol for the World Wide Web. Another common use is in identity cards issued by national governments for use in electronically signing documents.

Overview

Trusted certificates can be used to create secure connections to a server via the Internet. A certificate is essential in order to circumvent a malicious party on the route to a target server that acts as if it were the target. Such a scenario is commonly referred to as a man-in-the-middle attack. The client uses the CA certificate to authenticate the CA signature on the server certificate, as part of the authorizations before launching a secure connection. Usually, client software—for example, browsers—includes a set of trusted CA certificates. This makes sense, as many users need to trust their client software. A malicious or compromised client can skip any security check and still fool its users into believing otherwise.

The clients of a CA are server administrators who request a certificate that their servers will present to users. Commercial CAs charge to issue certificates, and their customers expect the CA's certificate to be included in the majority of web browsers, so that secure connections to the certified servers work efficiently out of the box. The number of internet browsers, other devices and applications which trust a particular certificate authority is referred to as ubiquity. Mozilla, which is a non-profit organization, distributes several commercial CA certificates with its products. While Mozilla developed its own policy, the CA/Browser Forum developed similar guidelines for CA trust. A single CA certificate may be shared among multiple CAs or their resellers. A root CA certificate may be the base for issuing multiple intermediate CA certificates with varying validation requirements.

Browsers and other clients typically allow users to add or remove CA certificates at will. While server certificates last for a relatively short period, CA certificates are valid much longer, so, for repeatedly visited servers, it is less error-prone to import and trust the issuing CA than to confirm a security exemption each time the server's certificate is renewed.

Less often, trusted certificates are used for encrypting or signing messages. CAs issue end-user certificates too, which can be used with S/MIME.
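This chain of trust is easy to observe in practice. As a hedged sketch, Python's standard-library `ssl` module performs exactly the client-side check described above: it validates the server's certificate against the trusted CA certificates bundled with the platform (the hostname below is a placeholder; any HTTPS server works):

```python
import socket
import ssl

hostname = "www.example.com"            # placeholder HTTPS server
context = ssl.create_default_context()  # loads the platform's trusted CA store

# The handshake raises ssl.SSLCertVerificationError if the server's
# certificate does not chain up to a trusted CA, is expired, or does
# not match the hostname.
with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()
        print("Subject:", dict(pair[0] for pair in cert["subject"]))
        print("Issuer: ", dict(pair[0] for pair in cert["issuer"]))
```

Disabling the default verification (for example with `context.check_hostname = False` and `context.verify_mode = ssl.CERT_NONE`) recreates precisely the man-in-the-middle exposure the CA system exists to prevent.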
However, encryption entails the receiver's public key and, since authors and receivers of encrypted messages apparently know one another, the usefulness of a trusted third party remains confined to the signature verification of messages sent to public mailing lists.

Providers

Worldwide, the certificate authority business is fragmented, with national or regional providers dominating their home market. This is because many uses of digital certificates, such as for legally binding digital signatures, are linked to local law, regulations, and accreditation schemes for certificate authorities. However, the market for globally trusted TLS/SSL server certificates is largely held by a small number of multinational companies. This market has significant barriers to entry due to the technical requirements. While not legally required, new providers may choose to undergo annual security audits (such as WebTrust for certificate authorities in North America and ETSI in Europe) to be included as a trusted root by a web browser or operating system.

More than 180 root certificates are trusted in the Mozilla Firefox web browser, representing approximately eighty organizations. OS X trusts over 200 root certificates. As of Android 4.2 (Jelly Bean), Android contains over 100 CAs that are updated with each release.

On November 18, 2014, a group of companies and nonprofit organizations, including the Electronic Frontier Foundation, Mozilla, Cisco, and Akamai, announced Let's Encrypt, a nonprofit certificate authority that provides free domain-validated X.509 certificates as well as software to enable installation and maintenance of certificates. Let's Encrypt is operated by the newly formed Internet Security Research Group, a California nonprofit recognized as tax-exempt under Section 501(c)(3).

NetCraft, the industry standard for monitoring active TLS certificates, stated in May 2015: "Although the global [TLS] ecosystem is competitive, it is dominated by a handful of major CAs — three certificate authorities (Symantec, Comodo, GoDaddy) account for three-quarters of all issued [TLS] certificates on public-facing web servers. The top spot has been held by Symantec (or VeriSign before it was purchased by Symantec) ever since [our] survey began, with it currently accounting for just under a third of all certificates. To illustrate the effect of differing methodologies, amongst the million busiest sites Symantec issued 44% of the valid, trusted certificates in use — significantly more than its overall market share."

[Only fragments of the accompanying market-share tables, from the NetCraft figures and a W3Techs survey of April 2016, survive: in both, Deutsche Telekom (rank 14) and Network Solutions (rank 15) each account for roughly 0.1% of the market or less.]

Validation standards

The commercial CAs that issue the bulk of certificates for HTTPS servers typically use a technique called "domain validation" to authenticate the recipient of the certificate. The techniques used for domain validation vary between CAs, but in general domain validation techniques are meant to prove that the certificate applicant controls a given domain name, not any information about the applicant's identity.

Many certificate authorities also offer Extended Validation (EV) certificates as a more rigorous alternative to domain-validated certificates. Extended validation is intended to verify not only control of a domain name but also additional identity information to be included in the certificate. Some browsers display this additional identity information in a green box in the URL bar.
Validation weaknesses

One limitation of EV as a solution to the weaknesses of domain validation is that attackers could still obtain a domain-validated certificate for the victim domain and deploy it during an attack; if that occurred, the difference observable to the victim user would be the absence of a green bar with the company name. There is some question as to whether users would be likely to recognise this absence as indicative of an attack being in progress: a test using Internet Explorer 7 in 2009 showed that the absence of IE7's EV warnings was not noticed by users; however, Microsoft's current browser, Edge, shows a significantly greater difference between EV and domain-validated certificates, with domain-validated certificates having a hollow, grey lock.

Domain validation suffers from certain structural security limitations. In particular, it is always vulnerable to attacks that allow an adversary to observe the domain validation probes that CAs send. These can include attacks against the DNS, TCP, or BGP protocols (which lack the cryptographic protections of TLS/SSL), or the compromise of routers. Such attacks are possible either on the network near a CA, or near the victim domain itself.

One of the most common domain validation techniques involves sending an email containing an authentication token or link to an email address that is likely to be administratively responsible for the domain. This could be the technical contact email address listed in the domain's WHOIS entry, or an administrative email like admin@, administrator@, webmaster@, hostmaster@ or postmaster@ the domain. Some certificate authorities may accept confirmation using root@, info@, or support@ in the domain. The theory behind domain validation is that only the legitimate owner of a domain would be able to read emails sent to these administrative addresses.

Domain validation implementations have sometimes been a source of security vulnerabilities. In one instance, security researchers showed that attackers could obtain certificates for webmail sites because a CA was willing to use an email address like ssladmin@domain.com for domain.com, but not all webmail systems had reserved the "ssladmin" username to prevent attackers from registering it.

Prior to 2011, there was no standard list of email addresses that could be used for domain validation, so it was not clear to email administrators which addresses needed to be reserved. The first version of the CA/Browser Forum Baseline Requirements, adopted November 2011, specified a list of such addresses. This allowed mail hosts to reserve those addresses for administrative use, though such precautions are still not universal. In January 2015, a Finnish man registered the username "hostmaster" at the Finnish version of Microsoft Live and was able to obtain a domain-validated certificate for live.fi, despite not being the owner of the domain name.

Issuing a certificate

A CA issues digital certificates that contain a public key and the identity of the owner. The matching private key is not made available publicly, but kept secret by the end user who generated the key pair. The certificate is also a confirmation or validation by the CA that the public key contained in the certificate belongs to the person, organization, server or other entity noted in the certificate. A CA's obligation in such schemes is to verify an applicant's credentials, so that users and relying parties can trust the information in the CA's certificates.
CAs use a variety of standards and tests to do so. In essence, the certificate authority is responsible for saying "yes, this person is who they say they are, and we, the CA, certify that". If the user trusts the CA and can verify the CA's signature, then they can also assume that a certain public key does indeed belong to whoever is identified in the certificate.

Public-key cryptography can be used to encrypt data communicated between two parties. This typically happens when a user logs on to any site that implements the HTTP Secure protocol. In this example let us suppose that the user logs on to their bank's homepage www.bank.example to do online banking. When the user opens the www.bank.example homepage, they receive a public key along with all the data that their web browser displays. The public key could be used to encrypt data from the client to the server, but the safe procedure is to use it in a protocol that determines a temporary shared symmetric encryption key; messages in such a key exchange protocol can be enciphered with the bank's public key in such a way that only the bank server has the private key to read them. The rest of the communication then proceeds using the new (disposable) symmetric key, so when the user enters some information on the bank's page and submits the page (sends the information back to the bank), the data the user has entered will be encrypted by their web browser. Therefore, even if someone can access the (encrypted) data that was communicated from the user to www.bank.example, such an eavesdropper cannot read or decipher it.

This mechanism is only safe if the user can be sure that it is the bank that they see in their web browser. If the user types in www.bank.example, but their communication is hijacked and a fake website (that pretends to be the bank website) sends the page information back to the user's browser, the fake website can send a fake public key to the user (for which the fake site owns a matching private key). The user will fill in the form with their personal data and will submit the page. The fake website will then get access to the user's data.

This is what the certificate authority mechanism is intended to prevent. A certificate authority (CA) is an organization that stores public keys and information about their owners, and every party in a communication trusts this organization (and knows its public key). When the user's web browser receives the public key from www.bank.example it also receives a digital signature of the key (with some more information, in a so-called X.509 certificate). The browser already possesses the public key of the CA and consequently can verify the signature, trust the certificate and the public key in it: since www.bank.example uses a public key that the certification authority certifies, a fake www.bank.example would have to present that same certified public key. But since the fake www.bank.example does not know the corresponding private key, it cannot create the signature needed to prove its authenticity.

It is difficult to assure correctness of the match between data and entity when the data are presented to the CA (perhaps over an electronic network), and when the credentials of the person/company/program asking for a certificate are likewise presented. This is why commercial CAs often use a combination of authentication techniques including leveraging government bureaus, the payment infrastructure, third parties' databases and services, and custom heuristics.
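The mechanics of issuance, binding a name to a public key with a signature, can be sketched with the third-party Python `cryptography` package. This hedged example builds a self-signed root certificate for illustration only; a real CA would perform the credential checks described above and keep its signing key in an HSM, and the name and validity period here are assumptions:

```python
import datetime

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

# The CA's key pair; the private half must never be published.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Root CA")])
now = datetime.datetime.utcnow()

cert = (
    x509.CertificateBuilder()
    .subject_name(name)                 # the identity being certified
    .issuer_name(name)                  # self-signed: issuer == subject
    .public_key(key.public_key())       # the public key bound to that identity
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(key, hashes.SHA256())         # the CA signature relying parties verify
)
print(cert.subject)
```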
In some enterprise systems, local forms of authentication such as Kerberos can be used to obtain a certificate which can in turn be used by external relying parties. Notaries are required in some cases to personally know the party whose signature is being notarized; this is a higher standard than is reached by many CAs. According to the American Bar Association outline on Online Transaction Management, the primary points of US Federal and State statutes enacted regarding digital signatures have been to "prevent conflicting and overly burdensome local regulation and to establish that electronic writings satisfy the traditional requirements associated with paper documents." Further, the US E-Sign statute and the suggested UETA code help ensure that:

- a signature, contract or other record relating to such transaction may not be denied legal effect, validity, or enforceability solely because it is in electronic form; and
- a contract relating to such transaction may not be denied legal effect, validity or enforceability solely because an electronic signature or electronic record was used in its formation.

Despite the security measures undertaken to correctly verify the identities of people and companies, there is a risk of a single CA issuing a bogus certificate to an imposter. It is also possible to register individuals and companies with the same or very similar names, which may lead to confusion. To minimize this hazard, the certificate transparency initiative proposes auditing all certificates in a public unforgeable log, which could help in the prevention of phishing.

In large-scale deployments, Alice may not be familiar with Bob's certificate authority (perhaps they each have a different CA server), so Bob's certificate may also include his CA's public key signed by a different CA2, which is presumably recognizable by Alice. This process typically leads to a hierarchy or mesh of CAs and CA certificates.

Authority revocation lists

An authority revocation list (ARL) is a form of certificate revocation list (CRL) containing certificates issued to certificate authorities, in contrast to CRLs, which contain revoked end-entity certificates.

Industry organizations

- Certificate Authority Security Council (CASC) – In February 2013, the CASC was founded as an industry advocacy organization dedicated to addressing industry issues and educating the public on internet security. The founding members are the seven largest certificate authorities.
- Common Computing Security Standards Forum (CCSF) – In 2009 the CCSF was founded to promote industry standards that protect end users. Comodo Group CEO Melih Abdulhayoğlu is considered the founder of the CCSF.
- CA/Browser Forum – In 2005, a new consortium of certificate authorities and web browser vendors was formed to promote industry standards and baseline requirements for internet security. Comodo Group CEO Melih Abdulhayoğlu organized the first meeting and is considered the founder of the CA/Browser Forum. The CA/Browser Forum publishes the Baseline Requirements, a list of policies and technical requirements for CAs to follow. Conformance with these is a requirement for inclusion in the certificate stores of Firefox and Safari.

CA compromise

If the CA can be subverted, then the security of the entire system is lost, potentially subverting all the entities that trust the compromised CA. For example, suppose an attacker, Eve, manages to get a CA to issue to her a certificate that claims to represent Alice.
That is, the certificate would publicly state that it represents Alice, and might include other information about Alice. Some of the information about Alice, such as her employer name, might be true, increasing the certificate's credibility. Eve, however, would have the all-important private key associated with the certificate. Eve could then use the certificate to send digitally signed email to Bob, tricking Bob into believing that the email was from Alice. Bob might even respond with encrypted email, believing that it could only be read by Alice, when Eve is actually able to decrypt it using the private key.

A notable case of CA subversion like this occurred in 2001, when the certificate authority VeriSign issued two certificates to a person claiming to represent Microsoft. The certificates bore the name "Microsoft Corporation", so they could be used to spoof someone into believing that updates to Microsoft software came from Microsoft when they actually did not. The fraud was detected in early 2001. Microsoft and VeriSign took steps to limit the impact of the problem.

In 2011 fraudulent certificates were obtained from Comodo and DigiNotar, allegedly by Iranian hackers. There is evidence that the fraudulent DigiNotar certificates were used in a man-in-the-middle attack in Iran. In 2012, it became known that Trustwave issued a subordinate root certificate that was used for transparent traffic management (man-in-the-middle), which effectively permitted an enterprise to sniff SSL internal network traffic using the subordinate certificate.

Key storage

An attacker who steals a certificate authority's private keys is able to forge certificates as if they were the CA, without needing ongoing access to the CA's systems. Key theft is therefore one of the main risks certificate authorities defend against. Publicly trusted CAs almost always store their keys on a hardware security module (HSM), which allows them to sign certificates with a key but generally prevents extraction of that key through both physical and software controls. CAs typically take the further precaution of keeping the key for their long-term root certificates in an HSM that is kept offline, except when it is needed to sign shorter-lived intermediate certificates. The intermediate certificates, stored in an online HSM, can do the day-to-day work of signing end-entity certificates and keeping revocation information up to date. CAs sometimes use a key ceremony when generating signing keys, in order to ensure that the keys are not tampered with or copied.

Implementation Weakness of the Trusted Third Party Scheme

The critical weakness in the way that the current X.509 scheme is implemented is that any CA trusted by a particular party can then issue certificates for any domain they choose. Such certificates will be accepted as valid by the trusting party whether they are legitimate and authorized or not. This is a serious shortcoming given that the most commonly encountered technology employing X.509 and trusted third parties is the HTTPS protocol. As all major web browsers are distributed to their end users pre-configured with a list of trusted CAs that numbers in the dozens, this means that any one of these pre-approved trusted CAs can issue a valid certificate for any domain whatsoever. The industry response to this has been muted.
Given that the contents of a browser's pre-configured trusted CA list are determined independently by the party that distributes or installs the browser application, there is really nothing that the CAs themselves can do. This issue is the driving impetus behind the development of the DNS-based Authentication of Named Entities (DANE) protocol. If adopted in conjunction with Domain Name System Security Extensions (DNSSEC), DANE will greatly reduce, if not completely eliminate, the role of trusted third parties in a domain's PKI.

Software

Various software is available to operate a certificate authority. Generally, such software is required to sign certificates, maintain revocation information, and operate OCSP or CRL services. Some examples are:

- OpenSSL, an SSL/TLS library that comes with tools allowing its use as a simple certificate authority
- EasyRSA, OpenVPN's command-line CA utilities using OpenSSL
- TinyCA, a Perl GUI on top of several CPAN modules
- XiPKI, a CA and OCSP responder with SHA-3 support, OSGi-based (Java)
- Boulder, an automated server that uses the Automated Certificate Management Environment (ACME) protocol
- Windows Server, which contains a CA as part of Certificate Services for the creation of digital certificates; in Windows Server 2008 and later the CA may be installed as part of Active Directory Certificate Services
- "Certificate transparency". Internet Engineering Task Force. http://tools.ietf.org/html/rfc6962. Retrieved 2013-11-03. - "Multivendor power council formed to address digital certificate issues". Network World. February 14, 2013. http://www.networkworld.com/news/2013/021413-council-digital-certificate-266728.html. - "Major Certificate Authorities Unite In The Name Of SSL Security". Dark Reading. February 14, 2013. http://www.darkreading.com/authentication/167901072/security/news/240148546/major-certificate-authorities-unite-in-the-name-of-ssl-security.html. - "CA/Browser Forum Founder". http://www.melih.com/about/. Retrieved 2014-08-23. - "CA/Browser Forum". https://www.cabforum.org/. Retrieved 2013-04-23. - Wilson, Wilson. "CA/Browser Forum History". DigiCert. http://docbox.etsi.org/workshop/2012/201201_CA_DAY/5_Wilson_CAB-Forum.pdf. Retrieved 2013-04-23. - "Baseline Requirements". CAB Forum. https://cabforum.org/baseline-requirements-documents/. Retrieved 14 April 2017. - "Mozilla Root Store Policy". Mozilla. https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/#conformance. Retrieved 14 April 2017. - "Apple Root Certificate Program". Apple. https://www.apple.com/certificateauthority/ca_program.html. Retrieved 14 April 2017. - "CA-2001-04". Cert.org. https://www.cert.org/advisories/CA-2001-04.html. Retrieved 2014-06-11. - Microsoft, Inc. (2007-02-21). "Microsoft Security Bulletin MS01-017: Erroneous VeriSign-Issued Digital Certificates Pose Spoofing Hazard". http://support.microsoft.com/kb/293818. Retrieved 2011-11-09. - Bright, Peter (28 March 2011). "Independent Iranian hacker claims responsibility for Comodo hack". Ars Technica. https://arstechnica.com/security/news/2011/03/independent-iranian-hacker-claims-responsibility-for-comodo-hack.ars. Retrieved 2011-09-01. - Bright, Peter (2011-08-30). "Another fraudulent certificate raises the same old questions about certificate authorities". Ars Technica. https://arstechnica.com/security/news/2011/08/earlier-this-year-an-iranian.ars. Retrieved 2011-09-01. - Leyden, John (2011-09-06). "Inside 'Operation Black Tulip': DigiNotar hack analysed". The Register. http://www.theregister.co.uk/2011/09/06/diginotar_audit_damning_fail/. - "Trustwave issued a man-in-the-middle certificate". The H Security. 2012-02-07. http://www.h-online.com/security/news/item/Trustwave-issued-a-man-in-the-middle-certificate-1429982.html. Retrieved 2012-03-14. - "Dogtag Certificate System". Pki.fedoraproject.org. http://pki.fedoraproject.org/wiki/PKI_Main_Page. Retrieved 2013-03-02. - "reaperhulk/r509 · GitHub". Github.com. https://github.com/reaperhulk/r509. Retrieved 2013-03-02. - "xca.sourceforge.net". xca.sourceforge.net. http://xca.sourceforge.net/. Retrieved 2013-03-02. - "xipki/xipki · GitHub". Github.com. https://github.com/xipki/xipki. Retrieved 2016-10-17. - "letsencrypt/acme-spec". github.com. https://github.com/letsencrypt/acme-spec. Retrieved 2014-11-20.
On Friday, May 12, a global ransomware attack hit in over 100 countries. It was one of the biggest cyberattacks to date. According to BBC News, approximately $70,000 was paid to the cyber hackers. This terrifying event has served as a wake-up call to many businesses that ransomware is a very real threat. No matter the size of your business, where you are located or what industry you are in, you can get attacked. If you are one of the many CEOs or business owners wondering what ransomware is and how you can protect yourself, this article is for you.

Ransomware is a computer virus, or malware, that has been growing over the last few years. It locks your files and requires you to pay to access them. Usually ransomware encrypts your data so you cannot open it and blocks you from running any programs except for ones that allow you to pay the ransom. Ransomware usually displays an image or message on your screen that lets you know your data has been encrypted and you have to pay a specific sum of money to get it back. The ransom payments usually have a time limit.

Ransomware does not only affect desktop computers. It can also hit laptops and mobile phones. Ransomware usually gets in when someone clicks a link or an attachment carrying the virus. It can spread to other computers on the network, and it can be disastrous for your company.

The question, of course, is how to protect against ransomware. One of the most important things you can do is back up important data every single day. When you back up your data, you can avoid having to pay to see your data again.

It's also important to set a protocol within your organization to not click on suspicious links or attachments. Remember that hackers are tricky and their links and attachments often look normal. Be sure to look at email addresses and check for extra letters or numbers, or click to see the full address to ensure it's coming from the proper sender.

You also need to install software updates on a regular basis. Turn on auto-updates and run antivirus software. Make this a regular habit in your business. Updates aren't just there to frustrate you with a reminder box. They occur because software vendors patch security flaws on a regular basis, and their updates include those patches.

Finally, it's always important to be vigilant. While you can enlist every protection, there is always a possibility of an attack. Hackers are becoming more sophisticated every day, so remain vigilant and, if necessary, hire an outside security team to help.
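The daily backup habit recommended above is also easy to automate. Here is a minimal sketch using only Python's standard library; the source and destination paths are hypothetical, and in practice you would schedule the script with Task Scheduler or cron and keep copies offline or off-site so the ransomware cannot encrypt them too:

```python
import datetime
import shutil

SOURCE = "C:/CompanyData"      # hypothetical folder to protect
DEST = "D:/Backups/backup-"    # hypothetical backup location (ideally off-site)

# Write one dated ZIP archive per day, e.g. D:/Backups/backup-2017-05-12.zip
stamp = datetime.date.today().isoformat()
archive = shutil.make_archive(DEST + stamp, "zip", SOURCE)
print("Backup written to", archive)
```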
Your Ford Mondeo might achieve higher speeds than some hybrid or electric cars, but the only place it's getting you fast is a dirty, heated planet. Condescending talk aside, the truth is that hybrid and electric cars are gaining ground, and even though classic, internal combustion engine cars still have a huge, loyal market, these cars are not to be taken lightly. Especially with the push of an individual such as Elon Musk, cars that run on clean and renewable sources of energy are something we'll see in greater numbers in the future. Driving hybrid vehicles has its benefits, and here are some of them:

Saves money on gas

Gas prices oscillate at unknown frequencies, which is why it is extremely hard to plan your car's gas into a monthly budget. With hybrid cars it's much simpler, as they give you more bang for your buck. Toyota's hybrid car, the Prius, is able to give 50 MPG, irrespective of the terrain on which it is used. Honda's Civic Hybrid and Ford's Fusion Hybrid provide approximately 40 MPG. Effectively, these cars require less than $1,400 each year for fuel.

Saves money again (this time on insurance)

Newlaunches.com says that insurance companies think hybrid cars are less prone to accidents, as they are slower and can't achieve the high speeds at which accidents are more frequent. That's why insurance companies charge lower premiums for insuring hybrid cars.

Tax incentives

Hybrid cars are on the rise, for two reasons: we never know when we'll completely run out of oil, and these hybrids pollute our environment less. That's why governments give certain tax incentives to encourage people into buying such vehicles.

Readily compatible parts

There are very few hybrid car manufacturers today, despite the recent rise in popularity. That's why the cars' manufacturers use more or less common and readily compatible parts across all hybrid cars, irrespective of their maker and model.

Green driving

Green is a very popular buzzword nowadays. It stands for being environmentally friendly, no matter what you do. That's why they call these hybrid cars 'green' cars: they use less gas, and therefore release fewer harmful components from fossil fuels into the atmosphere and do less to warm up the world.
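The sub-$1,400 fuel figure above is easy to sanity-check. A quick worked example, with the annual mileage and gas price being assumptions rather than figures from the article:

```python
# Rough annual fuel cost for a 40 MPG hybrid (the Civic/Fusion figure above).
miles_per_year = 15_000     # assumed annual mileage
mpg = 40                    # fuel economy quoted above
price_per_gallon = 3.50     # assumed gas price in dollars

gallons = miles_per_year / mpg                 # 375 gallons
print(f"${gallons * price_per_gallon:,.2f}")   # $1,312.50 -- under $1,400
```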
Security+: Understanding Security Risk Concepts (SY0-401) [DECOMMISSIONED ARTICLE]

NOTE: This article reflects an older version of the Security+ Exam – please see the current Security+ Certification page for the most up-to-date information.

Risk can be defined as "the possibility that something (such as a virus or malware attack) could disclose, destroy, or damage data or other resources in the organization." The purpose of security is to prevent risks and to ensure authorized access. By using risk management and information security strategies, security professionals identify factors that could disclose or damage data. After that, they recommend and implement cost-effective solutions for mitigating those risks. Risk analysis is the process whereby the goals of risk management are achieved. Risk analysis procedures include:

- Analyzing an environment for risks.
- If risks are found in the first step, evaluating the cost of the damage they would cause.
- Assessing the cost of various countermeasures for one or more risks.
- Preparing a cost/benefit analysis report for protection and presenting it to executive authorities.

What Control Types Do I Need To Know for the Security+ Exam?

There are three control types that you need to know for the Security+ exam: technical, management, and operational. Control types are used to implement security. A control can be a new product, a modified existing product, the removal of a product from the IT environment, or a redesign of an IT infrastructure. Controls are vital for protecting the confidentiality, integrity, and availability of data and information.

Technical controls involve the hardware or software tools that manage access to systems and resources and provide protection for them. Various examples of technical controls are listed below.

- Smart cards
- Constrained interfaces
- Access control lists (ACLs)
- Intrusion detection systems (IDS)
- Clipping levels

Management controls are policies and procedures that should be addressed by the organization's executives and managers. Management controls define how the overall access control will be implemented and enforced. The following list includes various examples of management controls.

- The system development lifecycle (SDLC)
- Legal and regulatory
- Computer security lifecycle
- Vulnerability management/scanning
- Policies and procedures
- Background checks
- Data classification
- Security training
- Vacation history
- Work supervision

Operational controls are designed to increase individual and group system security on a daily basis. They are executed by people who must have technical expertise and an understanding of operational controls. Examples of operational controls include:

- User awareness and training
- Fault tolerance and disaster recovery plans
- Incident handling
- Computer support
- Baseline configuration development
- Environmental security

What Risk Reduction Policies Do I Need to Know?

Reducing risk is a significant concern in any organization. Security management identifies risks and then implements security policies to eliminate or mitigate them.

Acceptable Usage Policies (AUPs)

AUPs define which practices and activities are and aren't acceptable or appropriate uses of company resources and equipment. Usually, each employee is required to sign an AUP before starting to work in the organization. Failure to comply with an AUP may result in a warning, a penalty, or, as a last resort, job termination.
For example, if a manager asks an employee to repair a system that is outside the AUP's parameters, the employee can refuse to do so. If he/she is found working on that system, then he/she may be subject to termination for violating the AUP.

Security Policy

A security policy is the top tier of a company's essential protection-plan documentation. It is a document that defines the realm of security required by the company and ensures the protection of its assets. It also identifies the functional areas of data processing and defines all relevant terminology. A security policy has three categories: regulatory, informative, and advisory.

Separation of Duties

Separation of duties means that different tasks are assigned to one or more groups, and a unique administrator is assigned to oversee each group. Separation of duties helps to prevent conflicts of interest, reduces errors, and prevents fraud. For instance, if one employee orders goods from suppliers, then another employee should add the entries for those goods to the accounting system. This prevents the purchasing employee from diverting incoming goods for his/her own use.

Least Privilege

Least privilege means that users are assigned only the minimum access, rights, privileges, and permissions required to perform their tasks. This prohibits the user from performing any task that is beyond the scope of his/her assigned responsibility. Management should periodically review least privileges to check for privilege misalignment with job responsibilities. Privilege misalignment often occurs when an employee accumulates privileges as his/her job responsibilities change over time. The accumulation of these excessive privileges indicates that an employee has more privileges than the principle of least privilege allows. Under such circumstances, a least privilege review is necessary.

What Do I Need to Know About Risk Calculation?

Risk calculation is an essential part of an organization's security efforts. Risk calculation is a broad term that includes risk identification, risk assessment, vulnerability management, and risk analysis. It helps an organization address problems in its security policy. The main goal of risk calculation is to mitigate the impact of risks on the enterprise by applying countermeasures and safeguards.

Likelihood

Likelihood is the probability, as estimated by security management, that a threat will be realized within a specific time period. Likelihood estimates are performed on a yearly basis through the Annualized Rate of Occurrence (ARO). ARO is based on the statistical probability of how many times a risk will occur in a year.

Single Loss Expectancy (SLE)

SLE is the potential dollar-value loss expected from a single occurrence of a risk incident. SLE is calculated with the following equation:

SLE = Asset Value × Exposure Factor (EF)

EF is the percentage of a specific asset's value that is lost if a risk is realized.

Annualized Loss Expectancy (ALE)

ALE is the monetary loss that can be expected due to a risk over a period of one year. ALE is found by multiplying the SLE and the ARO:

ALE = SLE × ARO

One of the important features of ALE is that it is used directly in a cost/benefit analysis. For example, if a risk has an ALE of $10,000, then it would be useless to spend $20,000 per year on countermeasures to eliminate that risk.
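A short worked example ties these formulas together (the input values are illustrative, not from the exam objectives):

```python
# Worked SLE/ALE example using the formulas above.
asset_value = 100_000      # dollar value of the asset (assumed)
exposure_factor = 0.20     # 20% of the asset's value lost per incident (assumed)
aro = 0.5                  # one incident expected every two years (assumed)

sle = asset_value * exposure_factor    # SLE = $20,000
ale = sle * aro                        # ALE = $10,000
print(f"SLE = ${sle:,.0f}, ALE = ${ale:,.0f}")

# As noted above, spending $20,000 a year to counter a $10,000-a-year
# expected loss fails the cost/benefit test.
```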
Impact

Impact measures the loss or damage that will be or could be inflicted if a potential risk is realized. The exposure factor (EF) indicates the impact of a risk.

MTTR, MTTF, and MTBF

Aging hardware often needs repair or replacement. Security management can use some best practices to manage the hardware lifecycle. These practices involve mean time to repair/restore (MTTR), mean time to failure (MTTF), and mean time between failures (MTBF).

Quantitative vs. Qualitative Risk Analysis

Both quantitative and qualitative risk assessment methodologies are used to evaluate threats and their related risks. Quantitative risk analysis assigns numeric, monetary values to the loss of an asset; quantitative risk calculations are performed using the ALE, ARO, and SLE calculations described above.

Qualitative risk analysis, by contrast, assigns intangible, subjective values to the loss of an asset. It doesn't assign dollar figures to possible losses. Instead, threats are ranked on a scale, for example from 1 to 20 or from 1 to 100, to evaluate their risks, effects, and costs; the higher the number, the higher the probability of the risk. As an example, a computer system with no antivirus program has a high probability of risk. This method is cheaper, easier, and quicker, but it cannot give a total or assign an asset value for potential monetary loss. Several techniques can be used to perform qualitative risk analysis, including brainstorming, the Delphi technique, storyboarding, surveys, questionnaires, checklists, one-on-one meetings, and interviews.

What Threat Vectors Do I Need to Know?

A threat vector or attack vector is the path whereby an attacker can gain access to a targeted system to deliver malicious outcomes. Threat vectors include viruses, emails, pop-up windows, attachments, deception (the human factor), chat rooms, and instant messages.

Risk Avoidance, Transference, Acceptance, Mitigation, Deterrence

The outcomes of risk analysis are presented in the form of various documents that include:

- A complete and detailed value of all assets
- A comprehensive list of all risks and threats, their rate of occurrence, and the extent of losses if the risks are realized
- A list of threat-specific countermeasures that identifies their ALE and effectiveness
- A cost/benefit analysis for each countermeasure

After the risk analysis has been completed, security management must address all risks. Management has four possible responses to the identified risks.

- Reduce or mitigate: This involves the implementation of countermeasures and safeguards.
- Transfer or assign: This places the cost of a loss inflicted by a risk onto another entity. The common forms of transferring or assigning risk are outsourcing and purchasing insurance.
- Accept: This indicates that management has agreed to accept the loss as a consequence of the risk.
- Reject or ignore: This amounts to hoping that the risk will never be realized. It is not a prudent response or wise approach.

InfoSec Security+ Boot Camp

The InfoSec Institute offers a Security+ Boot Camp that teaches the information theory and reinforces that theory with hands-on exercises that help you learn by doing. InfoSec also offers thousands of articles on all manner of security topics.
Public key infrastructure (PKI) is a system for the creation, storage, and distribution of digital certificates, which are used to verify that a particular public key belongs to a certain entity. The PKI creates digital certificates that map public keys to entities, securely stores these certificates in a central repository, and revokes them if needed.

The adoption of PKI has increased steadily over the years, with most analysts predicting 15%–20% growth rates between now and 2024–2025. The broad reason for the steady adoption is not surprising: the increased digitalization of enterprises, consumers and society. However, since digitalization often means different things to different people, we look at some specific trends in digitalization that are driving PKI usage and adoption.

- The Internet of Things (IoT): The IoT is emerging as one of the major factors driving PKI adoption. The number of connected devices already exceeds the number of human beings on the planet, and even conservative predictions indicate around 50 billion connected devices are likely to be in place over the next five years. The proliferation of IoT devices, along with associated threats such as altering a device's function, means that significant efforts are needed for IoT vulnerability management. PKI is expected to play a major role here, since these devices will primarily rely on digital certificates for identification and authentication.
- Cloud applications and services: Cloud usage is truly mainstream today: a recent report from Flexera indicates that 94% of enterprises leverage some form of cloud (public, private, hybrid) services. With organizations moving an increasing number of workloads to the cloud, the need for PKI credentials for cloud-based applications is going up correspondingly. Another overlap area between PKI and cloud technology is Certificate Authority (CA) services. These could be a public CA service such as those available from companies like Comodo, Symantec or GoDaddy; a private CA running in a public cloud; or a private CA running in a private cloud.
- Mobile applications: Smartphones and mobile applications are ubiquitous today. Both consumer mobile applications and enterprise mobility (e.g. BYOD, or Bring Your Own Device) scenarios are driving PKI usage. No one can argue against the productivity gains and convenience workforce mobility provides. However, the challenges of mobile device management, especially from an authentication and data security perspective, are often underestimated. Enterprises need a reliable method to verify the mobile user's identity, validate the device itself, and secure the information through encryption. This is where digital certificates and PKI play a big role.
- E-commerce and web: One sector where the impact of digital disruption has been extremely widespread is retail. Online shopping has been a game changer for the retail industry. And though e-commerce sales are still under 20% of overall retail sales, every retailer today, big or small, needs to have an e-commerce presence. The minute payments and financial transactions are involved, authentication and encryption services become critical. Over the years, one of the key enablers for the boom in the e-commerce industry has been PKI and Secure Sockets Layer (SSL) certificates, which have ensured the safety of online transactions. In general, security on the web has become such a necessity today that SSL support is now becoming de facto for any website, not just e-commerce.
Most major browsers immediately flag a website as "not secure" in the address bar if an SSL certificate is not available. SSL also has a direct impact on search engine rankings – insecure websites typically rank lower than those using SSL.

Apart from these, other trends driving increased PKI adoption across enterprises today include initiatives related to risk management, cost reduction, and compliance with the regulatory environment (such as digital signature legislation). One thing is clear – in an increasingly digitalized world, PKI literally holds the key to ensuring safe and secure transactions for enterprises and consumers worldwide.

Reference: 2019 RightScale State of the Cloud Report from Flexera
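As a practical footnote, the certificate checks described above are easy to automate as part of an enterprise's certificate lifecycle management. A hedged sketch using Python's standard library, reporting when a site's certificate expires (the hostname is a placeholder):

```python
import datetime
import socket
import ssl

hostname = "www.example.com"   # placeholder site to check
context = ssl.create_default_context()

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()

# notAfter is a string like "Jun  1 12:00:00 2025 GMT".
expires = datetime.datetime.utcfromtimestamp(
    ssl.cert_time_to_seconds(cert["notAfter"])
)
print(f"{hostname} certificate expires {expires} UTC")
```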
In the ever-changing world of technology, security should be your utmost concern. You have passwords that help to secure a vast majority of your online accounts. But to get the most out of your security, Two-Factor Authentication (TFA) is a great way to truly protect your information. Apple ID's TFA, if used properly, is a simple and easy way to guarantee that only you will ever have access to your Apple ID account. If you've ever had a bad experience with TFA, then we urge you to keep reading and properly understand the process and the benefits of TFA.

All you need is your Apple ID, an Apple device (the more the better) and a phone number. The Apple devices that are designed to use TFA are an iPhone, iPad, or iPod touch with iOS 9 or later, or a Mac with OS X El Capitan or later.

Tip within a tip: Wondering why TFA is critical for protecting your device? Click here for six common security issues every smartphone user needs to be aware of.

The trusted phone number can be from your iPhone or a landline; however, it is imperative that you have access to this phone number. So if you are going to change your phone number anytime soon, make sure your new number becomes your trusted phone number.

How it works

If you, or anyone for that matter, want access to your account, you'll need three things: your Apple ID, your password and access to one of your devices. If you do not have access to all three, you cannot get access to the Apple ID. This is perfect because you know what your Apple ID is, and you should be the only one who knows your password (the longer the better – click here for password mistakes you need to avoid). You are also the only one with the device in hand, thus it becomes impossible for anyone else to gain access to your account.

When you sign into a new device for the first time, a prompt will appear on your trusted device followed by a temporary code. You don't need to write this code down. Simply type the numbers from your trusted device onto the new device, and you're in. If you do not receive a code for whatever reason, don't worry. Simply hit "didn't get code" on the sign-in screen, and then you can text or call your trusted phone number to receive a code.

If you're on a web browser, one last prompt may appear asking if you want to trust the new device. By selecting "trust" you will then no longer need any codes when signing into that browser again.

It is important to note that these "verification codes" you are receiving are different from a passcode to get into your devices. A verification code is a one-time code that no longer matters after you've used it.

Remember, to properly use TFA you need to:

- Remember your Apple ID password.
- Use a device passcode on all your devices.
- Keep your trusted phone number(s) up to date.
- Keep your Apple devices physically secure.

With those things in mind, you should never have any issues gaining access to your account again. If you forget your password, TFA actually helps you reset it. On your Apple device, go to Settings >> [your name]. If you're using iOS 10.2 or earlier, go to Settings >> iCloud >> tap your Apple ID. Then tap Password & Security, and tap Change Password. Or you can go to iforgot.apple.com and reset it there. Just make sure that when you are choosing how to reset your password, you choose "reset from trusted device."

As readers of Komando.com, we want you to feel safe and secure. If you have Apple products and an Apple ID, then this is the absolute best way to protect your information.
For more information on TFA for Apple ID, click here to visit Apple’s support page. If you are on the fence about it or if you have it and want to know more, then we hope we helped you to understand what TFA is and how it benefits you.
What is Video Analytics?

Picture this: you're in a crowded train station and have lost your friend in the mix. How do you and your brain go about picking your friend out of the crowd? Do you go through the same process each time you look for something, or does it depend somewhat on what you're seeking?

From a human perspective, looking for stuff seems rather straightforward, and although we can describe those processes easily to others, the way we search for and identify things generally differs and depends on what we're seeking. How one goes about finding a lost friend in a station is different than searching for your keys before going to the office.

Now imagine how we might go about getting computers to look for things. They would need some kind of input to detect specific objects, recognize and differentiate between those, and then notify us somehow whether the requested result is found. This process is what we call intelligent video analytics, or IVA for short. This article will go into how different kinds of IVA work and also give some examples along the way.

What Video Analytics Does

The processes involved in getting IVA output from software are similar to how people visually detect and recognize things. The essence of what video analytics does is generally described in three steps.

- Video analytic software breaks down video signals into frames. This article will not describe this step, but understanding digital video and how it works is an interesting topic and good to know before we break down the next steps.
- The software then splits the video (frames) into video data and analytic data, then uses algorithms to process the analytic data to output specifically desired functions.
- And finally, it delivers the result in a predetermined manner.

Approaches to Video Analytic Processing

Let's get into the details of number two from the above list, as it's what most people have been talking about recently. Depending on the goal, video needs to be processed using different methods in order to deliver relevant results. Gorilla has categorized the most widely used types of analytics into five fundamental IVA groups, which are described in more detail below.

1. Behavior Analytics

These analytics use algorithms that are designed to look for a specific behavior. Thinking more deeply, a behavior could be defined as action over time. With that in mind, each Behavior Analytic needs more than one frame from the video to determine if an event or behavior has occurred. So it follows that the algorithms in Behavior Analytics look for changes from frame to frame over time to identify a very specific and predefined event or action. We've broken down and classified the Behavior Analytics that are used in our solutions here (a minimal code sketch of the zone idea follows the list):

People Counting
The People Counting IVA does just that: it detects and counts people for a specified amount of time as they enter a zone and/or cross a line which users define in the software.

Line Crossing Detection
This IVA detects when people cross a line (or lines) of user-defined length and position.

Intrusion Detection
Intrusion Detection monitors user-created zones to detect any activity or entries by moving objects (like people).

Direction Detection
This IVA monitors a user-created zone for people moving A) within the zone AND B) in the marked direction. Movements in the opposite direction do not trigger an alert.

Direction Violation Detection
Same as the Direction Detection IVA, but detects and alerts to movements in the opposite direction. As an example, security checks at airports and other transportation hubs stand to benefit from this type of IVA.

Loitering Detection
The Loitering Detection IVA monitors figures or people entering and then remaining in a user-created zone for a specified period.
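To make the zone-based behaviors concrete, here is a hedged, minimal sketch of how an intrusion-style analytic can be built with the open-source OpenCV library. This illustrates the general technique, not Gorilla's implementation, and the video file name, zone coordinates, and pixel threshold are all assumptions:

```python
import cv2

ZONE = (100, 100, 300, 300)   # user-defined zone: x1, y1, x2, y2 (assumed)
THRESHOLD = 500               # foreground pixels needed to raise an alert (assumed)

cap = cv2.VideoCapture("entrance.mp4")             # hypothetical camera feed
subtractor = cv2.createBackgroundSubtractorMOG2()  # models the static background

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)     # non-zero pixels = moving objects
    x1, y1, x2, y2 = ZONE
    roi = mask[y1:y2, x1:x2]           # look only inside the zone
    if cv2.countNonZero(roi) > THRESHOLD:
        print("Intrusion detected in zone")

cap.release()
```

A production analytic would add object classification (to alert only on people), debouncing across frames, and the alert delivery described in step three above, but the frame-to-frame comparison at the core is the same.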
As an example, security checks at airports and other transportation hubs stand to benefit from this type of IVA.

The Loitering Detection IVA monitors figures or people entering and then remaining in a user-created zone for a specified period.

2. People/Face Recognition

People and Face Recognition could easily be sliced into two core groups, but we keep them as one since they are so closely related. As Behavior Analytics need to detect human shapes to perform effectively, People/Face Recognition IVAs are next on our list.

The Human Detection IVA detects human figures within the video. Once detected, features like clothing color, gender, eyewear, masks, and age group can be detected as well.

The Face Recognition IVA recognizes and identifies faces. This is used in conjunction with Gorilla's BAP software and its facial recognition database. While uses for this are myriad (and often in the news), we most often see Face Recognition used for Watch Lists, VIP identification, Attendance Systems, and Black Lists.

3. License Plate Recognition

Some people collect license plates and like them because different places have different plates. However, this variety makes it incredibly difficult for one License Plate Recognition (LPR) IVA to work globally (or even just from state to state). Currently, we generally see this IVA added as a customized feature, because adding all the different and beautiful plates into the general release of the software would require too much space. Having said that, there are currently two approaches to LPR.
- Parking LPR detects the license plates of parked vehicles in user-created zones, vehicles travelling slowly, or vehicles stopped at boom gates.
- Road Traffic LPR detects the license plates of moving vehicles, or vehicles stopped at a stoplight.

4. Object Recognition

Replace the Face Recognition IVA with any given object and you'll get the Object Detection IVA. This is where algorithms are used in training the software to detect and recognize a specific object, like a hot dog. There are a lot of different objects in the world (even more than there are license plate types!), so the training and size requirements add up quickly.

5. Business Intelligence

Dashboards in software showing data about various business activities are a valuable asset in just about any retail or enterprise setting. Using video analytics from within a dashboard to enrich and improve results should be part of everyone's toolbox. While the IVAs in numbers one through four above are widely used for surveillance scenarios, there are a multitude of business scenarios that can reap the benefits that video analytics offers. To see some great examples, check out how Gorilla is applying these to create intelligent solutions for multiple business markets and industries.

Putting the Video Analytic Idea Together

As you read above, these IVAs all orchestrate various algorithms to achieve and deliver results. Essentially, though, IVAs detect and determine whether a defined event or behavior occurs within a video camera's field of view and then notify the designated user of the finding. In a similar manner, most of us go through varying processes depending on whether we're searching for keys at home or for our friend in a busy station.

Video Analytic Processing Power

Thinking about the entire process, could there be a single solution that can do everything effectively?
It seems like an insurmountable set of tasks: from processing each single frame's analytic data and displaying it together with the video, to creating a complete video system with an array of user-selectable and customizable IVAs in a building or any other scenario, all the way to putting multiple systems together that report back to a central control center. It's not impossible, though. To demonstrate this, let's look at what IVAR™ from Gorilla can do and how it operates.

CPU and GPU Video Analytic Processing

Video analytics as a whole requires a lot of dedicated processing power. We should keep in mind here that before optimization and edge devices with capable CPUs, video analytics processed both video and analytic data on one machine and required additional GPUs to do most of the work. Technology, and the ability to split these two up, has advanced to the point that it's now possible to keep the video data at the edge while pushing the analytic data up the network for quick processing. One such technology, which Gorilla was the first to adopt, is the Intel® distribution of the OpenVINO™ toolkit. Using the OpenVINO™ toolkit to optimize IVAR keeps deployment and upkeep costs low while decreasing operating temperatures by minimizing the need for expensive GPUs.

Delivering and Deploying Video Analytics

Considering the multitude of IVA capabilities and applications in the world today, Gorilla is asked many things about delivering and deploying video analytics and the IVAR platform.

Q: How many video feeds can IVAR handle?
A: IVAR is a highly scalable solution that fits nearly any size system, from a single camera with one IVA to multiple systems with hundreds of cameras running multiple IVAs.

Q: I need a complete VMS with integrated IVA. Is IVAR right for my company?
A: From using it as a standalone, all-in-one video surveillance solution, to integrating via IVAR's open API, to adding it to an existing Milestone XProtect® system, IVAR excels at being versatile in suiting your needs.

Final Thoughts on Video Analytics

The next time you find yourself in a crowded station and need to locate a missing friend (which is hopefully never), think of how a computer attached to a camera might go about doing it. The way that video analytics works is an incredibly interesting and broad topic to cover in one article. If you made it this far, you should now have a solid understanding of how video analytics operates and how video analytic software solutions like IVAR are driving technology forward.
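To make the frame-over-time idea behind Behavior Analytics concrete, here is a minimal sketch of motion detection inside a user-defined zone, in the spirit of the Intrusion Detection IVA described above. This is a toy illustration rather than Gorilla's implementation; it assumes OpenCV is installed, and the zone coordinates and threshold are made-up values you would tune per scene.

```python
import cv2

# Hypothetical user-defined zone: (x, y, width, height) in pixels.
ZONE = (100, 100, 200, 150)
MIN_CHANGED_PIXELS = 500  # sensitivity threshold, tuned per scene

cap = cv2.VideoCapture(0)  # camera index, or a path to a video file
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # A behavior is action over time: compare this frame with the last one.
    diff = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

    # Only count changed pixels that fall inside the user-defined zone.
    x, y, w, h = ZONE
    changed = cv2.countNonZero(mask[y:y + h, x:x + w])
    if changed > MIN_CHANGED_PIXELS:
        print("Intrusion-like motion detected in zone")  # deliver the result

    prev_gray = gray

cap.release()
```

A production IVA adds background modeling, object classification, and debouncing on top of this, but the core loop (diff consecutive frames, threshold, test a zone, notify) is the same change-over-time idea described in the Behavior Analytics section.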
<urn:uuid:3d109a7d-571a-46cb-a875-61d1e5e25cd9>
CC-MAIN-2022-40
https://blog.gorilla-technology.com/video-analytics-technology/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00417.warc.gz
en
0.920018
1,943
2.703125
3
Is there a point you can reach where you no longer need to worry about a computer virus? Quick answer: no. You will never reach a point where you have a 100% guarantee that a virus can't infect your system. There are, however, a few steps you can take to get as close as possible to that 100% guarantee.

1. Anti-Virus Programs

Whether you are working from a desktop or a laptop, a PC or a Mac, a tablet or a phone, you need a quality anti-virus program. It is just good sense to invest in an anti-virus program that will work as hard as you do and help keep you safe. The truth is, though, an anti-virus program is only going to work if you use it properly. Some quick tips on using an anti-virus program:
- Don't use more than one at a time. Having three anti-virus programs running on your computer won't make you three times safer. Anti-virus programs don't play well with others; instead of working together, they will cancel each other out and you will end up with little to no protection at all.
- Don't randomly shut it off. There may be certain situations where you need to turn off your anti-virus in order for another program to work correctly. However, this should never be done without the permission of your IT support. They will be able to look into the program, find out why the anti-virus software doesn't like it, and help you determine how best to proceed. If it is a program you use sporadically, they might advise you to turn it off for a few minutes. But if it is a program you use every day, you want to dig deeper and really analyze the issue; you might need to use a different anti-virus program.

2. Follow Directions

Can you remember the last time you rebooted your computer? If not, it's time to reboot. When you reboot your computer, it allows updates to go through, which helps your computer run more efficiently. It also gives your hardware a break, even if it is only for a minute. You need to take proper care of your computer in order for it to last longer and run better. Your IT support has instructions for you on how best to take care of your computer, such as leaving it on every night for background work to get done, rebooting once a week to allow updates to go through, etc. It is really important that you pay attention to these instructions and try to follow them as closely as possible. These instructions are not given lightly, either; they are necessary in order for your IT support to keep your network running as smoothly as possible.

3. Education

The best way you and your employees are going to protect yourselves and your network from viruses is to educate yourselves. Having the best anti-virus program in the world won't protect your network if one employee keeps opening the wrong emails and clicking on dangerous links. Our recommendation is to have a quick training session once or twice a year to review how to stay safe on your computers and the internet. Demonstrate how to spot a bad link, explain why you don't want a million toolbars, etc.

Everyone wants that 100% guarantee, but when it comes to computer viruses, it doesn't exist. By educating yourself and following the advice of your IT support professionals, though, you can come pretty close. Just remember that even if you are doing everything right, and even if your IT support professionals have everything running the way it should be, there will still be those one or two super-sneaky viruses that find their way through. So make sure, when that does happen, to contact your IT support right away.
They will not only help you get rid of it, but they will also be able to see where it came from and work to figure out how to prevent it from attacking again. Have you ever had a really bad experience with a computer virus? We want to hear about it! Tell us in the comments.
<urn:uuid:10b5a6f2-d3cb-4211-b202-42074b46b382>
CC-MAIN-2022-40
https://www.networkdepot.com/when-dont-i-need-to-worry-about-a-virus/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00417.warc.gz
en
0.956144
864
2.59375
3
Every country's government has its secrets. Whether it's information on strategic military positions or data on its citizens, there has long been a need for cyber-intelligence defences and the protection of High Assurance computer systems. But, as everything from a country's power grid to its national transportation network comes online, the lines between government, civil, and industrial systems are becoming increasingly blurred. In today's digital world, an adversary taking banking systems offline, or causing mayhem on transport infrastructure, poses a threat to life that's every bit as real as a physical attack or traditional industrial espionage. Clearly, while many of these systems may be built and operated by commercial organisations, their importance to national defence can't be underestimated. It's vital, therefore, that while the risks to data will vary from country to country, strong cyber-security measures are put in place to protect it. Here we'll take a look at the steps some of the world's superpowers are taking to protect themselves.

A recent report published by the National Audit Office condemned the poor state of IT security across UK government departments, which may cast doubt on Britain's readiness for a cyber-attack. Central government, however, is much better prepared, and recently announced the formation of the National Cyber Security Centre (NCSC), headed up by experienced security professionals and with clearly laid-out plans for its approach to improving the state of national cyber security. Not only does the NCSC take a threat-based approach to the issue, involving active analysis of the types of attacks the government might realistically face, but it also eschews the scare tactics and reactive endpoint-security tactics traditionally used by vendors of IT security solutions. In addition, the NCSC has announced a policy of "active defence", or "hacking back the hackers". A controversial approach, particularly if used pre-emptively, active defence should be regarded as a necessary weapon in the fight against cyber-crime. And finally, as networks and software increasingly become the lifeblood of our daily lives and our country's critical infrastructure, the UK government has publicly acknowledged the importance of working closely with industry experts and more forward-looking companies to share the responsibility of keeping society safe.

Taking its own significant steps to defend against cyber espionage, the US recently passed the Cybersecurity Act of 2015, the main aim of which is to "provide important tools necessary to strengthen the Nation's cybersecurity". One particular focus of the Act is on making it easier for private companies to share information on cyber-threats with the government and other organisations. Early incarnations of the country's cyber strategy were driven by the realisation that a range of businesses, from tech giants like Cisco to online banks and financial institutions, were at serious risk from cyber-attacks. Financial interference and IP theft, even from private companies, are effective ways of degrading a country's capabilities, assets, and operational capacity, and should therefore be considered threats to national security. It no longer takes a physical war to disrupt a society when it's possible to reach straight into its citizens' living rooms and hold their digital lives to ransom.
The government clearly now recognises the importance of its citizens' online data, and the role the public sector must play in safeguarding this information. Indeed, the latest move in the Cybersecurity National Action Plan is for the government to work in partnership with commercial tech giants to help US citizens protect their online identities.

Published by the European Commission in July 2016 as part of a series of measures to raise the continent's preparedness to ward off cyber incidents, the NIS Directive is the first piece of Europe-wide legislation on cyber-security. Until recently, the defences and response systems implemented by various member states have varied in subtle but inconvenient ways, such as differing definitions of security levels and different models for security authorities and response bodies. The Directive's main aim is to enable an efficient, effective Europe-wide system of defence against cyber-attack by addressing many of these troublesome practical issues around harmonising the various standards of the EU's member states. In addition, the Directive also requires each member state to operate a Computer Emergency Response Team (CERT), and seeks to take greater control over the protection of "essential industries" such as power, water, transportation, and big finance as they undergo a process of digital transformation.

The Chinese government recently gave its approval to a broad new cyber-security law designed to tighten and centralise state control over the country's information flows and technology equipment. To comply with the new legislation, agencies and enterprises are required to improve their ability to defend against network intrusions while carrying out security reviews of the equipment and data employed in different strategic sectors. However, while this may appear to be a sensible approach, it has been criticised by many, and described by James Zimmerman, chairman of the American Chamber of Commerce in China, as "a step backwards for innovation." This new law doesn't come into effect until June 2017, so it remains to be seen whether it proves to be as restrictive to businesses as some are predicting.

"Digital India" is an ambitious and impressive programme designed to bring the whole country online and "transform India into a digitally empowered society and knowledge economy." Whether casting a vote or accessing public services, all interactions with the Indian government are soon to be made available via an easy, fast, and modern online system. It's hoped that the system will also be used to address non-governmental aspects of modern digital living, such as creating "private spaces in public cloud" and a secure system of "electronic and cashless financial transactions." Of course, while the system represents tremendous possibilities for a more streamlined and contemporary democracy and digital economy, it also presents significant opportunities for hackers and fraudsters. Indeed, helping to keep Digital India ahead of the latest cyber-threats is a key concern for those working on the initiative, whether they're experts in policy, government services, or security technologies such as PKI.

Facing a world of changing threats

From these examples alone, it's clear how approaches to cyber-security vary across the world. What is common, however, is the threat that cyber-attackers pose to a government's data, and to that of its citizens. Acknowledging this threat is the first step to defending against it.
Only by deploying a bold strategy that includes the most advanced and robust security techniques, combined with a strong understanding of the risks they face, will governments ensure the safety and security of the information they hold. The world in which we live is changing, and so are the threats we face on a daily basis. Governments around the globe must now ensure they're flexible and agile enough to recognise when the attackers are getting ahead, and act accordingly.
<urn:uuid:f933f32d-d243-4809-b613-8fead6f095aa>
CC-MAIN-2022-40
https://informationsecuritybuzz.com/articles/tackling-cyber-security-global-perspective/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00417.warc.gz
en
0.950698
1,463
2.9375
3
With the spread of COVID-19, many organizations have adopted work-from-home policies, and more people than ever are using online conferencing tools. Zoom, known for its simplicity and reliability, is the platform that has gained the most popularity in recent weeks. As of the beginning of April 2020, the company's shares have risen more than 200% since the start of the year, even as the S&P 500 has dropped about 20%. Meanwhile, Zoom's engineering operations team has been adding servers and other equipment at each of the company's 17 data center locations to accommodate its fast-growing user base. While convenient and easy to use, however, video conference applications such as Zoom still pose risks for users, including the possibility of eavesdropping, data theft, privacy loss, harassment, and more. In this article, we cover everything you need to know about cybersecurity and privacy in Zoom.

1. Mac Zoom Client Vulnerability

In July 2019, a vulnerability was discovered in Zoom's Mac desktop client: malicious websites had the ability to turn on a Mac's webcam without the user's knowledge. This vulnerability stems from how Zoom allows users to start or join a meeting simply by clicking a web link, which creates a local web server that runs on the user's machine. While this is convenient for users, it also enables meetings with video and audio to be launched without additional user authorization. Thus, while this sign-in method might be user-friendly, it's not security-friendly, since it allows attackers to start a meeting and turn on a computer's camera without the user's authorization. You can protect yourself from this vulnerability by disabling Zoom's ability to turn on your webcam when joining a meeting, as shown below.

2. Zoom Meeting ID Vulnerability

In January this year, researchers found that it's possible to exploit the way Zoom generates URLs for virtual conference rooms to eavesdrop on meetings. By using automated tools to generate random meeting room IDs, researchers found during tests that they could generate links to actual Zoom meetings without password protection 4% of the time. Zoom meeting IDs are composed of 9, 10, or 11 digits. However, if you don't enable the "Require Meeting Password" option or enable Zoom's Waiting Room, which allows manual participant admission, these 9, 10, or 11 digits (which hackers can discover fairly easily) are the only thing stopping unauthorized persons from connecting to your meeting. You can protect yourself from this vulnerability by ensuring you have the latest version of Zoom.

In late March and early April, three new Zoom vulnerabilities were discovered. These are not considered as serious as earlier vulnerabilities, since they require more work to execute: some of the attack vectors require hackers to access a victim's computer, while another employs social engineering to trick users into interacting with bad actors. Those impacted by these vulnerabilities, however, can still suffer from data theft and abuse. At the time this article was written, Zoom had not yet taken action to mitigate these vulnerabilities. As Zoom usage increases, we are likely to see more hackers and security researchers trying to find and exploit its vulnerabilities. Security flaws such as these are bugs that are occasionally discovered. The best way to protect yourself from them is to keep your software updated and follow the manufacturer's security guidelines. First, however, you should be aware of inherent privacy vulnerabilities.
There are three main privacy issues in Zoom that you should look out for:

1. Zoom knows if you are paying attention to the call. Whenever you host a call, you have the option to activate Zoom's attendee attention tracking feature. This feature alerts the call's host anytime someone on the call "does not have Zoom Desktop Client or Mobile App in focus for more than 30 seconds." In other words, if you are on a Zoom call and you click away from Zoom, the host of the call will be notified after 30 seconds, regardless of whether you minimized Zoom to take notes, check your email, or respond to a question on another app. This feature only works if someone on the call is sharing their screen.

2. Zoom collects and shares data.

3. Zoom gives hosts significant power. These capabilities include the ability to record meetings and order transcriptions, as well as the responsibility for keeping any meeting data safe, whether it's stored on a laptop or under a host's password in Zoom's cloud.

How can you protect your data? There are three easy ways to protect your privacy during Zoom calls:
- Use two devices during Zoom calls. If you are attending a Zoom call on your computer, use your phone to check your email or chat with other call attendees. This way, you will not trigger an attention-tracking alert.
- Do not use Facebook to sign in. While this saves time, it's a poor security practice and dramatically increases the amount of personal data Zoom can access.
- Look for an icon that tells you when a meeting is being recorded by the host. If you feel comfortable doing so, ask your host to turn on the feature that requires participants to provide consent before a recording can begin. If you're hosting a videoconference, we suggest you use this feature, which is turned off by default.

4. Virtual Events

With many now confined to home, people are increasingly using Zoom to host virtual events. Zoom has published its own tips and recommendations for maintaining security and privacy while managing such events:
- When you share your meeting link on social media or other public forums, that makes your event extremely public. ANYONE with the link can join your meeting.
- Avoid using your Personal Meeting ID (PMI) to host public events. Your PMI is basically one continuous meeting, and you don't want unexpected people crashing your personal virtual space after the party's over.
- Familiarize yourself with Zoom's settings and features so you understand how to protect your virtual space when you need to. For example, the Waiting Room is an extremely helpful feature that allows hosts to control who comes and goes.

1. Manage screen sharing

The first rule of the Zoom Club: don't give up control of your screen. You do not want random people in your public event taking control of your screen and sharing unwanted content. You can restrict the ability to screen share from the host control bar, both before and during the meeting.

2. Manage your participants

Zoom offers a number of options for managing meeting participants:
- Allow only signed-in users to join: If someone tries to join your event but isn't logged on to Zoom with an email to which the invitation was sent, they will receive the message "This meeting is for authorized attendees only." This option is useful if you want to control your guest list and invite only those you want to your events, such as colleagues or other students at your school.
- Lock the meeting: It's always smart to lock your front door, even when you're inside the house.
When you lock a Zoom meeting that's already started, no new participants can join, even if they have the meeting ID and password (if you have required one).
- Set up your own two-factor authentication: This option lets you generate a random Meeting ID when scheduling your event and require a password to join. You can then share that Meeting ID on Twitter but send the password only to invited participants via DM.
- Remove unwanted or disruptive participants: You also have the option to remove participants from your meeting.
- Put participants on hold: You can also put everyone else on hold, and the attendees' video and audio connections will be momentarily disabled. Other options include disabling a participant's video, muting participants, blocking file transfer through the in-meeting chat, disabling the private chat function, and more.

3. Try the Waiting Room

One of the best ways to use Zoom for public events is to enable the Waiting Room feature. Just as its name suggests, the Waiting Room is a virtual staging area that stops your guests from joining until you're ready for them. It's a bit like the velvet rope outside a nightclub, with you as the bouncer carefully monitoring who gets to enter. Meeting hosts can customize Waiting Room settings for additional control, and you can even personalize the message attendees see upon entering the Waiting Room so they know they're in the right place. The Waiting Room is an optimal location for posting any rules/guidelines for your event, such as its goals and who it's intended for. This video provides additional details.
<urn:uuid:cd006a37-cca2-4926-9482-3d3aa68a6c7b>
CC-MAIN-2022-40
https://www.cybintsolutions.com/everything-to-know-about-cybersecurity-and-privacy-in-zoom/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00417.warc.gz
en
0.930903
2,262
2.546875
3
Cloud Computing is one of the hot topics of the moment, and everyone has an opinion on it. The term "Cloud" covers a number of deployment scenarios, including PaaS (Platform as a Service), SaaS (Software as a Service), and IaaS (Infrastructure as a Service). In this article I'll discuss the latter, including the use of Cloud Computing to deploy infrastructure, servers, and storage from Cloud Service Providers. The National Institute of Standards and Technology (NIST) defines Cloud Computing as follows: "Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction." As we will see, although Cloud Computing should meet the above definition, there are also other requirements that should be considered before selecting a service provider.

Understanding IaaS, Infrastructure as a Service

It is important to understand what is meant by Infrastructure as a Service as we delve into what's available in the commercial marketplace. IaaS could be described as the foundation or lowest tier in the Cloud Computing stack. In short, it refers to the deployment of traditional infrastructure components such as servers and storage. Today this usually means virtual infrastructure, as the need to have dedicated servers is diminishing rapidly. However, it is not a requirement for Cloud Computing to be deployed in a virtualized environment. Using Infrastructure as a Service has a number of distinct benefits:
• It enables the customer to understand the provided resources in terms of components they would traditionally deploy in their own data centers. This means existing skill sets around server, database, and application administration can all be retained and re-used.
• It provides a degree of portability between service providers and the customer's own existing infrastructure, as deployment takes place on standard platforms such as Windows and Linux.
• There is no requirement to learn new application or programming frameworks, as there is with PaaS and SaaS.
• The isolation of resources at the virtual server level means the customer has control over the storage of data, including additional encryption and security measures.

IaaS is therefore a low-risk way to evaluate and dip a toe into the Cloud Computing universe. In order to evaluate which provider best suits your requirements, there are a number of considerations to weigh.

Cloud Computing is provided through the Internet, but at some point there are physical servers, storage, and networking equipment deployed in a data center on which your service will run. Therefore latency can be an issue, depending on the application you are deploying. Most IaaS providers operate from multiple locations. If they don't, then they are probably not worth considering, because (as we'll discuss later) operating out of a single data center presents issues around availability. As an example, Amazon Web Services (AWS) is available in 5 regions globally: Northern Virginia and Northern California in the USA (known as US East and US West respectively), Ireland in Europe, and Singapore and Tokyo in Asia Pacific. This geographic diversity allows applications to be provided globally with minimal latency impact. Look for service providers that can provide services in your region, and check the business continuity they provide for those locations.
IaaS Redundancy and Availability

One benefit of providing multiple locations is that of increased availability. The question, of course, is how that availability is implemented. AWS, for example, uses availability zones within regions. These are physically separate data centers (possibly in separate locations, but not guaranteed to be so) between which data is replicated. In the event of a failure at a single data center location, it should be possible to restart applications in another part of the availability zone. Unfortunately, a recent AWS outage highlighted the fact that the region and availability zone model is not infallible. Deploying across multiple regions or locations can increase availability. Infrastructure providers are unlikely to offer services to enable the automated failover and management of applications; therefore it will be incumbent on the customer to look at how geographic resiliency can be implemented.

As we start to discuss the provision of services, it is a good point to delve deeper into what those services actually are. There are two features that almost all IaaS providers offer, and both should be considered essential to offering a cloud-based infrastructure service. They are server/compute and load balancing. Servers, or "instances" as they are frequently known, represent the main compute resource in IaaS. Simply put, they will usually be instances of a virtual server running a standard operating system such as Windows or a Linux variant. The underlying virtualization technology used to support the servers isn't significant, although some service providers make a virtue of highlighting the hypervisor they use. Operating system choice for servers will cover both Windows and Linux platforms; the specific versions available will vary by provider. One point worth considering when choosing an O/S is the ubiquity of that platform across service providers. Windows Server 2008 and CentOS are universally available (with Windows attracting an extra charge for licensing). Other variants of Linux are less popular.
In addition to the two basic features discussed, some providers (notably Amazon) have a number of other offerings available. These include storage, database and messaging. With the news of recent hack attacks, including the high profile PlayStation Network, security sits high on everyone’s list. In a Cloud environment both logical and physical security is a concern. Poor physical controls can result in data breaches or worse, including prolonged outages. Logical security should ensure that unauthorized access can’t be achieved in what is a multi-tenant environment. Always review the security features of your Cloud provider to ensure they meet your standards or compliance rules. The definition on Cloud Computing from NIST states “minimal management effort” as a service goal. This is achieved by most providers using web interfaces displaying dashboards and control panels. Web-based management should be simple and easy to use but is unlikely to provide the features needed to deploy cloud infrastructures at scale. Application Programming Interfaces (APIs) enable Cloud computing to be integrated into existing business processes, including change control, provisioning and billing. And for organizations that already run their IT operations as a service to internal business customers, this will be mandatory. When multiple providers are used, APIs enable a common interface to be established, irrespective of where the computing resources are located. Finally we have the critical subject of cost. It may seem strange to discuss cost last, but in reality most providers are pretty close to each other in the cost of their services. Of course every provider will do differentiated pricing, including the costs of some services as part of the package and charging for others. It’s worth ensuring you know the full details of what your cost model is and more importantly how that translates into any Service Level Agreements if the service is unavailable or performs poorly. This is probably the most important aspect of service provision to understand; your business could be affected by an outage against which you have no claim. Infrastructure as a Service provides an easy way to start using Cloud Computing. Most providers offer the core services of server instances, storage and load balancing. When choosing and evaluating a service, it is important to look at issues around location, resiliency and security as well as the features and cost.
<urn:uuid:1ca4b28b-d0c7-4a9a-aab2-7dfb42590692>
CC-MAIN-2022-40
https://www.datamation.com/storage/comparing-iaas-providers-cost-security-location/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00417.warc.gz
en
0.933537
1,847
2.96875
3
The recent proliferation of the WannaCry ransomware has changed the face of this growing form of computer threat for several reasons:
- WannaCry represents a new evolution of ransomware that not only damages the infected host but also acts as a "worm" that actively attempts to infect any other reachable device. This resulted in exposure of backend systems, including UK hospital computers connected to MRI scanners, as well as infection of advertising billboards and unattended parking kiosks, which are often unpatched and running on an outdated, unsupported OS.
- Ties to Lazarus, a suspected elite hacking group from North Korea, are now being suggested. If determined to be true, this might be the first example of a widespread cyberattack involving a nation-state. Unlike the recent launch of military missiles in North Korea, a state-sponsored cyber-attack is not likely to elicit a military response.
- Blame is also being attributed to U.S. intelligence agencies for hoarding the knowledge of hundreds of known exploits, not to mention their dismal failure at preventing highly classified information about these vulnerabilities from getting into the hands of criminals.

This discussion won't focus specifically on WannaCry, other than to reiterate that it is an exploit of older versions of Windows using an attack vector that was revealed during a breach of the aforementioned government agencies. It was also quickly patched by Microsoft. The three-part lesson there is quite simple:
- Stay current on OS versions whenever possible
- Implement security patches as soon as they become available
- Maintain a good anti-virus solution

Oh, and educate users more formally on why they should never click on an unsolicited attachment or a hyperlink, and why comprehensive backups are critical. Instead, we will focus on how this type of attack may impact those running the uniquely architected IBM i operating system.

Ransomware on IBM i

Although IBM i is argued by many to be immune, we can categorically state that servers running IBM i can indeed be impacted by viruses and malware, including those like WannaCry running on a Windows machine that may have a connection. Any suggestion otherwise is a fallacy. There are numerous examples of IBM Power Systems servers falling victim to traditional viruses and even ransomware. The HelpSystems security experts recently aided a customer who discovered almost 250,000 infected files within their IFS! The good news, and ironically the reason for the misperception, stems from the fact that the IBM i operating system, along with native objects such as RPG programs and Physical Files (PF), is immune to infection. But immunity does not imply that those objects can't still be impacted via a rename or delete operation. And there are file systems on the server whose objects can be either infected as a carrier or encrypted and held for ransom. So how do we minimize the risk?

Protecting Your Server

First, I always recommend Powertech Exit Point Manager for IBM i to restrict user (or viral!) access to the IFS and the associated file systems. This should be employed in conjunction with strict management over defined shares, including never openly sharing the root, and as part of an overall control that should be applied to all network services, including FTP and ODBC. Next, leverage the QPWFSERVER authorization list to limit who can access the QSYS.lib directory structure through the file server. This activity is rarely required for business purposes, and restricting it can prevent impact on traditional files.
Note that this control is not effective against users that have *ALLOBJ special authority. On a related note, ensure that profiles don't have unnecessary access to the file systems or data. People often think that attacks come in anonymously, but that's rarely true. At some point, credentials are being compromised or leveraged, so ensuring that security best practices are followed for user connections, password policy, and object permissions is critical. We also need to ensure that viruses are detected before they deliver their payload. Unbeknownst to many, IBM i has contained anti-virus enablement features since V5R3. Part of the reason for this lack of awareness is that these controls are not beneficial until a native scan engine, such as the popular Powertech Antivirus for IBM i, is purchased and installed. We cannot comment on whether or not any HelpSystems customers were impacted by the WannaCry ransomware attack, but we have had customers reach out expressing concern over the attack. Though they weren't impacted, they saw this as a wake-up call and are now interested in taking action to protect themselves from future threats. Unfortunately, it can take attacks like these to get people to take action, but we are happy to see that this did serve as a wake-up call for some. If you want to learn more about how viruses and malware can wreak havoc on your IBM i systems, we have several other resources that can help:

Webinar: The Truth About Viruses on IBM i
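One generic way to spot ransomware damage on a shared file system, whatever the platform, is to look for files whose contents appear encrypted. The sketch below illustrates the idea with Shannon entropy, since encrypted data scores close to 8 bits per byte; it is a simple illustration, not Powertech's product, and the threshold and mount point are assumptions:

```python
import math
import os
from collections import Counter

ENTROPY_THRESHOLD = 7.5  # assumed cutoff; encrypted data approaches 8.0

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte of the sample."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(data).values())

def scan(root: str) -> None:
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    sample = f.read(64 * 1024)  # sample the first 64 KB
            except OSError:
                continue
            if shannon_entropy(sample) > ENTROPY_THRESHOLD:
                print(f"Possibly encrypted: {path}")

scan("/mnt/ifs_share")  # hypothetical mount point of a network share
```

Note that legitimately compressed formats such as ZIP and JPEG also score high, so a real scanner, like the native scan engines mentioned above, pairs this kind of heuristic with signatures and file-type checks.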
<urn:uuid:4b015cc4-58ad-4f49-8bc0-f3b4087fd4c5>
CC-MAIN-2022-40
https://www.helpsystems.com/blog/could-ransomware-wannacry-hit-ibm-i
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00417.warc.gz
en
0.956235
1,021
2.65625
3
Scientists created a new type of CPU that communicates using light instead of electrons

Unexpected miracle. This nickname and many like it have quickly been ascribed to a new kind of CPU created by a group of scientists from three US universities. Their research, published in Nature, came as a surprise, even though it promises huge changes in the future development of processors. What is it all about, then? It's a CPU that uses both photons and electrons, is amazingly fast when moving data around, and has been built the same way that current chips are. Twenty-two researchers from MIT, the University of California, Berkeley, and the University of Colorado Boulder put together a chip design that uses light instead of electricity to transport data. The chip uses photons for input and output operations (I/O); the computational operations themselves are done by a normal electronic core. This makes for blisteringly fast transfers. The researchers claim the CPU's throughput density reaches up to 300 gigabits per second per square millimetre, anywhere from ten to fifty times as much as current CPUs.

Specs of the photonic-electronic chip
- Die size: 3 mm x 6 mm
- Manufacturing process: 45 nm
- Transistor count: 70 million
- Photonic components count: 850
- CPU cores: 2
- Maximum CPU frequency: 1.65 GHz
- Theoretical throughput with all transceivers active: 550 Gb/s Tx, 900 Gb/s Rx

The scientists are sure this is a huge technological breakthrough. "This is a milestone. It's the first processor that can use light to communicate with the external world," said the chief researcher, professor of electrical engineering and computer sciences at the University of California, Berkeley, Vladimir Stojanović. The fruit of the cooperation between three universities is particularly sweet for data centers. Thanks to using photons, this kind of CPU uses much less energy for the same operations. Shifting one terabit of data per second off the chip takes just 1.3 watts. According to one of the researchers, Chen Sun, it's exactly this movement of data between CPUs, memory, and network components that eats between twenty and thirty percent of the energy used in data centers. By using the tech demonstrated by these scientists, data centers could save up to a third of their huge energy expenses.

We're reaching the point where every other article about a similar theoretical breakthrough would state something like: "Nevertheless, it's going to take a while before this new invention reaches the consumer market." But this research is different: the scientists actually designed the CPU so that it would be possible to manufacture it with current processes. And that's what actually happened. They designed the architecture and had it made to order by GlobalFoundries in New York. This means there's practically nothing preventing mass production of more of these chips. And because they don't need special manufacturing processes, they could even be relatively cheap. It's still not clear when these marvels will eventually reach our data centers, but the scientists have already set up two new start-ups that are supposed to sell these chips as well as develop them further.
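The energy figure is easy to sanity-check. Converting the quoted 1.3 watts per terabit-per-second into energy per bit takes one line of arithmetic:

```python
power_watts = 1.3        # power to shift data off the chip
rate_bits_per_s = 1e12   # one terabit per second

joules_per_bit = power_watts / rate_bits_per_s
print(f"{joules_per_bit * 1e12:.1f} pJ per bit")  # prints: 1.3 pJ per bit
```

At roughly 1.3 picojoules per bit for off-chip communication, the claimed data-center savings follow directly from Chen Sun's observation that moving data accounts for twenty to thirty percent of a data center's energy use.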
<urn:uuid:1cab6d35-36d1-434c-97ed-858ff535e0d2>
CC-MAIN-2022-40
https://www.masterdc.com/blog/new-type-of-cpu-chip-uses-light-for-io-the-future-of-computing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00417.warc.gz
en
0.939687
670
3.859375
4
Emails are an essential part of our work routine, and by now you have probably sent hundreds, if not thousands, of emails to your clients, colleagues, or business partners. However, most of us spend little or no time at all thinking about how emails actually work. Our main focus tends to be on the content of the email itself, and once we dot our i's and cross our t's, we are all set to click the send button. In reality, sending and receiving an email isn't as simple as it seems at first glance. There are multiple components necessary for an email to get from one destination to another, and one of the most important of these components is the email server platform. If you're curious to learn more about emails, we're here to take a closer look at email server software: what it is, how it works, and more.

What Is a Mail Server?

A mail server, also referred to as an email server, is essentially a computer system that sends and receives emails. Every email that is sent has to pass through a series of mail servers on the way to its intended recipient. As much as this process seems instant and simple, the reality is that a series of complex transfers takes place along the way. The reason why mail servers work is standard email protocols: a set of networking-software rules that allow computers to connect to networks everywhere so you can send emails, shop online, and browse the internet freely. The most commonly used protocols are SMTP, IMAP, and POP3 (we will elaborate on these in a moment). Sent emails can be accessed using a mail server in two ways: via a cloud-based email service or an on-premises email client. What are the differences?

Email service, also known as webmail, is a platform via which we send and receive emails using a web-based interface and a web browser; some of the most famous examples of these email services are Gmail and Yahoo.

Email client is software that we install on our computer, and we send or receive emails through an interface provided by that client's software; some of the most notable examples of email client software are Outlook and Thunderbird.

How Does Email Server Software Work?

Previously, we stated that an email server program works using a set of networking-software rules referred to as standard email protocols (SMTP, IMAP, and POP3). Let us take a closer look at what these protocols are:
- Simple Mail Transfer Protocol (SMTP) is used to send, receive, and relay outgoing emails between senders and receivers. When an email is sent, SMTP is used to transfer it from one server to another. Simply put, an SMTP email is just an email sent using an SMTP server.
- Internet Message Access Protocol (IMAP) allows easy access to your email wherever you are, from any device. When you're reading an email message using IMAP, you aren't actually downloading or storing it on your computer; instead, you're reading it from the email service. Therefore, you can check your email from different devices, anywhere in the world: your phone, a computer, etc. IMAP only downloads messages when you click on them, and attachments aren't automatically downloaded.
- Post Office Protocol (POP3) works by contacting your email service and downloading all of your new messages from it. Once the messages are downloaded onto your computer, they are deleted from the email service. In other words, once the email is downloaded, it can only be accessed from the same computer.
If you tried to access your email from a different device, the messages that had previously been downloaded wouldn't be available to you. To put it shortly, this means that mail is stored locally on your device, not on the email server. Many Internet Service Providers (ISPs) will give you email accounts that use POP.

Should You Host Your Own Email Server?

Whether you have a large business or a small one, you can decide between self-hosting and outsourcing hosting to a third-party provider. With third-party hosting, servers are rented to businesses by a hosting provider (a company that owns multiple mail servers). These hosting providers take complete responsibility for managing and maintaining their servers, which is one of the reasons why they are so popular. The other reason is that they are affordable. Additional features of paid hosting services include spam filtering, online storage, email backup, etc. Naturally, multiple free hosting services for custom domains can be found on the market. Unfortunately, just like with many free things, there are downsides. Most of these free hosting services are typically ad-heavy and lack much of the functionality that paid services offer. The biggest security risk that third-party hosting poses is a potential security breach. Such breaches can happen because you don't have direct control over the email servers, which is why it is very important to pay close attention to data security while choosing a third-party hosting service. With self-hosting, you own and operate your own internal mail server where your mail is stored. This allows you to avoid many of the issues found in third-party hosting and retain privacy, security, and control over the email servers. However, self-hosting doesn't come without a cost. One of the reasons why this type of hosting is expensive is the infrastructure necessary for it to operate, as well as the staff required to perform routine maintenance. In addition, as your business grows over time, you will need to add more servers to accommodate your email hosting needs. Both of the aforementioned options have their pros and cons, and you should base your decision on the specific needs of your business as well as the resources that you have at your disposal.

Choosing the Right Email Server Software

If you've decided to host your own email server software, you should do your due diligence and thorough research to decide what the optimal solution for your company is. Here are some useful questions you can use to better filter your research:
- Do I need an on-premise or a cloud-based solution?
- How complex is it to manage this platform?
- Is adding new users and managing/changing existing ones user-friendly?
- Is setting up a user's preferred email client a simple task?
- Does the platform offer a webmail feature?
- Are there any other useful features available?
- Is email archiving included in the solution, and if it is not, can it be easily integrated?

Your end decision should be based solely on the requirements of your organization, your resources, and your IT infrastructure. Keep in mind that as long as you are well informed, there really are no "bad" choices.

Using Email Archiving to Improve Your Email Server Software Performance

Using email archiving solutions can help you mitigate the issues that cause your email server to slow down and perform poorly, such as accumulating documents, images, spreadsheets, presentations, and other data.
These solutions do so by storing email-based data (attachments, attachment copies, PST files, etc.) in a dedicated and secure archiving destination instead of keeping it on your email server. Companies that find the optimal email archiving solution and implement it properly within their infrastructure are able to reduce their data archiving costs. They can automate the capture of both incoming and outgoing emails in real time, and these solutions also help them store emails, index email-based content, and so on. All this enables your email server to always operate at an optimal performance level.
<urn:uuid:114e8b85-108b-4247-966f-991cccfda624>
CC-MAIN-2022-40
https://jatheon.com/blog/email-server-software/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00617.warc.gz
en
0.944438
1,676
3.328125
3
A Wide Area Network (WAN) is the network used to connect various office environments over large geographic areas by linking the data center to remote locations. Generally, it consists of Multi-Protocol Label Switching (MPLS) infrastructure to connect branches to headquarters networks. However, this infrastructure was never meant to support cloud applications or the thousands of devices that we use today. Traditional WAN architecture also includes single points of failure, due to its reliance on one provider and one data center, leading to a lack of performance guarantees.

Modern WAN architecture needs to offer performance, reliability, cost-efficiency, simplicity, security, and intelligence. It should be smart, efficient, and easy to manage, and it should know how and when to appropriately steer traffic over various network paths, all while maintaining performance and reliability. It should help businesses adopt cloud-based services and applications while optimizing bandwidth and traffic from every area. Finally, it should keep everything secure regardless of the type of connection.

SD-WAN (Software-Defined Wide Area Network) brings "network virtualization" to traditional WAN infrastructure and controls the traffic over all circuits. This enables IT professionals to transfer the operational responsibilities of the WAN to a cloud controller and to seamlessly manage every branch network centrally. SD-WAN overlays a software-driven network on a physical network, breaking up the WAN into a set of capabilities and giving enterprises the ability to diversify their providers, network management, traffic, and monitoring functions. According to an IDC US Enterprise Survey, more than 70% of medium to large U.S. companies intend to implement SD-WAN by 2022.1 Enterprises that have adopted SD-WAN can choose from a plethora of service providers, transports, and locations to ensure optimum performance at all times and business continuity. This "network virtualization," or cloud network, routes traffic over multiple routes to a destination (or destinations), providing better performance and network uptime assurance. With SD-WAN, companies reported a staggering 94% reduction in network downtime.2

SD-WAN simplifies policy creation and network configuration by aligning business-level policy decisions with network policies. Businesses are able to add devices and circuits while keeping the old infrastructure, making the transition seamless. Control-plane servers can be very flexible, hosted on existing hardware or even in the cloud. This makes the implementation of SD-WAN highly cost-effective, due to the lack of new hardware components, and can reduce or even eliminate the need for data closets or even on-site IT.

Today's network requirements have far surpassed the capabilities of traditional WAN, hybrid MPLS, and internet broadband. Due to the massive influx of network-connected devices, traditional WAN infrastructures simply cannot keep up with the bandwidth required to run a business smoothly and securely in this day and age. Enterprises are implementing SD-WAN's networking capabilities in order to survive and adapt to rapidly changing business and technology landscapes. Furthermore, there is a high-cost, labor-intensive, and time-consuming process associated with the complex hardware, maintenance, and provisioning of in-house data centers and network operations.
Some enterprises simply do not have the funds, capacity, or size to maintain on-premise networks especially as they moves towards a total digital transformation. Adding to network complexity of on-premise network operations is being tied to a long-term contract with a single provider. The implementation of SD-WAN into your network infrastructure can solve many of the problems associated with last-century WAN architectures. Traditional WAN does not have the capabilities to support cloud applications or handle the demands on bandwidth that go along with digital transformation. With SD-WAN, a North American service provider experienced a 30% reduction in connectivity costs, as well as an incredible reduction in fulfillment time from an average of 21 days to just minutes.3 It’s a lot easier than you think based on the simplicity of SD-WAN and we’ll walk you through how to get a flexible, versatile SD-WAN solution for your growing enterprise. We’ll work with you to configure, program, and scale an SD-WAN based on your network and security access needs and get you set up with the right combination of carrier services. It’s very likely you won’t need many boots on the ground or new equipment deployments, but if you do, our global tech force is everywhere you are. The future of enterprise networking is SD-WAN. It’s cost-effective, flexible, and gives you the ability to utilize the full capability of carrier-provided networks and cloud infrastructure. SD-WAN simplifies the complex landscape of hardware, carriers, and IT headaches. You may also be interested in these posts. If you would like to receive our quarterly newsletter, View from the Edge, you can sign up here.
MTU stands for maximum transmission unit: the maximum packet length that can be sent on an egress interface toward a destination. MTU is an attribute of the egress interface and is typically considered over the full path from source to destination. MTU size differs from one medium to another and is often a source of issues when configured incorrectly in the network. The following posts will give you a deep understanding of what MTU is and how you can manipulate it and set it to the correct value.

The first post starts from the very basics, defining exactly what MTU and MRU (maximum receive unit) are and the different ways to manipulate them in a production network. In this post we also clarify PMTUD (path MTU discovery) and TCP MSS (maximum segment size) clamping, and how they are used to solve or avoid problems caused by MTU.

MTU and ping size confusion is another interesting post on the different implementations across network operating systems and how they can confuse network engineers, depending on how each interprets its ping command.

MTU is a very simple concept, but it needs to be understood thoroughly, especially in service provider environments and with multivendor equipment. Misconfiguration of MTU or of path MTU discovery (PMTUD) is a big source of problems that are sometimes very hard to spot and fix, because of the different behavior they exhibit on different paths from sources to destinations.

An MTU mismatch on neighbors' interfaces can also have an impact on IS-IS neighbor adjacency relationships. Mounir has explored this topic in this post, just in case.

Here is also a great tool that will help you visualize this process of calculating MTU size requirements based on the types of headers you expect on the network path. Make sure to bookmark it for future use.
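As a rough illustration of how header overhead drives these numbers, here is a small Python sketch (not taken from the posts above; the header sizes are the standard ones, and the helper names are our own) that computes the largest ping payload and a TCP MSS clamp value for a given path MTU:

# Common IPv4 header sizes in bytes (no IP or TCP options).
IPV4_HEADER = 20
ICMP_HEADER = 8
TCP_HEADER = 20

def max_ping_payload(path_mtu):
    # Largest ICMP echo payload that fits in one unfragmented IPv4 packet.
    return path_mtu - IPV4_HEADER - ICMP_HEADER

def mss_clamp(path_mtu, extra_overhead=0):
    # TCP MSS to advertise or clamp; extra_overhead covers tunnels,
    # e.g. roughly 24 bytes for GRE or 8 bytes for PPPoE.
    return path_mtu - extra_overhead - IPV4_HEADER - TCP_HEADER

print(max_ping_payload(1500))             # 1472: the classic "ping -s 1472" test
print(mss_clamp(1500))                    # 1460: default MSS on Ethernet
print(mss_clamp(1500, extra_overhead=8))  # 1452: e.g. behind PPPoE

A failing ping with a 1472-byte payload and the don't-fragment bit set is the usual quick check that something along the path has an MTU below 1500.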
The population of city dwellers is increasing across the globe, exerting immense pressure on existing resources and infrastructure. To accommodate the growing urban population, governments across the globe are devoting significant resources to building roads, buildings, and transportation systems. Still, many cities suffer from a lack of basic amenities. All of this has prompted city planners and governments to come up with the concept of the connected and smart city.

The term connected city is generally used to mean a more efficient, functional, accessible, and inclusive city. A connected city will use connected devices to change the way city dwellers work, commute, and even spend their leisure time. For example, many cities have already adopted smart traffic lights that automatically adjust their timing to maintain a smooth flow of traffic. Likewise, technologies such as wearable devices, parking sensors, garbage sensors, and air-quality sensors are slowly becoming part of modern cities, making them smarter and thereby improving the quality of life of city dwellers. Digital technologies are thus going to play a key role in making cities connected. For people living in cities, all of this will eventually translate into quicker responses from law enforcement agencies and public utility providers, better public transportation, and more efficient energy consumption.

Connected devices can save municipalities not only time and effort but expense as well. However, it is important to weigh the benefits against the costs of installing these devices so that the most feasible and suitable technology is incorporated. Other factors to take into account while prioritizing technologies are the size of the population and its most demanding needs.

The greatest challenge for connected and smart cities, however, will be data security and data privacy. Storing the huge volume of data generated by connected devices will be another big problem. Finally, to make sense of this enormous volume of data, connected cities will require IoT analytics. Without analytics, it won't be possible to understand the data and identify patterns in order to extract critical insights from the data collected by connected devices. So a city, in order to become connected and smart in the true sense, will need a proper system in place to address all of these challenges.
How prepared is your small or medium business (SMB) to withstand a major cyberattack or data breach? Do you have the digital infrastructure to stymie such an intrusion? Do you have the resources available to recover from an especially malicious attack? If you're unsure about any of these questions, you absolutely must invest in cyber liability insurance.

It's a common misconception that SMBs are relatively safe from cyberattacks due to their size. "Our business is too small to be targeted by hackers," you might say. "Besides, we don't have anything to hide anyway." This blind spot opens SMBs to a world of hurt. Hackers are more likely to attack SMBs because small companies rarely have the resources and infrastructure to thwart a cyber breach. Further, even if your organization doesn't cache top-secret documents, this doesn't mean you're safe from harm. Most cyber crooks are primarily interested in personal and financial information they can sell on the dark web or leverage for identity theft. Examples include names, addresses, Social Security numbers, medical records, credit card numbers, email addresses, passwords and the like. If you have employees, clients, customers or partners, you likely have this kind of information stored on your network.

Once the cat is out of the bag, recovering this data isn't cheap. The Ponemon Institute estimates recovering each stolen record costs roughly $217, so a breach of just 1,000 records would run over $200,000. Depending on how many records were compromised, the expenses can really add up.

Of course, not all attacks are focused on pilfering data. Sometimes hackers prefer to turn a quick buck through cyber extortion. For instance, a cybercriminal might hit your network with a ransomware attack before requesting compensation to lift the plague from your organization. Hackers are usually careful to set their blackmail fee lower than the cost of the losses you would otherwise face. However, you can still expect to pay thousands, or even tens of thousands, of dollars in cryptocurrency.

Similarly, distributed denial-of-service (DDoS) attacks prevent legitimate traffic from reaching your site or service by flooding your network with phony web requests until it crashes. As you can imagine, this is harmful to your reputation and your bottom line. But the troubles don't stop there. DDoS attacks are also used as smoke screens to obscure a secondary attack, such as a data breach or malware upload.

Luckily, there are a few techniques that can help prevent hackers from getting the better of your small business:
- Follow smart password protocols, including unique passphrases or leveraging a password manager.
- Build a culture of cybersecurity in your office by hosting routine cyber defense training sessions.
- Hire an in-house IT team or external defense agency to monitor your network for suspicious activity.
- Invest in a corporate firewall, virtual private network (VPN) and antivirus software for your small business.

However, even with all these safeguards, private organizations still fall victim to hackers and data breaches. In the end, it doesn't make sense to forgo cyber liability insurance. It's a simple and affordable answer to a costly and deleterious security problem. Do the right thing and visit CyberPolicy for your free cyber insurance quote today!
The data centers used to manage the country’s ballistic missile defense systems have major security weaknesses that could leave the US vulnerable to missile attacks, according to a newly declassified report from the Department of Defense. The report, released earlier this month by the DoD’s Inspector General, lists a number of security problems, everything from unlocked doors to unpatched software vulnerabilities dating back decades. One vulnerability, for example, dated back to 1990, but still had not been mitigated. “Officials… did not consistently implement security controls and processes to protect BMDS technical information,” the report said. That could allow enemies of the US to learn how to get around the missile defense system, “leaving the United States vulnerable to deadly missile attacks.”
The Ethics of ML and AI

AI will enable breakthrough advances in areas like healthcare, agriculture, education and transportation; in many ways it's already happening. But new technology also inevitably raises complex questions and broad societal concerns. As we look to a future powered by a partnership between computers and humans, it's important that we tackle these challenges head on:
- How do we ensure that AI is designed and used responsibly?
- How do we establish ethical principles to protect people?
- How should we govern its use?
- And how will AI impact employment and jobs?

To answer these questions, technologists will need to work closely with government, academia, business, civil society and other stakeholders, and focus on ethical principles - fairness, reliability and safety, privacy and security, inclusivity, transparency, and accountability - to guide the cross-disciplinary development and use of artificial intelligence for business and cybersecurity. In this talk we'll share the guiding ethical principles of AI and ML and have a discussion about how we can all work together to advance AI and ML use responsibly.
This article discusses how to exchange COVID-19 contact tracing information between countries that use contact tracing apps and infrastructure. It takes a structured look at the goals, challenges, opportunities, and limitations of this kind of interoperability between national systems. This article is not intended as an introduction to contact tracing, nor as an opinion piece on whether and how app-based contact tracing is an effective way to improve national and international management of the coronavirus pandemic.

Contact tracing is an epidemiological method for understanding and tracking the spread pattern of infectious diseases. It works by tracing back the contacts of an individual who at some point is diagnosed as being infected. The goal is to find contact persons in that individual's recent past so that they can be quarantined and tested. App-based contact tracing complements traditional contact tracing by using mobile phones and their sensors to create traces and thereby track past contacts. In general, mobile phones trace an individual by location, by their proximity to others, or by a combination of these methods. This information is collected by and stored on the phone, and then all or some of it is sent to a server. The “What is Contact Tracing? And how do the apps work?” video provides an overview of the various elements of this picture, and how communications between them work.

A major recent discussion has been how to handle the data that is being collected. There are two general approaches, and each is favored by several countries for designing and building their national systems:

The centralized approach forwards the data of all individuals to a central server, where it is stored and can be analyzed. The advantage of this approach is that countries have a comprehensive dataset for their analysis. The disadvantage is the loss of privacy, which in itself is an important good to contemplate, and which may also play heavily into the willingness of the public to use the contact tracing app.

The decentralized approach keeps data on the phone and only transmits data to the server when an individual is diagnosed as being infected. In that case, the history of that individual is transmitted to a server, which then distributes it to all other users' phones. The check for contact (and thus possible exposure to the virus) is then done locally on all phones. The advantage of this approach is that it preserves privacy, and thus may see better acceptance by users. The disadvantage is that countries do not have access to the full data (except for the anonymized identifiers of diagnosed individuals), and therefore cannot use it for analysis.

It is important to understand that in both cases countries will operate servers that are used by the apps for communications. But in the centralized case the servers store all tracing data of all users, while in the decentralized case they only store tracing data of users who reported themselves as being infected, and that data is anonymized and thus cannot be traced back to individual users.

The picture painted above assumes one server that handles all data. This is true at the level of individual countries. But when looking beyond country borders, there now is the problem that individuals have installed the apps provided by their countries, and these apps are communicating with their countries' servers. In this kind of scenario, contact tracing only works across residents of one country.

But as people start traveling again, there will increasingly be cases where individuals from one country come in contact with individuals from another country, and with isolated national solutions, app-based contact tracing will not work in such a scenario. It now becomes necessary to think about a federated scenario, in which individual servers (each one operated by a country) exchange data, and thus make it possible to trace contacts and raise exposure notifications across country borders. Such an approach would greatly increase the effectiveness of app-based contact tracing, in particular in regions of the world where international travel is common, and in light of the fact that with fewer restrictions in place, people will start traveling again.

The picture painted in the previous section is great as an ideal, but there are challenges along the way. The biggest one is the fundamentally different model of identity. In the centralized model, identity is known by the server, and thus data can be tied to individual identities. For sharing data internationally, the question is whether identities are revealed or some anonymization is applied. In either case, this can be managed by the servers exchanging the data, and thus data can be exchanged between servers following the centralized model.

In the decentralized model, identity is not revealed to the server. For example, the currently popular Apple/Google model uses advanced cryptographic methods to make sure that privacy is preserved for all participating users. The “What is Apple/Google Exposure Notification?” video explains these methods in more detail, but the important aspect is that all the servers have are anonymous so-called “temporary exposure keys,” which change once a day and are not connected to user identity. It is possible to exchange these keys across country borders, but only if all apps follow the same method of creating and storing them.

This fundamentally different approach to handling identity makes it very hard to even envision how to share contact tracing information between the two worlds of centralized and decentralized approaches. For example, when the Apple/Google model is being used, the only identifiers phones receive and store are anonymized identifiers, anonymized according to the specific scheme defined in the Apple/Google specifications for their “Exposure Notification” framework. Outside of this framework, these identifiers make little sense other than for aggregate data, such as counting the number of individuals who are self-reporting as having been diagnosed.

Even with the limitations outlined above, there are opportunities for international collaboration. The reason is that national practices will very likely gravitate around the two general models:

Countries choosing the centralized model will be in full control of the data they are collecting and managing, which means that for interoperability, they have control over how to manage and exchange identities. The main call to action here would be to work on a well-defined way to exchange information, so that there is a standard Application Programming Interface (API) between countries, instead of relying on custom-made bilateral ways of exchanging data.

Countries choosing the decentralized model will probably choose the Apple/Google model so that their apps have good device support on most mobile phones. But that model only defines APIs for Bluetooth and the app on the phone; it does not define APIs for how apps communicate with servers, or for the federation model of communications between servers. Defining these APIs would mean that countries would have more open models (by using an open API between apps and the server), and that countries using the Apple/Google model would have a relatively easy way to collaborate.

Looking at these options, it seems that there are considerable opportunities for international collaboration. But given the fundamentally different approaches to data management of the centralized and decentralized models, it seems questionable whether there can be meaningful exchange across these scenarios. This means that interoperability can likely be achieved within each of the two groups of countries outlined above, but not across the groups. Even if we accept that for fundamental reasons these two types of solutions will not be interoperable, we can at least move from the current picture, where all countries are essentially islands in terms of their contact tracing approaches, toward a scenario where there are two communities within which contact tracing data can be exchanged internationally. In terms of the effectiveness of app-based contact tracing, this already would be a very significant achievement.

This article is a first attempt to provide a structured view of the current goals, challenges, and opportunities of international collaboration in the space of app-based COVID-19 contact tracing. There also are some limitations, or at least caveats.

One such limitation is that it seems unlikely that the two fundamentally different approaches of the centralized and decentralized models can be bridged. The different identity models make it very hard to imagine a way to bridge two worlds with very different perspectives on privacy and, as a result, very different ways of handling identity.

Another limitation is that this kind of interoperability may cause scalability issues. For example, the Apple/Google model assumes that all anonymized identifiers of all diagnosed individuals are forwarded to all phones. Because of the decentralized model, this is the only way to match the data of individuals with a positive diagnosis against everybody who might have been in their proximity. While this already produces a substantial amount of data to be exchanged, it becomes even more critical when many countries exchange this data, and this method of exchanging data must still scale if there are larger outbreaks with a large number of individuals diagnosed as being infected. All APIs and implementations in this scenario would have to be designed and tested to handle this kind of scale.

Given the economic impact of COVID-19 lockdowns, the wish and need to lift restrictions is very understandable. However, this also means that until a vaccine is available, it will remain necessary to manage infection events and trace outbreaks. Contact tracing will be an important method in this area, and app-based contact tracing is a part of this method.

Looking at app-based contact tracing beyond the national scope is only in its infancy. For countries following the centralized model, this means thinking about how the centralized dataset can be meaningfully matched with the datasets of other countries. For countries following the decentralized model and likely using the Apple/Google Exposure Notification framework, this means widening the scope of this framework to cover not just device APIs, but also APIs to the server and server-to-server federation APIs.

It can be said with some certainty that our understanding and implementation of app-based contact tracing will evolve over the coming months. We should therefore also make sure that we follow established practices of API design and management, meaning that we design them in open and extensible ways. With our evolving understanding of how to design and use contact tracing apps, we can then evolve the ways in which the components of this international network communicate.
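To make the decentralized flow concrete, here is a deliberately simplified Python sketch of the matching step. This is not the Apple/Google implementation (the real framework cryptographically derives rotating Bluetooth identifiers from the daily keys), and the download helper and its endpoint are hypothetical; the sketch only illustrates how a phone could match locally observed identifiers against diagnosis keys aggregated from several national servers:

from typing import Iterable, Set

# Identifiers this phone heard over Bluetooth during the retention window,
# stored locally and never uploaded (the decentralized model).
observed_identifiers: Set[bytes] = set()

def download_key_batch(url: str) -> Set[bytes]:
    # Stub for an HTTP call to a hypothetical /diagnosis-keys endpoint.
    return set()

def fetch_diagnosis_keys(server_urls: Iterable[str]) -> Set[bytes]:
    # Pull anonymized diagnosis keys from each national server; in a
    # federated setup each server could also mirror its partners' keys.
    keys: Set[bytes] = set()
    for url in server_urls:
        keys.update(download_key_batch(url))
    return keys

def check_exposure(server_urls: Iterable[str]) -> bool:
    # Runs entirely on the phone: the local contact history never leaves it.
    diagnosed = fetch_diagnosis_keys(server_urls)
    return len(observed_identifiers & diagnosed) > 0

The point of the sketch is the direction of data flow: diagnosis keys travel from servers to every phone, while the phone's own contact history stays local, which is exactly why federation multiplies the volume of keys each phone must download.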
A Brief Introduction to NetFlow

NetFlow is data generated by network devices – routers, switches, firewalls, etc. – that contains information about the data moving through the network. The term NetFlow is often used generically to refer to this type of information, but "NetFlow" is actually proprietary to Cisco. Other vendors have their own versions, such as J-Flow from Juniper, and sFlow. There are also different versions of NetFlow. The most commonly used are v5 and v9 (which includes some additional information not available in v5). IPFIX, also known as NetFlow v10, was created by the IETF as a common standard. This article discusses NetFlow in general and is relevant to most types of network flow data.

NetFlow is metadata – it's data about the data traversing the network. Even though NetFlow doesn't contain information about the contents of the data, it does provide extremely valuable insight into what's going on in your network, including (but not limited to):

|NetFlow data|What it tells you|
|Source IP address|Who is sending the traffic|
|Destination IP address|Who is receiving the traffic|
|Ports|The application utilizing the traffic|
|Class of service|Priority of the traffic|
|Device interface|How the traffic moves through your network|
|Tallied packets and bytes|The amount of traffic|
|TCP flags|Connection states|
|Packet timestamps|The exact time the traffic traversed the network|

In short, NetFlow helps you understand who, what, where, when, and how network traffic is moving through the network. But in order to take advantage of this insight, you need to do two things:

- Enable NetFlow or sFlow on your network devices. Be as inclusive as possible when determining which devices to enable flow export on; the more data you have, the more visibility you get – and the better prepared you are to quickly detect and mitigate security problems.
- Use a NetFlow collector that offers the monitoring and analysis capabilities you need. We'll discuss NetFlow collectors later in this article.

NetFlow for Real-time Monitoring

NetFlow was originally developed to help network admins get a better handle on what their network traffic looks like. Because NetFlow is extremely valuable for monitoring what's going on in the network and alerting when something undesirable happens, network operations teams often use NetFlow to identify performance issues. But NetFlow is also a valuable weapon in any information security professional's arsenal.

Network security is a nearly impossible job nowadays, with the constant evolution of threats from a wide range of sources. There are almost as many point solutions available as there are types of potential vulnerabilities. The problem is that even if you had the budget and manpower to deploy every kind of security point solution available, you still wouldn't be completely protected. That's because those tools help protect you against known threats. There is not and never will be a silver bullet, but leveraging NetFlow for information security can help you protect against unknown threats. This means you don't have to be on the lookout for a specific threat (which would require understanding its attributes in all potential permutations). Instead you can characterize normal operational network traffic patterns – and then quickly detect out-of-character patterns that could represent a security breach, even for unknown vectors and techniques. This could include incomplete TCP handshakes, multiple failed login attempts, unexpected connections, unusual volumes of data leaving the organization, traffic from known bad hosts or blacklisted systems, and much more. (A minimal example of this kind of baseline-and-alert logic appears at the end of this article.)

NetFlow for Forensic Analysis

Real-time monitoring helps you identify security problems quickly, before a significant amount of damage is done. But that's just the first step. NetFlow also provides infosec professionals with valuable forensic analysis capabilities. A NetFlow collector consolidates flow data from across multiple devices and interfaces, which means that you don't need to check individual logs. This not only vastly speeds your ability to find critical information about an incident, it also provides a consolidated and comprehensive view of network traffic. You get a complete timeline that shows you what happened before, during, and after an attack. And you can easily drill down to the most granular details, or drill up to see trends.

This fast but comprehensive visibility enables infosec professionals to react very quickly when there's a security breach. But savvy organizations also use NetFlow's analysis capabilities for proactive cyber hunting, which seeks to identify more unknown threats – and make them known – before they hit and cause damage. In either case, your ability to construct a timeline of what happened requires that you retain NetFlow data for the time period in question. Since flow data is compact, it's an effective way to provide the detail you need while keeping data going back far enough in time to have full context.

Not All NetFlow Collectors Are Equal

As mentioned above, simply enabling NetFlow doesn't deliver all of these monitoring and analysis benefits. You need a NetFlow collector that uses the data and provides you with an interface to perform required tasks. There are many NetFlow collectors available, ranging from limited-functionality freeware to enterprise-grade solutions. As you evaluate the options for your organization, keep the following questions in mind.

How many flow types and interfaces does the collector support? Some NetFlow collectors limit the number of interfaces supported. And if your organization has devices with different types of flow data (NetFlow, J-Flow, IPFIX, sFlow, etc.), make sure the system you select supports them all so you get maximum visibility – and protection.

How easy is configuration and tuning? Look for a NetFlow collector with an easy-to-use interface that simplifies adjustments to tailor the system to your organization's attributes and requirements.

Does it provide advanced alerting and reporting capabilities? Alerting is critical, but it's only useful if you get the right alerts at the right time and in a way that supports your workflows.

Does it integrate with other solutions you've deployed? When your NetFlow collector integrates with mitigation and other security tools, you can streamline reaction times and improve security visibility and effectiveness across the board.

How long – and how completely – is flow data retained? Look for systems that offer a high-speed database architecture enabling full recall of all network flows. This allows virtually unlimited traffic volumes to be analyzed.

Is multi-tenant support available? If you are an ISP, managed security provider, or other organization that needs to support multiple separate customers or business units, make sure your NetFlow collector can handle multiple end users through a single instance.

Does the solution support clustering and load balancing? Scalability is always an important consideration, and you want to make sure that your NetFlow collector supports unlimited scalability with clustering and load balancing.
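As promised above, here is a minimal, self-contained Python sketch of baseline-and-alert monitoring over flow records. It is illustrative only: the record fields mirror common NetFlow v5 attributes, but the thresholds, field names, and alerting logic are our own assumptions, not those of any particular collector:

from collections import defaultdict
from dataclasses import dataclass

@dataclass
class FlowRecord:
    src_ip: str
    dst_ip: str
    dst_port: int
    bytes_sent: int   # tallied bytes for the flow
    tcp_flags: int    # cumulative flag bitmask; SYN = 0x02, ACK = 0x10

def detect_anomalies(flows, egress_baseline_bytes=10_000_000):
    alerts = []
    egress = defaultdict(int)    # bytes sent per internal host
    syn_only = defaultdict(int)  # half-open connection attempts per host
    for f in flows:
        egress[f.src_ip] += f.bytes_sent
        if f.tcp_flags & 0x02 and not f.tcp_flags & 0x10:
            # SYN seen but never an ACK: possible scan or failed handshake
            syn_only[f.src_ip] += 1
    for host, sent in egress.items():
        if sent > egress_baseline_bytes:
            alerts.append(f"{host}: unusual egress volume ({sent} bytes)")
    for host, count in syn_only.items():
        if count > 100:
            alerts.append(f"{host}: {count} incomplete TCP handshakes")
    return alerts

A production system would learn per-host baselines from history rather than use fixed thresholds, but the shape of the logic is the same: aggregate flows, compare against what is normal, and alert on the outliers.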
The following validation set and its prediction values shows an R2 value of 0.69, whereas the DataRobot result shows 0.65. This is not specific to this one dataset: whatever model I use, when I check the validation set and calculate its R2, it's slightly different from what DataRobot is showing. Am I missing anything?

R-squared refers to the 'goodness' of fit for a particular model with no regard for the number of independent variables, whereas adjusted R-squared takes into account the number of independent variables. So if you have a regression equation such as y = mx + nx1 + ox2 + b, the R-squared will tell you how well that equation describes your data. If you add more independent variables (p, q, r, s, ...) then the R-squared value will improve, because you are in essence more specifically defining your sample data. Using the adjusted R-squared metric instead takes into account that you have added more independent variables and will 'penalize' the result for the variables you add which don't fit the sample data. This is a good way to test the variables, either by adding them in one at a time and checking when the adjusted R2 starts to deteriorate, or by starting with all the variables and removing them one at a time until the adjusted R2 doesn't improve.

Hi Manojkumar and Erica, This was a good question. I had to do some digging to find the answer 🙂 There are several methods for computing R2, and their results don't always match. We use the most general definition of R2, which you can read about in detail on Wikipedia: 1 - (residual sum of squares) / (total sum of squares). Here is some R code that explains the calculation more thoroughly:

a <- c(17.98, 35.61, 54.16, 58.69, 77.57, 141.14, 161.05, 178.8)
p <- c(63.6761, 40.0788, 79.47874, 56.8481, 97.33846, 157.1376, 106.6461, 127.3321)
# Manual method
SSE = sum((p - a)^2)
SST = sum((mean(a) - a)^2)
R2 = 1 - SSE/SST
print(R2) # 0.6550015
# Package method
print(MetricsWeighted::r_squared(a, p)) # 0.6550015

The residual plot uses the same approach, but downsamples some of the data, specifically if there are more than 1000 data points. So you may see some differences here as well. I hope this helps. Thanks for posting!

Well I'm curious now :0) @emily or anyone else from DataRobot ... Can you tell us what kind of R-squared is used in the residuals tab? It doesn't specify in the documentation. Thank you!

There are several types of R² - in addition to the calculation that you will have learnt in school there are also:
Adjusted R² - which accounts for the effect of adding more fields to the data (adding fields can "artificially" fit the data).
Predicted R² - which directly checks the prediction by rerunning the model with data points held out and checking its prediction against those points.
Both these values will be lower than the "vanilla" R² but will be more accurate. I am not sure - I am trying to check the documentation to see - but I imagine that DataRobot would use one of those metrics rather than the standard one.
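For anyone who wants to see the adjustment concretely, here is the standard adjusted R² formula in a short Python snippet (this is the textbook definition, not necessarily what DataRobot computes internally):

def adjusted_r2(r2, n_obs, n_predictors):
    # Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - p - 1),
    # where n = observations and p = predictors.
    return 1 - (1 - r2) * (n_obs - 1) / (n_obs - n_predictors - 1)

# Example: the R^2 of 0.655 above came from 8 observations; with 1 predictor:
print(adjusted_r2(0.655, n_obs=8, n_predictors=1))  # ~0.5975

Note how the penalty grows as you add predictors without adding observations.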
Veterans who opt in to sharing their DNA through the program have the opportunity to support medical discoveries and health improvements across future generations.

Veterans across America now have the ability to join the Million Veterans Program research project online. Launched in 2011, the national voluntary research initiative enables former military personnel to share their DNA with the Veterans Affairs Department to boost research on how genes and other factors, like military exposure, can impact people's overall health. The agency's ultimate hope is that this "will make it even easier for Veterans nationwide to take part in this landmark research effort," Secretary Robert Wilkie said in the recent announcement.

MVP enables veterans who opt in to complete surveys about their health, lifestyle, military experience, and personal and family histories, as well as make a one-time visit to a VA center to provide a blood sample for genetic analysis. Participants don't receive any follow-up information from the program to benefit them directly; instead they contribute the data to medical research in hopes of improving health across future generations. So far, more than 775,000 veterans have voluntarily supplied their DNA to the program, which researchers use to better understand how genes and other factors influence health, potentially leading to stronger treatments for disease.

MVP data has so far been put to use in more than 30 projects across the VA, supporting efforts to understand the roles that genes play in heart disease, cancer and suicide. In Connecticut's VA health care system, for example, researchers are using MVP data to identify specific genetic and clinical markers that can support the prediction of breast cancer risk. They aim to build a new screening strategy for detecting the life-threatening disease.

Under the new digital-first approach, veterans already enrolled in VA care can now use their credentials to log into MVP Online, explore the personalized dashboard and, if they're interested, use the portal to complete all elements of the consent process. Participants can allow access to their health records for research purposes, provide information about their health and backgrounds, and schedule a visit to supply a blood sample. The agency is also exploring ways to make blood sample collection easier for veterans who do not live near MVP collection sites.

"MVP has already resulted in a number of important scientific publications that increase our knowledge of conditions that affect Veterans' health, and we expect this resource to continue to prove its value over the coming years," Wilkie said.
As mobile operators begin their roll-outs of 5G technology, a Gartner survey reveals that two-thirds of organisations worldwide see enough benefit to their businesses that they would consider deploying their own networks. The rest of us are keen to understand more about the technology, what it means for our smartphone and device connectivity, and how it will impact our day-to-day lives. Mainstream network operators are now very actively promoting their 5G coverage, especially for mobile, but there remains confusion over how 5G works and the technologies it encompasses, including the innovative 5G mmWave technology that supports the highest-bandwidth features. At Blu Wireless, we've been developing mmWave 5G technology for ten years. Over that time, we have developed a deep understanding of how it can be applied to meet the needs of the most demanding 5G applications. Here is what to expect from 5G mmWave as this technology's implementation intensifies.

5G: What is it and how does it work?

5G is the next generation of mobile internet connectivity that will power businesses, homes and cities. The transition to 5G is different from the technological jump between 3G and 4G back in 2012. It brings together existing services, adding new technologies that focus on the applications rather than the communications that link them together. It enables significantly faster and more available communications, enabling remote or mobile use cases that were previously limited by speed, delay, reliability and cost, including transport, remote healthcare, manufacturing and entertainment.

How mmWave enables 5G

So far, every new generation of connectivity has been about getting the most out of the available radio spectrum at all ranges. Each generation has improved the radio links between towers and user devices, supporting ever more services. For individual links, this has been achieved with the roll-out of 4G, but existing spectrum bands are quickly becoming fully used. What is different about 5G is the ability to combine links and technologies in new ways, and that new bands of spectrum are being leveraged whose potential had previously been unavailable – such as mmWave. mmWave will power the future of 5G connectivity. Millimetre waves are very short wavelengths, ranging between 10mm and 1mm, created by very high frequency radios. The wavelengths are small but powerful – they can carry huge quantities of information. With expert engineering, they can provide reliable connectivity with fibre-equivalent data speeds of 10Gbps.

How will we use 5G mmWave?

As well as vastly improving connectivity speeds for our smartphones when we need it, 5G mmWave opens up exciting opportunities for a huge range of consumer and commercial use cases.

It will enable real-time services within our towns and cities every day. Tiny mmWave units can be installed on existing roadside lampposts with minimal disruption, bringing high-speed connectivity to city infrastructure, vehicles and user devices. Local authorities will be able to deliver services efficiently in the community and maintain the environment in real time – for example, checking and responding to pollution levels, traffic flow and energy usage – thanks to ubiquitous 5G IoT sensors.

Connected and Autonomous Vehicles (CAVs)

It will become an everyday part of the connected and autonomous vehicles of the future. Freed of the need to drive ourselves, we will be able to use the time for work and relaxation, as well as enhancing our engagement with the journey itself. Connectivity on high-speed trains will also undergo a 5G transformation – a movement that is already beginning across the UK, starting with FirstGroup's 5G mmWave implementation. Their mmWave track-to-train network will bring on-board WiFi with the speed of fast fibre broadband to every passenger.

As logistics and supply chain businesses get smart, merging digital and physical technologies, 5G mmWave will play a key role in ensuring competitive and environmentally friendly manufacturing. Connected manufacturing improves yield and quality and, by focusing maintenance on the areas where it is most needed, increases uptime. Augmented reality (AR), robotics and connected machinery will require the ultra-fast data transfer speeds and reliability that mmWave can provide, thanks to its higher-frequency bandwidths.

AR, VR and Video Streaming

Low-delay connectivity delivers virtually unnoticeable latency – which is a huge facilitator for the consumer entertainment sector, as well as professional and industrial applications. VR, AR, gaming, live streaming and video calling will all become substantially more useful with 5G mmWave. This will equally widen the scope of how companies leverage experiential activities for customer engagement.

Remote healthcare is perhaps the most demanding use case for telecommunications, dependent on IoT sensors, high-bandwidth imaging, resilient control and monitoring, and wide coverage for its effective operation. High-frequency connectivity provides consistent internet access for virtual visits to the doctor, as well as for wearable health monitoring technology powered by 5G sensors. If you want to learn more about the subject and virtual healthcare, the Liverpool 5G Testbed is a good example of how 5G mmWave technology can be implemented to support health and social care providers and their patients.

The Future of 5G mmWave

With its speed, reliability and ease of implementation, 5G mmWave is set to drive many exciting developments across multiple industries in the years to come. From our work to our health to how we travel, this technology will bring a tangible improvement to our daily lives and how we experience services that rely on connectivity – it's an exciting transition that everyone stands to benefit from. At Blu Wireless, we want to ensure the benefits of our mmWave technology can be felt as soon as possible. We are already deploying our mmWave solution for 5G use cases at home and abroad in many of the above industries, from smart cities to connected vehicles. Get in touch today to find out how you could leapfrog your industry's 5G rollout with our mmWave technology.
What is Data Gravity? How it Can Influence Your Cloud Strategy

The amount of data generated every day is amazing. The latest statistics show that 1.7 MB of data is created every second, or 146.88 zettabytes every day. While your business may generate just a small slice of this massive amount, effectively managing all of its data has become a challenge for even the largest enterprises. Artificial intelligence, machine learning, deep learning, advanced data analytics, and other data-intensive applications provide better insight than ever before. However, managing and utilizing these large datasets requires a new way of approaching your cloud architecture. In order to do this, it's important to understand the concept of data gravity.

What is Data Gravity?

When working with larger and larger datasets, moving the data around to various applications becomes cumbersome and expensive. This effect is known as data gravity. The term data gravity was first coined by Dave McCrory, a software engineer, to explain the idea that large masses of data exert a gravitational pull on IT systems. In physics, objects with sufficient mass pull objects with less mass toward them; this principle is why the moon orbits the earth and the earth revolves around the sun. Data doesn't literally create a gravitational pull, but smaller applications and other bodies of data do seem to gather around large data masses. As datasets and the applications associated with them continue to grow, they become increasingly difficult to move. This creates the data gravity problem.

Data gravity hinders an enterprise's ability to be nimble or innovative whenever it becomes severe enough to lock you into a single cloud provider or an on-premises data center. To overcome the consequences of data gravity, organizations are looking to data services that connect to multiple clouds simultaneously.

How Does Data Gravity Influence Your Cloud Strategy?

As providers like AWS, Azure, and Google Cloud compete to be the primary cloud computing provider for companies, it seems like they all have a pitch to convince you to migrate to their cloud. Adopting one or more clouds might make things run more smoothly for your business needs, but does it make sense for your data? The massive amounts of data generated — both the scope of the datasets and the gravitational pull of that big data — multiply the requirements for additional capacity and services to utilize it. Data gravity encompasses what happens to big data in cloud services. For many enterprises, the associated costs are crushing. Large datasets can increase the fees to access your data — the costs to host, replicate, and sync duplicate datasets can all strain your budget and business success.

There are two challenges in overcoming the gravitational pull of massive amounts of data: latency and scale. The speed of light is a hard limit on how quickly data can be transferred between sites, so placing data as close to your cloud computing applications and services as possible will reduce latency. And as your data increases in size, it becomes more difficult to move around. Let's look at a couple of cloud strategies organizations use to address the major challenge of data gravity.

Data Gravity and Latency

One approach to reducing latency is putting all of your data in a single cloud. Like the proverbial warning about putting all of your eggs in one basket, this path introduces a few drawbacks:

- Compatibility — Your cloud provider's storage solutions may not fit your use case as well as you might want or need, and may require additional services for functionality that may not be expected or budgeted.
- Fees — Not only are you paying the base data storage costs with cloud provider storage, but cloud providers may charge you transaction and egress fees when you need to access your data.

Each cloud provider promises agility, flexibility, lower costs, and superior services and toolsets, but the reality can be unforgiving. Instead of increased agility and flexibility, your developers may become hamstrung by the single-cloud implementation. Things may start out fine as you begin with a vendor that meets your needs at the time, but as time passes, better solutions may become available. Instead of being able to use these better options, you may get trapped with a provider ("vendor lock-in") because it's too difficult or expensive to move the data. Instead of lowered costs, you're sitting on a mountain of egress fees or paying for a mismatch in performance levels.

Data Gravity, Storage, and Cloud Computing

Duplicated data, outside of backups or DR strategies, is wasteful, so maintaining a single big data repository or data lake is the best way to avoid siloed and disparate datasets. Rather than using a data warehouse, which requires conformity in data, a data lake with appropriate security can handle your raw data and content from multiple data sources. A data lake with cost-effective scalability seems easy enough, and it can be, depending on an enterprise's data needs. Many organizations have a suitable on-premises data lake, but accessing that data lake from the cloud has several challenges:

- Latency – The farther you are from your cloud, the more latent your experience will be. For every doubling in round-trip time (RTT), per-flow throughput is halved. This makes slowdowns more likely, especially for data-intensive analytics that leverage artificial intelligence and machine learning.
- Connectivity – Ordering and managing dedicated network links, such as AWS Direct Connect or Google Cloud Dedicated Interconnect, can be costly. Balancing redundancy, performance, and operational costs is difficult.
- Support – Operating and maintaining storage systems is generally expensive and complicated enough to require dedicated expert personnel.
- Capacity – A location and infrastructure plan, and a budget for growth, are required.

On-premises data lakes can address latency by colocating closer to public cloud locations and by purchasing direct network connections. Still, the cost is prohibitive for midsized companies that wish to leverage the innovative services of multiple clouds.

The Multi-Cloud Solution: Avoiding Vendor Lock-in

According to Gartner, by 2024 two-thirds of organizations will use a multi-cloud strategy to reduce vendor dependency. Cloud-native storage tiers on AWS, Google Cloud, and Azure can be matched to the performance and access frequency of different types of data processing, but can only be accessed from their own cloud location. If your developers and business teams use multiple services from different clouds that all need access to the same body of data, these cloud provider storage solutions may become a trap. External, remote, or cross-cloud access may be closed off. Cross-availability-zone access within the same cloud, and replication, can become more difficult.
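To make the latency point above concrete, here is a small Python sketch of the classic Mathis et al. approximation for TCP throughput, which shows directly why doubling the RTT halves per-flow throughput (the MSS and loss-rate numbers below are illustrative assumptions, not measurements from any particular cloud):

from math import sqrt

def tcp_throughput_bps(mss_bytes, rtt_s, loss_rate):
    # Mathis model: throughput <= (MSS / RTT) * (C / sqrt(p)), with C ~ 1.22.
    # A rough upper bound for a single standard TCP flow.
    return (mss_bytes * 8 / rtt_s) * (1.22 / sqrt(loss_rate))

mss = 1460    # typical Ethernet MSS in bytes
loss = 1e-4   # assumed packet loss rate
for rtt_ms in (2, 10, 40, 80):  # cloud-adjacent vs. cross-country paths
    gbps = tcp_throughput_bps(mss, rtt_ms / 1000, loss) / 1e9
    print(f"RTT {rtt_ms:>3} ms -> ~{gbps:.3f} Gbps per flow")

Running this shows per-flow throughput dropping from roughly 0.712 Gbps at a 2 ms RTT to about 0.018 Gbps at 80 ms, which is why cloud-adjacent placement matters so much for data-heavy workloads.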
Even a simple method for seeding data, for example, can become a pain point. Sidestepping these issues may require duplicating your datasets for data analytics, adding cost and management overhead. While you may have solved the problem of vendor dependency, this approach still has data access and cost implications.

Overcoming Data Inertia: Future-Proof Your Data with Multi-Cloud

If an enterprise has sunk costs in equipment that may not be fully depreciated, legacy applications that are unsuitable for cloud-native deployment designs, or data compliance requirements, overcoming inertia to access the benefits of a multi-cloud strategy may seem impossible. It's helpful to change the goal from "how do I get my app in the cloud?" to "how do I use my data from the cloud?" This change of perspective, which places your data at the center of your strategy, will help your organization chart a path that future-proofs your data and makes it possible to leverage competitive services from each of the clouds.

When solving for multi-cloud data access, ask these questions:
- How do I minimize real-time latency?
- How do I keep my data secure?
- What is the most efficient way to access my data from anywhere?
- How do I minimize my fees?

Some challenges, like cross-cloud access, can add such complexity or cost that a design becomes untenable. The real dream killer, though, is latency. No matter how awesome your storage array is, no matter how fat your network pipes are, storage performance is a function of latency, and distance is the enemy. Overcoming data gravity allows you to leverage big data for better insight and data analysis.

Where's the Right Location for My Data?

Where can you put your data that allows for multi-cloud access at low latency? Adjacency is the proper solution to latency, and the cloud edge is the logical answer, but what does that really mean?

Colocated Data Lake

Colocation data centers that are adjacent to cloud locations can enable data collection and access from multiple clouds, a significant improvement over the data duplication that comes with copies of the same data in each cloud's native storage. Because organizations often manage their own equipment in a colocation agreement, the responsibility of cross-referencing possible colo data centers with desired public clouds to validate low-latency requirements falls on the customer's organization. If that organization needs cross-region access, its business logic may require additional colocation sites (and higher costs). You may also have charges for extra regions with multi-cloud data services. Finally, cross-connects and private circuit options, along with hyperscaler onramps, introduce additional unknowns and will certainly increase the cost. Leveraging them safely and effectively may be more effort than you are prepared to shoulder.

Managed Data Services

Managed data services providers can offer the best of both worlds. They have already done the work of ensuring their data centers are located in close proximity to major hyperscale cloud providers, which means they can offer cloud-adjacent data lakes with low-latency, secure connections, as well as SLOs suitable for your unique workloads and use cases. For additional efficiencies, big data management and cloud providers can offer a familiar storage platform that you can easily consume without the burden of managing and supporting it yourself, bundled with access and service offerings that connect to and augment the resources and services of your clouds of choice. Leveraging your preferred platform directly from multiple cloud edges is critical to crafting a more expert and reliable multi-cloud environment.

The shortcomings of existing solutions are laid bare when high performance and multi-cloud access are needed. Making the cloud edge the central pivot point for your data workflows enables simultaneous access from multiple clouds and unlocks the innovation and flexibility of multi-cloud. It resolves the latency and performance bottlenecks of on-premises or unoptimized data center locations while greatly improving access and availability. Seeding data, configuring DR, and migrating data out become nearly painless. Best-of-breed storage services and toolsets are available from any cloud provider. Compliance and security become easier to understand and manage. Finally, cloud arbitrage is possible, allowing you to deploy or shift workloads depending on cloud provider pricing or resource availability, enabling application-level high availability (HA) across clouds. With data at the center of your multi-cloud world, the options are endless.

Unlock the Unrealized Value of Your Data

Contact the multi-cloud and digital transformation team at Faction today. We make complex multi-cloud technology simple, helping you avoid data gravity black holes. Let us help you unlock more value from your data for better insight and improved data analytics.

About Dan: Dan is a Senior Storage Engineer and Infrastructure Architect who has been with Faction for 8 years, focusing on hybrid and multi-cloud storage architectures.
ThreatSCOPE Use Case: Healing the Heartbleed OpenSSL Vulnerability

In a previous post we showed how ThreatSCOPE can be used to identify and analyze the Heartbleed vulnerability in the OpenSSL library. In this post we will show how ThreatSCOPE can be used to insert code to heal it.

For this example, the OpenSSL library is used in the common case of providing SSL for an Apache web server. This web server is taken from an ARM-based Raspberry Pi embedded system. We ran this web server on the Raspberry Pi and verified with a Heartbleed test script that it is vulnerable. As shown by the output from the script, the server is found to be vulnerable to Heartbleed.

To show how we can use ThreatSCOPE to insert code to heal the Heartbleed vulnerability, we wrote a small C function which identifies and filters SSL heartbeat packets, which are the ones exploited by Heartbleed. To review, the Heartbleed vulnerability resulted from an unchecked call to memcpy(), where a client provided both the content that was to be copied and the size to be copied. This enabled a malicious user to provide a much longer length than the size of the payload that was actually provided. Based on this, the code for healing this vulnerability identifies malicious heartbeat packets and limits their length field to the length of the payload that was actually provided.

We then compiled the source code for this function and used ThreatSCOPE to insert it such that it runs after each SSL packet is received. After choosing to insert code, ThreatSCOPE brings up a new window for customizing how the code will be inserted. From here we chose the desired insertion location, at the beginning of the basic block that was selected, and chose to insert this code using a procedure call. Since we are inserting this code as a procedure call, we also need to tell ThreatSCOPE which active registers at this point in the program should be mapped to which procedure argument registers. Our filter_heartbeat() procedure takes a single argument, which is a pointer to the SSL packet it will filter. Since the pointer to the SSL packet resides in r11 at this point in the program, we instructed ThreatSCOPE to map r11 to the first argument register, r0. These code insertion options, as well as the file name for the new web server executable, are configured in the same dialog.

Finally, after clicking the "Insert Code" button, ThreatSCOPE performs this code insertion and generates a new executable for the Apache web server with the healing code inserted. Afterwards, it shows the updated control flow graph of the procedure where the code was inserted, side by side with the original control flow graph, with the newly inserted code shown in green.

After performing this code insertion, we installed the new Apache web server on the Raspberry Pi and ran it to verify that the Heartbleed vulnerability had been healed. Once again we ran the same Heartbleed test script and targeted the Raspberry Pi. As shown by the output from the test script, the Heartbleed vulnerability was successfully healed and the web server can no longer be exploited.

Learn more about ThreatSCOPE by downloading BlueRiSC's recent whitepaper.
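The filter_heartbeat() source itself is not reproduced in the post, but the clamping logic it describes is straightforward. Here is a minimal sketch of that logic in Python (the actual healing code BlueRiSC inserted was written in C), assuming the RFC 6520 heartbeat layout of a 1-byte type, a 2-byte big-endian payload length, the payload, and at least 16 bytes of padding:

```python
def filter_heartbeat(packet: bytes) -> bytes:
    """Clamp a heartbeat's declared length to the payload actually sent.

    Illustrative sketch only. Assumes the RFC 6520 message layout:
    1-byte type, 2-byte big-endian payload length, payload, then
    a minimum of 16 bytes of padding.
    """
    HEADER = 3          # type (1 byte) + payload length (2 bytes)
    MIN_PADDING = 16    # required by RFC 6520

    if len(packet) < HEADER + MIN_PADDING:
        return packet   # malformed; let OpenSSL's own checks reject it

    claimed = int.from_bytes(packet[1:3], "big")
    actual = len(packet) - HEADER - MIN_PADDING
    if claimed > actual:
        # Malicious packet: rewrite the length field to match reality,
        # so the later memcpy() can no longer over-read heap memory.
        packet = packet[:1] + max(actual, 0).to_bytes(2, "big") + packet[3:]
    return packet
```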
Visit our Contact Us web page or email [email protected] to schedule a one-hour WebEx meeting, and BlueRiSC will provide a live demonstration of ThreatSCOPE’s unique features and capabilities.
How beamforming works: making the conference phone a smarter listener

Beamforming technology makes the conference phone a smarter listener, actively improving the sound quality of your distance meeting. The technology helps to pick out the right audio data for processing and refinement using intelligent algorithms. We take a look under the hood of Konftel's new generation of OmniSound® with beamforming.

Beamforming is a signal processing technology where, instead of sending signals in all directions, the signal is directed towards the user. A good example of beamforming in everyday use is a WiFi network, where you want to focus the signals on the person using the network at that specific point in time.

The main benefit of using beamforming in an audio product is that the technology can be used to pinpoint the sound source that people want to listen to, making it better able to refine the desired sound while simultaneously filtering out any other sounds that get in the way. In essence, beamforming helps to direct the attention of the microphone where it is most needed. This means you can reduce the perceived distance between the teleconference participants: if the person speaking is standing at a whiteboard four meters away, beamforming will make it sound as if the speaker is only two meters from the device, cutting the perceived distance in half for the person at the other end of the conference phone.

Knows where the microphones are

The audio improvements are possible because the system always knows the exact location of the microphones that are picking up the sound. Although sound travels at a speed of over 340 m/s at room temperature, the technology can still detect the time difference when the sound hits the different microphones and then perform the desired signal processing using an algorithm. If, as with the Konftel 800 conference phone, you have three microphones around the edges of the phone, the direction of the sound can be tracked 360 degrees around the device.

With three microphones fitted into a conference phone, it is possible to determine the direction and distance of the speaker and then process the sound in order to considerably improve the experience. The triangular positioning of the microphones naturally makes the algorithm and the calculations more advanced than having all the microphones in a row, for example, but it also makes it easier to determine exactly where the speaker is located, and that information can be used to boost the desired sound and so halve the perceived distance for the listener.

Many unknown parameters

Beamforming for a conference phone is still quite a challenge, with a number of unknown parameters coming into play. Even if you know exactly how the built-in microphones behave in relation to each other, both the speaker and the conference phone could be anywhere in the room. The conference room could also vary in its design, with better or worse acoustics and disturbing background noise.

When working on audio for a hands-free system in a car or on a laptop, it is reasonable to assume that the person speaking is most likely going to be around the same distance from the microphones every time. In the case of a car, you will also know quite precisely how the background vehicle noise will affect the sound.
With a car or a laptop, you can therefore work on the basis of various assumptions, which make your audio processing calculations that much easier. But this is not the case in a conference room. All these unknown parameters make beamforming much more of a challenge for a conference phone, but success brings a huge reward in the form of better audibility, particularly compared with a conventional solution in a large conference phone with multiple built-in microphones, where you might attempt to optimize the sound by switching the microphones off and on.

Konftel is something of a pioneer in audio for conference phones. It was over 30 years ago that the company launched its OmniSound®, delivering full duplex and flowing dialog without irritating sound clipping, damping or echoes. Although Konftel now also offers video solutions, sound is still a core focus, which is why, as a key component of Konftel's technical advancement, the next generation of OmniSound® with beamforming has been incorporated into the Konftel 800 and the Konftel Smart Microphone.
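Konftel does not publish its algorithms, but the time-difference principle described under "Knows where the microphones are" can be illustrated with a toy two-microphone version: estimate the delay between the two signals by cross-correlation, then convert it into a bearing. Everything here (sample rate, microphone spacing, the far-field assumption) is an illustrative simplification, not Konftel's implementation:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def estimate_delay(sig_a, sig_b, sample_rate):
    """Estimate the arrival-time difference between two microphone
    signals via cross-correlation (a toy stand-in for real beamforming)."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)   # zero lag sits at len(b)-1
    return lag / sample_rate                    # seconds; sign says which mic heard it first

def angle_of_arrival(delay_s, mic_spacing_m):
    """Convert a time difference into a bearing for one microphone pair.
    Assumes a far-field source, i.e. an approximately planar wavefront."""
    x = np.clip(delay_s * SPEED_OF_SOUND / mic_spacing_m, -1.0, 1.0)
    return np.degrees(np.arcsin(x))
```

With three microphones, as in the Konftel 800, the same idea applied to each pair of microphones yields enough constraints to resolve direction over the full 360 degrees.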
Our suspect string is:

Step 1: Adjust Trailing Padding if Necessary

We put the suspect string into CyberChef and choose the "From Base64" recipe, which produces the error: "Data is not a valid byteArray." Adjust the number of trailing "=" from 0-2 until the error goes away. In this example, deleting the "=" allows for decoding.

Step 2: If Plaintext Isn't Apparent, Prepend Some Characters

If the output looks to be binary and you suspect text, don't give up yet. Add some characters to the beginning to see if it's simply a bit alignment problem due to truncated data. You can use any valid Base64 character here, but consider using "/" as the injected padding, since it tends to stand out better (unless the first encoded character is already a "/"). For our test string, three padding characters caused the plaintext to be revealed.

Where Will I See Base64?

A security analyst will encounter Base64 encoded strings in a variety of places. The routine and most common places come from examining mail attachments and embedded content (mostly images) from web pages. Other places should cause analysts to be on alert -- for instance, when Base64 strings are detected on the command line. Below is an example of a reverse shell hiding in plain sight using a PowerShell command. (Ref: mkpsrevshell.py, https://gist.github.com/tothi/ab288fb523a4b32b51a53e542d40fe58.) This leverages the "-e / -EncodedCommand" feature of PowerShell that allows a Base64 string to be passed in. PowerShell will decode the Base64, then execute the script inside.

The behavior of spawning a process with Base64 reflected on the command line is by itself suspicious. If you're monitoring Windows process creation, you should inspect when you see that happening.

Let's look at another common oversight spotted in a Sigma IDS rule. The rule fragment below is published to Sigma and looks for a particular Base64 string (among other things; see the full rule for that): this rule contains a detection element that fires if the string "L3NlcnZlc" is observed. According to the rule, this string translates to "/server=". In fact, it falls a bit short. If we use CyberChef, we notice that it actually translates to "/servet", a mistake/bug probably introduced by the input string carrying a trailing "=" sign. Now that we are savvy Base64 sleuths, we can update this rule to the correct string, "L3NlcnZlcj0=", and also, using our knowledge of the bit offset problem, add the two other Base64 variants that will detect the same thing: "y9zZXJ2ZXI9" and "c2VydmVyPQ".

Another common Base64 exposure for security analysts is examining HTTP Basic Authentication. (Maybe this isn't as "common" as it used to be, but I'm pretty sure every security analyst has seen at least one of these alerts fire.) Here's an example of an HTTP header using it. The problem is pretty obvious: this is a plain-text password. HTTP Basic Auth carries the convention of the Base64-encoded "username:password" in the "Authorization" client header. This example decodes to "joeuser:very$ecure".

Other Encoding Schemes

If you're a security analyst, at this point you may have realized a great evil application for Base64: data exfiltration over DNS! But there are a couple of problems here. First, the defined character set for Base64 includes characters not allowed in DNS strings (+, /, =). Second, DNS is case-insensitive. An adversary couldn't guarantee that their Base64-encoded subdomain wouldn't get "lowered" along the way. But … there's always Base32!
Base32 is very similar to Base64 encoding, except it carries data when we can't use upper/lowercase to encode information. Base32 is even more inflationary than Base64, so encoding large amounts of data for exfiltration using Base32 is sure to be a very loud network event.

Don't forget, too, that Base16 (hex) and Base2 (binary) are also valid encoding schemes with readily available tooling. Security analysts see these everywhere as part of their daily exposure, but rarely as part of an adversary technique to analyze like Base64.

Variants of Base64 use different alphabets. For instance, there's a "filename safe" variant that substitutes "-" and "_" for "+" and "/". So just because you see something that looks like a Base64 string but has a "-" in it, don't discount it too quickly. The CyberChef tool demonstrated earlier can be configured for these alternate alphabets.

We explored Base64 encoding from the security analyst's perspective. Base64 encoding is traditionally used to convert binary data to printable text characters, but it can also be used to hide plaintext. Security analysts should keep these common techniques in mind while performing investigations, as all too often encoding plaintext as Base64 is enough to let the best detection engine (our eyes) miss it. Once understood, Base64 detection flaws can be identified and signatures/logic improved to reflect all possible permutations.
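Both the padding tricks from Steps 1 and 2 and that closing advice about permutations are easy to script. Below is a minimal Python sketch. Note that the filler characters are an arbitrary choice, and the variant generator deliberately trims every character whose bits depend on unknown neighboring bytes, so its signatures come out slightly shorter (more conservative) than the Sigma variants quoted earlier:

```python
import base64

def forgiving_b64decode(s):
    """Steps 1 and 2 automated: strip '=' padding, then try prepending
    filler characters to fix the bit alignment of truncated data."""
    s = s.rstrip("=")
    for filler in ("", "/", "//", "///"):      # shift alignment by 0-18 bits
        candidate = filler + s
        candidate += "=" * (-len(candidate) % 4)  # repair trailing padding
        try:
            yield filler, base64.b64decode(candidate)
        except Exception:
            continue                            # invalid alignment; try next

def b64_detection_variants(keyword):
    """Signatures matching `keyword` at each of the three byte offsets
    possible within a larger Base64 stream. Characters whose bits mix
    with unknown neighboring bytes are dropped entirely."""
    variants = set()
    for offset in range(3):
        enc = base64.b64encode(b"\x00" * offset + keyword).decode().rstrip("=")
        lead = (offset * 8 + 5) // 6        # leading chars tainted by filler
        if (offset + len(keyword)) % 3:
            enc = enc[:-1]                  # last char mixes with the next byte
        variants.add(enc[lead:])
    return sorted(variants)

print(b64_detection_variants(b"/server="))
# ['9zZXJ2ZXI9', 'L3NlcnZlcj', 'vc2VydmVyP']
```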
The pulse dialing method was technically determined by how the rotary dial of the telephone works. When the caller picks up the receiver and operates the rotary dial, it disconnects the loop between the telephone and the exchange at a specific frequency and at defined intervals based on the number dialed. The analog exchange switch uses these disconnections to determine which telephone number the subscriber dialed and forwards these to the so-called rotary switch as current pulses. A certain pause between the individual numbers must be maintained for correct signaling. Even analog phones with push-buttons typically support the pulse method. The devices can be flexibly configured for the pulse method and multi-frequency signaling.

The pulse dialing method is referred to as in-band signaling, since the telephone number is transmitted over the voice channel and can be heard by the subscriber. Multi-frequency signaling is also a form of in-band signaling. However, here the telephone numbers are not transmitted as pulses; instead, each number is represented by a mix of two specific frequencies. The switching center filters these frequencies out of the voice channel and determines the respective number.

Many VoIP-compatible phone systems still allow the use of telephones which support the pulse dialing method. The analog telephone ports in many phone systems automatically recognize whether a device is using the pulse dialing method or multi-frequency signaling. If a system only supports multi-frequency signaling, a pulse-to-tone converter can be used to connect a pulse telephone.
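To make the multi-frequency scheme concrete: the standard DTMF grid assigns each key one low (row) frequency and one high (column) frequency, so dialing a digit amounts to playing two sine tones at once. A small illustrative sketch follows; the tone duration and sample rate are arbitrary choices:

```python
import math

# Standard DTMF grid: each key is one low (row) plus one high (column) tone.
ROWS = [697, 770, 852, 941]        # Hz
COLS = [1209, 1336, 1477, 1633]    # Hz
KEYS = ["123A", "456B", "789C", "*0#D"]

DTMF = {key: (ROWS[r], COLS[c])
        for r, row in enumerate(KEYS)
        for c, key in enumerate(row)}

def dtmf_samples(key, duration=0.1, rate=8000):
    """Generate raw audio samples for a key press: the sum of its two tones."""
    low, high = DTMF[key]
    return [0.5 * (math.sin(2 * math.pi * low * n / rate)
                   + math.sin(2 * math.pi * high * n / rate))
            for n in range(int(duration * rate))]

print(DTMF["5"])   # (770, 1336)
```

The switching center performs the inverse operation, band-filtering these frequency pairs out of the voice channel to recover each digit.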
This article introduces a set of evasion techniques wherein malware takes advantage of running processes. These techniques fall under the broad category of malware evasion techniques known as process injection.

Finding threats in running processes

In the past, malware infections normally involved malicious processes that either carried out the attack itself or downloaded a file-based payload containing malicious code. These processes were easily caught by threat analysts and security software that simply listed running processes and then distinguished suspicious processes from legitimate ones.

Evasion through process injection

Malware authors are now aware of this countermeasure and have devised a way to circumvent it through a technique known as process injection, which makes it even harder for security tools to detect. Also known as memory injection, this technique involves running or 'injecting' malicious code in the address space (i.e., the range of valid addresses in memory that are allocated for a particular program or process) of a legitimate process already present in memory.

Two advanced persistent threat (APT) attacks are good examples of this particular evasion technique, and have been discussed previously here in our blog: APT27 (Emissary Panda) and APT32 (OceanLotus).

By hiding in a legitimate OS or application process, ransomware is much less likely to stand out if security software runs an inspection on running processes. There are several ways that ransomware can implement process injection. Let's take a look at two of them.

DLL injection

Dynamic-Link Libraries, or DLLs, are integral to every running process, as they add functionality to the program running a particular process. If you inspect processes running on your system, you'll probably notice that they consist of one or more threads, most of which correspond to a DLL.

DLL injection is a process injection technique where the threat actor uses a legitimate process to execute a malicious DLL. To do that, the threat actor typically carries out a number of steps (sketched in code after the next section):
- Enumerate a list of processes and identify a process to target.
- Place a malicious DLL file into the target system's file system.
- Allocate memory space in the target process to accommodate that malicious DLL's path.
- Copy that path into the process's memory.
- Obtain the address of an API function known as LoadLibrary, and then use the DLL's path as an argument to this function when calling CreateRemoteThread in the next step.
- Create a new thread in the target process using the CreateRemoteThread function while setting that new thread's start address to the address of LoadLibrary.

Reflective DLL Loading

Using DLL injection for malware evasion has a couple of disadvantages for the attacker. First, the attacker has to store the malicious DLL file on the target system. That DLL file can potentially be detected by security solutions. Secondly, some security solutions monitor LoadLibrary calls and can even track DLLs loaded onto processes. To circumvent these defensive measures, some malware developers use a modified version of DLL injection known as Reflective DLL Loading. This process injection technique loads a DLL from memory rather than from the target system's disk. Basically, reflective DLL loading forgoes using a DLL file and instead maps the actual contents of the malicious DLL to the target process without calling LoadLibrary.
Aside from avoiding potentially monitored LoadLibrary calls, this technique also eliminates the risk of getting detected as a suspicious DLL file. A variation of this technique involves a fileless attack, in which the ransomware threat actor downloads the content of the malicious DLL directly into memory without even creating a file on the local hard drive.

These are just two of several process injection techniques now used in the wild. Other popular ones include portable executable injection, process hollowing, process doppelganging, and VDSO hijacking, to mention a few.

How Minerva Armor Prevents Ransomware That Uses Memory Injection

Minerva Armor's Ransomware Protection platform includes a Memory Injection Prevention module, which blocks attempts by fileless and other memory-resident malware to hide in legitimate processes and evade detection. By deceiving the malware about its ability to interact with other processes, Minerva prevents the ransomware from gaining a foothold on the endpoint, rendering its evasion technique completely ineffective. Because this evasion technique occurs very early in the ransomware attack, when Minerva Armor blocks it, it completely stops the attack before it manages to do any damage.
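For defenders who want to recognize this pattern in telemetry or red-team tooling, it helps to see how compact the classic DLL-injection sequence enumerated earlier really is. Below is a hedged Python/ctypes sketch (Windows only, widely documented technique): the PID and DLL path are placeholders, and error handling is omitted throughout.

```python
import ctypes
from ctypes import wintypes

k32 = ctypes.WinDLL("kernel32", use_last_error=True)
# Declare pointer-sized types, or ctypes truncates them on 64-bit Windows.
k32.OpenProcess.restype = wintypes.HANDLE
k32.VirtualAllocEx.restype = ctypes.c_void_p
k32.GetModuleHandleA.restype = ctypes.c_void_p
k32.GetProcAddress.restype = ctypes.c_void_p
k32.GetProcAddress.argtypes = (ctypes.c_void_p, ctypes.c_char_p)
k32.VirtualAllocEx.argtypes = (wintypes.HANDLE, ctypes.c_void_p,
                               ctypes.c_size_t, wintypes.DWORD, wintypes.DWORD)
k32.WriteProcessMemory.argtypes = (wintypes.HANDLE, ctypes.c_void_p,
                                   ctypes.c_char_p, ctypes.c_size_t,
                                   ctypes.c_void_p)
k32.CreateRemoteThread.argtypes = (wintypes.HANDLE, ctypes.c_void_p,
                                   ctypes.c_size_t, ctypes.c_void_p,
                                   ctypes.c_void_p, wintypes.DWORD,
                                   ctypes.c_void_p)

PROCESS_ALL_ACCESS = 0x1F0FFF
MEM_RESERVE_COMMIT = 0x3000    # MEM_RESERVE | MEM_COMMIT
PAGE_READWRITE = 0x04

def inject_dll(pid, dll_path):
    """Steps 3-6 of the list above; `pid` and `dll_path` are placeholders."""
    proc = k32.OpenProcess(PROCESS_ALL_ACCESS, False, pid)
    # Step 3: allocate room in the target process for the DLL's path.
    remote = k32.VirtualAllocEx(proc, None, len(dll_path) + 1,
                                MEM_RESERVE_COMMIT, PAGE_READWRITE)
    # Step 4: copy the path into the target's memory.
    k32.WriteProcessMemory(proc, remote, dll_path, len(dll_path) + 1, None)
    # Step 5: kernel32 maps at the same base address in every process,
    # so the local address of LoadLibraryA is valid in the target too.
    loadlib = k32.GetProcAddress(k32.GetModuleHandleA(b"kernel32.dll"),
                                 b"LoadLibraryA")
    # Step 6: a remote thread whose start routine is LoadLibraryA and
    # whose argument is the path we just wrote.
    k32.CreateRemoteThread(proc, None, 0, loadlib, remote, 0, None)
```

The defensive takeaway: the OpenProcess / VirtualAllocEx / WriteProcessMemory / CreateRemoteThread call chain against another process is exactly the sequence endpoint monitoring should treat as suspect.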
There are some obvious and not-so-obvious overlaps among various "advanced computing" concepts. Before I describe some of the inter-relationships among these concepts, it would be helpful to level-set the general definitions:

Classical Computing: the form of data storage and analysis utilizing transistors in integrated circuits to turn switches on or off, hence storing a given computational state as a "bit". These circuits are coordinated into logic gates to perform various instructions such as "AND", "OR" and "NOT", and do so in a sequential manner. Today's computers are increasingly fast and robust, having enjoyed Moore's law for nearly 50 years. However, classical computers are beginning to hit an advancement ceiling, and with the ever-increasing amount of data being collected and stored, the sequential nature of classical computing analysis is leading to longer and longer processing times for large data sets.

High-Performance Computing (HPC): a technology that harnesses the power of supercomputers or computer clusters to solve complex problems requiring massive computation. While aggregating computing resources can improve overall power and speed, such increases in performance are linear (i.e., classical computing based), so an increasingly large set of resources is required as the data increases.

Quantum Computing: quantum computers (QCs) utilize evolving new technologies which take advantage of certain features of quantum mechanics. A QC uses "qubits" instead of classical computing bits and harnesses the properties of superposition, entanglement, and interference to perform calculations. Combining these quantum properties with a broader array of logic gates, QCs can perform calculations simultaneously (instead of sequentially) and therefore much faster than classical computers. QCs are relatively new, and the existing devices are still not very powerful, but they are becoming more powerful all the time.

Artificial Intelligence (AI): intelligence demonstrated by machines, as opposed to natural intelligence displayed by animals, including humans. In AI's most basic form, computers are programmed to "mimic" human behavior using extensive data from past examples of similar behavior. AI applications include advanced web search engines (e.g., Google), recommendation systems (used by YouTube, Amazon and Netflix), understanding human speech (e.g., Siri and Alexa), self-driving cars (e.g., Tesla), etc.

Machine Learning (ML): the study of computer algorithms that can improve automatically through experience and by the use of data. It is seen as a part of artificial intelligence. Machine learning algorithms build a model based on sample data, known as training data, in order to make predictions or decisions without being explicitly programmed to do so. There are three broad types of machine learning: supervised learning (classification and regression), unsupervised learning (clustering and dimensionality reduction), and reinforcement learning (sometimes grouped with semi-supervised approaches).

Big Data: refers to large, diverse sets of information that grow at ever-increasing rates. It encompasses the volume of information, the velocity or speed at which it is created and collected, and the variety or scope of the data points being covered (known as the "three v's" of big data). Data analysts look at the relationship between different types of data, such as demographic data and purchase history, to determine whether a correlation exists.
Quantum Machine Learning

Using these broad definitions, we can further refine this discussion to note that "Artificial Intelligence" today is a general catch-all category for using classical computers to parse, analyze and draw conclusions. ML and Big Data are generally considered subsets of AI, and HPC is a general catch-all for using mainframes, supercomputers and/or parallel processing to scale the power of classical computing. With the recent introduction of working QCs, and given that QCs operate with different processes and logic, an evolving field known as "Quantum Machine Learning" (QML) sits at the intersection of these technologies.

Over the past few years, classical ML models have shown promise in tackling challenging scientific issues, leading to advancements in image processing for cancer detection, predicting extreme weather patterns, and detecting new exoplanets, among other achievements. With recent QC advances, the development of new quantum ML models could have a profound impact on the world's biggest problems, leading to breakthroughs in the areas of medicine, materials, sensing, and communications.

In a milestone discovery, IBM and MIT revealed the first experimental evidence that combining quantum computing and machine learning is achievable. They published their findings in Nature on March 13, 2019, using a two-qubit QC to demonstrate that QCs could bolster supervised classification learning.

TensorFlow and PyTorch are leading platforms for classical machine learning. TensorFlow is an end-to-end open source platform with a comprehensive ecosystem of tools, libraries and resources that lets researchers and ML developers easily build and deploy ML-powered applications. PyTorch is also open source and has a machine learning library that specializes in tensor computations, automatic differentiation, and GPU acceleration.

Reimagining these concepts for use on a QC, Google has released the open-source TensorFlow Quantum (TFQ), which provides quantum algorithm research and ML applications within the Python framework, designed to build QML models leveraging Google's QC system. To build and train such models, users would do the following:
- Prepare a quantum dataset
- Evaluate a quantum neural network model
- Sample or average measurements
- Evaluate a classical neural network model
- Evaluate cost functions
- Evaluate gradients and update parameters

A key feature of TensorFlow Quantum is the ability to simultaneously train and execute many quantum circuits. This is achieved by TensorFlow's ability to parallelize computation across a cluster of computers, and the ability to simulate relatively large quantum circuits on multi-core computers.

Similarly, Xanadu's PennyLane is another open-source software framework for QML, built around the concept of quantum differentiable programming. It integrates classical ML libraries with quantum hardware and simulators, giving users the power to train quantum circuits. Companies such as Menten AI are using PennyLane to design novel drug molecules that can efficiently bind to a specific target of interest. Menten AI is seeking to develop new approaches that are beyond the reach of current classical computation by integrating QC and classical machine learning techniques. PennyLane is integrated with Amazon Braket, a fully managed quantum computing service from Amazon Web Services (AWS).
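To give a flavor of what "quantum differentiable programming" looks like in PennyLane, here is a toy variational circuit trained with gradient descent on the built-in simulator. The circuit layout, parameters, and optimizer settings are arbitrary illustrative choices, not a real QML model:

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)   # built-in state-vector simulator

@qml.qnode(dev)
def circuit(params):
    """A tiny variational circuit: two rotations plus an entangling gate."""
    qml.RY(params[0], wires=0)
    qml.RY(params[1], wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(1))         # cost = expectation value

params = np.array([0.1, 0.2], requires_grad=True)
opt = qml.GradientDescentOptimizer(stepsize=0.3)
for _ in range(50):                          # train it like a neural network
    params = opt.step(circuit, params)
```

The point of the framework is that the quantum node is differentiable end to end, so it can be dropped into a larger classical model and trained with the same gradient machinery.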
Together with Amazon Braket, PennyLane seamlessly integrates classical machine learning (ML) libraries with quantum hardware and simulators, giving users the power to train quantum algorithms in the same way they train neural networks. Data scientists and machine learning researchers who work with TensorFlow or PyTorch on AWS now have a way to experiment with quantum computing and see how easily it can fit into their workflows.

"Amazon Braket makes it easy for customers to experiment with quantum computing through secure, on-demand access to a variety of quantum hardware and fully managed simulators. We are delighted to be working with PennyLane to give our customers a powerful set of tools to apply proven and familiar machine learning concepts to quantum computing. Our goal is to accelerate innovation, and PennyLane on Amazon Braket makes it easy and intuitive to explore applications of hybrid quantum computing, an area of research that aims to maximize the potential of near-term quantum computing devices," said Eric Kessler, Sr. Product Manager for Amazon Braket.

While QC is still in its early stages, there are promising developments in applying it to artificial intelligence and machine learning. Menten AI's use of this technology for drug discovery and quantum image processing are but two examples of near-term applications. As the amount of stored data and images continues to explode, along with the increasing adoption of voice recognition tools (i.e., Alexa, Siri, etc.), utilization of QML will be vital to enabling efficient use of these evolving tools. I expect we'll see many more collaborations and tools in the QML space in the next few years.

Disclosure: I have no beneficial positions in stocks discussed in this review, nor do I have any business relationship with any company mentioned in this post. I wrote this article myself and express it as my own opinion.

References:
- Uj, Anjaii, "Quantum Machine Learning: A Smart Convergence of Two Disruptive Technologies," Analytics Insights, October 24, 2018
- "What is Quantum Machine Learning," Discover Data Science, accessed February 20, 2022
- Havlicek, Corcoles, Temme, Harrow, Kandela, Chow & Gambetta, "Supervised learning with quantum-enhanced feature spaces," Nature, March 13, 2019
- Pennylane.ai, accessed February 20, 2022
- TensorFlow.org, accessed February 21, 2022
- Ho, Alan and Mohseni, Masoud, "Announcing TensorFlow Quantum: An Open Source Library for Quantum Machine Learning," Google AI Blog, March 9, 2020
- "Menten AI Partners with Xanadu to Develop Quantum Machine Learning for Protein-Based Drug Discovery," PR Newswire, January 25, 2022

About the author: Russ Fein is a venture investor with deep interests in Quantum Computing (QC). For more of his thoughts about QC, please visit http://quantumtech.blog. For more information about his firm, please visit Corporate Fuel. Russ can be reached at [email protected].
Every single time a terror attack happens somewhere in the world, we gasp, sigh and think, "Why did this happen?" Is there really no way it could have been prevented? Be it the recent tragedy in Sri Lanka, countless school shootings in the US (which have become so commonplace that we have lost count), suicide bombings, large massacres like what happened in Las Vegas or Thousand Oaks, or vehicle-ramming attacks occurring worldwide in cities like London, Stockholm, Berlin, Jerusalem, and Barcelona, at some point we need to find a way to curb this troubling trend. Just one lost life has serious repercussions on the mental well-being and the future of hundreds of interconnected lives. To ensure that these catastrophic attacks of terror don't continue to happen at this rate, we need to figure out ways to stay ahead of nefarious actors.

Imagine a world where there are sensors that are able to assist in crime prevention. Utilizing a joint AI and IoT solution, devices are now capable of sensing, recording and reporting any unusual activity that departs from normalized patterns of daily human activity. Deployment of these technologies, while slightly intrusive, can help prevent crime from happening, or at least significantly deter those intent on causing harm.

Since today's algorithms excel at collecting and analyzing massive data sets in real time based on machine-learned rules, the goal would be to give these systems access to live footage in order to assess situations as they unfold. The software should also be able to weave in relevant data from other sources, including radar, building management, social media, and other data-rich sources. Video surveillance can be a useful tool, but its efficacy can be greatly magnified using machine vision and AI technology.

For the purpose of filtering, for example, a rule could be set that says, "During school hours, we want to monitor any suspicious activity/gunman sightings within a mile radius." Each time one of these situational vectors is detected, a trigger is created to alert emergency services and key managing officials. If guns are sensed around public places or schools, we currently have the software and hardware to automatically flag the situation and notify police with precise locations. It's about setting up electronic boundaries, or geofencing, around events (a simple sketch follows below). Giving the police actionable data opens up the possibility of stopping an attack, or at least lessening its impact on civilian populations.

The point is to eliminate silos, optimize city operations, and address some of our biggest safety and security challenges faced during mass terror attacks, suicide bombings, crimes against vulnerable women and children, cyber-crime and vehicle-ramming attacks. The whole objective of being "smart" and introducing technology is about being proactive rather than simply reactive in these situations. So the question is: can we combat terrorism with technology?

Cities are currently experimenting with innovative approaches to preventing crime and countering extremism. Many are improving intelligence gathering, strengthening policing and community outreach, and investing in new technological innovations. From processing data at speeds we could never have imagined before, to the development of recognition software, the implementation of drones, the use of sensor technology and the benefits that the Internet of Things provides, we are seeing possibilities which didn't exist just a few years ago.
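The geofencing rule described above reduces, at its core, to a distance check against a boundary. A minimal sketch follows; the coordinates, radius, and alerting hook are all invented placeholders:

```python
import math

EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

SCHOOL = (40.7128, -74.0060)   # placeholder coordinates
RADIUS_M = 1_609               # one mile, per the example rule

def should_alert(detection_lat, detection_lon, school_hours=True):
    """Fire an alert when a flagged detection lands inside the geofence
    during the monitored time window."""
    inside = haversine_m(*SCHOOL, detection_lat, detection_lon) <= RADIUS_M
    return school_hours and inside
```

In a real deployment the detection itself would come from a vision model, and the trigger would notify emergency services rather than return a boolean; the fence logic, though, stays this simple.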
These cities are deploying what is known as 'agile security': data-driven and problem-oriented approaches that hasten the decision-making process while reacting to environmental changes, which should limit security issues and increase urban safety. Agile security measures are based on the premise that many types of crime, radicalization and terrorism are non-random and even predictable to an extent. With a few exceptions, they tend to cluster in time, space and among specific population groups.

The massive surge in computing power and advancement in machine learning have made it possible to sift through gigabytes of data related to crime and terrorism in order to identify underlying correlations. The harnessing and processing of these data flows is crucial to enabling agile security in cities. Accurate real-time crime prediction is a fundamental goal for public safety, but it remains a challenging problem for the scientific community, as crime occurrences depend on many complex factors.

A prerequisite of agile security is connected urban infrastructure. When city authorities, private firms and civic groups have access to real-time data, whether generated by crime-mapping platforms, gunshot-detection systems, CCTVs or smart lighting, they can increase the chances of detecting crime before it occurs. As a result, public authorities are more easily aggregating license plates, running facial recognition software, focusing on hotspots to deter and control crime, mapping terrorist networks and detecting suspicious anomalies. Some of these technologies even process data within the devices themselves, known as edge computing, to speed up crime-fighting and terrorism-prevention capabilities.

One example of a machine used in the war against terrorism is the PackBot. The PackBot can carry out complex tasks without any human interference and can be very effective in detecting terrorists and explosives. PackBot can analyze data from devices such as smart streetlights, connected bus shelters and a variety of sensors to make decisions in real time.

While technology can only help so much, we also need to focus on making physical, social and cultural changes to the environment we live in. This could mean building low-rise buildings and green spaces, providing community centers, promoting demographically diverse communities and targeting renewal measures in neighborhoods that are at a perpetual disadvantage. Investments in high-quality public goods and social cohesion will help prevent crime and radicalization even more.

Agile security should aim to avoid curbing civil liberties, intentionally or unintentionally. Local governments need to find ways to consult with city residents and local groups to discuss the implications of these new technologies. This means undertaking consultations, especially in the most vulnerable communities.

If deployed with diligence and care, the adoption of agile security measures can yield massive economic savings and help entire communities prosper. At the very least, they would reduce expenditure on law enforcement agencies, prosecutors, judges and penal authorities. By preventing crime and terrorism through technology-enabled means, governments and businesses can also reduce medical costs generated by victims, lower insurance premiums in high-risk areas, cut back on private security guards and improve the overall investment climate.

We are all well aware that technologies are not 100% foolproof.
They tend to be tested in lab-based environments and trained on fake attempts to deceive rather than in real-life situations. The people who are monitoring video surveillance are already dealing with data overload, and simply generating more data doesn't give you a solution. A real solution requires a potpourri of people, processes, and technology.

It is a valid argument that technology does not have three vital qualities that humans possess: experience, values and judgement. This means that machines may miss something that only a human could detect. So while technology offers exciting possibilities for tracking terrorist communications and predicting attacks and criminal situations, it isn't a replacement for human judgement and should be applied carefully.

The road ahead

This world needs to be a happier, safer place. "Safety" is the prime component of a "smart city". It is a basic human right for all of us, not a luxury. It is our responsibility to leave a safer, more secure world for the generations to come: a world free from crime, radicalization and religious bigotry. Technology combined with human awareness is one step closer to reaching that Utopian world.
It looks like a regular roof, but the top of the Packard Electrical Engineering Building at Stanford University has been the setting of many milestones in the development of an innovative cooling technology that could someday be part of our everyday lives. Since 2013, Shanhui Fan, professor of electrical engineering, and his students and research associates have employed this roof as a testbed for a high-tech mirror-like optical surface that could be the future of lower-energy air conditioning and refrigeration.

Research published in 2014 first showed the cooling capabilities of the optical surface on its own. Now, Fan and former research associates Aaswath Raman and Eli Goldstein have shown that a system involving these surfaces can cool flowing water to a temperature below that of the surrounding air. The entire cooling process is done without electricity.

"This research builds on our previous work with radiative sky cooling but takes it to the next level. It provides for the first time a high-fidelity technology demonstration of how you can use radiative sky cooling to passively cool a fluid and, in doing so, connect it with cooling systems to save electricity," said Raman, who is co-lead author of the paper detailing this research, published in Nature Energy Sept. 4. Together, Fan, Goldstein and Raman have founded the company SkyCool Systems, which is working on further testing and commercializing this technology.

Sending our heat to space

Radiative sky cooling is a natural process that everyone and everything does; it results from the movement of molecules releasing heat. You can witness it for yourself in the heat that comes off a road as it cools after sunset. This phenomenon is particularly noticeable on a cloudless night because, without clouds, the heat we and everything around us radiates can more easily make it through Earth's atmosphere, all the way to the vast, cold reaches of space.

"If you have something that is very cold, like space, and you can dissipate heat into it, then you can do cooling without any electricity or work. The heat just flows," explained Fan, who is senior author of the paper. "For this reason, the amount of heat flow off Earth that goes to the universe is enormous."

Although our own bodies release heat through radiative cooling to both the sky and our surroundings, we all know that on a hot, sunny day radiative sky cooling isn't going to live up to its name. This is because the sunlight will warm you more than radiative sky cooling will cool you. To overcome this problem, the team's surface uses a multilayer optical film that reflects about 97 percent of the sunlight while simultaneously being able to emit the surface's thermal energy through the atmosphere. Without heat from sunlight, the radiative sky cooling effect can enable cooling below the air temperature even on a sunny day.

"With this technology, we're no longer limited by what the air temperature is, we're limited by something much colder: the sky and space," said Goldstein, co-lead author of the paper.

The experiments published in 2014 were performed using small wafers of a multilayer optical surface, about 8 inches in diameter, and only showed how the surface itself cooled. Naturally, the next step was to scale up the technology and see how it works as part of a larger cooling system.
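For a rough sense of the energy budget at play, the net cooling power of such a surface can be estimated from the Stefan-Boltzmann law: thermal emission out, minus downwelling sky radiation, minus the small fraction of sunlight the film fails to reflect. The emissivity, effective sky temperature, and solar input below are illustrative assumptions, not the Stanford panels' measured figures:

```python
SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W/m^2/K^4

def net_cooling_power(t_surface_k, t_sky_k, emissivity=0.9,
                      solar_in=1000.0, reflectivity=0.97):
    """Rough net radiative cooling power per square meter.

    Emitted thermal radiation, minus what the sky radiates back,
    minus the ~3% of sunlight the film fails to reflect.
    All parameter values here are illustrative assumptions.
    """
    emitted = emissivity * SIGMA * t_surface_k ** 4
    absorbed_sky = emissivity * SIGMA * t_sky_k ** 4
    absorbed_sun = (1.0 - reflectivity) * solar_in
    return emitted - absorbed_sky - absorbed_sun

# Surface at 300 K under an effective sky temperature of 270 K:
print(round(net_cooling_power(300.0, 270.0), 1), "W/m^2")
# roughly 112 W/m^2 under these assumptions
```

Even a crude estimate like this shows why the 97 percent solar reflectance matters: with an ordinary dark surface, the absorbed-sunlight term would swamp the radiative term entirely.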
Putting radiative sky cooling to work

For their latest paper, the researchers created a system where panels covered in the specialized optical surfaces sat atop pipes of running water, and tested it on the roof of the Packard Building in September 2015. These panels were slightly more than 2 feet in length on each side, and the researchers ran as many as four at a time. With the water moving at a relatively fast rate, they found the panels were able to consistently reduce the temperature of the water 3 to 5 degrees Celsius below ambient air temperature over a period of three days.

The researchers also applied data from this experiment to a simulation where their panels covered the roof of a two-story commercial office building in Las Vegas, a hot, dry location where their panels would work best, and contributed to its cooling system. They calculated how much electricity they could save if, in place of a conventional air-cooled chiller, they used a vapor-compression system with a condenser cooled by their panels. They found that, in the summer months, the panel-cooled system would save 14.3 megawatt-hours of electricity, a 21 percent reduction in the electricity used to cool the building. Over the entire period, the daily electricity savings fluctuated from 18 percent to 50 percent.

The future is now

Right now, SkyCool Systems is measuring the energy saved when panels are integrated with traditional air conditioning and refrigeration systems at a test facility, and Fan, Goldstein and Raman are optimistic that this technology will find broad applicability in the years to come. The researchers are focused on making their panels integrate easily with standard air conditioning and refrigeration systems, and they are particularly excited at the prospect of applying their technology to the serious task of cooling data centers.

Fan has also carried out research on various other aspects of radiative cooling technology. He and Raman have applied the concept of radiative sky cooling to the creation of an efficiency-boosting coating for solar cells. With Yi Cui, a professor of materials science and engineering at Stanford and of photon science at SLAC National Accelerator Laboratory, Fan developed a cooling fabric.

"It's very intriguing to think about the universe as such an immense resource for cooling and all the many interesting, creative ideas that one could come up with to take advantage of this," he said.
The IT security researchers at deep learning cybersecurity firm Deep Instinct have discovered sophisticated malware in the wild targeting Microsoft's Windows-based computers.

Adding devices to a botnet

Upon infecting a machine, the malware allows hackers to take over the device and make it part of a botnet to carry out different malicious activities, including conducting distributed denial-of-service (DDoS) attacks, spreading malware, or infecting the system with ransomware. A botnet is a network of private computers infected with malicious software and controlled as a group without the owners' knowledge, e.g., to send spam messages.

Apart from these, the malware not only steals user data, it also disables the anti-virus program and removes other malware installed on the system. Dubbed MyloBot by Deep Instinct, and based on its capabilities and sophistication, researchers believe they have "never seen" such malware before. Furthermore, once installed, MyloBot starts disabling key features on the system, including Windows Update and Windows Defender, blocking ports in Windows Firewall, and deleting applications and other malware on the system.

"This can result in loss of the tremendous amount of data, the need to shut down computers for recovery purposes, which can lead to disasters in enterprises. The fact that the botnet behaves as a gate for additional payloads, puts the enterprise in risk for the leak of sensitive data as well, following the risk of keyloggers/banking trojans installations," researchers warned.

Dark web connection

Further digging into a MyloBot sample reveals that the campaign is being operated from the dark web, and its command and control (C&C) system is also part of other malicious campaigns. Although it is unclear how MyloBot is being spread, researchers discovered the malware on one of their clients' systems, where it sat idle for 14 days, one of its delaying mechanisms before accessing its command and control servers.

It is not surprising that Windows users are being targeted with MyloBot. Last week, another malware called Zacinlo was caught infecting Windows 10, Windows 7 and Windows 8 PCs. Therefore, if you are a Windows user, watch out for both threats: keep your system updated, run a full anti-virus scan, refrain from visiting malicious sites, and do not download files from unknown emails. Deep Instinct has yet to publish a research paper covering MyloBot from end to end.
If left untreated, hepatitis C can cause serious and potentially life-threatening damage to the liver over many years. Today, with the help of modern treatments, it's usually possible to cure the infection, and most people with it will have a normal life expectancy.

An estimated 1 percent of the world population is chronically infected with the hepatitis C virus (HCV). The World Health Organisation (WHO) has set the global target of eliminating HCV infection as a major public health threat by 2030. In the United Kingdom (UK), around 214,000 people are living with chronic HCV infection. Eliminating hepatitis C as a public health threat requires an improved understanding of how to increase testing uptake.

Researchers from the University's Institute of Infection and Global Health, led by Professor Anna Geretti, piloted point-of-care testing (POCT) for current HCV infection in an inner-city Emergency Department (ED) and assessed the influence on uptake of offering associated screening for HIV. POCT, or bedside testing, is medical diagnostic testing at or near the point of care, that is, at the time and place of patient care, rather than sending a blood sample to the laboratory and then waiting for the results.

Over four months, all adults attending the ED with minor injuries were first invited to complete an anonymous questionnaire and then, in alternating cycles, invited to take a finger-prick blood test that would detect either HCV alone or both HCV and HIV.

Reduced uptake

94.8 percent (814/859) of questionnaires were returned and 39.8 percent (324/814) of tests were accepted, comprising 211 HCV tests and 113 HCV + HIV tests. The researchers found that offering an HCV test paired with an HIV test significantly reduced uptake after adjusting for age and previous HCV testing. HCV prevalence was 1/324, and no participant tested positive for HIV. Based on postcodes obtained from the questionnaires submitted, 56.2 percent of participants lived in the most deprived neighbourhoods in England.

Professor Geretti said: "Our study found that POCT HCV finger-prick testing was technically feasible and is suitable for rolling out to sites where people with undiagnosed hepatitis C may present so that they may be offered treatment.

"Uptake of the HCV POCT was moderate and the offer of associated HIV screening appeared to have a detrimental impact on acceptability in this low-prevalence population.

"The study indicates that persisting fears about HIV infection can influence testing behaviour. The findings bear implications for the design of HCV screening programmes in the future. We need to understand how to overcome the apparent barrier."

More information: Anna Maria Geretti et al. Point-of-Care Screening for a Current Hepatitis C Virus Infection: Influence on Uptake of a Concomitant Offer of HIV Screening, Scientific Reports (2018). DOI: 10.1038/s41598-018-33172-w

Journal reference: Scientific Reports. Provided by: University of Liverpool.
With warnings of increased cyber scams related to the Coronavirus and many people working at home, it's a good time to remember the cybersecurity basics. Learn these 16 tips to help better protect yourself and your loved ones from scams.

Passwords and Email

- Use strong passwords: The Federal Trade Commission (FTC) advises implementing a password on all devices and apps. It's best to use the longest password or passphrase allowed and to set a unique password for each account, instead of reusing the same password across multiple accounts.
- Consider enabling multi-factor authentication: Many services offer multi-factor or two-factor authentication, which is another layer of security, typically in the form of a code sent to the individual's phone. Multi-factor authentication can make it harder for scammers to successfully log in to an account if they manage to steal a username and password combination.
- Be wary of unsolicited messages: Experts say that individuals should always be wary of unsolicited messages via email, chat applications, or text. If needed, it's better to type in the organization's official web address manually (instead of clicking a link from an email) or to contact the organization through other means to determine if the message is legitimate. Learn the signs of phishing and how to better protect against it.
- Keep device software up-to-date: Software updates often include protection against recently discovered threats and new fixes for security vulnerabilities. It's recommended to turn on software auto-updates for computers, smartphones, and tablets.
- Consider antivirus software: Experts recommend installing an antivirus program and keeping it up to date. Also, consider using a website reputation rating tool, which can help warn about potentially dangerous websites.
- Make a backup: It's always a good idea to back up data, including information stored on your phone. Backup options may include an external hard drive or cloud storage.
- Get apps from official sources: It's best to download new apps from official sources, such as the Apple App Store or Google Play. Experts advise against downloading apps from third-party application sites, as they sometimes distribute malware.

Internet Use and Online Purchases

- Pay attention to URLs: Malicious websites may look identical to a legitimate site, but sometimes the URL has a variation in spelling or a different domain, such as .net instead of .com. Don't assume that a website is legitimate just because its URL starts with "https," as criminals have been known to use encrypted sites.
- Guard personal and financial information: It's advised to never provide a username, password, date of birth, Social Security number, financial data, or other personal information in response to an email or robocall. Do not respond to email solicitations for personal or financial information, including following links sent in email.
- Keep track of financial transactions: Monitor credit statements monthly for any fraudulent activity. Report unauthorized transactions to the bank or credit card company as soon as possible.
- Review credit reports annually: Experts recommend that individuals review a copy of their credit report at least once a year to look for any unexpected activity, which could be a sign of potential fraud.
- Dispose of financial documents securely: Never throw away credit card or bank statements in a usable form, such as by putting them directly into the trash or recycling bin.
The FTC recommends disposing of sensitive data by shredding it first.

Working from Home

- Secure your home network: The FTC recommends that individuals secure their home networks by turning on encryption (WPA2 or WPA3), which helps scramble information sent over the network.
- Use a Virtual Private Network (VPN): A VPN can help secure web traffic against bad actors who may try to steal or monetize a person's data.
- Store sensitive information securely: Keep confidential documents and files out of sight by locking them in a file cabinet or room. In addition, experts advise that individuals keep devices with them at all times or store them in a secure location when not in use.

If You Believe You Have Been a Victim of a Cybercrime
Recently, the US Securities and Exchange Commission (SEC) offered both a view and a reminder on the release of data related to climate change impact. The interpretation is located at:

For businesses, this means a greater focus on carbon impact and greenhouse gas emissions. For data center owners, it goes beyond efficiency in the data center. Energy consumers will need to understand the "carbon makeup" of the energy streams they're consuming. For example, is the source a coal-fired, natural gas or hydroelectric power plant? Or is the energy a mixture of different sources? Utilities like PG&E provide information on their energy mix via their website:

The US Environmental Protection Agency (EPA) has a wealth of information on their website at:

As agencies and the business community gain a greater awareness of climate change impact, we (in IT) must gain a greater understanding of our impact as well.
The Internet of Things (IoT), which connects medical equipment and apps to the healthcare IT system, has been driving the expansion of wearable devices in the healthcare business, which is now known as the Internet of Health Things (IoHT). In medical equipment, the Internet of Things entails machine-to-machine (M2M) communication and connectivity to a cloud platform, capturing, storing, and analyzing data created by devices. The overall wearable AI market "is expected to increase at a CAGR of 29.75 percent from USD 11.5 billion in 2018 to USD 42.4 billion in 2023," according to a MarketsandMarkets report. Wearable IoT and AI intend to improve functionality and user experience by providing consumers with real-time insights, data, and advice to help them make better lifestyle decisions.

AI-enabled wearables in healthcare. Wearables in healthcare often collect, monitor, and interact with users' health data. These devices alert the user and clinician to numerous health markers in real time: real-time health-monitoring sensors, exercise wearables, geriatric-care wearables, and so on. Wearables in the healthcare business are leveraging AI in various ways to improve people's quality of life. Take, for example, the AI-powered diabetic eye disease diagnosis developed by researchers on the Google Brain initiative. In this system, deep-learning algorithms use neural networks to learn and accomplish a specific task through repetition and self-correction. Over 100 human-graded fundus images, showing varying amounts of retinal bleeding induced by elevated blood sugar levels, were used to train the model. The algorithm assigns each image a severity grade, which is compared with the previously determined human grade from the training set, and the parameters are then tweaked to reduce the inaccuracy for that image. This process is repeated many times for each image, allowing the algorithm to calculate diabetic retinopathy severity from pixel intensities alone (a minimal schematic of this loop appears below). Another example is next-generation wearables for blind people, which employ ultrasound to identify obstacles in the user's path and alert them so they can navigate safely around objects. These are some instances of advanced, next-level medical devices in the healthcare industry. Now let's look at how artificial intelligence is affecting the sports sector.

AI-enabled wearables in fitness. AI wearables can help fitness enthusiasts with their daily workouts. The majority of fitness wearables enable the user to keep track of their activities: if a person walks 12,000 steps, the wearable device will count and display the steps. However, the problem with these wearables is that consumers don't know what to do with the data after a certain point. Wearables with Artificial Intelligence (AI) may track data and provide insights into what the user needs to consume, how much sleep they should get, and how they should train to enhance their fitness, among other things. Wearables now exist in various shapes and sizes, with features such as integration with intelligent voice assistants (Alexa, Siri, and so on). Advanced sensors are also included in these wearables to track, analyze, and improve users' fitness or sport-specific activities by providing real-time user insights. These smart wearables go above and beyond by providing actionable insights to the user to lessen the danger of injury.
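The grade-compare-adjust loop described in the retinopathy example above is, at its core, ordinary supervised training. Below is a hedged, minimal schematic of it in Python: random arrays stand in for fundus images, a simple linear model stands in for the neural network, and nothing here reflects Google's actual system or data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_images, n_pixels = 500, 256
images = rng.random((n_images, n_pixels))       # stand-ins for fundus images
human_grades = images.mean(axis=1) * 4.0        # stand-ins for severity grades 0-4

weights = np.zeros(n_pixels)
lr = 0.01

for epoch in range(200):                        # repetition ...
    predicted = images @ weights                # assign each image a grade
    error = predicted - human_grades            # ... compare with the human grade
    weights -= lr * images.T @ error / n_images # ... and tweak the parameters

# Error shrinks as the loop repeats over the training set
print("mean absolute error:", np.abs(images @ weights - human_grades).mean())
```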
Smart helmets for bicyclists, smart glasses, smartwatches, fitness bands, and yoga trousers that help with correct poses are just a few examples of such AI-enabled fitness wearables. AI-enabled intelligent assistants, Bluetooth connectivity, and biosensors that measure heart rate, elevation, motion, proximity, and touch are now built into headphones that also serve as fitness trackers. In addition, these headphones come with an AI-based personal trainer that helps you work out smarter by tracking your running, cycling, and other exercises in real time. These headphones suggest the best approach to attain your exercise objectives based on your health criteria. Wearable gadgets powered by AI algorithms are being developed rapidly, thanks to advances in both hardware and software. Bulky devices are no longer necessary: wearable devices are now available in various shapes and sizes, making them easy for patients to carry and wear, resulting in better compliance. The advancements in artificial intelligence in recent years have aided the acceptance of these medical gadgets. Here are some of the ways that AI and IoT are helping to improve healthcare delivery via telemedicine.

Personalization of care. The data acquired by wearable devices allows healthcare providers to use a data-driven approach to patient treatment. Doctors may use data to make informed decisions and create a personalized health plan for each patient. While healthcare technologies allow patients to access their medical data, resulting in more patient participation, healthcare providers' intervention remains necessary: care professionals must interpret and explain patient data, which is where telemedicine comes into play. AI algorithms can create personalized and individualized action plans using data collected by IoMT devices. The treating physician can then ensure that the protocols are followed, assess progress, and prescribe changes as needed, resulting in better health outcomes.

Early diagnosis and timely intervention. Patients at high risk of acquiring diseases can be identified using artificial intelligence and machine learning techniques. The use of AI to analyze radiological and histological data has already yielded encouraging results. Using AI-powered wearable devices to assess patients and generate a risk profile can lead to quicker telemedicine interventions and better overall outcomes. Identifying "at-risk" patients also enables the establishment of unique touchpoints and prompt interventions, minimizing hospital burden, lowering hospital admission and readmission rates, and lowering overall healthcare delivery costs. The patient and their caretakers may not be the best judges of when to contact the doctor, but AI algorithms that regularly analyze patient data provided by wearable devices can lead to prompt interventions.

Remote patient monitoring. In contrast to standard hospital setups where nurses and doctors check on the patient periodically, wearable gadgets monitor vitals on a minute-by-minute basis. Any irregularity can be quickly discovered, resulting in the prompt notification of healthcare personnel and timely care delivery. Wearable technology powered by artificial intelligence has revolutionized the way we collect and evaluate patient data. The widespread availability and usage of these devices have made post-hospitalization surveillance a lot easier. Healthcare providers now have real-time access to patient data.
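A minute-by-minute monitor of the kind just described can be sketched as a streaming check against a rolling baseline. The window size, threshold, and readings below are invented for illustration; a real device would use clinically validated rules.

```python
from collections import deque
import statistics

window = deque(maxlen=60)                 # last 60 one-per-minute readings

def check(heart_rate):
    """Alert when a reading deviates sharply from the recent baseline."""
    if len(window) >= 30:                 # wait until a baseline exists
        baseline = statistics.mean(window)
        spread = statistics.pstdev(window) or 1.0
        if abs(heart_rate - baseline) / spread > 3:
            print(f"ALERT: {heart_rate} bpm vs. baseline {baseline:.0f} bpm")
    window.append(heart_rate)

for reading in [72, 74, 71, 73, 75] * 10 + [140]:   # sudden tachycardia at the end
    check(reading)
```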
AI-processed data provides critical insights into trends and patterns, resulting in increased efficiency in healthcare delivery. Telemedicine is a cost-effective approach for scheduling follow-up appointments and treating patients remotely. Healthcare practitioners can treat a large number of patients while lowering healthcare delivery expenses. Virtual healthcare also saves time and resources for patients by eliminating the need for them to travel to and from the hospital for follow-up appointments, decreases hospital readmissions, and, most significantly, minimizes preventable deaths. To summarise, artificial intelligence in wearables pushes the boundaries by assisting patients and clinicians with remote tracking, precautions, and remote diagnostics, and by guiding patients in making educated decisions. In the fitness industry, AI enables devices to act as artificial assistants, helping consumers take care of themselves. Teksun has hands-on technical experience designing wearable devices from the ground up for medical applications, including monitoring, diagnostics, analysis, imaging, wearable health, and telemedicine solutions. To learn more, contact one of our medical specialists. Sheetal Tank is associated with Teksun Inc as a Content Writer. She has the technical precision, industry experience, and creativity to craft technically detailed write-ups with ease. She has more than four years of experience in content writing, and her focus domains are AI, IoT, Web, Mobile, and Cloud Automation.
As data drives more and more of the modern economy, data governance and data management are racing to keep up with an ever-expanding range of requirements, constraints and opportunities. It is a little like a data version of the Cambrian Explosion, where data-centricity is giving rise to a rich variety of practices, each distinct and unique in its own way. Some of these practices will succeed and develop, while others will no doubt become blind alleys inevitably forgotten. Even so, it is already possible to discern some practices that will become new disciplines. And among these, we find data acquisition.

Data Acquisition Defined
What is data acquisition? We define it as this: data acquisition comprises the processes for bringing data that has been created by a source outside the organization into the organization for production use. Prior to the Big Data revolution, companies were inward-looking in terms of data. During this time, data-centric environments like data warehouses dealt only with data created within the enterprise. But with the advent of data science and predictive analytics, many organizations have come to the realization that enterprise data must be fused with external data to enable and scale a digital business transformation. This means that processes for identifying, sourcing, understanding, assessing and ingesting such data must be developed.

This brings us to two points of terminological confusion. First, "data acquisition" is sometimes used to refer to data that the organization produces, rather than (or as well as) data that comes from outside the organization. This is a fallacy, because the data the organization produces is already acquired. Second, the term "ingestion" is often used in place of "data acquisition." Ingestion is merely the process of copying data from outside an environment to inside an environment and is very much narrower in scope than data acquisition. It seems to be the more commonplace term because there are mature ingestion tools in the marketplace. (These are extremely useful, but ingestion is not data acquisition.)

The Data Acquisition Process
What excites data professionals about data acquisition is the richness of its process. Consider a basic set of tasks that constitute a data acquisition process:
- A need for data is identified, perhaps with use cases
- Prospecting for the required data is carried out
- Data sources are disqualified, leaving a set of qualified sources
- Vendors providing the sources are contacted and legal agreements entered into for evaluation
- Sample data sets are provided for evaluation
- Semantic analysis of the data sets is undertaken, so they are adequately understood
- The data sets are evaluated against originally established use cases
- Legal, privacy and compliance issues are understood, particularly with respect to permitted use of data
- Vendor negotiations occur to purchase the data
- Implementation specifications are drawn up, usually involving Data Operations, who will be responsible for production processes
- Source onboarding occurs, such that ingestion is technically accomplished
- Production ingest is undertaken
There are several things that stand out about this list. The first is that it consists of a relatively large number of tasks.
The second is that it may easily be inferred that many different groups are going to be involved, e.g., Analytics or Data Science will likely come up with the need and use cases, whereas Data Governance, and perhaps the Office of General Counsel, will have to give an opinion on legal, privacy and compliance requirements. An even more important feature of data acquisition is that the end-to-end process sketched out above is only one of a number of possible variations. Other approaches to data acquisition may involve using "open" data sources, configuring tools to scan internet sources, or hiring a company to aggregate the required data. Each of these variations will amount to a different end-to-end process.

The Need For Metadata Tools
Given the characteristics of data acquisition, how should it be handled? A fairly obvious conclusion is that because it consists of so many tasks and involves so many different organizational units, some form of tooling is required. The variety of metadata that is produced by the overall process, and the need to utilize it both within the process and after acquisition has been completed, make it difficult to see how spreadsheets, emails and other end-user computing solutions will work. Remember also that legal, privacy and compliance constraints will be discovered and evaluated in data acquisition. These need to be made available to the enterprise as a whole to prevent accidental misuse of the acquired data. What we need are tools capable of storing the wide range of metadata that is produced during data acquisition, and a defined data governance process that ensures the process is followed in a standard way and metadata is captured appropriately (a minimal sketch of such a record appears at the end of this post). Such tools are beginning to appear in the marketplace, and data professionals engaged in data acquisition would do well to implement their processes in such tools.

Modernizing Data Governance With Data Catalogs
Organizations embarking on a data journey to leverage the business value of data across the information supply chain will need to navigate the unique challenges of self-service analytics, and the criticality of metadata management and data catalogs cannot be overstated. We're proud to partner with Alation to deliver new methodologies, including one focused on data acquisition, to power a more modern and agile approach to governance. Learn more about this strategic partnership and our commitment to meet the needs of Chief Data Officers and analytics leaders seeking to bring more trust to data-driven decision-making. This post first appeared on the First San Francisco Partners blog.
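As promised above, here is one minimal way to make the process auditable: capture each task's metadata in a structured record rather than in spreadsheets and emails. The field names and the example source are illustrative only, not taken from any particular catalog tool.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AcquisitionRecord:
    source_name: str
    vendor: str
    use_cases: list[str]
    permitted_uses: list[str]              # legal/privacy/compliance findings
    evaluation_notes: str = ""
    onboarded: bool = False
    history: list[str] = field(default_factory=list)

    def log(self, task: str) -> None:
        """Record each step so the end-to-end process stays auditable."""
        self.history.append(f"{date.today().isoformat()}: {task}")

record = AcquisitionRecord(
    source_name="Foot-traffic feed",       # hypothetical external source
    vendor="ExampleVendor",
    use_cases=["store-siting model"],
    permitted_uses=["internal analytics only"],
)
record.log("sample data set received for evaluation")
record.log("semantic analysis completed")
record.log("vendor agreement signed; production ingest scheduled")
```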
Scientists around the world are inspired by the brain and strive to mimic its abilities in the development of technology. Our research team at IBM Research Europe in Zurich shares this fascination and took inspiration from the cerebral attributes of neuronal circuits, like hyperdimensionality, to create a novel in-memory hyperdimensional computing system. The brains behind the research: Geethan Karunaratne, Manuel Le Gallo, Abbas Rahimi, Giovanni Cherubini, Luca Benini, Abu Sebastian.

The most efficient computer possible already exists. And no, it's not a Mac or a PC; it's the human brain. When computers were originally invented, they were designed around a brain model. At one time, many even referred to them as electronic brains. Indeed, it is certainly impressive how today's computers can emulate single brain functions, such as learning and identifying visual objects or recognizing text and speech patterns. However, the evolution of computers has a long way to go to match the remarkable capacity of the human brain, which can learn and adapt without needing to be programmed or updated, has intricately connected memory, doesn't easily crash, and works in real time. What's more, computers are energy guzzlers. Our brain with all its magnificent capabilities operates below 20 watts while attending to a complex thought process. In comparison, a simple task like writing this blog post on a laptop requires about 80 watts. In terms of energy efficiency, our brain can even outperform state-of-the-art supercomputers by several orders of magnitude at only a fourth of the power.

Learning from the Human Brain
Over the last decade, there have been significant advances in neurophysiology and brain theorizing, to the extent that we now know more than ever about how the brain works. Neuroscientists have discovered that the mind operates by evaluating the state of thousands of synaptic connections at a time and computes with patterns of neural activity that are not readily associated with numbers. Drawing inspiration from this cerebral functionality, our research team set out to explore ways to move computing away from the conventional digital paradigm that we are used to. Specifically, we focused on hyperdimensional computing (HDC), an emerging computational paradigm that aims to mimic attributes of the human brain's neuronal circuits such as hyperdimensionality, fully distributed holographic representation, and (pseudo)randomness. What we discovered is that an HDC framework functions exceptionally well within an in-memory computing architecture. In fact, based on the results of our experiments in training and classifying datasets, HDC is a killer application for in-memory computing in many respects. We believe our research, which is now being featured in the peer-reviewed journal Nature Electronics, will play an essential role in the advancement of next-generation AI hardware. In contrast to conventional von Neumann systems, which are digital and based on processing vectors of 32 or 64 bits in length, we wanted to create a computing paradigm that could potentially function more like a holistic network of neurons. Hence, we have a deep interest in hyperdimensional computing (HDC). The essence of HDC is the observation that key aspects of human memory, perception and cognition can be explained by the mathematical properties of hyperdimensional spaces comprised of hyperdimensional vectors, or hypervectors.
Put more clearly, HDC models the neural activity patterns of the brain's circuits, operating on a rich form of algebra that defines rules to build, bind, and bundle different hypervectors: D-dimensional (pseudo)random vectors with independent and identically distributed components and holographic representations. HDC represents data using on the order of thousands of bits and distributes the related information evenly across all bits (i.e., every bit is equally significant). In such a computational framework, hypervectors representing different symbolic entities can be combined into new unique hypervectors to create representations for composite entities using well-defined vector space operations (a toy numeric sketch of this algebra appears at the end of this section). These vector compositions create a powerful system of computing that can be used to perform, in addition to classical tasks, sophisticated cognitive tasks such as object detection, language and object recognition, voice and video classification, time series analysis, text categorization, and analytical reasoning.

HDC is the Brainiest of Approaches
There are many advantages to computing with hypervectors. For one, training algorithms in an HDC architecture is transparent, quicker and more efficient, as object categories are learned in one shot of training from the available data. This beats other brain-inspired approaches such as neural networks, which require a large number of iterations for training. Moreover, this computing paradigm is memory-centric with parallel operations and is extremely robust against noise and variations or faulty components in a computer platform. Indeed, HDC is the brainiest of approaches. However, we still need an efficient HDC processor that can fully support it. Hence, the current ongoing research effort focuses both on the algorithmic front and on building efficient computing substrates for HDC. A key attribute of HDC, in terms of hardware realization, is its robustness to the imperfections associated with the computational substrates on which it is implemented. HDC also involves manipulation and comparison of large patterns within memory when used for machine learning tasks such as learning and classification. These two attributes make HDC particularly well-suited for emerging non-von Neumann computing paradigms such as in-memory computing, where the physical attributes of nanoscale memory devices are exploited to perform computation in place. In our research paper, we present a complete in-memory HDC system consisting of two main components: an HD encoder and an associative memory. The core computations are performed in-memory with logical and dot product operations on memristive devices. Due to the inherent robustness of HDC, it was possible to approximate the mathematical operations associated with HDC to make it suitable for hardware implementation, and to use analogue in-memory computing without compromising the accuracy of the output. Using 760,000 phase-change memory devices performing analog in-memory computing, we experimentally demonstrate that such an HDC platform can achieve roughly six-fold energy savings compared to optimized digital systems based on CMOS technology. Moreover, this first-of-its-kind prototype system is programmed to support different hypervector representations, dimensionality, and numbers of input symbols and output classes to accommodate a variety of applications.
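Before turning to the test results, the bind-and-bundle algebra mentioned above can be sketched numerically. This is a hedged toy in plain Python/NumPy, not the IBM prototype or its encoder; the role-filler example is invented for illustration.

```python
import numpy as np

D = 10_000                                  # hypervector dimensionality
rng = np.random.default_rng(0)

def random_hv():
    """(Pseudo)random bipolar hypervector with i.i.d. +/-1 components."""
    return rng.choice((-1, 1), size=D)

def bind(a, b):
    """Binding: elementwise multiply; the result resembles neither input."""
    return a * b

def bundle(*hvs):
    """Bundling: elementwise majority; the result resembles every input."""
    return np.sign(np.sum(hvs, axis=0))

def similarity(a, b):
    """Normalized dot product: ~0 for unrelated hypervectors."""
    return float(a @ b) / D

# Build a composite record from three role-filler pairs
colour, red, shape, round_, size, small = (random_hv() for _ in range(6))
record = bundle(bind(colour, red), bind(shape, round_), bind(size, small))

# Unbinding with a role recovers a noisy version of its filler
probe = bind(record, colour)
print(similarity(probe, red))     # well above 0 (around 0.5): red is the filler
print(similarity(probe, small))   # close to 0: unrelated
```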
In testing various in-memory logic operations, our architecture also attained comparable accuracy levels in three different learning tasks, including language classification, news classification and hand gesture recognition from electromyographic signals. What distinguishes our work from other similar research is that we perform a complete end-to-end study, also involving the synthesis of the digital peripheral submodules using 65 nm CMOS technology. Our study clearly shows the efficacy and potential of in-memory computing for this exciting new field. This work was performed in collaboration with ETH Zürich and was supported in part by the European Research Council under grant no. 682675 and in part by the European Union's Horizon 2020 Research and Innovation Program through the project MNEMOSENE under grant no. 780215.

Reference: G. Karunaratne, M. Le Gallo, G. Cherubini, L. Benini, A. Rahimi, and A. Sebastian, "In-memory Hyperdimensional Computing," Nature Electronics, DOI: 10.1038/s41928-020-0410-3
AI mistakes? How could this happen? Aug 10, 2018
AI perceives its environment and takes actions that maximize the probability of successfully achieving its goals. This does not ensure success (the correct answer); assuming that it does is a common misconception. AI-based systems will produce some wrong answers. AI, much like natural intelligence, is fallible, but not for the reasons many claim. There are plenty of real-world examples of AI mistakes from the world's leading companies in AI deployment. When the stakes are highly visible, for example in making oncology treatment recommendations, the broader community will demand explanations for why and how mistakes happen. The reasons for failure are often quite simple. Let's explore a few common root causes:
- The wrong data are used in estimation. Simulated data are often used for experiments in AI, but they are not suitable for the estimation of the underlying equations/models. These simulated cases disturb the underlying distributions within the data and can lead to undesirable outcomes. Responsible Party = Human.
- Extreme values are not considered. Often researchers will "clean" the data used in estimation, for example by removing outliers. Cleaning data is not in itself the problem; failure to consider what will happen to the model predictions when the model encounters an outlier is the problem (see the toy illustration below). Responsible Party = Human.
- The data generating process (DGP) is in flux. Business processes change, new data are collected, some data are no longer collected, laws change, administrative policies change; all of these can have serious implications for the DGP and, in turn, serious implications for AI. For example, income was previously collected as a continuous variable and the equations were estimated with these data, but income is now collected as a categorical variable, segmented in $25,000 increments. Did anyone re-estimate the equation(s) that use income? Responsible Party = Human.
You have probably noticed a common characteristic of these examples: the human. People are at the heart of AI's successes and failures. The analytic software used to estimate the underlying analytics for AI (e.g., R, Python, SAS, Oracle) will produce results. The results may be complete nonsense, but it is the responsibility of the data analyst and the stakeholders to understand why. Do you have an example of AI mistakes that can't be traced back to a human?
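As promised above, here is a hedged toy illustration of the extreme-values failure mode: the model looks fine on the cleaned range it was fit on, but nobody checked what it does on an outlier. The data and the model are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 200)
y = 3 * x + rng.normal(0, 1, 200)           # true relationship is linear

# "Cleaning": the analyst drops the tails before fitting a flexible model
mask = (x > 1) & (x < 9)
coef = np.polyfit(x[mask], y[mask], deg=5)  # fits the cleaned data nicely

in_range = np.polyval(coef, 5.0)            # sensible: roughly 15
outlier = np.polyval(coef, 50.0)            # wildly wrong, and nobody checked
print(f"prediction at x=5:  {in_range:,.1f}")
print(f"prediction at x=50: {outlier:,.1f}")
```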
Microsoft ended support for XP last year, but there are probably millions of computers out there, in homes and offices, that are still running the operating system. How many computers do you see running Vista, the OS that was intended to succeed XP? Vista was buggy, unreliable, and a massively flawed product. Nobody liked it, and when Windows 7 was released it quickly became extinct in the wild. However, the failure of Vista had less to do with its radical new look and feel over XP than many assume: the success of Windows 7, which took a lot of features from Vista, was proof enough that the idea behind Vista was solid. Vista failed so spectacularly because it was a product driven by marketing timelines that fell short in the quality assurance and quality control department.

A lot of people, even programmers, use the terms QA and QC interchangeably. They are related in that both of these processes are responsible for ensuring that the code performs as advertised. Before we talk about the differences, it's instructive to understand what these terms actually mean; let's take NASA's definitions (these folks have hundreds of lives and hundreds of billions of dollars riding on the results of tests, so they know what they are talking about). Software quality assurance is defined as "the function of software quality that assures that the standards, processes, and procedures are appropriate for the project and are correctly implemented". Software quality control is defined as "the function of software quality that checks that the project follows its standards, processes, and procedures and that the project produces the required internal and external (deliverable) products".

Quality assurance: the goal of quality assurance is to strengthen the software development process so that quality products can be delivered consistently and cost-effectively. Some deliverables that emerge out of the QA tasks include process documentation, detailed requirements, and audit reports. Quality assurance has nothing to do with execution of code. Quality control: the goal of quality control is to catch the defects in the finished product. The deliverables from QC include bug reports. Quality control involves executing code.

A well-designed product needs both departments to work together through several iterations. After QA sets the requirements and the developers write the code, the QC department performs a number of tests. The feedback from these tests is shared with the QA department, which then modifies the requirements and the processes to ensure that the defects don't pop up in the next version. In software development methodologies like Agile, both these processes run almost simultaneously. Quality assurance and quality control cannot be performed by the same person or department, as that would lead to a conflict of interest. The two are adversarial in nature, in the same way that writing is different from editing. It would be fatal to skip either QA or QC. If you perform only QA-related activities, all you will get is a set of processes that seek to improve quality; there is no way of knowing whether the final product will meet these specifications. On the other hand, without QA you would perform tests in isolation and fix bugs as they come along, without any assurance that the bugs won't pop up in a later version because of faulty development practices. QA and QC are tough to do in-house, but the problems can magnify in the context of outsourcing because of a multitude of issues around choosing the right outsourcing partner.
However, there are thousands of outsourced projects that have met customer expectations, and all of them had the right mix of process and people. Invest upfront on quality assurance and quality control, and gain happy users and delighted customers.
Editor’s Note: This post was originally published August 2021 and has been updated for accuracy and comprehensiveness. If you’re anything like me, you have probably fallen into the field of digital forensics (check out my Kicking and Screaming blog to learn more). You may be the semi-technical expert on your team or just someone who isn’t a self-proclaimed Luddite. Regardless of how you entered the field of digital forensics, it is common to feel a little lost in the beginning. So, if you’re brand new to this field or would like a refresher on understanding digital forensics, this blog is for you. What is digital forensics? Simply put, digital forensics uses special tools and techniques to collect, analyze, and report on digital evidence. Once the evidence is collected, you can use it in a court of law to help prove or disprove a particular theory or piece of information. There are many different focus areas within digital forensics, but some of the most common include: Computer forensics: This area of digital forensics deals with identifying, preserving, and extracting evidence from computers. Investigators in this field must be familiar with a wide range of computer hardware and software. Vehicle infotainment systems: In recent years, vehicle infotainment systems have become more and more complex, with many of them including internet-connected features. As a result, these systems can be a goldmine of evidence for investigators. Mobile devices: Mobile devices are another common focus area for digital forensics investigators. These devices can contain a wealth of evidence, which we highlight in other blog posts and the Investigator’s Corner on Grayshift.com. (Login credentials are required for access.) Smart devices: With the rise of the Internet of Things, digital forensics investigators may be called upon to examine smart devices such as TVs, thermostats, and even refrigerators. Computer systems: Of course, traditional computer systems are still a significant focus for digital forensics investigators. These systems can contain a wealth of evidence. In the early days, digital forensics was labeled as computer forensics since most of the technology involved in those early investigations was only computers. Over time, the discipline naturally expanded to include all devices capable of storing digital data and has since been re-branded to digital forensics. Devices that store digital data can consist of anything from your personal computer to your refrigerator. In today’s world, devices that store digital data are part of our everyday lives, and one of the most notable device types almost everyone has is the mobile phone. What is mobile device digital forensics and why is it important? Mobile device digital forensics is a subcategory of digital forensics, and it is the process of recovering data from mobile devices. This data can be used to track down a suspect, understand a crime, or gain insights into a person’s life. Here are a few reasons why mobile device digital forensics is essential to investigations today: - Mobile devices contain a wealth of evidence that can be used in any type of investigation. - In many cases, they are often the only source of evidence. - Mobile device digital forensics can be used to track down a suspect who may be hiding their tracks on a traditional computer system. - Mobile device digital forensics can help investigators understand how a crime was committed and who was responsible. What tools and techniques do you need for mobile device digital forensics? 
To properly conduct mobile device digital forensics, you need a few essential tools and techniques.
- Computer System: Run digital forensics imaging and analysis software to process digital evidence
- Network Isolation Hardware: Isolate devices from radio frequency signals to maintain evidence integrity
- Portable Batteries and Device Cables: Ensure on-scene officers have the equipment and accessories they need to properly secure seized devices
- GrayKey: Access and extract encrypted or inaccessible data from mobile devices
- Data Analysis Software: Import extracted mobile device data into analysis software to begin examining digital evidence
- Device Storage: Safely store mobile device extractions and simplify chain of custody and data integrity
- External Data/Evidence Storage: Relieve storage space from computer systems and store evidence long term
Check out A Beginner's Guide to Building and Funding a Mobile Device Forensics Lab eBook to learn more. Beyond the toolkit, a few core techniques matter:
- Preserve Digital Evidence: Dedicated evidence intake personnel should be educated on the proper way to preserve digital evidence. Properly seizing and storing digital evidence can be paramount to your investigations due to the security implemented on digital devices. Educating team members on proper device handling is worthwhile, even if that is the only time they will interact with the evidence.
- Copy, Copy, Copy: Once the evidence is back at your lab, you must create a forensic image, or copy, of the digital evidence. You will conduct your investigation on the forensic image as opposed to the evidence item itself. While manually searching the device itself is sometimes necessary, this is not typical in most investigations.
- Nerd Out with Hashes: After you create your forensic image, a hash value will be reported for the newly created file. This hash value results from a hash algorithm run over the forensic image obtained from the device. The hash value is important because it is used to verify the integrity of your forensic image throughout the life cycle of your investigation (a short sketch appears at the end of this article).
- Ask the Investigator: It can be beneficial to ask the investigator many questions about the case before beginning your analysis. Anyone in this position has heard the line "Give me everything." As you can imagine, that can be an overwhelming amount of data, and without applying techniques to filter through the data, evidence could be missed.
- Maintain Chain of Custody: Whatever methods are applied to search for data, examinations of the evidence must be thorough, and proper note-taking is critical. The results from any examination need to be repeatable, and logging who has interacted with the evidence at any point during the investigation helps prove evidence authenticity and document chain of custody in court.
Digital forensics, specifically mobile device digital forensics, is at the forefront of investigations. And the devices that are seized are changing and advancing daily. As a Digital Forensic Investigator (DFI), it is important to keep up with the latest and greatest tools and stay up to date with training. Staying abreast of current trends in the field is beneficial to your investigative techniques and can lead to more productive acquisitions and analysis of digital evidence items. To learn more about mobile device digital forensic tools like GrayKey by Grayshift, please get in touch with us or visit Grayshift.com.
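As promised above, here is a minimal sketch of hashing and verifying a forensic image in Python. The file path is hypothetical, and SHA-256 is just one commonly used algorithm; follow your lab's standards for which algorithm to record.

```python
import hashlib

def hash_image(path, chunk_size=1 << 20):
    """Hash a forensic image in chunks so large files don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as image:
        for chunk in iter(lambda: image.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Recorded at acquisition time (hypothetical path)
acquisition_hash = hash_image("evidence/phone_image.dd")

# Re-computed later: a match verifies the image is unchanged since acquisition
assert hash_image("evidence/phone_image.dd") == acquisition_hash
```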
Critical Alert: Multiple Vulnerabilities in PHP Could Allow for Arbitrary Code Execution
Description: Multiple vulnerabilities have been discovered in PHP, the most severe of which could allow an attacker to execute arbitrary code. PHP is a programming language originally designed for use in web-based applications with HTML content. PHP supports a wide variety of platforms and is used by numerous web-based software applications.
Impact: Successfully exploiting the most severe of these vulnerabilities could allow for arbitrary code execution in the context of the affected application. Depending on the privileges associated with the application, an attacker could install programs; view, change, or delete data; or create new accounts with full user rights. Failed exploitation could result in a denial-of-service condition.
Systems affected:
- PHP 7.2 prior to 7.2.3
- PHP 7.0 prior to 7.0.28
- PHP 5 prior to 5.6.34
The following actions are recommended:
- Upgrade to the latest version of PHP immediately, after appropriate testing.
- Verify no unauthorized system modifications have occurred on the system before applying the patch.
- Apply the principle of least privilege to all systems and services.
- Remind users not to visit websites or follow links provided by unknown or untrusted sources.
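A hedged sketch of how an administrator might triage a host against the affected branches listed above. The parsing of `php -v` output and the helper function are illustrative; always confirm exact version ranges against the official advisory.

```python
import re
import subprocess

def parse_php_version(output: str):
    """Extract (major, minor, patch) from the first line of `php -v`."""
    match = re.match(r"PHP (\d+)\.(\d+)\.(\d+)", output)
    return tuple(map(int, match.groups())) if match else None

def is_vulnerable(version):
    """Compare against the affected branches listed in this advisory."""
    if version[:2] == (7, 2):
        return version < (7, 2, 3)
    if version[:2] == (7, 0):
        return version < (7, 0, 28)
    if version[0] == 5:
        return version < (5, 6, 34)
    return False                      # branches not covered by this advisory

output = subprocess.run(["php", "-v"], capture_output=True, text=True).stdout
version = parse_php_version(output)
if version and is_vulnerable(version):
    print("PHP %d.%d.%d is affected; upgrade after appropriate testing." % version)
```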
Understanding the RAMBleed Exploit [cylance] Side-channel attacks are some of the scariest exploits ever. They don't usually exploit vulnerabilities in code; they exploit the fundamental implementation of computer systems themselves. Therefore, they're often hardware-based. Dynamic random-access memory, or DRAM for short, is one of the most common types of memory found in modern computers used by both consumers and businesses. For example, the memory in an x86-64 based PC, such as one based on an Intel Core i7 CPU, is typically DRAM. The same goes for the memory in popular devices like video game consoles. DRAM is frequently used in the computers we see every day because it can be made high-capacity at limited cost. There's one major physical problem with DRAM, and that's the rowhammer vulnerability. Because of how DRAM (DDR) works, its individual memory cells can leak their charges and interact electrically between themselves. Applications and specific operating system processes are authorized to access only certain parts of your computer's memory. For example, my web browser is supposed to access some memory addresses, and the part of my operating system that executes new applications is supposed to access other memory addresses. Think of four poker players sitting at a table, where each player is only allowed to see what's in their own hand. But if one of the poker players falls off of their chair and towards the table, they can also knock over one of their neighboring players and see which cards are in their hand. Very clever! Dan Goodin did an excellent job of explaining rowhammer attacks a few years ago: "DDR memory is laid out in an array of rows and columns, which are assigned in large blocks to various applications and operating system resources. To protect the integrity and security of the entire system, each large chunk of memory is contained in a 'sandbox' that can be accessed only by a given app or OS process. Bit flipping works when a hacker-developed app or process accesses two carefully selected rows of memory hundreds of thousands of times in a tiny fraction of a second. By hammering the two 'aggressor' memory regions, the exploit can reverse one or more bits in a third 'victim' location. In other words, selected zeros in the victim region will turn into ones or vice versa. The ability to alter the contents of forbidden memory regions has far-reaching consequences. It can allow a user or application who has extremely limited system privileges to gain unfettered administrative control. From there, a hacker may be able to execute malicious code or hijack the operations of other users or software programs. Such elevation-of-privilege hacks are especially potent on servers available in data centers that are available to multiple customers."
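To make the bit-flipping statistics concrete, here is a deliberately crude simulation. It models only the idea that many activations occasionally disturb neighboring cells; real rowhammer attacks depend on DRAM geometry, cache flushing, and physical address mapping, none of which is captured here, and the flip probability is invented.

```python
import random

random.seed(42)
ROW_BITS = 64
FLIP_PROBABILITY = 1e-6        # made-up per-bit, per-activation disturbance rate

victim_row = [1] * ROW_BITS    # charged cells adjacent to the aggressor rows

def hammer(activations):
    """Repeatedly 'activate' aggressor rows; each activation may leak charge."""
    for _ in range(activations):
        for i in range(ROW_BITS):
            if random.random() < FLIP_PROBABILITY:
                victim_row[i] ^= 1      # a disturbed cell flips

hammer(activations=200_000)    # "hundreds of thousands of times"
print(ROW_BITS - sum(victim_row), "bit(s) flipped in the victim row")
```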
The national lab is developing more energy-efficient motors to improve the endurance of legged robots used in disaster response. Steve Buerger is leading a Sandia National Laboratories project to demonstrate energy-efficient biped robots. Increased efficiency could enable bots to operate for much longer periods of time without recharging batteries, an important factor in emergency situations. (Photo by Randy Montoya) Researchers at federal defense and energy laboratories are open sourcing some of the electronics and software for two advanced ambulatory robots in hopes of boosting their ability to handle perilous situations. In a Dec. 16 announcement, the Energy Department's Sandia National Laboratories said it is developing more energy-efficient motors to dramatically improve the endurance of legged robots performing the types of motions that are crucial in disaster response situations. The project is supported by the Defense Advanced Research Projects Agency. Sandia is developing two robots for the DARPA Robotics Challenge, a competition in which robots face degraded physical environments that simulate conditions in natural or man-made disasters. Many of the robots will walk on legs to allow them to negotiate the challenging terrain, according to Sandia. Sandia said its two automatons won't participate in the competition's finals next June, but the lab's more energy-efficient platforms could help any robots entered in the DARPA Challenge extend their battery life. The Sandia Transmission Efficient Prototype Promoting Research, or STEPPR, robot is a fully functional research platform that allows developers to try different mechanisms that perform like elbows and knees to quantify how much energy is used. Sandia officials said the second robot -- named Walking Anthropomorphic Novelly Driven Efficient Robot for Emergency Response, or WANDERER -- will be a better-packaged prototype. The Open Source Robotics Foundation is developing the two robots' electronics and low-level software, and the designs will be publicly released to allow engineers and developers worldwide to take advantage of them. Sandia said the key to the testing is the novel, energy-efficient actuators that move the robots' joints. "The actuation system uses efficient brushless DC motors with high torque-to-weight ratios, efficient low-ratio transmissions and specially designed passive mechanisms customized for each joint to ensure energy efficiency," the Sandia announcement states.
It's been three months since the world was shaken by the brutal murder of George Floyd. The image of a white police officer kneeling on a Black citizen for 8 minutes and 46 seconds is still fresh in America's collective memory. This wasn't the first case of racially charged police brutality in the US. And unfortunately, it won't be the last one either. Racism in this country has deep roots. It is a festering wound that's either left ignored or treated with an ineffective medicine. There's no end in sight to institutional racism in the country, and to make matters worse, this disease is finding new ways to spread. Even Artificial Intelligence, which is said to be one of the biggest technological breakthroughs in modern history, has inherited some of the prejudices that sadly prevail in our society.

Can AI Be Biased?
A few years ago, it would've been ridiculous to suggest that computer programs are biased. After all, why would any software care about someone's race, gender, and color? But that was before machine learning and big data empowered computers to make their own decisions. Algorithms now are enhancing customer support, reshaping contemporary fashion, and paving the way for a future where everything from law & order to city management can be automated. "There's an extremely realistic chance we are headed towards an AI-enabled dystopia," explains Michael Reynolds of Namobot, a website that generates blog names with the help of big data and algorithms. "Erroneous datasets that contain human interpretation and cognitive assessments can make machine-learning models transfer human biases into algorithms." This isn't something far into the future but is already happening.

Unfortunate Examples of Algorithm Bias
Risk assessment tools are often used in the criminal justice system to predict the likelihood of a felon committing a crime again. In theory, this Minority Report-style technology is used to deter future crimes. However, critics believe these programs harm minorities. ProPublica put this to the test in 2016 when it examined the risk scores for over 7,000 people. The non-profit organization analyzed data on prisoners arrested over two years in Broward County, Florida, to see who was charged with new crimes in the next couple of years. The result showed what many had already feared: according to the algorithm, Black defendants were twice as likely as white ones to commit new crimes. But as it turned out, only 20% of those who were predicted to engage in criminal activity did so. Similarly, facial recognition software used by police could end up disproportionately affecting African Americans. As per a study co-authored by the FBI, face recognition used in cities such as Seattle may be less accurate on Black people, leading to misidentification and false arrests. Algorithm bias isn't just limited to the justice system. Black Americans are routinely denied programs that are designed to improve care for patients with complex medical conditions; these programs are less likely to refer Black patients than White patients for the same ailments. To put it simply, tech companies are feeding their own biases into the very systems that are designed to make fair, data-based decisions. So what's being done to fix this situation?

Transparency is Key
Algorithmic bias is a complex issue mostly because it's hard to observe. Programmers are often baffled to find out that their algorithm discriminates against people on the basis of gender and color; an audit of the kind sketched below is often the first way such disparities surface.
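The sketch below shows the kind of group-wise check ProPublica's analysis performed, computing one common fairness metric: the false positive rate per group (people flagged high risk who did not reoffend). The data frame is invented toy data, not the Broward County records.

```python
import pandas as pd

# Toy data: 1 = flagged high risk / did reoffend, 0 = not
records = pd.DataFrame({
    "group":      ["black"] * 4 + ["white"] * 4,
    "high_risk":  [1, 1, 0, 1, 0, 1, 0, 0],
    "reoffended": [0, 1, 0, 0, 0, 1, 0, 0],
})

for group, subset in records.groupby("group"):
    did_not_reoffend = subset[subset.reoffended == 0]
    fpr = (did_not_reoffend.high_risk == 1).mean()
    print(f"{group}: false positive rate = {fpr:.0%}")
```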
Last year, Steve Wozniak revealed that Apple gave him a credit limit ten times higher than his wife's, even though she had a better credit score. It is rare for consumers to discover such disparities. Studies that examine discrimination on the part of AI also take considerable time and resources. That's why advocates demand more transparency around how the entire system operates. The problem merits an industry-wide solution, but there are hurdles along the way. Even when algorithms are revealed to be biased, companies do not allow others to analyze the data and aren't thorough with their investigations. Apple said it would look into the Wozniak issue, but so far nothing has come of it. Bringing transparency would require companies to reveal their training data to observers or open themselves to third-party audits. There's also an option for programmers to take the initiative and run tests to determine how their system fares when applied to individuals from different backgrounds. To ensure a certain level of transparency, the data used to train the AI and the data used to evaluate it should be made public. Getting this done should be easier in government matters; the corporate world, however, would resist such ideas.

Diversifying the Pool
According to a paper published by a New York University research center, the lack of diversity in AI has reached a "moment of reckoning". The research indicates that the AI field is overwhelmingly white and male, and as a result it risks reasserting power imbalances and historical biases. "The industry has to acknowledge the gravity of the situation and admit that its existing methods have failed to address these problems," explained Kate Crawford, an author of the report. With both Facebook and Microsoft reporting a workforce that is only 4% Black, it's quite clear that minorities are not being fairly represented in the AI field. Researchers and programmers are a homogeneous population who come from a certain level of privilege. If the pool were diversified, the data would be much more representative of the world we inhabit. Algorithms would gain perspectives that are currently being ignored, and AI programs would be much less biased.

Is it possible to create an algorithm that's completely free of bias? Probably not. Artificial Intelligence is designed by humans, and people are never truly unbiased. However, programs created by individuals from dominant groups will only help in perpetuating injustices against minorities. To make sure that algorithms don't become a tool of oppression against Black and Hispanic communities, public and private institutions should be pushed to maintain a level of transparency. It's also imperative that big tech embraces diversity and elevates programmers belonging to ethnic minorities. Moves like these can save our society from becoming an AI dystopia.
A student may not need a loan until they attend college, at which point they sign up for a loan through the Federal Government or a private bank. Unfortunately, some students then learn that there is already an existing loan in their name; the student finds out they are an identity theft victim. Identity theft in education affects all sectors, including the government, banks that issue private loans, and the consumer. The damage is substantial: the victim now has bad credit, the loan was not used for its intended purpose, and everyone risks losing out. If a student wants a private loan and their damaged credit prevents that, the bank does not get the expected amount of money, and neither does the educational institution. As a result, the student has to work multiple jobs to attend college, and it takes longer. Not only did the fraudster assume the identity of the student, they most likely did not spend the money on education, which affects the economy. It is crucial for the government, educational institutions and banks to protect personally identifying information. Knowledge-based authentication is one example of how to mitigate identity theft: fraudulent applications can be prevented by asking questions only the real person would know. For students who have already been affected or who want to be proactive, credit monitoring is a consumer solution. Although that is the consumer's responsibility, private and public entities have a responsibility to safeguard their processes.
Two decades ago, Brain, the first boot sector virus, which infected personal computers via the floppy disk, was detected. While Brain itself was relatively harmless, it marked the genesis of the world of computer viruses. This year marks the 20th year of the existence of viruses, Brain having been detected on January 19, 1986. Boot sector viruses, now long extinct along with the floppy disk, held a relatively long reign from 1986 to 1995. Since transmission was via disk from computer to computer, infection would only reach a significant level months or even years after a virus's release. This changed in 1995 with the development of macro viruses, which exploited the macro programming features of popular applications running on early Windows operating systems. For four years, macro viruses reigned over the IT world, and propagation times shrank to around a month from the moment a virus was found to when it became a global problem. As e-mail became widespread, e-mail worms became the next menace, and some worms reached global epidemic levels in just one day. Most notable in this connection was one of the very first e-mail worms, Loveletter, aka I LOVE YOU, which caused widespread havoc and financial loss in 2000 before it was brought under control. In 2001, the transmission time window shrank from one day to one hour with the introduction of network worms (Code Red that year, followed by Blaster and Sasser in 2003 and 2004), which automatically and indiscriminately infected every online computer without adequate protection. E-mail and network worms continue to cause havoc in the IT world. At present there are more than 150,000 viruses, and the number continues to grow rapidly. The biggest change over these 20 years has not been in the types of viruses or the amount of malware; rather, it is in the motives of virus writers. "Certainly the most significant change has been the evolution of virus-writing hobbyists into criminally operated gangs bent on financial gain," said Mikko Hypponen, chief research officer of F-Secure Corp., a security applications vendor based in Helsinki, Finland. "And this trend is showing no signs of stopping." According to Hypponen, indications are that malware authors will target laptop WLANs next for automatically spreading worms. "Whatever the next step is, it will be interesting to see what kind of viruses we will be talking about in another 20 years time – computer viruses infecting houses, perhaps?"
Key things you didn't know about phishing

Phishing is one of the most common forms of cyberattack, fooling people into thinking they're dealing with a trusted organization in order to get them to part with their credentials. But what are the hallmarks of a phishing attack? Atlas VPN has collected some phishy statistics to find out.

It finds that nearly 70 percent of phishers leave the subject line of the email blank, so an unexpected message with no subject should ring alarm bells. When a subject is used, it usually tries to instill a sense of urgency, for example 'Fax Delivery Report' (used in nine percent of cases), 'New Voice Message' (3.5 percent), 'Urgent request' (two percent), and 'Order Confirmation' (two percent).

Business social media site LinkedIn is a popular lure, used in 52 percent of phishing scams worldwide -- a 44-percentage-point jump from eight percent in the first quarter of this year. This is the first time a social media brand has outranked tech giants like Apple, Google, and Microsoft as a phishers' favorite.

Cryptocurrency is another tempting target, Blockchain.com being the most spoofed crypto brand, with 662 phishing websites in the 90 days up to June 22, 2022. Crypto investing app Luno is second on the list with 277 phishing pages, followed by proof-of-stake blockchain platform Cardano with 191.

Amazon is the most frequently impersonated of all the retail brands, with over 1,633 suspicious sites detected as of July 12, 2022, thanks to phishers keen to cash in on interest in Prime Day.

You can find out more on the Atlas VPN blog.
IBM and Duke University Team to Study Heart Disease

Duke University researchers will utilize a powerful IBM SP supercomputer to create models of the heart that they hope will lead to uncovering the causes of, and developing treatments for, life-threatening heart conditions.

Relying on the same IBM technology used in the U.S. Energy Department's ASCI White supercomputer, Dr. John Pormann and the electrophysiology research team at Duke University are creating accurate and complex models of the electrical currents flowing through heart and nerve tissue.

"Using the IBM SP, the Duke team of researchers can access the horsepower needed for our computationally intense heart modeling," said Dr. John Pormann, Research Associate, Duke University. "The simulations made possible on the supercomputer can give the researchers insight into the problems that generate heart irregularities. These simulations can help provide additional information that is difficult to obtain in the lab."

Irregular heartbeats and heart attacks, the leading cause of death in the United States and abroad, are a result of improper electrical impulses flowing through the heart. Complex mathematical computer models, based on lab data, recreate the heart's reaction to various electrical stimuli. Using the SP supercomputer, researchers can change the model's variables, run simulations and determine the heart's reaction to different electrical stimuli.

For realistic computer modeling of the heart, Duke researchers send huge amounts of data to multiple, ultra-fast processors in the IBM SP supercomputer at the North Carolina Supercomputing Center. With 720 processors, this system is one of the fastest computers in the world, ranking 16th on the Top500 list of supercomputers in November 2000. The IBM SP receives and runs multiple researchers' simulations concurrently on different processors. Researchers can simulate parts of the heart, comparing how specific deviations affect heart function, and then incrementally add new complexities to their simulations. Results from various simulations can be compared by running them against each other.

As a result of this research, the Duke Computational Electrophysiology Group has developed realistic computer models depicting normal and irregular heart function. For more information, visit www.ee.duke.edu/~jpormann/CardioWave.html
Routing, a term commonly used in networking, is the process of selecting the most optimal path for a data packet to travel across networks. Broadly speaking, routing can be static or dynamic. In static routing, the route for a data packet is configured manually or entered each time a data packet travels across networks. Dynamic routing, as you may have guessed, is the automatic routing of data packets. In this article, we'll focus only on static routing to understand how it works. Also, there are many ways to add a static route, but in this article we'll talk about how you can do it using a PowerShell cmdlet.

What is Static Routing?

Routing is a complex process, as it involves sending every data packet through the appropriate channels as it traverses from its source to its destination. This is often done based on a set of rules or protocols that determine the most optimal path for a data packet under different circumstances. These rules are contained in a routing table.

What's a Routing Table?

A routing table is nothing but a set of entries that tell a router where it should redirect the packets that come to it. Each entry typically lists a destination network, its subnet mask, the gateway (next hop), the interface, and a metric. Essentially, a router looks at the routing table and transfers packets accordingly. In static routing, these entries in the routing table are fixed and don't change automatically. They are entered manually by an IT admin, and hence can be changed only when the organization deems it necessary.

Just to understand the difference: dynamic routing is the process where a router determines the next best path for a packet based on prevailing conditions such as traffic levels and available communication paths. Also known as adaptive routing, this process is not fixed, and the router makes decisions on the fly.

A mere glance at the two routing options clearly shows that dynamic routing is far more flexible and has the potential to make optimum use of network paths when compared to static routing. In such a case, why even use static routing in the first place?

Advantages of Static Routing

Static routing has many advantages and hence works well in many scenarios. Some of the advantages are:

One of the biggest advantages of static routing is its light impact on existing resources. Since it requires only a minimal amount of CPU usage from the router, it does not add to the cost of operations. All you need is one router that reads the routing table and sends the packets along the predetermined route. The routers don't even have to be advanced for this kind of routing. In all, static routing is the choice if you're strapped for cash and can't afford to spend on smart routers.

The other big reason for companies to opt for static routing is the control they have over the entire routing path. Your IT admin determines the optimal path, and the data packet simply travels along it. At any point, your IT admin will know the path in case something goes wrong. You don't have the same level of control in dynamic routing, as the path is determined by the router and could change each time depending on the prevailing conditions.

Simple to Configure

Configuring a static route is simple and doesn't require a large team. A single individual can handle it for you, especially if you have only a small network. Further, the configuration process is straightforward and can be completed within just a few minutes.
All this means your organization saves time, effort, and money when you choose static over dynamic routing. That said, static routing comes with its share of disadvantages too, which are:

- No flexibility, which can impact the data packets, especially when there are problems along a certain path.
- There's always a possibility of human error.
- No fault tolerance.
- Since the IT admin has to configure each route manually, it can add to his or her workload.

Despite these disadvantages, static routing is used extensively across many use cases. Let's now look at a few scenarios where it can come in handy.

Static Routing Use-Cases

Static routing is used extensively across homes and enterprises today because of its benefits and low overhead costs. Here are a few scenarios where it comes in handy.

- Setting the Default Route

A default route is a configuration that establishes the path a data packet takes when there is no specific entry for its next hop. Often, the default route leads to a router with packet-filtering and firewall capabilities. Static routing makes it easy to define this exit point of last resort.

- Small Networks

Static routing works well for small networks that often have to choose from only a handful of paths. Since this routing consumes only minimal resources and is easy to implement, it is an ideal choice for small businesses.

- Redundancy and Backup

Though large organizations prefer to use dynamic routing because of its many benefits, most of them also use static routes as a backup that data packets can fall back on in case of any issues with the router's dynamic routing capabilities. In this sense, static routing is the failsafe backup for dynamic routing failures.

- Redistribution

Static routing is the easiest choice when you want to transfer routing information from one protocol to another. Also known as route redistribution, this transfer is simple and easy when you choose static routing.

Thus, static routing is predictable, provides complete control, operates with little overhead, and is highly efficient for small networks and organizations. Now that you know all about static routing, it's time to see how you can add or delete static routes on a router. There are many ways to do this, but we will see how you can do it from PowerShell, as this is the quickest and easiest way. Of course, you can follow this method only if you have some familiarity with PowerShell cmdlets and scripting.

Adding a Static Route Using PowerShell Cmdlets

In PowerShell, cmdlets are code snippets that carry out a particular task. From a user's or programmer's standpoint, they save coding time and effort. To add a static route, you can use the route command-line utility (route.exe), which runs in both PowerShell and the Command Prompt. It takes the following parameters:

- /p – Makes the route persistent, so it survives a reboot.
- Command – Depicts your action, which can be add, change, or delete.
- Destination – Specifies the destination.
- Gateway – Specifies the gateway's IP address.
- Mask – Specifies the subnet mask of the destination.
- /f – Clears the routing table.
- if – Specifies the interface.

Now, let's see an example.

route /p add 192.0.2.10 mask 255.255.255.255 10.0.0.1

The above command adds a persistent static route to the host 192.0.2.10 (the 255.255.255.255 mask matches that single address). The next-hop address for packets destined for it is 10.0.0.1. You can always check whether the route was added to the table with the command "route print".
This command displays the routing table, and you can see the entry you just added. To delete the route, simply replace the action parameter (the /p flag applies only to add):

route delete 192.0.2.10 mask 255.255.255.255 10.0.0.1

You can also create a new IP route with the New-NetRoute cmdlet. It takes the following parameters:

- -AddressFamily: This can be either IPv4 or IPv6, depending on how your network is set up.
- -CimSession: Used if you want to add an IP route on a remote computer or session.
- -DestinationPrefix: This is a required value and depicts the destination of the IP route.
- -NextHop: Specifies the next-hop destination for a packet.
- -Protocol: Specifies the type of routing protocol.
- -InterfaceIndex: This is a required parameter and specifies the index value of a network interface.

Here's an example of how you can use this cmdlet.

New-NetRoute -DestinationPrefix "192.0.2.0/24" -InterfaceIndex 23 -NextHop 10.0.0.1

This command adds an entry to the routing table for the interface that has an index value of 23. It specifies 10.0.0.1 as the next hop for data packets destined for 192.0.2.0/24.

The Get-NetRoute cmdlet gets information about one or more IP routes from a routing table. It also comes with many parameters to filter the search results. To get all the routes in a routing table, simply type "Get-NetRoute". You can either filter the results manually or add parameters to do that automatically. For example, if you execute "Get-NetRoute -AddressFamily IPv6", you'll see all the routes that relate to IPv6.

In all, these PowerShell cmdlets ease the task of viewing, adding, and deleting entries in a routing table.

Static routes are part of the routing process that sends data packets from one destination to another. These routes are fixed and don't change based on prevailing factors like traffic and bandwidth. Often, static routes are manually entered or configured by IT admins, and they can be changed or deleted manually when needed. Though there are many ways to add and delete these static routes, PowerShell is the easiest option, thanks to its cmdlets that handle all the required functionality for you. We hope the above-mentioned cmdlets come in handy for you to manage your static routes.
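To recap, here is a minimal end-to-end sketch that adds a route, confirms it, and removes it again with the Remove-NetRoute cmdlet. The interface index (23) and all addresses are placeholder values for illustration; substitute values from your own environment, which you can look up with Get-NetAdapter. Run these from an elevated PowerShell session.

# Add a route for the example network 192.0.2.0/24 via gateway 10.0.0.1
New-NetRoute -DestinationPrefix "192.0.2.0/24" -InterfaceIndex 23 -NextHop 10.0.0.1

# Confirm the entry exists by filtering the routing table on the prefix
Get-NetRoute -DestinationPrefix "192.0.2.0/24"

# Remove the same route once it is no longer needed
Remove-NetRoute -DestinationPrefix "192.0.2.0/24" -InterfaceIndex 23 -NextHop 10.0.0.1 -Confirm:$false

Remove-NetRoute accepts the same identifying parameters as New-NetRoute, and -Confirm:$false suppresses the per-route confirmation prompt, which is convenient when cleaning up routes in a script.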
Jens Stoltenberg, NATO secretary-general, wrote that cyber-attacks "can affect every one of us". The official noted that "For NATO, a serious cyber-attack could trigger Article 5 of our founding treaty. This is our collective defense commitment where an attack against one ally is treated as an attack against all."

The official recalled that "In the United Kingdom, the 2017 WannaCry virus crippled computers in hospitals across the country, canceling thousands of scheduled operations and costing the National Health Service millions of pounds. Even NATO is not immune to cyber-attacks and we register suspicious activity against our systems every day. To keep us all safe, as it has been doing for 70 years, NATO is adapting to this new reality."

According to Stoltenberg, NATO has designated cyberspace a domain in which it will defend itself just as seriously as it does in the air, on land, and at sea. "This means we will deter and defend against any aggression towards allies, whether it takes place in the physical world or the virtual one. We must work ever more closely together and leverage our unique network of allies, partner countries, and organizations. No single country alone can secure cyberspace, but by co-operating closely, sharing expertise, we will not only survive but thrive in the new digital age."

Analysts considered the statement unsurprising, given that the understanding and definition of warfare continues to expand to treat cyber-attacks on other countries as the norm. NATO's official statement was published in Prospect Magazine.
"What exactly is a data scientist?" This question is increasingly on the minds of many in the tech world today. The mix of skills needed to be called a "data scientist" is still very much a work in progress; however, the role may not be as new as you think.

A recent Harvard Business Review article made more than a few headlines with its own headline: Data Scientist: The Sexiest Job of the 21st Century. It would have been hard to imagine the words "data," "scientist" and "sexy" in the same sentence, let alone a headline, even a few short years ago (namely because the title hadn't been coined yet, according to the article). Now it's making it into one of the country's top business magazines. That should speak volumes about the big data ride that businesses big and small are about to take.

Data Science and Today's College Classrooms

Sensing an opportunity, universities have taken notice. Programs are underway at schools across the country to address what has quickly become a booming demand for people who understand advanced analytics and statistics, have solid programming skills and "get it" when it comes to the day-to-day realities of the businesses they find themselves in.

Columbia University has put together its first course with "data science" in the title. In July, the school launched the Institute for Data Sciences and Engineering, according to instructor and course creator Rachel Schutt, a senior statistician at Google and an adjunct assistant professor in the Statistics Department. "I kept hearing from data scientists in industry that you can't teach data science in a classroom or university setting—and I took that on as a challenge," Schutt says in a blog post she wrote in response to questions for this article. "This course creates an opportunity to develop the theory of data science and to formalize it as a legitimate science."

In addition, Cloudera Chief Scientist Jeff Hammerbacher, formerly head of Facebook's data team, and University of California at Berkeley computer science professor Mike Franklin taught an Introduction to Data Science course this past spring. A quick Google search uncovered listings for schools ranging from Stanford and Stevens to Harvard (fall 2013) and the University of Cincinnati that offered "data scientist" courses.

Few, though, use the term data scientist. Most are billed as advanced analytics degrees. This is appropriate; the focus of the job, from a business standpoint, is gleaning actionable insights from data that the business can use to turn a profit, not just play with.

"[For] most companies, their biggest challenge isn't going out and hiring someone who can do segmentation, or clustering or statistical analysis using tools from SAS," says Shawn Blevins, executive vice president and general manager of sales at big-data-as-a-service (BDaaS) provider Opera Solutions. "It's the fact that that's a disconnected activity from the rest of the business."

What companies want in a "data scientist," then, is the mix of skills that will lead to a better understanding of the massive volume and variety of data that is now available for analysis because of tools such as Hadoop and R. "It's this idea of operationalizing [data], putting domain expertise with it and, frankly, calling [BS] on it because it doesn't result in profit," Blevins says.
Data Scientist Jobs Gaining Ground

A search of job boards reveals that companies do want to hire data scientists. While Monster.com listed just 49 openings, Dice had 224 jobs and LinkedIn showed 477 positions. LinkedIn searches for "DBA" and "system administrator" showed 764 and 1,827 positions, respectively, but the data scientist role is gaining ground.

Of course, big data is the reason this job is even on the radar. It's not that people haven't been working with big data sets in the past, or that the idea of big data is new. After all, the three "Vs" (volume, velocity and variety) coined by Gartner's Doug Laney more than 10 years ago still make up the definition of big data today. Companies today are finding that there really isn't any one person in their organization who can deal with all three Vs and put them into a business context.

Given that the primary goal behind any big data project today is a better understanding of your customers, how they interact with your company and its products, and what they want going forward, the skills of a Ph.D. statistician doing regression analysis are just a subset of the skills a full-blown data scientist will be expected to have, says Herain Oberoi, a director in the Business Platform Group at Microsoft.

"The title is definitely new. The data scientist role is not," Oberoi says. "It's part of a continuum. What's happened in the past few years is new technologies like Hadoop, that enable cheap distributed processing and improved capabilities and the ability to do things like statistical programming, [have] become easier, so the bar for getting insights from new types of data has come down."

This means specialist skills are no longer needed to glean specialist insights, at least in the discovery and modeling phases of finding the little nuggets of knowledge that lead to innovative products and services. Those nuggets exist in the massive data streams and data sets now open for examination, says Paul Barth, co-founder and managing partner of big data consultancy New Vantage Partners.

"It's going to be a lot different compared to today, where you throw your questions over a wall and wait six weeks for an answer and then have to say, 'No, that's not what I asked,'" Barth says. Big data analysts, who are the forerunners of and most likely candidates for the data scientist title today, let companies ask and answer questions in quick succession, significantly shortening the mean time-to-answer and thus bringing the power of Moore's Law and analytics to the average business user.

"What kind of person does all this?" Thomas Davenport and D.J. Patil ask in their Harvard Business Review article. "What abilities make a data scientist successful? Think of him or her as a hybrid of data hacker, analyst, communicator and trusted adviser. The combination is extremely powerful—and rare."

Allen Bernard is a Columbus, Ohio, writer. He has covered IT management and the integration of technology into the enterprise since 2000.
SARS-CoV-2 is a biological enemy, but the COVID-19 pandemic can be, and actually has been, fought with digital measures. Advanced analytics and data management made it possible to track the way the coronavirus is spreading and to understand its tactics, its structure, and the features that make it so efficient and dangerous.

There is a curse. They say: May you live in interesting times.
Terry Pratchett, Interesting Times

Currently, leading companies and institutions in the pharmaceutical sector are working on vaccines and effective treatments for the COVID-19 disease. They would not be able to work at such a high pace without the use of digital technologies and data. Because it now concerns business, health, and people's lives at a global scale, never before has fast and reliable access to data been so important, both globally and locally. And thus, never before has the need to digest data, process it and produce meaningful insights been so pressing for so many. Suddenly, everyone is trying to understand how such a disruption is impacting their nation, society, neighborhood, business.

Just to keep it in the background: in 2020, humanity will create and consume nearly twice as much data as in 2018. As IDC indicates in its updated Global DataSphere forecast released in May, "the next three years of data creation and consumption will eclipse that of the previous 30 years". The gain is huge and impressive, even bearing in mind that almost 40% of the data will be generated by entertainment (video streaming). But at a time when we binge-watch more or less interesting series during the lockdown, or hold endless back-to-back teleconferences while working from home, data engineers, developers and data scientists are working with life science experts to find a way to stop the pandemic.

It was Google that paved the way for the use of big data to track epidemics. The idea was to collect millions of users' behaviors and use Google search queries to determine if a flu-like illness was present in a population. Although Google Flu Trends actually failed, the way it worked was inspirational for infectious-disease researchers: Twitter was used in Brazil to get high-resolution data on the spread of dengue fever in the country. Similarly, data from Google and Twitter helped to predict the spread of the Zika virus in Latin America. Now, Google and Apple are working together to help with contact tracing, and governments in the US, the UK, and the European Union are relying on these technologies to fight COVID-19.

With the help of big data, scientists are working on new ways to develop vaccines, with so-called reverse vaccinology being one of the trends. This method of vaccine development is fast, but it requires screening the entire pathogen genome. The "Vaccinology 3.0" approach, as it is also called, is a world of huge volumes of data; it's definitely worth reading about in this truly fascinating article by Deepak Karunakaran.

Also, testing, producing, and even delivering vaccines (it is fundamental to maintain a cold chain from the manufacturer to the point of use, and to keep temperatures within a precise range) require a data-driven approach. Special digital apps are designed and released to manufacturing sites to optimize their operation and limit the number of staff needed physically on site.
Special software modules are being designed exclusively for the optimization of Covid-19 vaccine production, to achieve unprecedented parallelism and satisfy global demand when the vaccine is finally ready.

There are myriad more good examples and practices illustrating the power of data analytics in fighting a pandemic. Governments, health and science organizations, communities and health professionals are conducting analyses to track the virus and simulate future spread scenarios. Likewise, companies are gathering data and using analytical tools to assess the extent of financial difficulties and delinquency risk, perform process planning, adjust the scale of production, gather insights, implement employee protection, make employment decisions or re-create customer support plans. All that data and effort is needed to make informed decisions and manage the business disruptions that have arisen in this crisis.

Data quality and sharing as a secret weapon

Data sharing during the pandemic has become crucial, as The Lancet stressed as early as May this year. Data is constantly being gathered in electronic health records and in laboratories, and we shouldn't let it become dark data. Considering SARS-CoV-2, the insight we could gain from a pooled, publicly available dataset analyzed by researchers in academic institutes and industry is invaluable.

But sharing the kind of data that has been so important in the response to Covid-19 is not as simple as popping it into an email and hitting "Send". It requires an advanced system and strategy. First of all, health data contains numerous personal and sensitive details. This makes it especially difficult to share, even though, or perhaps because, a lot of this data is collected by local hospitals or health authorities. Trying to find a solution, back in April the European Commission established the Covid-19 Data Platform to allow research data to be rapidly collected and widely shared, as part of its ERAvsCorona Action Plan. Check Horizon Magazine to read more about the European Union's fight against the coronavirus.

Crucially, data sharing is important not only for public health institutions, authorities and research centers. According to the 2019 Good Pharma Scorecard, big pharma data sharing around clinical research appears to be on the increase. The biennial study, last released in June 2019, finds that 95 percent of patient trial results are now publicly available within six months of US FDA approval. At 12 months, results are public for 100 percent of new drugs approved since 2015.

As the world is becoming increasingly digital, so is data. Data sharing from the perspective of life science companies is not only a business issue. As we can see, it often bears directly on the welfare of humanity at large. The coronavirus pandemic is not the first one to have hit us, so the sooner we establish data and data-sharing standards, and put quality over quantity, the better for all of us. Today's fast-changing social, business and regulatory landscape forces companies to be always on their toes to continuously meet the shifting criteria of compliance and integrity with all stakeholders and decision makers. To our aid we call advanced analytics and true data-driven decision making.
Royals and espionage have gone together ever since Sun Tzu identified to his royal masters, in the 4th century BC, the different types of intelligence agents he deemed necessary for subsequent military success. In the centuries since, we have seen Julius Caesar's creation of a spy network to keep track of all the plots against him, Francis Walsingham's devastatingly effective work in the service of Elizabeth I, and the original establishment of modern British intelligence as essentially a mechanism to keep Queen Victoria from being assassinated. But there have been a few occasions where royals themselves have dived into the secret theatre…

Queen Moremi of the Yoruba was a 12th-century monarch in the Ife kingdom whose people were under frequent attack from the neighbouring Ugbo, or 'forest' tribe, whose raiders were feared to be unkillable spirits. On one raid, she allowed herself to be captured as a slave, and (it is said) so entranced the forest leader that he made her his wife, unaware of her royal status. From such a privileged position, not only was Moremi able to gather intelligence on the secrets of the Ugbo's military success (they were not in fact spirits, but men wearing cloaks of leaves), she was also able, crucially, to leave the camp unchallenged and make it back to the Yoruba with her secrets. The Ife kingdom then attacked the forest dwellers with fire arrows, exposing the disguises and winning a mighty victory, all based on the intelligence that Moremi had provided. She resumed her position as Queen, and a statue of her, the tallest in the country, can be seen today in Ife state.

Queen Victoria herself understood the expediency of political alliances, which is why she did an extremely thorough job of marrying off her children to various European royal houses. However, this was not only a dynastic move, but an intelligence-driven one. The children would be able to pick up on court gossip, rumours and private affairs occurring across the continent in which Britain had an interest, and all they had to do was write letters to their mother to pass it on. One daughter, also called Victoria, who married Frederick III of Germany, was so successful in this that she felt the need to encrypt her letters, making her mother a forerunner of the great British cryptologists.

Matters did not always turn out so well. Yoshiko Kawashima was a Manchu princess who was raised in Japan and became a spy for the Japanese Kwantung Army. Throughout the 1930s, she was responsible for eliciting information from Chinese military officers stationed in Shanghai and passing it back to Tokyo, and she had a key role in persuading the 'Last Emperor', Pu-Yi, to become the Japanese-controlled puppet ruler of the state of Manchukuo. She was known as both the Joan of Arc of Manchukuo and the Mata Hari of the East. However, her fortunes and successes slid as she became addicted to opium, and Kawashima was executed as a traitor in 1948, after the end of the war.

Lastly, there was the genuinely tragic case of Noor Inayat Khan, an Indian princess whose family had fled to England from France following the Nazi invasion, and who joined the famous Special Operations Executive in 1943, being parachuted into France as the first female wireless operator behind enemy lines. She refused an offer of extraction following the exposure of all other operators in Paris and continued to work in extreme danger for three months, far beyond the expected 'life' of any person in her position.
She was eventually betrayed to the Nazis and shot at Dachau camp, her courage and work 'at the most dangerous position in France' earning her a posthumous George Cross.

The stories of all these women prove that the line between royalty and espionage is not strictly a one-way relationship, and confirm that the question 'And what do you do?' can sometimes have the most fascinating answer…
Virtualization has been transforming IT infrastructure strategies. It began with server virtualization: establishing virtual machines within single systems allowed one server to host dozens of applications, changing the way data centers operate by allowing apps to use system resources on an as-needed basis. This process extended out to storage, network and desktop systems, creating software-defined data centers. Virtualization's rise has also led to more discussion of software-defined WAN systems.

At its core, the idea of virtualization is fairly straightforward. By abstracting the software from the hardware, apps and services can use resources flexibly, letting groups of clustered machines freely share capabilities and allowing each system to be used at a higher capacity. This becomes a bit more nuanced when dealing with the network.

The nuts and bolts of software-defined networks

Virtualizing the network is effectively a data routing strategy that flattens the traditional layers of a data center network so that information can move through the most efficient pathway at any time to reach its destination. When a network is virtualized, the usual routing protocols that depend heavily on the physical location of routers and switches are replaced by logic controllers that automatically identify the resources available within the network and route information accordingly.

In the data center, this functionality is instrumental as a solution to many of the challenges created by server virtualization. When servers are highly virtualized, organizations end up with a situation in which a system with only one or two network ports may be supporting more than a dozen applications. As those apps need to access network resources at the same time, the typical routes can quickly get clogged as one port heads out to one switch or patch panel, which may well be the destination for dozens of physical machines. The data density challenges escalate fast in this situation, and the ability to add a layer of intelligence to network routing through a software-defined controller proves invaluable in breaking down the physical barriers of the network and eliminating longstanding bottlenecks.

In practice, all of these capabilities add up to a network that can be automated and orchestrated to the same degree as virtualized server and storage environments. The same functionality can be applied to WAN systems, something that is particularly valuable as businesses ramp up their investments in cloud computing and other web-based technologies.

Leveraging the SD WAN

WAN infrastructure has long been a thorn in the side of IT leaders, and the problem has only gotten worse in recent years. WANs used to serve primarily one of two functions: provide internet access, or support mission-critical apps and communications between branch offices and data centers. In most cases, a solution such as MPLS would support the essential data workloads, and a broadband plan would handle day-to-day data. Effectively, businesses maintain multiple layers of their WAN depending on the data type, and must also manage dedicated network controllers that are programmed with the logic needed to send different information through the various links. Effectively balancing traffic between these distinct connectivity options has become more difficult as companies depend more heavily on cloud computing and other web-based technologies.
All of these apps and services, including video and voice systems, depend on the WAN to get the job done. As these services play a larger role in enterprise operations, businesses must develop more flexible WAN optimization strategies to make sure every user has access to the bandwidth and security features needed to work effectively.

Without virtualization, identifying which cloud, video, voice and data traffic should go through which WAN channel would require manual deployment and programming of network controllers. Supporting data delivery over the WAN would also require frequent updates to routing strategies as new apps are deployed, workflows change or company policies shift. Software-defined WANs use hosted network controllers to automate route optimization within the WAN, allowing IT teams to effectively take a step back while the virtualized WAN setup balances resources and priorities across various services.

Driving revenue creation through SD WANs

A virtualized WAN setup can create value in diverse ways, including:

- Optimizing hardware resources to eliminate unnecessary and expensive bandwidth upgrades.
- Removing management and hardware overhead that the IT team would otherwise have to deal with.
- Freeing WAN resources to be used in the most flexible way possible instead of depending on rigid pre-programmed routing guidelines.
- Maximizing the potential of distinct WAN connections by allowing data to use the most effective network link at any moment.

Software-defined networking is all about adding a layer of intelligence to connectivity systems. Businesses that want to glean the greatest possible ROI from such investments can use custom network solutions to mix and match various services to best meet their operational demands. Managed services providers are particularly valuable in this process, as they have the combination of expertise and technology partnerships needed to help organizations find the right blend of WAN options for their specific needs.
As more IT functions are moved offshore to developing countries, there are ways for IT workers in developed countries to improve their chances of staying employed, according to a report from the Association for Computing Machinery (ACM).

Savvy students and IT workers already know they should obtain a strong educational foundation, learn the technologies used in the global software industry and keep those skills up-to-date throughout their careers if they want to keep their jobs. But they need to adopt other strategies if they want to remain in the technology field in the long term, according to the ACM, an international association of scientists, academics and other professionals involved in advancing IT. These strategies include developing good teamwork and communication skills, getting management experience and becoming familiar with other cultures. IT workers can also choose jobs in industries and occupations less likely to be automated or sent to a low-wage country, such as positions that require discretionary judgment or knowledge of trade secrets, the ACM's Job Migration Task Force says in a report titled "Globalization and Offshoring of Software."

One surprising conclusion of the report is that it's not just lower-skilled jobs that are moving offshore: high-level research is also moving from Europe and the United States to India and China, as improvements in graduate education systems in those countries increase the number of qualified researchers. However, the report says, governments in the United States and other developed nations can ensure that good IT jobs continue to be created through policies that promote research and development, improve education, enable foreign scientists and technologists to be employed in these countries, and encourage fair trade.
Welcome to Terminology Tuesday! Where we explain some commonly misused or misunderstood terms and phrases! This week's term is Operating System (sometimes referred to as OS).

An operating system is installed on nearly every consumer computer today. It's needed to run the other programs and applications we're familiar with, and it performs the most basic tasks in the computer, such as recognizing input from the mouse and keyboard and sending output signals to the monitor.

There are different types of operating systems as well. The most popular are Windows by Microsoft and OS X by Apple, but there are a host of others, such as Chrome OS by Google and LynxOS by LynuxWorks.

Finally, it's important that we don't confuse these operating systems with other programs that these companies make. For example, Microsoft Office is a set of programs made by Microsoft that is typically associated with the Windows operating system, but these programs can be installed on Apple's OS X as well. In the same way, Google Chrome is a program made by Google, but it is different from their Chrome OS. See what I'm getting at? It's important to remember these differences so that we can all be on the same page!
Imagine two groups at war. One defends against every attack as it comes. The other anticipates threats before they happen. Which is more likely to win?

The same can be said about cybersecurity. Responding to attacks as they happen is important, but it's even more vital to anticipate the inevitability of a cyberattack and adjust your tactics accordingly. Below are a handful of helpful tips on how to prevent cyberattacks with this strategy in mind.

Fear can be a powerful motivator, and when it comes to cybersecurity, there is plenty to be afraid of, including malware attacks, phishing scams and data breaches. Unfortunately, many web users are unaware of these threats or foolishly assume cyber crooks won't target them. Do your best to instill a healthy sense of paranoia by hosting cybersecurity training sessions. Cyber threats are lurking everywhere; by teaching your employees to stay sharp, you can drastically reduce your chances of becoming a victim.

Bring on the Bots

Even with routine training sessions, employees can sometimes miss red flags and allow hackers to run amok on your network. Your staff is only human, after all. But what if you could bring on an artificial intelligence platform to help you resist the hackers? That's exactly what some organizations are doing. By adopting AI in the fight against cybercrime, businesses can now block cyberattacks with greater precision. Admittedly, these solutions do cost a pretty penny, so research them thoroughly to make sure they're right for you before committing to anything.

Staying Up to Date

While movies and the media tend to portray hackers as malicious super-geniuses hell-bent on going after the most challenging rivals, the truth of the matter is that most cyber crooks take aim at easy targets. And one of the easiest targets in the world is a computer or device running outdated software and plug-ins. Software developers consistently release security patches to eliminate defense gaps. When you ignore software update prompts, you increase your chances of being hacked.

It will also benefit your organization to stay up to date on all the latest cybersecurity news. Take a little time each week to read about data breaches, security trends and common vulnerabilities. After all, that's what hackers are reading every day!

Preparing a Contingency Plan

The military philosopher Sun Tzu said, "He who knows when he can fight and when he cannot, will be victorious." Once your network has been breached, your fight against cybercrime shifts focus to responding to the incident. As such, companies with an incident response plan will fare much better than organizations without direction in this regard. Invite leaders and managers throughout your organization to help craft an incident response plan that includes reporting the incident, isolating the device or network, preventing secondary attacks and expunging the threat from your systems. Remember to share this plan with your employees during your next training session.

Establishing a Safety Net

As you might expect, recovering from a cyberattack or data breach is very expensive. In addition to covering the cost of getting your network back up and running, you may also need to foot the bill for record recovery, credit monitoring and (heaven forbid) legal fees. To prevent cyberattacks from crippling your finances, invest in cyber threat insurance from CyberPolicy. Get your free personalized quote today!
A map as we know it does not tell the whole story; it's missing classifications of what makes up the land, such as water, types of forest, snow, grasslands, and so on. Interplay® can be used as a land classification system, able to clearly define boundaries between land classes. These can then be graphically mapped out and scaled to any geographic location.

Land cover classifications can be used to understand the environment as a whole, including the availability of habitat, contributors to climate change, pollution and chemicals, the frequency of natural and urban settings, and general monitoring of the ecosystem. To gain an understanding of how to use a specific environment, an accurate land cover classification is key.

Interplay is able to take an inbound stream of images (satellite or aerial photos) and classify the land types according to any trained data set. Because this leverages AI, land cover analysis can be done at huge volumes and repeatedly, showing change in land cover over time (for example, as forests are cleared, flood damage occurs, or industrial areas are built out). These tools are immensely valuable to the intelligence community as well as non-profit environmental organizations.

Data security is paramount for Interplay within run-time production environments. As needed, Interplay can function on stand-alone servers within SCIFs (Sensitive Compartmented Information Facilities) or other sensitive facilities.
What is the Surface Web?

Also known as the Visible Web, "Lightnet", or Indexed Web, the Surface Web is everything that you can find on the regular World Wide Web. It contains the pages that are marked as indexable, making them readily available to searchers on any search engine's results page. According to worldwidewebsize.com, "The Web contains minimum 5.28 billion Indexed pages" (Wed, 28 Nov '18). Interestingly enough, the Surface Web is only approximately 10% of the whole World Wide Web. Examples of the Surface Web include Facebook, YouTube, Wikipedia, regular blogging websites, and basically everything that we can see on any search engine's results page (SERP).

Index or NoIndex?

By default, a web page is set to "index". However, some pages have no need to appear in the SERPs or to be accessed directly by others: for example, thank-you pages and a company's admin and login pages. Primarily, the pages that are intended for specific people, or that only appear after accessing a particular page on your website, are set to "noindex". The pages shown in search engines serve the purpose of being directly accessible and are intended to rank well, which is why they need to be indexed. But some internal subpages, like those mentioned above, are merely there to support the main pages, and that's why they are set to "noindex". This tells the robots and crawlers of search engines to stop right there; a typical way to do it is a robots meta tag in the page's HTML, such as <meta name="robots" content="noindex, nofollow">. Furthermore, "nofollow" disallows them from following any links on the page. This needs to be set internally.

What is the Deep Web?

In essence, there are pages that are easily reachable but not present in the results pages of any engine. These types of pages belong to the deep web. You can access them by their links (if you have them) or by visiting the page they are connected to. Hence, you can end up on the deep web simply by accidentally clicking on a link. The Deep Web, Hidden Web, or Invisible Web contains what is hidden behind HTTP forms. It furthermore includes online banking pages, medical and financial records, personal files, and so on, which are generally secured by a paywall or login and can only be accessed from specific pages. However, some engines may surface the hidden files and pages of this part of the deep web; Deep Web Technologies, DeepPeep, Intute, Ahmia.fi, and Scirus are a few of those search engines.

What is the Dark Web?

You may have come across this name, mostly while browsing through Reddit or discussion sites like Quora. Some pages on the internet are not that simple to land on. While the deepnet covers up to 90% or even more of the WWW, there are sites that are not just set to "noindex" but are also intentionally hidden from any link and from all standard search engines and browsers. Although they fall under the hidden web, this sliver of the darknet, around 0.1% of the web, is only accessible through particular kinds of software and tools. This "blacknet" area generally hosts illegal content: someone's personal data, drug trafficking, and other illicit activities. Law enforcement therefore regularly shuts down and prosecutes sites and people performing illegal activities on the darknet.

Overlay networks can help you get to this part of the invisible web. You need special software to access the blacknet or dark web because much of it is encrypted, and most of it is hosted anonymously. The currently most popular browser is Tor, and its community is often referred to as OnionLand.
Other popular networks, such as Freenet, I2P, and Riffle, are operated by public organizations and individuals, and may also include small, friend-to-friend, peer-to-peer networks.
If you have completed the first lesson of this course, you are well aware that personal data has no place in the public domain. So how does it end up there? There are numerous ways, and not all are under your control. For example, a cybercriminal can steal information from the server of an online store or hotel if it is not secure enough. This might include your name, home address, birthday, and even passport details.

Information that you consider to be personal can get accidentally leaked by relatives, friends, and even ordinary acquaintances. For instance, a pal might absent-mindedly post a stupid photo from a bachelor party, which could potentially ruin your reputation. They might even tag you, so that everyone in your list of friends, including your boss, will see it.

But far more often we spill personal information ourselves, through carelessness or ignorance. You might, for example, post some silly photos on a social network, forgetting that some colleagues follow your page. However, there are less obvious cases. Do you sometimes apply for loyalty cards and take part in promotions? Are you OK with giving your phone number and email address during registration? Did you read the terms and conditions? It's bound to be written there that the company can share this data with anyone. And one day it could fall into the hands of spammers.

Spammers might also get your contact details from ad sites like Craigslist. They don't need to wheedle anything out of you; you put it there yourself!

Decided to send some sample contracts to a client via free Wi-Fi in a cafe or hotel? Your work email password might get intercepted and used to penetrate the internal network of your company, where the cybercriminals will find plenty of juicy fodder.

All these are instances when we ourselves are sloppy with personal information. But sometimes such data is systematically targeted and stolen. It can be particularly nasty if an attacker gains access to a social media account that you actively use. They will get their hands on a treasure trove of personal data, such as information about your interests and family contact details, which are very handy for phishing or for extracting money while posing as you. Or personal correspondence, which can be turned into a blackmail tool.

But an even greater threat is having your main mailbox hijacked. Armed with this, cybercriminals can reset the passwords for all the services you use and take full control of them. So they avidly prey on mail accounts, trying to brute-force passwords or steal them from you or from a company server. Want to know how to protect this data? We reveal all in the next lesson.

You receive an email: an online store is offering a 20% discount. You follow the link to the website and see a form asking you to log into your Instagram account. What do you do?
If you have your laptop, you can work from just about anywhere there's an internet connection to your company servers. Out on a sales call and need to log in to grab your proposal? Need to get to your email from home? Updating your project timelines while you're waiting at the airport for your next flight? No problem!

No problem unless… Unless you're accessing your company network through a Virtual Private Network (VPN) connection, you're creating an open door for brute-force cyber attacks that can compromise your information and leave a mess of problems in their wake. It might seem easier and cheaper to leave VPN out of your cyber security plan, but if the "bad guys" get through, you could end up with legal issues with clients, downtime that halts business operations, and unexpected costs to stop and remediate the infiltration.

Whether you're working from home or traveling for a seminar, safe and reliable remote access to your business's local services is a reasonable expectation. VPN makes this possible by allowing you to connect to internal servers without compromising the safety of your network.

Breaking Down VPN

A VPN is a method of remotely connecting to servers by simulating a private network over a public network. More simply put, it allows you to connect safely to your work network from a remote location over private or public Wi-Fi, with added layers of protection. Through VPN, your computer will appear as if it is at the office, even if you are at home or traveling, without compromising your network's safety. It's virtual because it's not a physical space, but rather a simulated platform through which information is passed. It's private because only you, with granted authentication, have access to your internal services. It's a network because it provides the virtual space for the connection to take place.

How Does VPN Work?

VPN maintains security by creating a pathway through a small hole in your firewall, forcing connections to be authenticated and encrypted. Authentication helps ensure that only those with permission are allowed to connect. Encrypting the traffic being sent and received helps ensure that nobody else can eavesdrop on your communications. Because VPN traffic is encrypted, it's more difficult for hackers to intrude on your internet activity when you're on public Wi-Fi. Working from home is a slightly different situation, but even on your home network, your privacy is not assured unless you're using VPN.

Do We Need VPN if We Use Port Forwarding?

If you're using port forwarding to access your servers, you're connecting directly through your firewall. This method, while simple and convenient, forgoes most of the security offered by your firewall, which is a very important layer of protection. If you can get through your firewall, so can a hacker. That's why VPN is a better approach that also allows for more functionality. Additionally, using VPN allows connectivity to all of your internal services, not just a portion of them. You might think that you're saving money by using port forwarding, but you'll be stuck with bigger costs if you have a data breach. VPN is the most effective, low-risk method to safely access your server remotely while ensuring protection from malware and viruses that could potentially compromise your system.

VPN and Cyber Security at Accent

Here at Accent, VPN is part of our layered approach to cyber security.
We work with clients to determine the best options that will balance your employees’ need for remote access with the need to manage cyber risk. If you’re wondering if you’re missing a layer of security, like VPN, contact us for a security assessment or just give us a call at 800-481-4369.
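To make the authenticate-and-encrypt idea concrete, here is a minimal toy sketch in Python. It is an illustration only, not a real VPN: it assumes a pre-shared key and uses the third-party cryptography package’s Fernet recipe, which bundles encryption with integrity checking.

```python
# Toy illustration of the two guarantees a VPN tunnel provides:
# authentication (only holders of the shared key can produce valid
# traffic) and encryption (eavesdroppers see only ciphertext).
# Requires the third-party 'cryptography' package. This is a sketch,
# not a real VPN implementation.
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()   # pre-shared between laptop and office gateway
tunnel = Fernet(key)

# "Laptop" side: encrypt a request before it crosses public WiFi.
packet = tunnel.encrypt(b"GET /files/proposal.docx")

# Anyone sniffing the WiFi sees only opaque bytes.
print(packet[:24], b"...")

# "Office" side: decryption doubles as authentication, because a
# packet forged or tampered with by an attacker fails verification.
try:
    print(tunnel.decrypt(packet))
except InvalidToken:
    print("rejected: not from an authenticated peer")
```

A real VPN negotiates keys per session and tunnels whole IP packets, but the two guarantees it provides are the same ones shown here.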
Some counters are configured to count a duration, rather than a number of events. Basically, such a counter increments by the number of active TBF, each second. For example, suppose that during 5 seconds the number of simultaneous DL TBF is: 5 TBF, 4 TBF, 5 TBF, 6 TBF, 3 TBF. Over these 5 seconds, the counter will print: 5+4+5+6+3 = 23 seconds. This example assumes a counter sampling granularity of 1 second. The granularity could be less than 1 second, or more, but that doesn’t change the principle. ROUND(10x)/10 means the counter is actually displayed with one decimal when you retrieve it from the OSS. I don’t know why they put that in the counter definition; it’s information which is useless to the user, as far as I remember. If there is a decimal, it means the sampling granularity is probably less than one second.
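In code, the accumulation and the one-decimal display look like this (a small Python sketch using the sample values from the example above; the per-second sampling is assumed):

```python
# Sketch of how a duration-type counter accumulates, assuming a
# 1-second sampling granularity. The sample values are the ones
# from the example above.
samples = [5, 4, 5, 6, 3]   # simultaneous DL TBF observed each second
counter = sum(samples)      # 5 + 4 + 5 + 6 + 3
print(counter)              # 23 "TBF-seconds" over the 5-second window

# ROUND(10x)/10: the raw value is shown to one decimal place by the OSS.
x = 23.46
print(round(10 * x) / 10)   # 23.5
```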
How long has AR/VR/Mixed Reality been around?

Although VR, AR and MR have garnered more attention and gained more momentum in recent memory, the idea (and rudimentary executions) of these technologies has actually been around for more than 150 years. Some would say that panoramic paintings and stereoscopes are early interpretations of Virtual Reality. However, contemporary VR as we’ve come to know it really began taking shape in the 1980s.

What are some of the most exciting use cases these technologies will empower?

In truth, there is practically no end to the exciting applications that these world-altering technologies bring to the table. Virtual and augmented realities can support healthcare workers, educational institutions, tourism and governmental organizations, automotive businesses and more. Advanced medical imaging could help surgeons become more accurate, immersive remote training can guide greater success in production across a number of industries, and more immersive learning opportunities can help create better environments for students across the globe. The only limit is our imagination, and the infrastructure we provide to enable these use cases.

What do these technologies have to do with the data center?

As with many cutting-edge technologies, AR, VR and MR have everything to do with the data center! The data center serves as the foundational infrastructure for these capabilities, which place heavy data and processing demands on the systems that maintain their real-time feedback and functionality. These are technologies that rely on maximized speed and minimized latency, which means that edge data centers are crucial: they let data transfer, processing and storage happen as close to the user as possible. This cuts down on lag, jitter and the other experience-damaging effects of added latency. (A back-of-envelope latency sketch follows at the end of this FAQ.)

What organizations can benefit most from incorporating AR or VR applications?

Any and all organizations can benefit from AR, VR and MR. Actually, the question today is less about who can benefit and more about how we can ensure everyone can leverage this technology to the extent they want to. Edge data centers and robust networks are in place today, but continuing to problem-solve for more Extended Reality applications and more powerful infrastructure will be key to making these technologies accessible and utilized to their fullest extent.

How is 1623 Farnam supporting technology development in this sector?

To drive ongoing collaboration and advancement in the field of Extended Reality, 1623 Farnam is participating in an Augmented Reality and Virtual Reality (AR/VR) Developer Challenge. This challenge is in conjunction with the UNMC – iEXCEL, University of Nebraska, Greater Omaha Chamber, Omaha Metropolitan Community College, AIM Institute, KC Digital Drive, T-Mobile and US Ignite. To learn more about this event, which brings together innovative ideas, cutting-edge concepts and a problem-solving ecosystem of AR/VR solutions, read our press release here.
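Why does proximity matter so much? A back-of-envelope Python sketch (with purely illustrative distances, not measured figures) shows the floor that physics alone puts under round-trip latency:

```python
# Back-of-envelope: the best-case round-trip time that distance alone
# imposes. Light in optical fiber covers roughly 200 km per millisecond;
# the distances below are illustrative assumptions, not measurements.
FIBER_KM_PER_MS = 200

def min_rtt_ms(distance_km: float) -> float:
    """Lower bound on round-trip time for a given one-way distance."""
    return 2 * distance_km / FIBER_KM_PER_MS

for label, km in [("edge data center", 50), ("distant region", 2000)]:
    print(f"{label:>16}: >= {min_rtt_ms(km):.1f} ms round trip")

# Output: 0.5 ms vs 20.0 ms, before any processing time is added,
# which is why placement close to the user matters for real-time AR/VR.
```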
Today’s tech can connect you with anyone, anywhere in the world. By the minute, our world is getting more connected and increasingly in tune with our modern, on-demand, on-the-go lifestyles. Countless organizations and individuals are responsible for the innovations fueling this transformation. As Father’s Day approaches, we thought we’d look at some of the forefathers behind these trends. From the creator of the World Wide Web to the first chat platform, join us in celebrating some of the #1 ‘IT Dads’ who paved the way for technological innovation as we know it today:

Steve Wozniak, Apple

At 66 years old, Steve Wozniak has been named the “Original Geek” by Fortune. Also known as “Woz,” he co-founded Apple with Steve Jobs and invented one of the pioneering personal computers, the Apple II. Since his early days with Apple, Woz has seen the evolution of Silicon Valley’s culture and the transformation of the Apple computer. Over the years, he has truly lived up to his reputation as an IT industry mentor with his life philosophy of happiness and fun. In fact, he wants to be remembered for the quality of his work and the intrinsic value of his achievements; in a recent interview he noted that he’d like “Happiness=smiles-frowns” engraved on his tombstone. Woz’s spirit of geeky innovation is inspiring and has encouraged many after him to create for the sake of creating.

Tim Berners-Lee, inventor of the World Wide Web

Tim Berners-Lee, engineer and computer scientist, proposed the World Wide Web in March of 1989. However, his journey to industry pioneer was not always clear-cut. Like all things, the origin of today’s web started with a great idea. While working as a contractor, Berners-Lee proposed a project to share information among researchers across the world. He designed the first web browser and introduced the concepts of nodes and hypertext as well as domains. In 1999 Berners-Lee stated, “the web is more of a social creation than a technical one. I designed it for social effect – to help people work together – and not as a technical toy.” His great idea turned into an invention that changed the world.

Doug Brown and David Woolley, inventors of the first online chat system

In 1973, Doug Brown and David Woolley created the first online chat platform, called Talkomatic, on the PLATO computer system at the University of Illinois. Over two decades before the World Wide Web, the technology was designed for the university’s computer-assisted instruction system. Brown and Woolley’s invention laid the groundwork for the rise of online forums, message boards, email, chat rooms, beloved ’90s instant messaging, remote screen sharing, gaming and the creation of the online community.

Martin Cooper, inventor of the first cellular phone

Martin “Marty” Cooper is an American engineer and the inventor of the first cellphone. Cooper invented the first handheld cellular mobile phone while working at Motorola in 1973, with the intention and desire for people to talk on the phone away from their cars (competitor AT&T was focusing on developing car phones at the time). Cooper’s story is one of innovation and creativity, but also one of great ambition. Cooper rose through the ranks to become VP and corporate director of research and development at Motorola. He is not only considered the “father of the cellphone,” but also a forefather of IT and communications.
In an interview with CNN, Cooper said that the first cellular call was made by Cooper himself, ironically (or not so ironically), to rival Joel Engel, head of the cellular program at AT&T at the time. Cooper is 89, still witty, and even tweets: @MartyMobile.

Fernando Corbato, inventor of the computer password

Fernando Corbato, former computer science professor at MIT, pioneered the first computer password in the early 1960s to “protect against casual snooping.” Since then the password has evolved tremendously. In light of all the recent password breaches and the measures needed to keep consumers and businesses safe, Corbato is reluctant to take credit and has noted that passwords have become a nightmare. It goes to show that shared logins and password theft are not a new phenomenon.

Larry Page and Sergey Brin, inventors of Google

Last but not least, Larry Page (named one of the 30 most influential people in tech) and Sergey Brin began Google in 1996 as a research project while they were PhD students at Stanford University; the company itself was founded in 1998. Originally, Google was called BackRub, based on its ability to analyze the back links of websites. Page and Brin renamed their search engine “Google” after misspelling the word “googol,” which is the number 10^100 (a 1 followed by 100 zeros). According to a 2003 article on the founders, “the name reflects Google’s mission to organize the limitless amount of information on the Web.” The most appealing part of the invention, according to Brin, was that it tackled the web and represented the power of human knowledge. With their PhD project, Page and Brin paved the way for future collaborations between innovators.

Thanks to these great innovators, and many others, we now have the world at our fingertips. So this Father’s Day, as you reflect upon all the men who shaped and influenced your life, take time to also appreciate the forefathers of IT. Without them, so much of what we take for granted daily would not be possible, including many of the gifts you may give and get this weekend!
To celebrate her historic visit to Bletchley Park, the home of the Codebreakers of World War II, Her Majesty The Queen has issued a challenge to schoolkids: break a book of war-time ciphers.

The Codebreakers worked in secret throughout World War II, intercepting and decoding encrypted German communications to aid the war effort. General Dwight D. Eisenhower described the information obtained in this way as of ‘inestimable value,’ claiming that it was directly responsible for saving thousands of British and American lives.

Now, children from throughout the UK are invited to get a taste of the sort of work carried out at the Park with a codebreaking challenge of their own. The Queen has issued a code book, dubbed the Agent X Code Book Challenge, which contains a sample of the sort of encryption system used by the Government Code & Cypher School at Bletchley Park during the war, along with seven messages encrypted using the scheme.

The challenge, which is aimed at UK residents aged 13 to 16, is to decrypt these messages. The seventh is the key: it contains a question which must be answered in order to enter the contest. Those who work out the secret code required to unlock the secrets of the book are asked to e-mail the answer, in unencrypted form, along with their full name, age, date of birth, parent or guardian’s name, telephone number, and full address to [email protected] before 1800 on Thursday the 18th of August. From the correct entries, a single winner will be picked to receive what is described as ‘a small prize’.

If you fancy having a go, the code book can be downloaded in PDF format.
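For a flavour of the genre (this is the classic Caesar shift cipher, shown purely as an illustration; the Agent X book uses its own scheme, which is not reproduced here), here is a short Python sketch:

```python
# A Caesar shift, one of the simplest classical ciphers, shown purely
# to illustrate the genre; the Agent X book uses its own scheme.
import string

def shift(text: str, k: int) -> str:
    """Shift each letter k places through the alphabet, keeping case."""
    up, lo = string.ascii_uppercase, string.ascii_lowercase
    table = str.maketrans(up + lo, up[k:] + up[:k] + lo[k:] + lo[:k])
    return text.translate(table)

cipher = shift("MEET AT BLETCHLEY PARK", 3)
print(cipher)            # PHHW DW EOHWFKOHB SDUN
print(shift(cipher, -3)) # recover the plaintext

# With an unknown key there are only 25 shifts to try, so brute force
# works; the challenge's ciphers will demand rather more ingenuity.
```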
New Physics AI Could Be a ‘Snowball that Causes a Quantum Computing Avalanche’ (NextWeb)

Scientists in the quantum computing field may have found a eureka moment in recently published physics research conducted by an international team with representatives from Cornell, Harvard, Université Paris-Sud, Stanford, the University of Tokyo and other centers of academia. In a paper titled “Using Machine Learning for Scientific Discovery in Electronic Quantum Matter Visualization Experiments,” the team explored a 20-year-old hypothesis that could lead to the creation of a room-temperature superconductor. Optimists might consider this work “a snowball that could cause a quantum computing avalanche.”

There’s a physics problem with superconductors called “cuprates” that nobody has been able to figure out yet: as a cuprate’s temperature is lowered to the point where it can conduct, it enters a mysterious state called a “pseudogap,” in which researchers aren’t able to determine what’s happening. The team created a machine learning paradigm that could decide between two hypotheses regarding the pseudogap: 1) the cuprates’ pseudogap is the result of strong interactions between particles; or 2) it’s the result of weakly interacting waves. The AI-generated result indicates that the behavior of the pseudogap more closely resembles the particle-like hypothesis than the wave-like one. Unfortunately, there was no option “C,” since the neural network could not generate its own hypothesis, so this work isn’t definitive by any means.
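The general recipe behind such studies can be sketched as follows. This is a hedged Python illustration with random stand-in arrays, not the paper’s actual pipeline or data: train a classifier on examples simulated under each hypothesis, then ask which class the experimental measurements resemble.

```python
# Minimal sketch of two-hypothesis discrimination: train on labeled
# simulations of hypothesis 1 ("particle-like") and hypothesis 2
# ("wave-like"), then classify experimental images. The arrays here
# are random stand-ins; the real work simulates electronic-structure
# images under each theory.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
sim_particle = rng.normal(0.5, 1.0, size=(100, 64))   # simulated class 1
sim_wave = rng.normal(-0.5, 1.0, size=(100, 64))      # simulated class 2

X = np.vstack([sim_particle, sim_wave])
y = np.array([0] * 100 + [1] * 100)
clf = LogisticRegression(max_iter=1000).fit(X, y)

experiment = rng.normal(0.4, 1.0, size=(1, 64))       # stand-in measurement
print(clf.predict_proba(experiment))   # probability for each hypothesis

# Note the built-in limitation the article mentions: the classifier can
# only choose between the hypotheses it was trained on; there is no
# option "C" where it proposes a new one.
```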