The cost of Internet of Things (IoT) sensors has decreased remarkably over the past decade, opening the door to a renewed push for smart cities. With worldwide spending on IoT predicted to surpass the $1 trillion mark by 2020, the technology has attracted increasing attention across various sectors.
Tied in with that strong push towards smart cities, more cities are looking to develop in-depth frameworks through the use of technologies such as IoT. This gives rise to an abundance of opportunities for governments and businesses to implement further smart city initiatives, such as smart street lighting, that drive efficiency and a better quality of life for citizens.
Connectivity is a fundamental aspect of a smart city, and implementing a smart network nationwide is a challenge. Which brings us to street lamps.
With the number of street lights globally set to grow to 363 million by 2027, it makes sense to consider them as a platform to kick-start the smart city network. With street lamps typically dotted at walking distance from one another, we can leverage existing street lighting infrastructure to affix smart sensors instead of constructing a smart network from scratch.
Incorporating IoT sensors within smart street lighting can offer benefits such as:
- Environmental monitoring: Sensors built into street lights monitor real-time environmental factors such as air quality, UV levels and noise levels. Monitoring can be targeted at specific locations or run citywide.
- Traffic monitoring: Traffic sensors in street lighting to provide more precise traffic updates and congestion levels.
- Smart parking and metering: A variety of sensors can be used to track parking space availability, records for fee collection, and vehicle information.
- Public Wi-Fi and HD video surveillance: High-bandwidth wireless networks can provide citywide Wi-Fi access and meet the bandwidth requirements of HD video and GPS for emergency response.
Through these solutions, governments and citizens can be kept informed in real time. Furthermore, governments and businesses can use the data to tackle issues such as public safety and traffic congestion, and to enhance emergency response. For instance, data transmitted from HD video surveillance could inform emergency units of a casualty through facial recognition, allowing the casualty to be identified remotely amongst the crowd.
Integration and interoperability
While governments and city planners are aware of the benefits of a smart sensor network, many face challenges in its implementation, particularly in the integration of solutions and interoperability. This is mainly due to the myriad of technologies and solutions involved, which must complement each other.
To ensure optimal outcomes, both private and public parties need to work together to bring the right set of capabilities and ensure the various smart platforms can be successfully implemented. These partnerships can further unlock new innovations and opportunities – something as simple and apparent as the extended use of street lamps for smart networks. This will ensure that smart cities do not end up as a mix of mini-ecosystems that only work in silos.
Aside from public-private partnerships, governments also play a role when it comes to implementing regulations and policies within a smart city. In doing so, they enable the objectives of the smart initiatives to be met while minimizing misuse.
In the case of smart parking solutions, sensors are embedded in or on top of pavements to collect data such as space availability and vehicles' parking duration for automatic charging. The aim is to automate processes and remove redundant manual work. Regulations can be imposed to prevent issues such as illegal parking, and to ensure that parking authorities still have control over the parking situation despite reduced physical surveillance.
Privacy and data hacks
While great strides have been made in smart city developments, data privacy and cyber attacks are still a key concern. The focus of smart city initiatives tends to be solely on the implementation of the solutions while overlooking the cyber security aspect. As the complexity of cyber threats continuously increases, it is even more important to prioritize cyber security in smart city planning – particularly for smart street lights and sensors in the public space.
As cities continue their push towards being a smart city, we look forward to more possibilities beyond the horizon. However, greater involvement of stakeholders will prove essential to drive innovation and collaboration to realize smart city goals.
For all we know, the springboard to smart cities could very well be right under our noses – perhaps something as simple as a street lamp.
Ever wonder where the phrase “null and void” or “breaking and entering” came from? Blame the French.
From Drafting Contracts: How and Why Lawyers Do What They Do, by Tina L. Stark, Aspen Publishers – Wolters Kluwer, 2007, p. 204, based in turn on The Language of the Law, David Mellinkoff, Little, Brown & Co., 1963, Chapter 9.
“The profusion of couplets and triplets [e.g., null and void] reflects the evolution of the English language. After the Normans invaded England in 1066, French slowly became the language used in English courts and contracts. It predominated from the mid-thirteenth century to the mid-fifteenth century.
Not unexpectedly, the English came to resent the use of French and began once again to use English for legal matters. As the use of French began to wane, English lawyers were faced with a recurring problem. When they went to translate a French legal term into an English legal term, they were often unsure whether the English word had the same connotation.
The solution was obvious: Use both the French and the English word. For example, free and clear is actually a combination of the Old English word free and the French word clair. [Another example:] breaking and entering (Old English and Old French).
Compounding this penchant for joining French and English synonyms was the English custom of joining synonyms, especially those that were alliterative and rhythmic.
- to have and to hold
- aid and abet
- part and parcel”
Tags: couplets, synonyms
Ronald G. Ross
Ron Ross, Principal and Co-Founder of Business Rules Solutions, LLC, is internationally acknowledged as the “father of business rules.” Recognizing early on the importance of independently managed business rules for business operations and architecture, he has pioneered innovative techniques and standards since the mid-1980s. He wrote the industry’s first book on business rules in 1994.
Cloud computing is a tricky thing. It’s something we all hear about, and think we should be familiar with, but the majority of us aren’t quite sure what it is. There are lots of conflicting views out there, which only add to the confusion.
What it is:
Cloud computing typically incorporates Hardware as a Service (HaaS), Software as a Service (SaaS) and Platform as a Service (PaaS). Within those capabilities, companies can offer flexible and scalable options for a number of services, including:
- Storage servers
Most providers agree that cloud computing always offers three things:
- On-demand services with monthly or set pricing.
- Flexibility to scale services as needed.
- The consumer only needs a computer and Internet access to use the service or program.
Although the term can confuse people, cloud computing always uses the power of the Internet to provide products and services to consumers. Check out this InfoWorld article for additional information on the basics of cloud computing.
If you are interested in the topic of cyberbullying and how the US government, and each state, is dealing with bullying and cyberbullying, the Cyberbullying Research Centre has a law fact sheet that is regularly updated on the topic.
This information can be of particular use to parents, to understand the rights of their children, and to schools on putting together their own bullying guidelines. Schools can, and have, been sued for punishing students when a legal bullying line has not been crossed; they've also been sued for not stepping in when said line was crossed.
So, this is a great resource to jump into to understand the situation. As always, you can find many great resources online about cyberbullying - for parents, educators, and for youth.
Top 7 Security Measures for IoT Systems
It is important to understand that the Internet of Things (IoT) is based on the concept of providing remote user access anywhere in the world to acquire data and operate computers and other devices. The widespread IoT network includes computing devices along with otherwise unrelated machines that transfer data without requiring human-to-computer or human-to-human involvement.
The spread of technology and of vital smart devices in diverse sectors such as energy, finance, and government makes it imperative to focus on their security standards. According to security firm Kaspersky, close to one-third (28%) of companies managing IoT systems faced attacks impacting their internet-connected devices during 2019. Furthermore, almost 61% of organizations are actively making use of IoT platforms, enhancing the overall scope for IoT security in the coming years.
Below mentioned are seven crucial steps for a business to uplift IoT security for preventing a data breach.
Swapping Default Passwords
The foremost step to enhance IoT security is a sensible password policy. Businesses should enforce procedures that require changing the default passwords on each of their IoT devices on the network. In addition, the updated passwords need to be changed on a timely basis. For added safety, the passwords can be stored in a password vault. This step prevents unauthorized users from gaining access to valuable information.
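A small audit sketch of this idea is shown below. It is only an illustration: it assumes the devices expose an HTTP interface protected by basic authentication, it relies on the third-party requests package, and the IP addresses and credential pairs are purely hypothetical.

```python
# Hypothetical sketch: flag IoT devices that still accept factory-default
# credentials over HTTP basic auth. Device IPs and credential pairs are
# illustrative; only run this against equipment you are authorized to audit.
import requests

DEVICES = ["192.168.10.20", "192.168.10.21"]             # example camera/HVAC IPs
DEFAULT_CREDS = [("admin", "admin"), ("admin", "1234")]   # common factory defaults

for ip in DEVICES:
    for user, password in DEFAULT_CREDS:
        try:
            resp = requests.get(f"http://{ip}/", auth=(user, password), timeout=5)
        except requests.RequestException:
            continue  # device unreachable or not speaking HTTP
        if resp.status_code == 200:
            print(f"WARNING: {ip} still accepts default credentials {user}/{password}")
            break
```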
Detach Corporate Network
It is essential to split the corporate network from unmanaged IoT devices. These can include security cameras, HVAC systems, temperature control devices, smart televisions, electronic signage, security NVRs and DVRs, media centres, network-connected lighting and network-connected clocks. Businesses can make use of VLANs to separate and track the various IoT devices active on the network. This also allows important functions like facility operations, medical equipment, and security operations to be analyzed separately.
Limit Unnecessary Internet Admittance to IoT Devices
Many devices run on outdated operating systems. This can become a threat, since any such embedded operating system may reach out to command-and-control locations. In the past, there have been incidents where such systems were compromised before they were even shipped from other nations. Completely eliminating an IoT security threat is not possible, but IoT devices can be prevented from communicating outside the organization. Such a preventive measure significantly reduces the danger of a potential IoT security breach.
Control Vendor Access to IoT Devices
In order to improve IoT security, several businesses have limited the number of vendors gaining access to different IoT devices. As a smart move, you can limit access to those individuals already working under the careful supervision of skilled employees. If remote access is strictly necessary, check that vendors use the same solutions as in-house personnel, such as access via a corporate VPN solution. Moreover, enterprises should assign a staff member to supervise remote access solutions on a regular basis. This individual should be well versed in certain aspects of software testing to manage the task with proficiency.
Incorporate Vulnerability Scanner
The use of vulnerability scanners is an effective method for detecting the different types of devices linked to a network. This can be viewed as a useful IoT testing tool for businesses to improve their IoT security. A vulnerability scanner, combined with a regular scanning schedule, is capable of spotting known vulnerabilities in connected devices. Several affordable vulnerability scanners are available on the market. If a commercial scanner is out of reach, try free scanning options such as Nmap, as sketched below.
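As a rough illustration of that last point, the sketch below shells out to the Nmap command-line tool (assumed to be installed) to enumerate devices and service versions on an example subnet. The subnet address is illustrative, and the scan should only be run on networks you are authorized to test.

```python
# Hypothetical sketch: use the Nmap CLI to inventory devices and service
# versions on a subnet, a first step toward spotting outdated IoT firmware.
import subprocess

SUBNET = "192.168.10.0/24"  # example IoT VLAN; replace with your own range

# -sV probes service/version info; --script vuln runs Nmap's vulnerability scripts
result = subprocess.run(
    ["nmap", "-sV", "--script", "vuln", SUBNET],
    capture_output=True,
    text=True,
    check=True,
)

print(result.stdout)  # review open ports, service banners, and flagged issues
```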
Utilize Network Access Control (NAC)
An organization can successfully improve IoT security by implementing a NAC solution with proper switch and wireless integrations. This setup can help detect most devices and recognize problematic connections within the network. NAC solutions such as ForeScout, Aruba ClearPass, or Cisco ISE are efficient tools to secure your business network. If a NAC solution doesn’t fall within the budget, you can make use of a vulnerability scanner for the same purpose.
Manage Updated Software
Obsolete software can directly undermine IoT security for your organization. Manage your IoT devices by keeping them up to date and replacing aging hardware to ensure smooth operations. Delaying updates weakens data protection and invites serious cybersecurity breaches.
Secure IoT deployments help businesses minimize operational costs, enhance productivity, and deliver a better customer experience. The above pointers can be applied to sharpen IoT security as your business's reach grows. To learn more about safeguarding IoT devices, you can connect with the professional experts at ImpactQA.
As was the case over seven decades ago in the early days of digital computing – when the switch at the heart of the system was a vacuum tube, not even a transistor – some of the smartest mathematicians and information theorists today are driving the development of quantum computers, trying to figure out the best physical components to use to run complex algorithms.
There is no consensus on what, precisely, a quantum computer is or isn’t, but what most will agree on is that a different approach is needed to solve some of the most intractable computing problems.
It may seem odd to have a detailed discussion about several different types of quantum computers at a supercomputing conference, but it actually makes sense for Google, which is a proud owner of one of the first quantum computers made by D-Wave, and researchers from the Delft University of Technology and Stanford University to have talked about the challenges of designing and building quantum computers to tackle algorithms that are simply not practical on modern digital systems, no matter how many exaflops they may have.
The other reason that it makes sense to talk about quantum computers at ISC 2015, which was hosted in Frankfurt, Germany last week, is something that may not be obvious to people, and it certainly was not to us. A quantum computer could end up being just another kind of accelerator for a massively parallel digital supercomputer, and even if the architectures don’t pan out that way, a quantum machine will require supercomputers of enormous scale to assist with its computations.
The joke going around ISC 2015 was that no one really understands what quantum computing is and isn’t, and it was so refreshing to see that in the very first slide of the first presentation, Yoshi Yamamoto, a professor at Stanford University and a fellow at NTT in Japan, showed even he was unsure of the nature of the quantum effects used to do calculations in the D-Wave machine employed by Google in its research in conjunction with NASA Ames. Take a look at the three different quantum architectures that were discussed:
Speaking very generally, quantum computers are able to store data not as the usual binary 1 and 0 states that we are used to with digital machines, but in a much more fuzzy data type called a qubit, short for quantum bit, that can store data as a 1, a 0, or a level of quantum superposition of possible states. As the number of qubits grows, the number of possible states that they can hold grows. The trick, if we understand this correctly, is to take a complex problem that is essentially a mathematical landscape in multiple dimensions, and punch through the peaks of that landscape to find the absolute minimum valleys or absolute maximum peaks that solve for a particular condition in an algorithm when the qubits collapse to either a 0 or a 1. (Oh, we so realize that is an oversimplification, and not necessarily a good one.)
One tricky bit about quantum states, as we all know from Schrödinger’s cat, is that if you observe a quantum particle or a pair that are linked using the “spooky action at a distance” effect called quantum entanglement, you will collapse its state; in the case of a quantum computer, the spin of a particle that represents the qubit will go one way or the other, becoming either a 1 or 0, and perhaps before you have enlisted it in a calculation. So a machine based on quantum computing, using spin to store data and entanglement to let it be observed, has to be kept very close to absolute zero temperatures and, somewhat annoyingly, has to have quantum error correction techniques that could require as many as 10,000 additional qubits for every one used in the calculation.
This is a lot worse than the degradation in flash memory cells. (You were supposed to laugh there.)
The specs of the D-Wave machine are given in the center column, using a technique called quantum annealing, which is the subject of some controversy with regards to whether or not it is a true quantum computer. At some level, if the machine can solve a particular hard problem, the distinction will not matter so much and perhaps, given the nature of the mathematical problems at hand, perhaps a better name for such devices would be topological computing or topographical computing. The third column in the chart above represents the coherent Ising machine technique that Yamamoto and his team have come up with, which uses quantum effects of coherent light operating at room temperature to store data and perform calculations, specifically a class of very tough problems called NP Hard and using a mathematical technique called Max-Cut.
As the chart above shows, there is a range of computational complexity in the nature of computing, quantum or digital.
According to Yamamoto, a set of problems based on what are called combinatorial optimizations can potentially be solved better by a quantum computer than by a digital system. Some examples of combinatorial optimization problems include protein folding, frequency distribution in wireless communication, microprocessor design, page ranking in social networks, and various machine learning algorithms. No direct quantum algorithms have been found for these tough problems, and that means, as Yamamoto put it, as the problem size increases, the computational time to do those calculations scales exponentially.
But there is a way to map these combinatorial optimization problems to what is called the Ising model, which was created almost a century ago to model the spin states of ferromagnetic materials in quantum mechanics. Once that mapping is done, the Ising model can be loaded into qubits in a coherent Ising machine and solved.
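In rough outline, the standard textbook form of that mapping (not necessarily the exact formulation used in Yamamoto's coherent Ising machine) assigns each graph vertex i a spin variable s_i in {-1, +1}; maximizing the cut is then equivalent to finding the ground state of an Ising energy:

```latex
% Max-Cut expressed as an Ising ground-state problem (textbook mapping)
\mathrm{Cut}(s) = \frac{1}{2}\sum_{(i,j)\in E} w_{ij}\,\bigl(1 - s_i s_j\bigr),
\qquad s_i \in \{-1, +1\}

H(s) = \sum_{(i,j)\in E} w_{ij}\, s_i s_j
\quad\Longrightarrow\quad
\max_s \mathrm{Cut}(s) \;\equiv\; \min_s H(s)
```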
Interestingly, Yamamoto says that coherent Ising machines have an advantage over the other architectures because he believes that quantum computing and quantum annealing machines will not be able to escape the exponential scaling issue for hard problems. (Neither of his peers on the panel – Vadim Smelyanskiy, the Google scholar working on quantum computing for the search engine giant, and Lieven Vandersypen, of Delft University of Technology – confirmed or denied this assertion.)
To help us all understand the scale of the architecture of possible future quantum machines, Yamamoto put together this chart, showing the amount of quantum iron it would take to do a factorization problem using Shor’s algorithm, which got quantum computing going, and also showed what size machine would be needed to simulate the folding of an alanine molecule.
In his presentation, Smelyanskiy did not get into the architecture of the 128-qubit D-Wave Two machine installed at Google, but described the mathematics behind the quantum annealing technique it uses. (The math is way over our heads, and was very likely out of reach for others at ISC 2015, too.) What Smelyanskiy did say is that Google has run a number of algorithms, including what is called a weak-strong cluster problem, on classical machines as well as on the D-Wave Two, and as the temperature of the device increases, quantum tunneling kicks in and helps better find the minimums in an algorithm that is represented by a 3D topography.
To get a sense of how a larger quantum computer might perform, Google extrapolated how its D-Wave machine might do with a larger number of qubits to play with:
As you can see, the speedup is significant even as the problem set size grows. But Smelyanskiy wanted to emphasize one important thing: this is a very simple alternative to the kind of circuit-model machine Yamamoto was talking about, one that would solve any problem. “If we wanted to build a circuit model, we did a very careful analysis and it would require for similar problems on the order of 1 billion qubits to build, and that is probably something that is not going to happen any time soon, say the next ten years.”
Smelyanskiy said that Google has been working very hard to find applications for its D-Wave Two machines with the NASA Ames team, and thus far the most promising application will be in the machine learning area. (Google will be publishing papers on this sometime soon.)
At Delft University of Technology, Vandersypen expects a very long time horizon for the development of quantum computers, and his team is focused on building a circuit model quantum computer.
“Despite the steep requirements for doing so, we are not deterred,” said Vandersypen. “We are realistic, though.”
“The first use will be for simulating material, molecules, and physical systems that are intrinsically governed by quantum behavior, where many particles interact and where classical supercomputers require exponential resources to simulate and predict their behavior,” explained Vandersypen. “Since a quantum computer is built from the same quantum elements, in a sense, as these quantum systems we wish to understand, it maps very well onto such problems and can solve them efficiently.”
Code breaking and factoring are also obvious applications, too. Solving linear equations that can be applied to machine learning, search engine ranking, and other kinds of data analytics are also possible use cases for a quantum machine.
But we have a long way to go, said Vandersypen.
“What we are after, in the end, is a machine with many millions of qubits – say 100 million qubits – and where we are now with this circuit model, where we really need to control, very precisely and accurately, every qubit by itself with its mess of quantum entangled states, is at the level of 5 to 10 quantum bits. So it is still very far away.”
The interesting bit for the supercomputer enthusiasts in the room was Vandersypen’s reminder that a quantum computer will not stand in isolation, but will require monstrous digital computing capacity. (You can think of the supercomputer as a coprocessor for the quantum computer, or the other way around, we presume.)
“In particular, to do the error correction, what is necessary is to take the quantum bits and repeatedly do measurements on a subset of the quantum bits, and those measurements will contain information on where errors have occurred. This information is then interpreted by a classical computer, and based on that interpretation, signals are sent back to the quantum bits to correct the errors as they happen. The mental picture of what people often have of a quantum computer is a machine that is basically removing entropy all the time, and once in a while it is also take a step forward in a computation. So you need extra signals that steer the computation in the right way.”
With 10 million qubits, the data coming out of the quantum computer will easily be on the order of several terabits per second, Vandersypen pointed out. As the number of qubits scales, so will the bandwidth and processing demands on its supercomputer coprocessor.
To accelerate the development of a quantum computing machine like Delft and others are trying to build will require innovations in both hardware and software. It is not possible, for instance, to take the waveform generators, microwave vector source, RF signal source, FPGAs, and cryo-amplifiers that make up a qubit in the Delft labs and stack them up a million or a billion times in a datacenter. The pinout for a quantum computer is going to be large, too, since every qubit has to be wired to the outside world to reach that supercomputer; in a CPU, there are billions of transistors on a die, but only dozens to hundreds of pins.
For quantum computers using the circuit model, Vandersypen says that 5 to 10 qubits is the state of the art, and researchers are aiming for 50 to 100 qubits within the next five years or so. “That is simply not fast enough,” he says. “We really have higher ambitions and are thinking about ways to push up that slope by partnering with engineers and with industry to more rapidly achieve the really large number of quantum bits that can actually solve relevant problems.”
The other thing that Vandersypen wants to do is push down the error correction needed for qubits to run specific algorithms, also accelerating the time when quantum computers become useful. Add these efforts together, and a practical and powerful quantum computer could become available sometime between 2020 and 2025, instead of between 2040 and 2050, as the chart above shows.
So when will we see a quantum computer, of any kind, that can solve at least one hard problem that we care about?
“We can build a 10,000 spin quantum Ising machine in four years’ time, and the particular problems here with Max-Cut are, at least with the computational time, probably four orders of magnitude faster than the best approach possible by GPUs,” said Yamamoto. “Then the question becomes what applications Max-Cut can be applied to. We don’t have a focus on any specific target right now, but some sort of reasonable combinatorial optimization problems should be solved by this machine.”
When asked to put a more precise number on it, Vandersypen had this to say: “For the circuit model of quantum computing, the one based on quantum error correction codes and so forth, I think that in the next five years there is no realistic prospect of solving relevant problems unless there is a breakthrough in solving a few qubit algorithms. We are hopeful that on a ten to fifteen year timescale this is going to be possible, and even that is ambitious.”
Smelyanskiy changed the nature of the question away from time and towards money, which was an unexpected shift. Here is what he said:
“We did an analysis at Google recently in what it would take to implement a Grover’s algorithm for extremely hard problems where classical algorithms fail beyond 40 or 50 bits and with 70 bits you would not be able to do it. Grover’s algorithm is a search algorithm that provides a quadratic speedup for extremely hard problems; if you have 2^n steps to find the solution on unstructured search, you would need only 2^(n/2) steps to find a solution with that algorithm on a quantum machine. For a problem of size 60, you would need about 3.5 billion qubits and it would take about three hours with a speed up over a single CPU of over 1.4 million times. For the problem of size 70, the speedup would be 34 million and you would need only a little bit more qubits at 5.9 billion. If it is about $1 per qubit, and you spend another $500 million to bid down the price, roughly with a $2 billion investment you would be able to build a decent supercomputer with several millions of CPUs.”
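To unpack the quadratic speedup claim: a classical exhaustive search over n-bit inputs examines on the order of 2^n candidates, while Grover's algorithm needs on the order of the square root of that. The back-of-the-envelope comparison below is ours, not Google's, and ignores the error-correction and clock-speed overheads that shrink the real-world advantage to the more modest figures Smelyanskiy quotes:

```latex
% Idealized Grover scaling, ignoring error correction and gate-speed overheads
\begin{align*}
\text{Classical queries: } & N = 2^{n} \\
\text{Grover queries: }    & \approx \tfrac{\pi}{4}\sqrt{N} = \tfrac{\pi}{4}\,2^{n/2} \\[4pt]
n = 60:\quad & 2^{60} \approx 1.2\times 10^{18} \;\text{vs.}\; 2^{30} \approx 1.1\times 10^{9} \\
n = 70:\quad & 2^{70} \approx 1.2\times 10^{21} \;\text{vs.}\; 2^{35} \approx 3.4\times 10^{10}
\end{align*}
```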
This is much more of a grand challenge than IBM’s BlueGene effort started 15 years ago for a cool $1 billion. The question is, who is going to pay for this?
As one of the UK’s leading web-hosting companies, catalyst2 puts security at the top of the list when providing any technological solution. There are currently over 286 million types of malware on the internet, so a key part of our work is to identify new dangers and quickly develop ways of preventing them.
One of the most dangerous threats is the SQL injection, a form of attack used in some of the most high profile security breaches of the last decade and responsible for more than a fifth of all web vulnerabilities.
What is an SQL injection?
SQL stands for Structured Query Language. It’s a standard programming language used to maintain, manage and process data and has been used in commercial software products since the early 1980s. An SQL injection is a method of attacking websites and databases by exploiting weaknesses in code that allow a hacker to insert their own SQL, which is then executed without being validated.
How does it work?
There are many different kinds of SQL injection, but they all work in the same basic way, by taking advantage of security weaknesses in a website’s software to insert new commands into the software’s SQL code. Using this method, hackers can get access to and disclose or manipulate confidential information or even destroy data.
How can you prevent an SQL injection?
The good news is that an SQL injection can be prevented, as the main vulnerabilities are well known. The best way to prevent these attacks is to ensure that the code employed on your website or database validates user input. This limits what kind of data users can enter, ensuring it conforms to strict parameters, and so prevents manipulation of the SQL. The example below shows the difference in practice.
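As a minimal illustration (ours, not catalyst2's), the sketch below contrasts a query built by string concatenation, which an attacker can manipulate, with a parameterised query, which keeps user input out of the SQL structure entirely. The table and column names are hypothetical.

```python
# Hypothetical sketch contrasting an injectable query with a parameterised one.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# VULNERABLE: user input is pasted straight into the SQL text, so the
# OR '1'='1' clause becomes part of the query and matches every row.
unsafe_sql = f"SELECT email FROM users WHERE name = '{user_input}'"
print(conn.execute(unsafe_sql).fetchall())  # leaks all rows

# SAFE: the ? placeholder passes the input as data, never as SQL,
# so the payload is treated as a literal string (and matches nothing).
safe_sql = "SELECT email FROM users WHERE name = ?"
print(conn.execute(safe_sql, (user_input,)).fetchall())  # returns []
```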
If you are worried about the threat of SQL injections or other hostile or malicious programs, get in touch with us. Our award-winning service has been helping businesses, individuals and organisations for over 16 years, and with our expertise in web security, we can provide you with a web-hosting solution that will help keep your online technology safe and secure.
Data shows hackers want to steal money, not classified information.
Most malware attacks against federal agencies are financially motivated, seeking to trick computer users into buying fake security software or providing personal information that can be used to hack into their bank accounts.
Although espionage and terrorism often are considered the primary motivations for breaking into government networks, 90 percent of incidents of malware detected on federal computers in the first half of 2010 were designed to steal money from users, according to data collected from the U.S. Computer Emergency Readiness Team at the Homeland Security Department.
"This statistic represents the dominance of financially motivated malware within the threat picture," said Marita Fowler, section chief of the surface analysis group at US-CERT. "It is not that the federal government is being targeted by organized criminals; it is that we are a smaller portion of a larger global community impacted by this."
Federal officials must consider equally the targeted threat, which Fowler equates to a sniper attack, and the widespread or "battalion" attack.
US-CERT, which is responsible for the collection, coordination and dissemination of information regarding risks to government networks, concluded 51 percent of malware found on federal computers in the first six months of 2010 was so-called rogue ware, which masquerades as a security product that tricks computer users into disclosing credit card information to pay to remove nonexistent threats.
"The criminals behind these campaigns are extremely good at distributing their malware, [using] search engine optimization poisoning [that] helps rank malicious links higher in search results," Fowler said. "They also use spam to distribute the malicious links on popular social networking sites."
US-CERT identified 23 percent of incidents as crime ware, which relies on techniques such as phishing and key logging to steal personal data from computer users, including login information and passwords, and access to their online bank accounts.
"A crime ware can be used to create designer or custom malware," Fowler said. "It is easy to use, which appeals to the less tech-savvy criminals."
The most prolific crime ware kit found on federal computers is Zeus, which recently was used to steal more than $1 million from bank customers in the United Kingdom.
US-CERT categorized 16 percent of malware incidents as Trojan horses that facilitate unauthorized access to the user's computer system, which could be used for financial gain or to manipulate and steal information. US-CERT categorized 3 percent of incidents as spam, and another 3 percent of incidents as Web threats, which is a general term for any risk that uses the Internet to facilitate cybercrime.
Only 4 percent of incidents were identified as computer worms, which self-replicate across computers, consuming bandwidth and affecting performance.
Education is important to mitigate financially motivated cyber threats, Fowler said, although users are often unaware they're being targeted.
"There are plenty of malicious programs designed to steal information from users without their knowledge," Fowler added. For example, he said hackers might log keystrokes as a user enters information into online forms, or remotely harvest data stored locally on a computer. "In these cases, security tools and mitigation strategies are needed to augment user awareness," including patching, antivirus updates, firewalls and filtering spam content, Fowler said.
Looking back over 2015, Venafi Labs captured data on a steady stream of cyberattacks involving the misuse of keys and certificates which threaten the underlying foundation of trust for everything that is IP-based. These “attacks on trust,” as we call them, also show how keys and certificates have become interwoven into many aspects of our business and personal lives. From airline Internet services to laptop software to government certificate authorities (CAs) to apps for your car or your fridge to Google and banking sites, keys and certificates secure all our online transactions.
Why is this important? If organizations cannot safeguard the use of keys and certificates for communication, authentication, and authorization, the resulting loss of trust will cost them their customers and potentially their business.
Here is a sample of nine notable security incidents the Venafi Labs threat research team followed:
1. Gogo dished up Man-in-the-Middle (MITM) attacks
To kick off the year, a Google Chrome engineer discovered that the Gogo Inflight Internet service was issuing fake Google certificates. Gogo claimed it was trying to prevent online video streaming, but this practice ultimately exposed Gogo users to MITM attacks.
2. Lenovo pre-installed Superfish malware on laptops
Lenovo found that an adware program it was pre-installing on laptops was making itself an unrestricted root certificate authority which allowed for MITM attacks on standard consumer PCs.
3. CNNIC banned by Google and Mozilla
Google found unauthorized digital certificates for several of its domains issued by CNNIC, China’s main government-run CA, making CNNIC certificates untrustworthy and vulnerable to attack. Google, quickly followed by Mozilla, blocked all CNNIC authorized domains. In a 2015 Black Hat survey, Venafi found that while IT security professionals understand the risks associated with untrusted certificates, such as those issued by CNNIC, they do nothing to prevent them.
4. St. Louis Federal Reserve Bank breached
The US bank discovered that hackers had compromised its domain name register. This allowed the hackers to successfully redirect users of the bank's online research services to fake websites set up by the hackers.
5. New SSL/TLS vulnerability logjam exposed crypto weaknesses
Logjam exposed a problem with the Diffie-Hellman key exchange algorithm, which allows protocols such as HTTPS, SSH, IPsec, and others to negotiate a shared key and create a secure connection. Identified by university researchers, the Logjam flaw allowed MITM attacks by downgrading vulnerable TLS connections.
6. GM’s OnStar and other car apps hacked
A GM OnStar system hack that locks, unlocks, starts, and stops GM cars was made possible because the GM application did not properly validate security certificates. By planting a cheap, homemade WiFi hotspot device somewhere on the car’s body to capture commands sent from the user’s smartphone to the car, hackers could break into the car’s vulnerable system, take full control, and behave as the driver indefinitely. Similar weaknesses allowed hacks in iOS applications for BMW, Mercedes, and Chrysler.
7. Major CAs issued compromised certificates for fake phishing websites
Netcraft recently issued new research that found fake banking websites using domain-validated SSL certificates issued by Symantec, Comodo, and GoDaddy.
8. Samsung’s smart fridge hackable through Gmail
A security flaw found in Samsung’s IoT smart refrigerators allowed hackers to compromise Gmail credentials using MITM attacks because the fridge was not set up to validate SSL certificates.
9. Symantec fired employees for issuing HTTPS certificates for fake Google sites
This list of attacks that leveraged stolen, compromised, and/or unprotected cryptographic keys and digital certificates in 2015 highlights a wide range of potential impacts from attacks on trust, but is by no means a comprehensive list. In truth, many of these attacks go on undetected: cybercriminals use keys and certificates to bypass security controls and hide their actions.
Businesses need to understand that key and certificate management is not just an operations issue; it is critical to securing their networks, data, and trust relationships with customers and partners. The problem is compounded by the fact that most Global 5000 organizations blindly trust the keys and certificates deployed on their networks and use security controls designed to trust these encryption components.
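Several of the incidents above (the GM OnStar app and the Samsung fridge in particular) came down to client software skipping certificate validation. A minimal sketch of doing it right with Python's standard ssl module follows; the hostname is only an example, and real applications should also consider certificate pinning or allow-listing where appropriate.

```python
# Minimal sketch: open a TLS connection that actually validates the server's
# certificate chain and hostname (the check the OnStar and fridge apps skipped).
import socket
import ssl

HOST = "example.com"  # illustrative hostname

context = ssl.create_default_context()  # verifies chain and hostname by default

with socket.create_connection((HOST, 443), timeout=10) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        cert = tls_sock.getpeercert()
        print("Negotiated:", tls_sock.version())
        print("Server certificate subject:", cert.get("subject"))

# The vulnerable pattern is the opposite: setting context.check_hostname = False
# and context.verify_mode = ssl.CERT_NONE accepts any certificate, enabling MITM.
```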
There is an evil force out there in the cyber realm, lurking in the shadows that no one sees until it’s too late. Without the ability to tell friend vs. foe, good vs. bad in the digital realm, our global economy is in a perilous situation -- and this is a problem that’s not going to just disappear. Looking ahead into the New Year and beyond, we’ll only see the misuse of keys and certificates occur more and more, continuing to impact online trust across the globe.
7 Data Leakage Prevention Tips To Prevent the Next Breach
What is Data Leakage Prevention?
Data leakage prevention involves protecting the organization from various types of data leakage threats. Data leakage occurs when an agent transmits data to external parties or locations without authorization from the organization.
Data leakage can result from the actions of malicious insiders or from the accidental actions of careless insiders. Other common causes of data leakage are IT misconfigurations and external malicious attacks.
Organizations can prevent data leakage by implementing various tools, practices, and controls. For example, endpoint security, data encryption, and secret management can help enforce security measures that protect your data, in addition to continuous monitoring systems that push out alerts and regular audits performed by internal and external parties.
What Causes Data Leakage?
Here are a few common causes of data leakage:
- Accidental leaks—a trusted individual who accidentally or unknowingly exposes sensitive data or shares it with an unauthorized user. Examples include sending an email with sensitive data to the wrong recipient, losing a corporate device, or failing to lock a corporate device with a password or biometric protection.
- Malicious insiders—an employee or trusted third party who abuses their access to corporate systems to steal data. Malicious insiders might be motivated by financial gain, a desire for revenge, or may be cooperating with outside attackers. Examples include deliberately transferring sensitive documents outside the organization, saving files to a USB device, or moving files to unauthorized cloud storage.
- IT misconfiguration—configuration errors often result in devastating data leaks, especially in cloud environments. Examples include excessive permissions, databases or cloud storage buckets without appropriate authentication, exposed secrets (such as credentials or encryption keys), and mistakes in integration with third-party services.
- Malicious outsiders—an external attacker who manages to penetrate the organization’s systems and gains access to sensitive data. Attackers commonly use social engineering tactics to persuade employees to divulge their credentials or directly send sensitive data to the attacker. In other cases, the attacker infects corporate systems with malware, which can be used to gain access to sensitive systems and exfiltrate data.
How to Prevent Data Leakage
1. Know Where Your Sensitive Data Resides
To prevent data leakage, begin by identifying your sensitive data and its location in the organization. Decide which information requires the highest level of protection, and categorize your data accordingly. Once you are aware of sensitive data, you can take appropriate security measures, such as access control, encryption, and data loss prevention (DLP) software.
Increasingly, organizations are storing sensitive data in the cloud. Read our guide to cloud Data Loss Prevention (DLP)
2. Evaluate Third-Party Risk
Third-party risk is the threat presented to organizations from outside parties that provide services or products and access privileged systems. This risk is significant because third parties do not necessarily have the same protection and security standards as your organization, and you have no control over their security practices.
Here are some ways to monitor the risk of third parties:
- Evaluate the security posture of all vendors to ensure that they are not likely to experience a data breach.
- Conduct vendor risk assessments to ensure third-party compliance with regulatory standards, such as PCI-DSS, GDPR, and HIPAA, and voluntary standards like SOC-2.
- Compile vendor risk questionnaires using questions from security frameworks, or use a third-party attack surface monitoring solution.
3. Secret Management & Protection
Secrets are privileged credentials used by software to access other software. Secrets refer to private data that is key to unlocking secure resources or sensitive data in applications, tools, containers, cloud, and DevOps environments. Both human users and software can access your secrets via your technology stack.
There are three ways software systems can access your organization’s secrets:
With intent—by purposefully connecting to other software (via APIs, SDKs, or the like) and granting access via a specific key, for example a programmatic username and password.
By mistake—you provided misconfigured access to software where you did not intend to provide it—or granted the wrong level of access.
Via cyberattacks—attackers who should not have access will typically look for entryways into your software stack. They can find ways by identifying its weakest link. Attackers could do this by finding misconfigured or accidentally exposed secrets.
A comprehensive secret protection approach should not only secure but also manage your secrets. You must also monitor code for improper use of secrets or accidental exposure, and remediate issues you discover.
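One small, common piece of this practice is keeping secrets out of source code altogether. The sketch below is a generic illustration, not Hysolate-specific guidance: it reads credentials from environment variables, which a secret manager or vault would typically populate at deploy time, and the variable names are hypothetical.

```python
# Hypothetical sketch: load secrets from the environment instead of hardcoding them.
# In production the environment would be populated by a vault/secret manager, and
# the values should never be logged or committed to version control.
import os
import sys

def get_secret(name: str) -> str:
    value = os.environ.get(name)
    if not value:
        # Fail fast and loudly rather than falling back to a hardcoded default.
        sys.exit(f"Missing required secret: {name}")
    return value

DB_PASSWORD = get_secret("APP_DB_PASSWORD")   # illustrative variable name
API_TOKEN = get_secret("PARTNER_API_TOKEN")   # illustrative variable name

print("Secrets loaded; lengths:", len(DB_PASSWORD), len(API_TOKEN))  # never print values
```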
4. Secure All Endpoints
An endpoint is a remote access point that communicates with an organizational network autonomously or via end-users. Endpoints include computers, mobile devices, and Internet of Things (IoT) devices.
Most organizations adopt some remote working model. Consequently, endpoints are geographically dispersed, making them difficult to control and secure.
VPNs and firewalls provide a base layer of endpoint security. However, these measures are not sufficient. Malware often tricks employees into permitting attackers to enter an organizational ecosystem, bypassing these security measures.
Educate your staff to identify cyberattackers’ tricks, specifically those used for social engineering and email phishing attacks. Security education is a key strategy for preventing endpoint-related threats. Beyond education, modern endpoint protection technology can provide multi-layered protection for organizational endpoints.
Related content: Read our guide to endpoint protection platforms.
5. Encrypt All Data
Encryption is the conversion of data from readable information to an encoded format. Encrypted data can only be processed or read once you have decrypted it. There are two main types of data encryption: symmetric-key encryption and public-key encryption, the latter considered much more secure.
Cybercriminals will find it hard to exploit data leaks once you encrypt your data. However, sophisticated attackers might find ways to circumvent encryption, for example by gaining access to decryption keys, if they are not carefully managed. Attackers can also exploit systems or processes where data is stored or transmitted in plaintext.
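As a small illustration of symmetric encryption at rest, the sketch below uses the widely used third-party cryptography package (an assumption on our part; any vetted library would do). In practice the key itself must live in a key-management system, never alongside the data it protects.

```python
# Minimal symmetric-encryption sketch using the third-party `cryptography` package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice: fetch from a KMS/vault, don't generate ad hoc
fernet = Fernet(key)

record = b"customer_id=12345;card=0000-0000-0000-0000"  # illustrative sensitive record
ciphertext = fernet.encrypt(record)                      # safe to store or transmit
plaintext = fernet.decrypt(ciphertext)                   # requires the key

assert plaintext == record
print("Encrypted length:", len(ciphertext))
```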
6. Evaluate Permissions
Your sensitive data might currently be available to users that don’t require access. Evaluate all permissions to ensure you don’t give access to unauthorized parties.
Categorize all critical data into different levels of sensitivity, controlling access to different pools of information. Only trusted employees who currently need access should have permission to view highly sensitive information. This process of reviewing privileges can also reveal any malicious insiders who obtained access to sensitive data with the goal of exfiltrating it.
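A very small slice of such a review can even be automated. The sketch below is illustrative only and applies to POSIX-style filesystems: it walks a directory of sensitive files and flags anything readable or writable by all users, one cheap signal that permissions have drifted wider than intended. The directory path is hypothetical.

```python
# Hypothetical sketch: flag world-readable or world-writable files under a
# sensitive directory on a POSIX filesystem -- one signal of over-broad access.
import os
import stat

SENSITIVE_DIR = "/srv/finance-reports"  # illustrative path

for root, _dirs, files in os.walk(SENSITIVE_DIR):
    for name in files:
        path = os.path.join(root, name)
        mode = os.stat(path).st_mode
        world_readable = bool(mode & stat.S_IROTH)
        world_writable = bool(mode & stat.S_IWOTH)
        if world_readable or world_writable:
            flags = []
            if world_readable:
                flags.append("world-readable")
            if world_writable:
                flags.append("world-writable")
            print(f"Review: {path} is {' and '.join(flags)}")
```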
Related content: Read our guide to endpoint privilege management
Data Leakage Prevention with Hysolate
Hysolate’s fully managed isolated Workspace sits on end user devices, but is managed via granular policies from the cloud. These granular policies give admins full control for monitoring and visibility into potential data leakage risks, including sending telemetry data to their SIEM. Admins can limit data transfer out of the isolated encrypted Hysolate Workspace via copy/paste/printing/peripherals, and can set anti keylogging and screen capture policies, as well as setting up a watermark to block external screen capture.
- An additional layer of data leakage protection for both corporate and non corporate devices, including telemetry sent to SIEM solution for additional monitoring and visibility.
- Admins can set policies to limit data transfer in and out of the Hysolate Workspace, including files, documents and applications.
- Hysolate has security capabilities to lock the Workspace and enter only with a PIN.
- Hysolate’s Workspace can also be set with a watermark, to remove risk from external screen capture.
- Admins can wipe the Workspace OS remotely if a threat surfaces, or when it is no longer needed.
Employees can be provided with an isolated Workspace on their corporate device, so that they can access sensitive systems and data from a completely isolated and secure environment. Policies can be set to limit data exiting the Workspace, either accidentally or on purpose.
For contractors, Hysolate’s isolated OS solution provides a secure Workspace to access the necessary data and applications they need to do their jobs. The Workspace can be pre-provisioned with all the required applications and policies that are required for the contractor to connect to and work in the corporate environment. At the end of the contractor’s engagement, the Hysolate Workspace can be instantly deprovisioned remotely without leaving any data on the contractor’s device.
Try Hysolate Free for Sensitive Access for yourself.
A new website has gone live to check if you can tell a real face from an AI-generated fake in this world of uncertainty.
The website, WhichFaceIsReal.com, is created by Jevin West of the Information School and Carl Bergstrom of the biology department at the University of Washington.
West and Bergstrom gained some degree of fame after presenting a class titled ‘Calling Bullshit in the Age of Big Data’ back in 2017.
Their website continues along these lines and tasks visitors with, as you probably guessed, picking the real face over the fake (I was quietly confident, but I’d put my success rate around 50 percent).
In a post explaining their website, West and Bergstrom wrote:
“While we’ve learned to distrust user names and text more generally, pictures are different. You can’t synthesize a picture out of nothing, we assume; a picture had to be of someone. Sure a scammer could appropriate someone else’s picture, but doing so is a risky strategy in a world with google reverse search and so forth. So we tend to trust pictures. A business profile with a picture obviously belongs to someone. A match on a dating site may turn out to be 10 pounds heavier or 10 years older than when a picture was taken, but if there’s a picture, the person obviously exists.
No longer. New adversarial machine learning algorithms allow people to rapidly generate synthetic ‘photographs’ of people who have never existed.”
The pair did not develop the technology behind it but wanted to bring attention to a serious problem in a fun way. “Our aim is to make you aware of the ease with which digital identities can be faked, and to help you spot these fakes at a single glance,” they claim.
Software engineers from NVIDIA developed the impressive algorithm for generating realistic faces. You may have already seen it at work on ThisPersonDoesNotExist.com.
The algorithm is trained as a ‘generative adversarial network’, in which two neural networks compete against each other; one creates fake images, the other attempts to spot the difference.
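For readers curious what that competition looks like in code, here is a deliberately tiny training-loop sketch in PyTorch. It is our illustration only: the real system is NVIDIA's far larger StyleGAN, and every dimension and hyperparameter below is a toy value.

```python
# Toy GAN training loop (assumes PyTorch). Real face generators use convolutional
# networks and millions of photos; this only demonstrates the adversarial idea.
import torch
import torch.nn as nn

latent_dim, img_dim, batch = 64, 28 * 28, 32   # toy sizes, not face-sized images

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_images = torch.rand(batch, img_dim) * 2 - 1   # stand-in for real photos in [-1, 1]

for step in range(200):
    # Discriminator step: score real images as 1 and generated fakes as 0.
    fakes = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = bce(discriminator(real_images), torch.ones(batch, 1)) + \
             bce(discriminator(fakes), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: produce fakes the discriminator scores as real.
    g_loss = bce(discriminator(generator(torch.randn(batch, latent_dim))),
                 torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```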
Currently, people are spotting the real person around 70 percent of the time. Some inconsistencies to look out for is the background of the photo and how things such as glasses and hair are rendered.
If you are trying to determine whether a picture you have come across is real, the duo advise looking for images of the same person from different angles. That, as of writing, is not possible for an AI to do.
As highlighted, please send your questions and we will help.
ISUP is used for Connection Orientated signalling, i.e. signalling is performed so that a CIC can be secured. SCCP – Signalling Connection Control Part – provides extra information and helps to obtain extra addressing information for Rout(e)ing. A good example is using SCCP to obtain information from databases (HLR/HSS/UDC, MNP, SDF (Intelligent Network), etc.) so that Rout(e)ing can be performed.
For SCCP you will need to understand:-
Global Title – GT (like an IP address)
Sub System Number – SSN (access applications in a SS7 environment)
Originating / Destination Point Codes – OPC/DPC (MTP Layer)
Global Title Rout(e)ing Cases – GTRC –
(Translation of information)
VBR/ Wallis Dudhnath
09 November 2007
It's getting more difficult every day to tell the difference between Voice over IP and traditional phone service. That can be a good thing or it can be a bad thing, depending on how you look at it and what aspects you're comparing.
When VoIP first became available, it was very different from traditional phone service in almost every way. Early implementations of consumer VoIP only supported calls from one computer to another, and both parties had to use the same provider. Making a call was a different experience from talking on the "real" telephone: you "dialed" via a software program, talked into a desktop microphone, and heard the other party's voice through your computer speakers.
The payment model was different, too. Many of those early IP-network-only VoIP programs and services were free. But as always, you got what you paid for, and neither call quality nor reliability was very good. Calls got dropped a lot, and the audio was sometimes unintelligible. Still, there was a big "cool factor" involved in being able to talk over your Internet connection at no extra cost, especially on an international basis where traditional long distance rates could be prohibitive. | <urn:uuid:6dc7fcf6-4a3b-4dd7-b1c6-3be6c3cbeac1> | CC-MAIN-2022-40 | https://www.myvoipprovider.com/en/VoIP_News_Archive/VoIP_General_News/Why_VoIP_is_looking_more_like_your_old_phone_company | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00413.warc.gz | en | 0.980029 | 243 | 2.734375 | 3 |
Undoubtedly, remote learning has always been an alternative to face-to-face education. Teachers from any field would use cloud-based tools to encourage student collaboration, grant access to academic resources, and even provide support via teleconferencing.
But it was the COVID-19 lockdowns that made remote education a necessity. Now, many academic curricula are entirely virtual, and most will stay like that.
Educational institutions can utilize remote access solutions to allow students and faculty to securely access school computers from home, and to let IT admins and support staff manage resources remotely. Remote access can also promote BYOD in classrooms to improve student engagement and to support faculty and student mobility.
In this blog, we’ll discuss the top five remote access solutions for education: VPNs, remote desktop software, software-defined perimeters, fully cloud-based networks, and a cloud-based channel with additional security. All of these solutions work for remote access in education, but some are better suited than others for specific purposes.
Top 5 Remote Access Solutions for Education:
- School, College, and University-level VPNs.
- Remote Desktop Software.
- Software-Defined Perimeters.
- Cloud-Network Remote Access.
- Cloud-based Secure Channel.
1.School, College, and University-level VPNs
Although VPNs are advertised today as privacy tools, they were designed to extend private networks. If configured appropriately, VPNs are simple, effective tools that provide safe remote access to school networks.
VPNs use robust encryption mechanisms that allow a safe connection of remote clients (students) to their school servers. VPNs can create an encrypted tunnel between client and server via any network, including the public internet.
Remote students with a VPN can access the school’s resources, including files, databases, specific servers, or the entire institution’s intranet.
- VPNs leak data. VPNs are known to leak sensitive data due to misconfiguration. Although they use robust encryption mechanisms, a bad VPN implementation may lead to DNS, IPv6, or WebRTC data leaks.
- Lack of granular authorization and authentication controls. It is impossible to grant different levels of access to different users; IT managers would need to set up separate VPNs for different servers and user groups. Also, if a student’s PC is compromised, or the account credentials are stolen, the entire network is at risk.
2.Remote Desktop Software
Remote desktop is a popular access feature found in almost all operating systems and proprietary and open-source software. It allows anyone (across the room or the world) to access a desktop computer and gain total control through the established session.
The most common remote desktop software is Microsoft’s RDP (Remote Desktop Protocol). But there are other popular tools, such as TeamViewer, AnyDesk, UltraVNC, and more.
Remote desktop is among the favorite tools for remote administration. It is quick, easy to use, and allows a wide range of management tasks. Usually, remote desktop comes with features like screen sharing, file transfers, mobile connections, chat, etc.
In remote education, entire remote classrooms can be set up, with screen-sharing webinars, online conferences, collaboration, and more. When deciding between enterprise remote desktop solutions and VPNs, consider the application and the demand.
Weaknesses of Remote Desktop:
- Remote desktops are single points of failure. If the remote network or a single component of the remote desktop setup fails, the entire desktop becomes inaccessible. VPN servers have an advantage here because they are made to grant access to a network, not a single system. Remote desktops also lack tracking systems that could monitor their availability.
- Authentication weaknesses. If the credentials fall into the wrong hands, the entire desktop and network may be compromised. For example, protocols such as RDP do not enforce MFA (Multi-Factor Authentication) on their own. If a hacker knows your email and gets a list of your network IP addresses, they can brute-force your AD (Active Directory) password. With MFA, even if a hacker has your credentials, they will not be able to log in (see the TOTP sketch after this list).
- Remote desktop traffic is allowed through the perimeter and is not typically monitored. Remote desktop solutions grant remote access to user endpoints (and sometimes to privileged, sensitive servers) inside the network perimeter. To make this work, firewalls need to open ports for remote desktop and accept its incoming traffic. Unfortunately, this traffic is not typically monitored.
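To illustrate the MFA point above, here is a minimal sketch of how a time-based one-time password (TOTP, RFC 6238) is computed and checked. It uses only the Python standard library; the Base32 secret shown is a throwaway illustrative value, not a real credential, and a production deployment would normally rely on a vetted authentication service rather than hand-rolled code.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // interval)
    digest = hmac.new(key, counter, "sha1").digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def verify(secret_b32: str, submitted_code: str) -> bool:
    # Accept only the current 30-second window; real services usually allow slight clock drift.
    return hmac.compare_digest(totp(secret_b32), submitted_code)

demo_secret = "JBSWY3DPEHPK3PXP"  # illustrative Base32 secret only
print(totp(demo_secret))
```

Even if an attacker brute-forces or phishes the password, they still need the short-lived code derived from a secret they do not hold.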
3.Software-Defined Perimeters
Software-Defined Perimeters (SDP), as the name implies, are network boundaries based on software and fully independent of hardware. SDPs are also referred to as “Black Clouds” because they hide the internet-facing infrastructure. These black clouds are invisible to hackers or attacks.
SDPs are designed to control access to resources based on Identity and Access Management (IAM), so resources are only accessible to authorized users.
SDPs build a virtual boundary at the network layer, which can be extended geographically. This virtual layer makes it possible for remote users and devices to authenticate based on identity. It grants access to a network only after users verify their identity and evaluate their device.
SDPs are different from VPNs because they do not share network connections (as VPNs do) and because they offer a more granular level of access control. Additionally, SDPs are built on the zero-trust model, making them highly effective against common cyberattacks.
Some Security-as-a-Service (SECaaS) providers might include SDP as part of their service offerings.
- SDP communication depends on controllers. If an SDP controller goes offline, it will not be possible to establish communication with the remote resources.
- Device compatibility. If you are attempting to incorporate an old router or switch, it will likely not support SDP.
4.Cloud-Network Remote Access
Although many educational institutions still have on-premises infrastructure, a big percentage is beginning to transition to full cloud-based networks and SaaS applications. On-premises infrastructure is getting thinner, and a big chunk of it is moving to the cloud.
Cloud-network remote access is convenient. As long as teachers and students have access to the internet, they’ll connect to their cloud-based resources and SaaS applications like Office 365, Zoom, Google Apps, WebEx, etc.
Bear in mind that cloud computing is not a replacement for remote access technology. In fact, many educational institutions prefer (and are sometimes required) to keep data and infrastructure on-premises.
Although cloud computing and remote access fall into different categories, they have something in common: they enable collaboration between users, which is critical in education.
With cloud-based resources, educational institutions don’t need in-house technicians to maintain and secure the infrastructure. All software, platforms, and infrastructure are hosted on the internet.
Cloud-based networks are based on IAM or PAM (Privileged Access Management), so there is no need to use a VPN.
- Some data can’t be sent to the cloud. For specific industries, like education or health, some sensitive data must be kept on-premises due to regulations and compliance.
- Cloud lock-ins. Having all data and applications on the cloud makes institutions entirely dependent on the cloud.
- Moving data (especially big data) up and down the cloud introduces latency. Downloading and uploading large amounts of data will inevitably introduce latency into communications. Additionally, applications that require instant access to data will be dramatically impacted.
5.Cloud-based Secure Channel
A cloud-based secure channel is one of the most comprehensive and safest solutions for remote access. Having secure remote access is imperative for schools and educational institutions.
Cloudbric’s remote access solution provides an encrypted channel from the remote user to the private network via the cloud. That may sound like what a VPN does, but instead it routes the already encrypted traffic through its cloud-based, 3-layer security, which monitors traffic on the channel, authenticates users, and prevents hacking attempts.
Remote access solutions such as VPNs or remote desktop require server/client deployments. With the server/client approach, it is challenging to track access to resources, and it is also a potential source of connection errors. A cloud-based secure remote access channel, such as Cloudbric, does not need any hardware or software deployment on either the client or the server side.
Cloudbric uses the following 3-Layer security between the user and the private network:
- 24/7 Traffic Monitoring: Keep track of traffic flows between source and destination. It detects and blocks common attacks, including cross-site scripting, injections, and even DDoS attacks.
- User Authentication: Uses a 2-Factor Authentication (2FA) to ensure only real users with valid credentials gain access into the private network.
- Hack Prevention: One of the most significant advantages of Cloudbric’s remote access solution is that it can defend against hacking attempts. It blocks malicious traffic from bots or botnets and blocks network scanning attempts.
Pre-COVID-19, tele-education and e-learning were only trends, but now they are the new norm, and demand for them is too high to ignore. Classrooms are becoming remote and virtual, so students and teachers need instant collaboration and access to online resources in a timely and secure manner.
A cloud-based secure channel such as Cloudbric leverages the cloud to do what it does best. With this approach, the cloud runs intensive security workloads on the remote access traffic. It then monitors traffic, detects and prevents hackers, and blocks unauthorized users.
Try Cloudbric’s Remote Access Solution for free for a limited time, and improve your remote learning and teaching. | <urn:uuid:6421b6dc-a28b-4cba-879e-3d933b96688b> | CC-MAIN-2022-40 | https://en.cloudbric.com/blog/2020/10/top-5-best-education-remote-access-solutions/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00413.warc.gz | en | 0.929957 | 2,116 | 2.84375 | 3 |
How AI and ML can help Against Cyber Attacks
Artificial intelligence and machine learning are changing how we work, transact, wage war, communicate, and follow protection norms. These technologies can be used to identify and analyze possible attacks quickly. Cybersecurity protects software infrastructure from cybercriminal threats, but AI can also be used by cybercriminals to search for vulnerabilities and mount attacks.
Machine learning (ML) is a subset of AI based on the idea of developing computer algorithms that automatically improve by discovering patterns within existing data, without being explicitly programmed. It also helps to automatically analyze the way interconnected systems work in order to detect cyber attacks. ML tools depend on data: the more data ML processes, the more accurate and effective the results it delivers.
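As a small, hedged illustration of that pattern-discovery idea (the feature set and numbers are invented for the example and not tied to any particular product), an unsupervised model can be trained on "normal" connection records and then flag records that look out of place:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy feature matrix: one row per connection, with columns such as
# [bytes_sent, bytes_received, duration_seconds, failed_logins].
rng = np.random.default_rng(0)
normal_traffic = rng.normal(loc=[5000, 8000, 30, 0], scale=[1500, 2500, 10, 0.5], size=(500, 4))
suspicious = np.array([[90000, 500, 2, 12],   # exfiltration-like: huge upload, many failed logins
                       [70000, 300, 1, 9]])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

print(model.predict(suspicious))  # -1 marks an anomaly, 1 marks traffic that looks normal
```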
With an increase in computing power, data collection and storage capabilities, machine learning and artificial intelligence are being applied more broadly across industries and applications than ever before. This means new weaknesses can quickly be identified and analyzed to help mitigate further attacks.
AI-equipped IT infrastructure detects malware on the network, creates a response, and can detect intrusions even before they happen. AI offers organizations protection by automating complex processes for identifying, investigating, and addressing security breaches.
AI can open up vulnerabilities as well. This happens particularly when AI depends on interfaces within and across organizations that create access opportunities for bad actors or disreputable agents. Attackers are beginning to deploy AI too, giving computer programs the ability to make decisions that benefit the attackers. These programs might gradually develop automated hacks that study and learn about the systems they target and identify vulnerabilities.
Cloud platforms can help monitor the whole infrastructure that can reduce the risk of malicious resources. IT staff should be educated about cloud security as part of any cybersecurity awareness training.
A secure, AI-powered runtime monitor helps ensure that attackers can’t identify your data. If IT employees become more aware of resources and better at monitoring them, criminals won’t be able to steal them. AI can also speed up the detection of problems by rapidly cross-referencing different alerts and sources of security data and automatically suggesting plans for optimizing responses.
Check out: Top automotive Companies | <urn:uuid:3d7c6825-f660-4adb-af76-aa3cffb82048> | CC-MAIN-2022-40 | https://www.cioapplications.com/cxoinsights/how-ai-and-ml-can-help-against-cyber-attacks-nid-3506.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00413.warc.gz | en | 0.921573 | 448 | 3.03125 | 3 |
White House Office of Science Technology observed “World Quantum Day” & issued this statement
(WhiteHouse.gov) The White House Office of Science and Technology Policy (OSTP) observed the first-ever official World Quantum Day. IQT News summarizes highlights of the OSTP statement below:
The day was a result of an international, grassroots initiative and was intended to promote public understanding of quantum science and technology.
How did the Biden-Harris Administration celebrate World Quantum Day? OSTP and the National Science Foundation (NSF), through the National Q-12 Education Partnership, along with the National Aeronautics and Space Administration (NASA), are advancing learning opportunities in classrooms across the Nation. Here are some specifics:
This Is Quantum: A montage video of students, teachers, scientists, and more sharing what quantum is, what technologies it has enabled, and what attracted them to the field. It includes an invitation, “Let’s quantum together,” and wishes to have a “Happy World Quantum Day.”
QuanTime: A coordinated set of middle and high school quantum activities and games, each under an hour long. To date, over 150 teachers have signed up for the online and hands-on learning experiences. More than 600 kits were sent out, and thousands of students from at least 33 states will be engaging in quantum activities over the next month. And it is not too late to join the fun, as QuanTime activities are running until May 31, 2022. Sign up here.
PhysicsQuest Kits: These kits help students discover quantum mechanics and learn about the incredible life and work of the National Institute of Standards and Technology (NIST) Fellow Dr. Deborah Jin, who passed away in 2016. Dr. Jin was a leading quantum scientist who used lasers and magnets to cool down atoms and make new states of matter. To date, more than 15,000 kits have been distributed across the country.
Learning Quantum with NASA: NASA developed classroom worksheets and online games for learning quantum.
How do efforts like World Quantum Day support the U.S. National Quantum Initiative? World Quantum Day is a celebration of the many ways that quantum science has transformed modern society and the possibilities it holds for our future. The National Strategic Overview for Quantum Information Science outlines the United States’ quantum strategy. Two pillars of the strategy are building a diverse, eminent workforce and fostering international cooperation. In February, the National Science and Technology Council’s Subcommittee on Quantum Information Science released its Quantum Information Science and Technology Workforce Development National Strategic Plan. A major action of the Plan is introducing broader audiences to QIS through public outreach and educational materials. World Quantum Day activities this year and in years to follow are a big step in that direction.
Sandra K. Helsel, Ph.D. has been researching and reporting on frontier technologies since 1990. She has her Ph.D. from the University of Arizona. | <urn:uuid:3dcd7eaa-5f78-43f7-b861-d1aec5fe31a8> | CC-MAIN-2022-40 | https://www.insidequantumtechnology.com/news-archive/white-house-office-of-science-technology-observed-world-quantum-day-issued-this-statement/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00413.warc.gz | en | 0.921637 | 604 | 2.875 | 3 |
“Unstructured data accounts for as much as 80 percent of an organization’s data footprint.” – Gartner
As file storage grows rapidly year after year, new challenges arise around keeping data safe and maintaining control over data storage systems.
Who owns which files? Whose files take up what volume of enterprise storage? Which files have become obsolete? How many copies of a file exist, and where? Are there any stale files that contain sensitive data?
These questions require up-to-date answers to ensure that business, compliance, and data security needs are easily and effectively met. However, due to the overwhelming volume of data stored by organizations, the fulfillment of these needs takes an increasing amount of time and energy.
What contributes to data glut?
Employees storing personal files on enterprise storage, data hoarding, poor data management, and indecision over which files can be safely deleted lead to data glut—a scenario where an organization’s workflows and server performance are bogged down by its own storage.
Analysts found that of the total unstructured data stored by an organization, at least 30 percent is redundant, obsolete, or trivial (ROT). This makes ROT data the primary contributor to data glut and the biggest challenge in data management.
What exactly is ROT data?
Any unneeded, outdated, stale, irrelevant, duplicate, orphaned, or non-business file is ROT data. Let’s break this down further.
Redundant data: These are duplicate copies of files that are stored in multiple locations across your servers.
Obsolete data: These are files that have not been used in a long time, and are unlikely to ever be needed again.
Trivial data: These are files that are not relevant to the enterprise, such as music and large media files, personal files of current employees, etc.
Unless organizations make targeted efforts to manage and mitigate ROT data, they will continue to accumulate junk files in primary storage devices.
Why is ROT data a problem?
ROT files take up valuable space in primary storage devices and impede data visibility and accessibility. They also lead to:
Increased storage costs: If employees continue to hoard once-critical files in expensive Tier-1 storage, the enterprise will eventually need to buy additional storage space. Accounting for factors such as the price of storage hardware, staffing and administration, software for data security and analysis, and more, the annual cost of storing ROT files is around $2,340 per TB.
Reduced business efficiency: The more data there is, the more time and resources it takes to back up, analyze, access, and classify it. This creates a vicious cycle of inefficiency in analyzing storage and increased data management costs, thereby hindering the adoption of cloud storage and affecting innovation.
Slower data discovery scans: Rapid information retrieval is crucial during risk assessment and legal discovery processes. ROT data slows down the identification of pertinent and regulated data.
Data security risks: Since ROT files are left untouched for extended periods of time, their permissions are often outdated or are based on obsolete file security policies. This leaves a startlingly high chance of one of them being accessible by inactive user accounts and makes them susceptible to data breaches.
Risk of non-compliance penalties: Information that is stored beyond its legal retention period increases the risk of non-compliance penalties. Further, stale files may contain sensitive personal data like payment card information (PCI), personally identifiable information (PII), and electronic protected health information (ePHI) with inadequate security measures in place to protect them.
How can you manage ROT data?
Simply buying new storage devices is not the solution to data glut. A holistic approach to ROT data management is required, starting from updating data retention guidelines to setting up processes to purge unneeded data. Eliminating stale, duplicate, and non-business files helps improve data security and organizational efficiency.
To reduce the volume of ROT data, follow this four-step process:
Update storage policies
Set up custom data retention policies based on your data generation and storage trends. One size does not fit all, and organizational needs change continuously. Up-to-date storage policies improve information governance and curb ROT data at its source.
Discover ROT files in your storage environment
Implement a file analysis solution to locate non-business files, files that have been untouched for long periods, duplicate copies, and other junk files.
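A purpose-built product handles this at scale, but the core idea can be sketched in a few lines of standard-library Python: walk a share, flag files untouched for a long period, and hash contents to spot duplicates. The one-year threshold and the path below are illustrative assumptions.

```python
import hashlib
import os
import time
from collections import defaultdict

STALE_AFTER_DAYS = 365  # assumption: files untouched for a year are candidates for archiving

def scan(root: str):
    cutoff = time.time() - STALE_AFTER_DAYS * 86400
    by_hash = defaultdict(list)
    stale = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getmtime(path) < cutoff:
                    stale.append(path)
                with open(path, "rb") as fh:
                    by_hash[hashlib.sha256(fh.read()).hexdigest()].append(path)
            except OSError:
                continue  # skip files that cannot be read
    duplicates = {h: paths for h, paths in by_hash.items() if len(paths) > 1}
    return stale, duplicates

stale, duplicates = scan("/srv/department-share")  # illustrative path
print(f"{len(stale)} stale files, {len(duplicates)} sets of duplicated content")
```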
Set up workflows to manage junk files
Configure policy-based archiving and deletion of junk files. This will free up disk space, improve the performance of storage devices, and ensure continuous storage availability.
ROT data management is a continuous process. CISOs must periodically scan data stores for irrelevant data and respond by either purging it or moving it to secondary storage devices.
Eliminate data glut by managing ROT data using DataSecurity Plus
ManageEngine DataSecurity Plus can locate, manage, and report on ROT data in your storage environment, thereby helping you prevent data glut.
Learn more about the file analysis, ROT data management, duplicate file detection, and file security analysis capabilities of DataSecurity Plus. You can also try your hands on the solution by downloading a free, fully functional, 30-day trial. | <urn:uuid:18d13838-743a-4475-a3ed-f5c0b8c87757> | CC-MAIN-2022-40 | https://blogs.manageengine.com/active-directory/datasecurity-plus-active-directory/2020/10/16/dealing-with-data-glut-why-rot-data-is-an-issue-and-how-to-manage-it.html?source=dspresour | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00413.warc.gz | en | 0.906273 | 1,120 | 2.59375 | 3 |
Biometrics provide a quick and reliable way to identify and authenticate people by their unique physical characteristics. But do they help fight threats like cybercrime, and what do they mean for privacy?
Imagine never having to manually log in again or remember the credentials for a hundred different online accounts. Or arriving at work and getting back home without having to unlock a single door. Perhaps you need to pay a visit to the local pharmacy and pick up a prescription, but instead of having to wait in line, it’s discreetly deposited in front of you without your needing to talk to anyone.
Now imagine walking into a store and being greeted by a disembodied voice that not only knows your name, but also the sort of things you like to buy. Things are starting to sound a little disturbing, even if we’re already well-accustomed to personalized advertising on the internet. But it gets worse – imagine being flagged as a criminal until the police figure out your arrest was due to a 92 percent margin of error.
Biometric identification no longer belongs to the realms of science fiction. It’s part of the technologies that are defining the future of cybersecurity and wider crime prevention tactics. Already, fingerprint scanners are standard on mid- to high-end smartphones. That’s the good side. As for the bad side: things like face, fingerprint, iris and voice recognition can also be considered tools of state authority; an all-out assault on personal privacy. But whether we like it or not, biometrics are here to stay, so we may as well make it useful in protecting sensitive personal or business information.
The good: simplifying and securing access to digital systems
There’s an average of 130 accounts associated with every email address. That’s a whole lot of usernames and passwords to remember. It’s hardly any wonder that so many people reuse the same passwords for most, if not all, of their online accounts. To make matters worse, a lot of people also favor simple, easily memorable passwords, such as names of pets or children. Not only are these relatively easy to guess – a brute-force hacking program can usually find them in mere seconds. Then, there’s the constant threat of social engineering attacks, where criminals attempt to dupe victims into giving away their login credentials over email or through a malicious website masquerading as one belonging to a legitimate organization.
We have a password problem, and compromising on digital security is not an option, especially for businesses, which routinely handle sensitive information belonging to themselves and their customers. Instead, they’re increasingly turning to multifactor authentication (MFA) to add another layer of security that’s far harder to compromise. Chances are, you’ve already used it for things like online banking, or whenever you log into your email from an unrecognized device. Even after you’ve entered your password, the system will ask you to verify your identity with a one-time security token, such as a code sent by SMS or a disconnected token generator. But there’s another method that’s rapidly gaining ground – biometric identification.
Many high-end smartphones and business-grade laptops already feature fingerprint scanners, and facial recognition apps are an emerging technology that’s steadily making its way into the consumer market too. Other less common biometric factors include irises, palm veins and prints, retinas and even DNA. What makes biometrics different from other authentication methods is that they’re inherent to the user, which means they can’t be compromised by your average social engineering scam. It’s also much more efficient to look at a camera instead of manually entering login information or risk saving it on a potentially unsecured device.
The bad: there’s no such thing as a fool-proof system
Biometric identification is highly effective because we all have distinct biological characteristics which can’t easily be faked or exploited – although there are exceptions, such as criminal cases featuring identical twins. Actually, that’s something of a myth – while biometrics may seem secure on the surface, that doesn’t make them foolproof. While a password is something that only its owner knows, your biological traits are, for the most part, very much public. You leave your fingerprints everywhere you go, your voice can be recorded, and your face is probably stored in hundreds of places, ranging from social media to law enforcement databases. If those databases are compromised, a hacker could gain access to your biometric data.
There’s no such thing as a system that’s 100 percent secure, and there never will be. Any kind of digital data can be hacked and misappropriated. And, contrary to popular belief, it can even be faked. Just a day after the release of the iPhone 5s, which featured the Touch ID fingerprint scanner, a German hacking group managed to create a fake finger to unlock the devices. Sure, the technology has improved in the past seven years since that happened, but there’s a big difference between improvement and perfection. Five years later, the same hacking group managed to crack the iris recognition in the Samsung S8 simply by placing a contact lens over a high-definition photo of an eye.
The ugly: If you’re hacked, there’s no going back
The fact that biometric data can be hacked can have far wider consequences, some of which are extremely worrying from both a security and privacy standpoint. If your password is stolen, then you can usually just reset it and choose a new one. If a hacker has a photo of your iris, you can’t replace your eye – unless of course, you’re Tom Cruise’s character John Anderton in Minority Report, where he has an eye transplant to hide his true identity. Now, while hackers usually prefer less conspicuous methods than stealing body parts to access secure systems, it’s a fact that biometrics can be abused and, once they are, there’s no going back.
Although biometric technologies are getting better all the time, there will always be a margin of error, which presents concerns for both security and privacy. The security concern is that, like any other identification method, biometric identification isn’t perfect and never will be. From a privacy perspective, you could be misidentified as a criminal, and there’s a good chance you’ll remain in the system long after the misunderstanding has been resolved. Another issue is that, since they’re created by people, biometric recognition is innately biased. Most facial recognition systems, for example, are primarily trained with images of white males, which results in higher margins of error for women and people of color.
This uglier side to biometrics presents serious challenges for businesses, since they need to store biometric data as securely as possible. If the system is hacked, those affected will face an increased risk of hacking for the rest of their lives. In other words, they’ll never be able to rely on biometric security again. This gives businesses, as well as governments and other organizations which rely on biometrics, enormous ethical and financial responsibilities. That’s why it’s important to consider where the biometric data is stored and to give its owners control over how it’s used.
A secure future without compromising privacy
There’s a line between security and privacy that shouldn’t be crossed. The biggest challenge lies in figuring out exactly where this line is. Government-mandated regulations for the storage and use of biometric data are already being developed to protect personal privacy and security. For example, the Supreme Court of Illinois, US, recently ruled unanimously that employees should retain the right to know how their biometric data is collected and used, and that companies should only do so with opt-in consent.
That biometrics are, for the most part, immutable is both their biggest advantage and worst drawback. While they potentially provide an effective additional layer of security, they can also be a single point of failure – with potentially disastrous consequences. There’s no denying biometrics offer convenience and a high level of security, but they also pave the way for oppressive regimes and technology companies alike to infiltrate yet another aspect of our personal lives. With privacy being the concern of the century, businesses must be mindful about which technologies they choose to adopt and how.
This article represents the personal opinion of the author.
Author: Charles Owen-Jackson | <urn:uuid:8bdbce43-5626-4144-8eae-d508c27d5fba> | CC-MAIN-2022-40 | https://kfp.kaspersky.com/news/the-good-the-bad-and-the-ugly-of-biometric-authentication-technology/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00413.warc.gz | en | 0.940144 | 1,771 | 3.125 | 3 |
2017 will go down as the year ransomware hit the mainstream, thanks largely to malware known variously as WannaCry, WannaCrypt or WannaCryptor 2.0.
The malicious software compromised systems across Asia, Europe and beyond, affecting high-profile victims such as Britain’s National Health Service (NHS) and FedEx in the United States. Outdated operating systems and computers that had not installed a Windows security update were identified as the weak link.
For IT security professionals in the education sector, who also suffer from a general lack of IT resources and security expertise, the rise of ransomware is a worrying development that will put more pressure on already stretched resources.
What is WannaCry?
WannaCry, and its variants, is a form of ransomware, a type of malicious software that blocks access to your files and data until a financial ransom is paid. It typically locks your system, prevents you from using Windows and encrypts your files so you can’t use them. It is spread via spam or targeted campaigns, often arriving in an unsolicited email or attachment.
WannaCry exploits a vulnerability in the Server Message Block (SMB) protocol that Windows systems use to transfer data between computers. WannaCry is especially dangerous as it can infect connected systems without any user interaction, and it only needs to reside on a single connected computer to infect an entire network.
Why is the education sector a target?
Like healthcare, educational institutions offer cybercriminals rich pickings in the form of sensitive personal and financial data, as well as valuable academic research and other potentially compromising information unique to the sector.
Security firm BitSight reports that education is the most targeted sector in the US, with 13 percent of educational organizations having been compromised by ransomware in 2016. This is three times the rate of healthcare and more than 10 times the rate recorded in the financial sector.
Ransomware and the education sector
It’s difficult for schools to fight ransomware, primarily due to tight budgets and under-resourced IT teams. And colleges are environments where file sharing is commonplace, making ransomware a huge security challenge for IT departments.
Protecting your organization against ransomware
Even if your IT budget is tight, there are some simple steps you can take to prevent the spread of ransomware and other malware, without incurring significant costs:
- If a computer is infected, isolate it from the network as soon as possible and alert all users about the infection.
- Keep all your software up to date, especially security patches and system-critical updates.
- Implement an awareness program for staff and educate them on how ransomware is delivered.
- Back up data regularly using physical and cloud sources.
- Establish an email security protocol to prevent prospective attacks; discourage users from clicking on links, attachments or emails from companies they don’t know.
- Advise your users to avoid file sharing, which can be a source for ransomware to infiltrate your network.
- Segment your Wi-Fi to keep staff, students and guests on different networks.
If you have the budget, upgrade aging infrastructure and software to reduce your vulnerabilities. It could be critical, especially if you are running systems that no longer receive mainstream support. | <urn:uuid:1526c5b2-2060-4fc6-81f3-37efdbad0df3> | CC-MAIN-2022-40 | https://gulfsouthtech.com/malwaremanaged-service-provderransomware/education-is-a-major-ransomware-target-in-2017/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00413.warc.gz | en | 0.93164 | 693 | 2.796875 | 3 |
CRLF stands for the special characters Carriage Return (\r) and Line Feed (\n), two elements used in specific operating systems, such as Windows, and various internet protocols like HTTP. Carriage Return signifies the end of a line, whereas Line Feed denotes a new line.
Usually, the purpose of the CRLF combination is to signal where an element in a text stream ends or begins. For example, when a client (browser) requests content from a website, the server returns HTTP response headers followed by the requested content (the response body). The headers in the response are separated from the actual website content by CR and LF characters.
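A small sketch makes the separator visible: each header line in an HTTP response ends with CRLF, and a blank CRLF-terminated line marks where the headers stop and the body begins (the header values here are placeholders).

```python
CRLF = "\r\n"

response = CRLF.join([
    "HTTP/1.1 200 OK",
    "Content-Type: text/html; charset=utf-8",
    "Set-Cookie: session=abc123",      # placeholder header value
    "",                                # empty line: end of the headers
    "<html>...</html>",                # the response body starts here
])

print(repr(response))
```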
What is a CRLF injection attack?
However, the CRLF character sequence can be used maliciously as a CRLF injection attack. This attack is a server-side injection at the application layer.
By exploiting a CRLF injection vulnerability in the server that allows user input from an untrusted source, attackers can split text streams and introduce malicious content that isn’t neutralized or sanitized.
For such an attack to be successful, a server must both allow such user input and be vulnerable to using CRLF characters. I.e., if the platform does not use these characters, it will not be vulnerable, even if unsanitized user input can make it through.
If a CRLF injection is successful, this can open the door for further exploits such as cross-site scripting (XSS), web server cache poisoning, client web browser poisoning, client session hijacking, cookie injection, phishing attacks, website defacement, and more.
In other words, a CRLF injection attack typically is not an end, but a means to open the door for further attacks.
What are the types of CRLF injections?
There are two main types of CRLF injections: HTTP response splitting and log injection. Read more about them below.
HTTP response splitting
A more accurate name for this type of injection is Improper Neutralization of CRLF Sequences in HTTP Headers. This name also describes the main vulnerability associated with the attack.
If a server does not properly sanitize user-provided input, attackers can inject CRLF characters and a text sequence of their own or inject HTTP headers. The purpose of this is to force the server to perform a particular action.
After the injection, the server will respond to the client by including the attackers’ instructions in the response header. Moreover, once attackers have managed to split the response, they can create different responses and send them to the client.
Receiving the instructions, the browser will carry them out. The result of this may be to open the door for further attacks or to carry out actions that lead to a breach and compromise of data.
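A minimal sketch of the vulnerable pattern and its fix, assuming a redirect target taken straight from user input (framework details are omitted; only the neutralization step is shown):

```python
def safe_header_value(user_input: str) -> str:
    """Strip CR and LF so user input cannot terminate a header and start a new one."""
    return user_input.replace("\r", "").replace("\n", "")

# Attacker-controlled value attempting to smuggle an extra header into the response.
untrusted = "/home\r\nSet-Cookie: role=admin"

headers = {"Location": untrusted}                     # vulnerable: the CRLF splits the header
headers = {"Location": safe_header_value(untrusted)}  # safe: the value stays a single header line
```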
Log injection
Log injections are also known as log poisoning or log splitting. This attack entails inserting untrusted or unvalidated data into a log file. Such a file can be anything from a system log to a user or access log and more.
There are several types of log injection attacks. One is to corrupt a log and make it unusable or to forge it and change its data, creating fake log entries. Log forging can be used to cover traces of an attack, draw attention to another party and create confusion, and divert attention from other possible attacks being launched simultaneously.
The second use of log injection is to launch an XSS attack that triggers when the log is viewed, exploiting vulnerabilities in a web application. A third way of establishing a log injection is to insert commands that a parser could execute upon reading the log.
In either of these cases, attackers rely on the possibility of injecting unsanitized data into logs with the help of CRLF characters.
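The same neutralization idea applies before anything user-controlled is written to a log; encoding the newline characters keeps a forged "extra" entry from appearing as its own line (the log format below is an arbitrary example):

```python
def write_log_entry(logfile, user: str, action: str) -> None:
    # Encode CR/LF so injected newlines show up literally instead of starting a new log line.
    entry = f"user={user} action={action}".replace("\r", "\\r").replace("\n", "\\n")
    logfile.write(entry + "\n")
```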
What is the impact of CRLF injection attacks?
Most modern servers are likely not vulnerable to CRLF injections as administrators have taken the necessary steps to prevent their possibility. However, depending on an application’s level of security, the severity of a CRLF injection can range from minor to very serious.
A successful CRLF injection can have all the consequences of an XSS attack or cross-site request forgery (CSRF), such as the disclosure or corruption of sensitive user information. Such an attack can potentially lead to an entire file system being deleted if attackers gain the necessary access.
How to avoid CRLF injection vulnerabilities?
Luckily, vulnerabilities that may lead to a CRLF injection can easily be fixed. Here are some of the ways in which you can protect your application against them:
- Never trust user input and use it directly in the HTTP stream
- Sanitize and validate all user-supplied input before it reaches response headers and/or encode output in HTTP headers that are visible to users to prevent injection in the response
- Encode CRLF characters so that they are not recognized by the server, even when provided
- Remove newline characters before passing content into the header
- Disable any unnecessary/unused headers in the web server
- Remove CRLF from the data before logging it
- Apply all the latest patches
- Scan regularly | <urn:uuid:9a98b73c-d417-45ae-af9d-41f6b35714e8> | CC-MAIN-2022-40 | https://crashtest-security.com/crlf-injection-attack/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00613.warc.gz | en | 0.917404 | 1,071 | 4.1875 | 4 |
Understanding Data Privacy
by Gergo Varga, Senior Content Manager, SEON
Today, we are living in a world where data is everything. We use it in every part of our lives to communicate with each other and our devices, to organize our lives, and to make our lives easier. It has also become an indispensable part of any business, as it helps companies make informed decisions and improve their operations. But businesses are not the only ones who consider data valuable: cybercriminals have also recognized the value of information. If they get access to that data, cybercriminals can cause disastrous consequences for businesses, which is why data privacy and protection should be a priority for every business.
Even though the threat of cyber attacks is ever-present, a large number of businesses still haven’t updated their security protocols to include fraud detection software, because they don’t believe cybercriminals can do any damage to them since they do not keep any valuable data. This is where they are wrong: to cybercriminals, all data is valuable, from standard internal data that can be held for ransom or extracted for use in further cyberattacks, to more valuable information such as intellectual property and payment details from customer or supplier databases. In 2020, more than 80% of firms reported a dramatic rise in cyber attacks. Every business has something to lose in case of a cyber attack.
Data privacy consists of the policies and processes that dictate how businesses can collect, share, and use data while staying in compliance with the applicable privacy laws. This data can range from customers’ private details to confidential financial records, or from patients’ medical records to employees’ personal files, all depending on the nature of the business which is collecting the data. 65% of American voters say data privacy is one of the biggest issues our society faces. While businesses need to collect and store personal data about users in order to provide services, at the same time they need to be aware of the most secure way they can do it to ensure the privacy and safety of their customers and the business.
Every organization has access to confidential and sensitive information that it needs to protect from getting into the wrong hands. From corporate secrets to customer data, keeping this data safe needs to be a priority for any business. From reputational damage to financial loss or even a halt to business continuity, there is no limit to the consequences cyber attacks can leave in their wake; they can cost companies billions of dollars. Take the Marriott breach from 2020 as an example. It started with the theft of employee login credentials, which were used to access 5.2 million guests’ information that could then be used for further malicious actions. Not only did Marriott have to pay a fine for failing to protect customers’ private data and face a class-action lawsuit, but the breach also caused a significant loss of trust among prospective customers, resulting in further financial damage.
Making data privacy a priority for your business is not only a legal matter of staying compliant with regulations; it is also key to business success. It protects your business data from falling victim to cyber attacks and the significant financial damage they cause, and it helps you retain existing customers and even attract new ones, as they feel comfortable putting their trust in you. Without customers, you would not have a business to run, which is why it is extremely important to keep them happy and secure.
About the Author
Gergo Varga, Senior Content Manager at SEON. Fraud Fighters
Gergo Varga has been fighting online fraud since 2009 at various companies – even co-founding his own anti-fraud startup. He’s the author of the Fraud Prevention Guide for Dummies – SEON Special edition. He currently works as the Senior Content Manager / Evangelist at SEON, using his industry knowledge to keep marketing sharp, communicating between the different departments to understand what’s happening on the frontlines of fraud detection. He lives in Budapest, Hungary, and is an avid reader of philosophy and history. | <urn:uuid:84dd6adf-356b-42bd-be4f-b9486f15a37f> | CC-MAIN-2022-40 | https://cybersecuritymagazine.com/why-should-data-privacy-be-a-top-priority-for-companies/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00613.warc.gz | en | 0.96293 | 823 | 3.03125 | 3 |
Duke University was created in 1924 by James Buchanan Duke as a memorial to his father, Washington Duke. The Dukes, a Durham family that built a worldwide financial empire in the manufacture of tobacco products and developed electricity production in the Carolinas, had long been interested in Trinity College. The school, then named Trinity College, moved to Durham in 1892. In December 1924, the provisions of indenture by Benjamin Duke’s brother, James B. Duke, created the family philanthropic foundation, The Duke Endowment, which provided for the expansion of Trinity College into Duke University.
As a result of the Duke gift, the original Durham campus became known as East Campus when it was rebuilt in stately Georgian architecture. West Campus, Gothic in style and dominated by the soaring 210-foot tower of Duke Chapel, opened in 1930. East Campus served as home of the Woman’s College of Duke University until 1972, when the men’s and women’s undergraduate colleges merged. Both men and women undergraduates now enroll in either the Trinity College of Arts & Sciences or the Pratt School of Engineering. In 1995, East Campus became the home for all first-year students.
Duke maintains a historic affiliation with the United Methodist Church.
duke.edu
Duke University’s IT Security Office provides the strategy and tools required to protect the university’s users, systems, and data. The team not only detects and responds to security incidents, but they also provide security awareness initiatives across campus, designed to better protect Duke’s faculty, staff, and students.
One of the key strategies for the team is to promote good password hygiene by using a password manager.
Initially a supplement to its security awareness efforts, Duke made LastPass available as part of a campaign to promote the use of a different password for each website that students visit. Soon after, it deployed LastPass to specific campus groups that needed to share departmental passwords, including the Office of Information Technology (OIT).
OIT has made use of LastPass’ specific policies to increase the security around the administrator passwords shared in the tool. For example, policies have been set to require stronger passwords (length, character mixes), making accounts more difficult to hack. Additional custom policies, like timing out when the user is idle or when the browser is closed, further protected access to passwords. After already implementing Duo Security for multifactor authentication (MFA), Duke integrated their Duo implementation with LastPass to require MFA when logging in to accounts.
Several other departments at Duke have adopted LastPass to address their specific needs. These departments were able to improve collaboration with secure password sharing and can better manage changing passwords and access when a departmental employee leaves. Staff can also securely access departmental passwords offline when needed.
LastPass has been part of Duke University’s strategy to create strong security behaviors with the faculty, staff and students, particularly around the security of accounts and passwords. By promoting best practices to eliminate password reuse and securely store passwords, LastPass allows the Duke IT Security Office to achieve a responsible computing culture. | <urn:uuid:381d2452-eb2b-4cb1-815e-6ae3485b82d8> | CC-MAIN-2022-40 | https://www.lastpass.com/resources/case-study/duke-university | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00613.warc.gz | en | 0.952582 | 643 | 2.625 | 3 |
Self-driving vehicles have been deployed on roads for testing and have been promoted by technology companies for a number of years now.
The discussions around safety have been explored at length - and rightly so. Automating machines that carry people across large, complex terrains needs to be examined carefully. And whilst attitudes to the deployment of self-driving vehicles vary, it’s not too hard to imagine a near future where removing human error from the roads does indeed result in fewer accidents and/or fatalities.
But given the understandable high profile of user safety when considering the deployment of self-driving vehicles, other concerns are sometimes overlooked. This is particularly true of data privacy and algorithmic bias and/or oversight.
Cars or other vehicles that rely on the collection of information to operate autonomously are inevitably going to collect and store huge swathes of data. This data is often personal and is collected without explicit consent - for example, visual data of someone walking near your vehicle. In theory we could be looking at a world where thousands of mobile surveillance systems roam the roads all over the country, with private companies collecting visual data on unaware passers-by.
Not only this, but because companies want to maintain a competitive advantage in the quickly developing market for autonomous vehicles, how this data is processed and used isn’t always clear. The problem of ‘black box’ algorithms has been highlighted in other fields - but isn’t often in the public consciousness when thinking about driverless cars.
The UK’s Centre for Data Ethics and Innovation (CDEI) is looking to bring more attention to this area, with the release of its ‘Responsible Innovation in Self-Driving Vehicles’ policy paper. The paper aims to create a framework for the safe development of autonomous vehicles, which covers everything from road safety to governance.
The paper comes off the back of the British Government's plans to invest £100 million in research and safety development of self-driving vehicles, with the aim of getting them on the road by 2025. It is estimated that this could bring 38,000 new jobs to the UK and create a £42 billion industry.
This report will put road safety to one side, as it has already been covered comprehensively elsewhere. However, the CDEI does a good job of highlighting the data concerns that result from the introduction of autonomous vehicles (AVs).
As the report notes, whilst AVs collect data in a similar way to other devices that are readily available (smart speakers, video doorbells), their use creates some unique problems. The CDEI explains:
There are two key characteristics of AVs that suggest particular attention should be paid to the privacy implications of these systems.
Firstly, AVs may lead to widespread collection and processing of personal data in order to achieve core functionality such as detecting other road users in situations where explicit consent is not feasible.
Secondly, they require regulatory authorisation for deployment (as discussed in the safety section above) that may be perceived as regulatory endorsement (implicitly or explicitly) about this personal data processing, including how they strike the right balance between what is necessary for safe driving, and sufficient protection of personal data. These challenges merit careful consideration given the potential future scale of AV use in public spaces.
AVs are likely to process several categories of personal data, such as time-stamped location data of the vehicle (which carries a high degree of identifiability), as well as health and wellbeing data on the driver. Not only this, as noted above, AV sensors may also collect personal data from individuals outside the vehicle (pedestrians and other road users), including facial images collected from video feeds.
The report also highlights how some companies are exploring the use of biometric data of road users outside of the vehicle. Biometric data is essentially personal data that relates to the physical, physiological or behavioural characteristics of a person. The reason that this may be useful could be in instances where other road users engage with your vehicle - for example, making eye contact.
The CDEI says that there may be legitimate reasons for collecting data in this way under GDPR legislation, but it is “something of a grey area and would be subject to undertaking a legitimate interests assessment”.
And as highlighted previously, the use of video feeds on AVs creates a potential new ‘surveillance environment’ that’s operated by a select few private companies. The report notes:
Some AVs use video cameras that, while their primary purpose is safe operation, can also function as surveillance cameras by collecting, storing and transmitting video of their environments (in a non-targeted way).
This video data could potentially be reused for other purposes such as evidence of crimes unrelated to road safety, and there is some evidence that this is already happening in both public and private places. Unlike dash cams, these are now potentially core capabilities of the safe operation of an AV, which would be regulated in the future by DfT agencies.
In effect, this is potentially approving a surveillance capability, and DfT should draw on the existing governance frameworks for surveillance cameras.
Black box oversight
Closely related to the issues of data privacy are those of explainability. Given the autonomous nature of self-driving vehicles, the CDEI rightly notes that they lack “moral autonomy”. Simply put, if something goes wrong you can’t blame the vehicle itself.
The report states:
Since a self-driving vehicle lacks agency, any action it performs must be traced back to its designers and operators. The Law Commissions have concluded that it is not reasonable to hold an individual programmer responsible for the actions of the vehicle. Instead, the ASDE (authorised self driving entity) as an organisation bears responsibility.
This raises a fundamental need for an appropriate degree of explainability for the vehicle’s ‘decisions’.
However, explainability in this area isn’t always easy. The CDEI notes how investigations into high-profile self-driving vehicle crashes have found poor perception and classification of objects, as well as unsatisfactory post-hoc explanations.
Explainability allows for improvements to safety and accountability, and provides evidence for which to evaluate the fairness of systems. But it seems that this isn’t always easy to do with AVs, given that machine learning based systems are challenging to explain. This is particularly important given the personal data being collected and the personal safety risks at play - where accidents will result in looking for people to place blame. The report adds:
The potential hazards of AVs as robots operating in open-ended, uncertain environments, raise the stakes for the interpretability of AI. With other technologies that make use of machine learning systems, performance has been prioritised over interpretability. Growing interest in explainable AI is starting to redress this balance, but there may be some uses of machine learning in AVs, such as computer vision, that remain incompletely interpretable. It may be impossible to know with certainty why an AV image recognition system classified an object or a person according to a particular category. Other parts of AV systems, such as those that determine the speed and direction of the vehicle, are in many cases rules-based and therefore more easily explainable.
Techniques for ensuring explainability will differ across AV systems. An ASDE may need to review logs from a particular event or replay logs through a simulator. Generating explanations for ML-based systems remains an active research area and it is likely that capabilities will advance significantly in the coming years.
Improving safety on the roads through AVs is a worthy pursuit and one that will likely become a reality in the near future. But what’s needed is effective regulation to ensure that this network of surveillance systems, which rests in the hands of a few privately owned companies, considers privacy and explainability as equally important as safety. This is one of those areas where the likely outcomes aren’t yet predictable, so regulation needs to be thoughtful from the start. | <urn:uuid:bbe998e6-aa36-4c2d-9880-8e606b3b4b36> | CC-MAIN-2022-40 | https://diginomica.com/safety-isnt-only-issue-self-driving-vehicles-data-privacy-and-algorithmic-oversight-need-attention | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00613.warc.gz | en | 0.951807 | 1,682 | 2.890625 | 3 |
Front-End vs Back-End Developer: Which Path is Right For You?
Front-end and back-end development are two of the most common specializations in web development today. However, for those new to development, it can be challenging to decide which path is better for you. With that in mind, we thought it would be useful to discuss the differences between front-end and back-end development and provide some clarity on which path might be right for you.
How to Gain Development Skills
Becoming a developer requires learning programming languages. Which one you choose to learn first depends on whether you want to be a front-end or back-end developer. And you shouldn't stop at learning just one language, especially if you aspire to be a back-end developer.
Front-End and Back-End Development
The terms front-end and back-end development might seem straightforward enough. Front-end development covers all aspects of web development that shape the user experience, whereas back-end development covers the server-side processes that power it. However, this oversimplification only scratches the surface of everything involved in these two specializations. Before we dig into which path might be right for you, let's formally define both front-end and back-end development.
Front-End Development: Front-end development is a development specialty that focuses on any part of a web page, software package, or web application that a user interacts with. This specialty, also known as client-side development, applies to anything a user interacts with on a webpage or application, including images, themes, graphs, buttons, text, tables, and a variety of other on-page graphical components.
Back-End Development: Back-end development (also known as server-side development) is a development specialty that focuses on the back-end engine that powers the website or application. Websites require server infrastructure, applications, and databases to run properly, and this is where back-end development comes into play. This behind-the-scenes process is responsible for running the website or application, storing and serving data via databases, and fulfilling user requests.
The Front-End Developer
A front-end developer is a programmer who is tasked with writing code that integrates the visual aspects of a webpage or application, such as images, themes, graphs, buttons, text, tables, and a variety of other on-page graphical components. This role is typically a great fit for a developer who also enjoys the visual side of development; a knack for visual design is a real asset for anyone considering becoming a front-end developer.
The Back-End Developer
A back-end developer is a programmer who builds and maintains all of the technology required to power and run the website. Here, the back-end developer configures and maintains the server infrastructure, applications, databases, and security packages, including all of the data migration processes affiliated with supporting the website.
Programming Languages for Front-End and Back-End Development
Because front-end development and back-end development cover such unique aspects of the web development process, it's understandable that the programming languages required to perform front-end and back-end development are going to vary significantly.
Understanding which programming languages align to each web development specialty is critical for students early in their web development education so they can learn the appropriate languages that align with their career aspirations.
Programming Languages Used by Front-End Developers
There are a few languages that are staples of front-end development that every developer pursuing a specialty in the front end needs to know. The most common are:
- HTML, which structures the content of a page
- CSS, which handles styling and layout
- JavaScript, the scripting language that adds interactivity and behavior
These three languages are the core of front-end development. Mastery of these three languages will, without a doubt, set developers up for a career in front-end development.
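To make this concrete, here is a minimal, hypothetical front-end snippet (the element IDs and the TypeScript typings are invented for illustration); the HTML defines the structure, CSS would handle styling, and the script reacts to a click and updates what the user sees:

```typescript
// The markup (HTML) and styling (CSS) would define something like:
//   <button id="greet-btn">Greet</button>
//   <p id="output"></p>
// JavaScript/TypeScript then wires up the behavior:
const button = document.querySelector<HTMLButtonElement>("#greet-btn");
const output = document.querySelector<HTMLParagraphElement>("#output");

button?.addEventListener("click", () => {
  if (output) {
    output.textContent = "Hello from the front end!";
    output.classList.add("highlight"); // a CSS class would style this
  }
});
```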
Programming Languages Used by Back-End Developers
Programming languages for back-end developers are more geared toward supporting application design, communication between a website and the back-end server, and a wide array of server-side functions. The core languages here are general-purpose, server-side languages; widely used examples include Python, Java, PHP, and Ruby, along with JavaScript run on the server through Node.js.
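As a rough sketch of what server-side code looks like, here is a tiny back-end endpoint built on Node.js's built-in http module and written in TypeScript; the route name and response payload are made up for the example:

```typescript
import { createServer } from "http";

// A tiny back end: listen for requests, do server-side work,
// and return data as JSON for the front end to display.
const server = createServer((req, res) => {
  if (req.method === "GET" && req.url === "/api/status") {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ service: "example-api", healthy: true }));
  } else {
    res.writeHead(404, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ error: "not found" }));
  }
});

server.listen(3000, () => console.log("Listening on http://localhost:3000"));
```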
What Experience is Needed to Become a Front-End and Back-End Developer?
With so many bootcamps and courses available to aspiring developers, it can seem overwhelming, to say the least, to work out what's required to become either a front-end or a back-end developer. To help bridge this gap, we'll discuss the experience required for each path.
The path to a career in development has changed substantially over the past few years. As little as five years ago, the common sentiment was that a person had to get a four-year degree in computer science to lock down a career in web development. However, the industry has changed with incredible velocity to give opportunities to those who graduate from accelerated bootcamps, or who hold no degree or certification at all but have the chops to handle the development work.
1. Four-Year College Degree
As mentioned, there is no single path into front-end development; however, there are some considerations to take into account. If you're ready to commit to the long haul, a four-year degree in a specialty such as computer science is a great way to strengthen your chances of getting a job in front-end development. Our suggestion is to augment this approach with online courses and certifications to help accelerate your education.
2. Bootcamps
For those uninterested in going the route of a four-year college, bootcamps may be a great option. Bootcamps offer accelerated training programs that get students ramped up in development in as little as 12 to 18 months. And even though these bootcamps are considerably shorter than the four-year commitment of college, graduates have a high success rate of landing offers to become front-end or back-end developers.
3. Online Courses & Certifications
Another phenomenal way to ramp up quickly and get your foot in the door is through online courses and certifications. For those who don't want to pay the hefty fees associated with a four-year college or even a bootcamp, but still want a strong chance of getting an offer, this path may be perfect.
On this path, you'll really be taking your education into your own hands. Students will want to start taking courses and certifications and building a portfolio of work along the way. With a portfolio in hand and applications out the door, many companies will be eager and willing to give new developers an opportunity in an entry-level development position.
How to Become a Developer
If you're ready to get your career started in front-end or back-end development, consider checking out some of our foundational courses such as HTML, CSS, and JS for Web Developers Online Training and Full-Stack Development. Here, you'll have access to world-class training by some of the best teachers in the web development space. | <urn:uuid:1e7d83b6-258c-45fc-9477-4f85841cd1e1> | CC-MAIN-2022-40 | https://www.cbtnuggets.com/blog/technology/programming/front-end-vs-back-end-developer-which-path-is-right-for-you | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00613.warc.gz | en | 0.941748 | 1,504 | 2.59375 | 3 |
Passwords and PINs are not nearly as secure as they should be. According to a recent Verizon investigation, 81 percent of hacker-related breaches happen due to lost or weak passwords. Despite this, poor password habits persist (for example, reusing the same password across multiple accounts both at home and at work). Weak passwords make it easy to hack into a system, learn a password/username combination, and then use that information to access a trove of sensitive user data. Couple this with an increase in cloud-based computing and you get an undeniable need for more secure authentication systems. Is biometric authentication the solution?
An Increased Need for Password Protection
According to the 2018/2019 World Quality Report, the number one goal of software developers is to improve end-user experience. To meet these demands, many software companies now base their applications in the cloud, a scalable, affordable hosting service that is both fast and functional.
The survey found that 76 percent of all applications are based in the cloud. However, because it is an internet-based server, software hosted in the cloud is vulnerable to attack, especially if passwords are weak or used across multiple applications. This is especially concerning given the frequency of financial and health-related transfers via cloud-based servers.
To improve the authentication process without hindering user flow, many companies now employ the use of biometric authentication systems. Common examples of biometric authentications include the fingerprint readers or facial recognition software such as those found on many popular cell phones. Because biometrics are unique to the user and because devices can quickly scan and confirm (or deny) a user’s identity, biometric data may serve as a natural alternative to passwords. The theory is that biometric information represents passwords that cannot be lost or stolen.
Or can they?
Concerns Regarding Biometric Authentication
Though considerably harder, it is possible to steal biometric data. According to Experian’s 2019 Data Breach Report, hackers are taking advantage of flaws in both biometric hardware and data storage.
However, it’s impossible to modify biometric data in the event of a breach (unlike passwords and PINs). For example, users can adjust compromised passwords to protect from future data breaches. Conversely, if someone captures fingerprint data, it cannot be swapped out for a new set. In other words, once someone has biometric information, they have it indefinitely.
Note that biometric data collection need not happen directly on the device itself. Once the information is collected and transformed into usable, computer-friendly data, it can be stolen just like any other piece of data.
Additionally, surface-level biometrics like facial recognition or fingerprint scanning can change over time or as a result of trauma. To illustrate, a Wall Street Journal article explains how simple changes in appearance – shaving a beard or wearing a different makeup style – blocks access to some biometric-enabled devices.
Adermatoglyphia, or the loss of fingerprints, is another concern regarding biometric security. The condition mostly affects women (primarily seniors) and those who work in manual labor. Though seemingly benign, a lack of fingerprints makes it difficult to register and access devices that depend on biometric fingerprint recognition. Some genetic conditions such as Down syndrome, Turner syndrome, and Klinefelter syndrome may also pose problems with biometric fingerprinting systems.
Solutions to Common Biometric Authentication Concerns
These concerns make it clear that a single biometric authentication tool is not enough to provide the type of data security necessary for cloud-based computing. This does not mean we should dismiss biometric authentication, however. Quite the contrary; biometrics are still the most secure form of identity verification (especially as we make advances in things like vein recognition) and serve as a valuable tool for software security.
To address biometric security concerns, organizations must adhere to biometric software standards that take into account things like data collection, storage, and protection. The FIDO Alliance is spearheading the movement toward a more secure future in biometric technology, aiming to reduce the world's reliance on standard passwords in favor of more secure biometric technology.
FIDO certification through an accredited third party helps ensure the interoperability of biometric ecosystems, validates product functionality and conformance, and highlights both product and brand integrity. The more widespread FIDO certifications are across the industry, the more secure biometric-enabled devices and services will become.
Additionally, multi-modal biometric systems can help offset singular abnormalities. These systems collect at least two forms of biometric data and pair them up to create one user profile. Examples include systems that scan both fingerprints and hand veins, or those that combine facial recognition with iris recognition. These, in addition to standard passwords and PINs, greatly improve security surrounding identity verification.
Though biometric authentication can raise concerns, organizations can proactively improve biometric application security. First and foremost, all biometric technology should be certified as FIDO compliant through a third-party software testing company. Multi-modal and two-factor authentication practices also help secure private data and protect the public from virtual attacks.
By the end of 2021, there will be 12 billion connected IoT devices, and by 2025, that number will rise to 27 billion.
All these devices will be connected to the internet and will send useful data that will make industries, medicine, and cars more intelligent and more efficient.
However, will all these devices be safe? It’s worth asking what you can do to prevent (or at least reduce) becoming a victim of a cybercrime such as data theft or other forms of cybercrime in the future?
Will IoT security ever improve?
In recent years, the number of security vulnerabilities related to the Internet of Things has increased significantly.
Let us start at the very beginning — most IoT devices come with default and publicly disclosed passwords. Moreover, the fact is that there are many cheap and low-capacity Internet of Things devices that lack even the most basic security.
And that's not all — security experts are discovering new critical vulnerabilities every day. Numerous IoT devices undergoing security audits exhibit the same issues over and over again: remote code execution vulnerabilities at the IP or even radio level, and unauthenticated or broken access control mechanisms.
Weak hardware security is one of the most frequently discovered issues. By this complex term, we refer to all the attack possibilities that hackers can exploit when they have an IoT device in their hands: extracting security credentials stored in the clear in the device's memory, using that data to breach the servers the device's data is sent to, and sharing or selling those credentials on the dark web so that other devices of the same type can be attacked remotely.
TL;DR: Half of all internet traffic is Bots. There are good and bad bots and it is important to be able to manage all bot traffic, mitigating the risk posed by bad bots so you can protect your customers and your business.
Bot management is the practice of knowing how bots impact your business and understanding their intent so you can better manage all bot activity. After all, there are “good” bots and “bad” bots. The “good” bots are the ones we rely on—like bots that search for and find things on the internet or chatbots that drive improved customer experiences. Then there are “bad” bots—ones that hoard resources, perform account takeovers and credential stuffing, launch DDoS attacks, steal intellectual property or impact your business intelligence.
As a reminder, bot threats are often defined as any automated misuse of functionality or action that adversely affects web apps. Therefore, it’s important to keep in mind that the bot itself isn’t the true culprit, it’s the bot operator.
Being able to manage all bots effectively will require separating the good from the bad. This is where bot mitigation comes into play—that is, identifying, blocking and mitigating the unwanted or malicious bot traffic that hits your network so you can reduce your risk.
Bot mitigation is far more than just identifying your bot traffic; it is about identifying and blocking the unwanted portion of it. Ultimately, bot mitigation boils down to reducing the risk of a bot-related threat.
The majority of threats in any environment start with bots or botnets—they help cybercriminals achieve scale. Every kind of online interaction—website visits, API calls to mobile apps, and others—is being attacked by bots. Equally important, bots are also skewing business intelligence (BI).
These are the Top Business Impacts of Bad Bots:
As bot technology and influencing factors such as machine learning and AI continue to evolve, so will the threats they pose. That’s why it’s critical, when looking at your overall security strategy, that you consider how you will filter out unwanted automated traffic and mitigate malicious bots in general.
Preparing your organization to deal with the impact of bots will help ensure your Intellectual Property, customer data and critical backend services are protected from automated attacks. The best way to mitigate bot threats is to target the attack tool itself and adopt a layered security approach to manage changing attack vectors. While traditional IP intelligence and reputation-based filtering can help here, these technologies need to evolve to keep pace with smarter and smarter bots.
Here are some steps you can take:
A Bot Protection solution should address technical and business challenges that bots create:
F5 Bot Protection delivers proactive, multi-layered security that blocks and drops bad bot traffic before it can hit your network, mitigating bots that perform account takeovers, vulnerability reconnaissance and denial of service attacks targeted at your network or app layer. Automated threats require automated defenses. | <urn:uuid:68000f40-db1b-4e69-9ec6-5b186cd9fb7e> | CC-MAIN-2022-40 | https://www.f5.com/fr_fr/services/resources/glossary/bot-management-mitigation-and-protection | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00613.warc.gz | en | 0.910475 | 613 | 2.515625 | 3 |
Gone are the days when data was saved onto discs and drives. Today, with the advancement of technology, data is saved in the cloud. Cloud storage is a more convenient and cost-effective way of storing data, helping users back up, sync, and access their files wherever they are, as long as they are connected to the internet.
Cloud storage is simply a service model where data is remotely saved, managed, and backed up, and is made readily available to users as long as they are connected to the internet. Typically, users pay monthly for the space they use. It allows individuals or companies to store their files in third-party data centers through a cloud provider.
Generally speaking, there are three types of cloud solutions – public cloud, private cloud, and hybrid cloud. Each has its own set of advantages and disadvantages. Let's take a closer look at each type.
A public cloud is a type of computing service that offers accessibility and security at the same time. This type of cloud is available on the internet to anyone who wants to use it and is offered by third-party providers for free or sold on demand.
Sold on demand means that users only pay for the storage, CPU cycles, or bandwidth that they have used. This type of cloud is ideal for files and folders that fall under the unstructured data category.
A private cloud is a type of service that is offered only to selected users or dedicated to a single business, giving the owner the ability to configure and manage it according to their computing needs. It provides extensive, virtualized computing resources.
This type of service is more expensive than the public cloud since it is the owner who manages and maintains the physical hardware.
In simple terms, a hybrid cloud is a combination of a public and private cloud. It allows data and applications to be shared between the two platforms. It is a more flexible option: workloads can move between cloud solutions as costs and needs fluctuate. It gives users more control over their data.
It is the perfect fit for businesses that have a lot of files. They can save sensitive files on the private cloud and less sensitive files on the public cloud. This type of cloud offers customization and affordability at the same time.
With the increasing popularity of cloud storage, security has become a hot topic. Every file saved in the cloud is protected in several ways, which include the following:
Any data saved in the cloud is encrypted. This means that opening or cracking data stored in the cloud is extremely difficult, because an intruder would need to break the encryption before accessing the file. Every piece of information you enter into the system is protected with an encryption key, and it is either you or the data provider who has access to that key. No more, no less.
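As a simplified sketch of what "protected with an encryption key" means in practice, the TypeScript snippet below uses Node.js's built-in crypto module to encrypt and decrypt a piece of data with AES-256-GCM; real cloud providers manage keys in hardened key-management services rather than in application code, so treat this only as an illustration of the principle:

```typescript
import { randomBytes, createCipheriv, createDecipheriv } from "crypto";

// The key is the secret: whoever holds it (you or the provider) can read the data.
const key = randomBytes(32); // 256-bit key
const iv = randomBytes(12);  // unique nonce for this encryption

// Encrypt before the data is stored in the cloud.
const cipher = createCipheriv("aes-256-gcm", key, iv);
const ciphertext = Buffer.concat([
  cipher.update("contents of quarterly-report.xlsx", "utf8"),
  cipher.final(),
]);
const authTag = cipher.getAuthTag(); // lets the reader detect tampering

// Without the key, the stored ciphertext is unreadable. With it, decryption is easy:
const decipher = createDecipheriv("aes-256-gcm", key, iv);
decipher.setAuthTag(authTag);
const plaintext = Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
console.log(plaintext);
```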
An advanced firewall is a network security device that inspects every incoming and outgoing data packet to determine whether it passes the security measures or not. It blocks any unauthorized access to files saved in the cloud. Unlike simple firewalls, an advanced firewall can verify the integrity of the packet content.
Intrusion Detection System
An intrusion detection system (IDS) monitors the system for suspicious activity and raises an alarm when such activity occurs.
An IDS can be either a network intrusion detection system (NIDS) or a host-based intrusion detection system (HIDS). A NIDS is responsible for analyzing incoming network traffic, while a HIDS monitors important files in the operating system.
Basically, event logging is a “logbook” that is monitored and analyzed to increase the security of the network. It can capture many forms of information, such as account lockouts, login sessions, failed login attempts, and more.
With event logging, network actions are recorded, giving security analysts the help they need to understand threats in the network. The recorded data is used as a basis to help analysts predict and prevent possible security breaches.
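On a tiny scale, the idea looks something like the sketch below, which scans hypothetical log entries for repeated failed logins from one source and raises an alert; the log format and threshold are invented for the example, and real IDS and log-analysis tools do far more:

```typescript
// Hypothetical log entries: "timestamp | event | user | source-ip"
const logLines = [
  "2024-05-01T10:00:01 | LOGIN_FAILED | alice | 198.51.100.23",
  "2024-05-01T10:00:03 | LOGIN_FAILED | alice | 198.51.100.23",
  "2024-05-01T10:00:05 | LOGIN_FAILED | admin | 198.51.100.23",
  "2024-05-01T10:00:09 | LOGIN_OK     | bob   | 203.0.113.4",
];

const FAILED_LOGIN_THRESHOLD = 3;
const failuresByIp = new Map<string, number>();

for (const line of logLines) {
  const [, event, , ip] = line.split("|").map(part => part.trim());
  if (event === "LOGIN_FAILED") {
    const count = (failuresByIp.get(ip) ?? 0) + 1;
    failuresByIp.set(ip, count);
    if (count >= FAILED_LOGIN_THRESHOLD) {
      console.log(`ALERT: ${count} failed logins from ${ip} - possible brute force`);
    }
  }
}
```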
An internal firewall is a type of firewall that controls traffic within a network shared by teams or organizations. It rarely makes sense for every account in your system to have open access to everything you have in the cloud.
Setting limits on what users or account holders can access increases the security level of the network. With this, even compromised accounts cannot gain full access to all data in the cloud.
Cloud data centers are among the most secure places. They employ various security measures, including fingerprint locks, around-the-clock monitoring, and armed guards. Cloud data centers have a higher level of security compared with typical onsite data centers.
There is no such thing as perfect, inasmuch as no system is entirely safe. There will always be holes. Cloud computing is nearly 100 percent secure, but users should still exert extra effort to make sure that everything is under control.
In fact, security breaches rarely occur in the cloud. According to research, most of the security breaches that happened in the last few years were not related to cloud storage but were internal data breaches.
Using the cloud doesn't just offer heightened security; it provides a lot of other benefits too. These include agility, flexibility, reduced maintenance and infrastructure costs, a more competitive environment, less hassle, and much more.
Are you ready to maximize what the cloud has to offer you? | <urn:uuid:cac86ea5-90de-4585-ae3e-0a08e18d189d> | CC-MAIN-2022-40 | https://www.colocationamerica.com/blog/another-look-at-how-secure-is-your-cloud-data | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00613.warc.gz | en | 0.956195 | 1,157 | 3.109375 | 3 |
Over 45% of Gen Zers are victims of cyberbullying
Bullying has been part of human behavior for ages. Now the youngest generation has started using computers, smartphones, and other electronic devices to threaten or harass others. According to the data presented by the Atlas VPN team, over 45% of Gen Zers are victims of cyberbullying.
The data is based on the statistics from the Cyberbullying Research Center, which has been collecting data from middle and high school students since 2002. Their latest research surveyed 2,546 students from the United States ages 13 to 17 in April 2021. The research results were published on June 22, 2022.
In January 2014, 34.6% of teens experienced cyberbullying. Next year, in 2015, the percentage of cyberbullied students slightly decreased to 34%. For the third year in a row, in 2016, the lifetime cyberbullying victimization rate remained similar, decreasing minimally to 33.6%.
After 3 years, in 2019, another survey revealed that the percentage of teens experiencing cyberbullying increased to 36.5%. The latest statistics from 2021 showed a 25% increase in cyberbullying since 2019, reaching a 45.5% victimization rate. In addition, 23.2% admitted they had been cyberbullied in the last 30 days.
Despite the increase in cyberbullying victims, fewer Gen Zers admitted to cyberbullying others over the recent years. In 2019, 14.8% of students offended someone online, while 6.3% did so in the past 30 days. In the study from 2021, 14.4% revealed they had cyberbullied others during their lifetime, while 4.9% acknowledged doing so in the previous 30 days.
Online communication tools have become an essential part of young people's lives, which means they can get cyberbullied anywhere and anytime. Other research has found that experience with cyberbullying ties in with low self-esteem, depression, anxiety, family problems, academic difficulties, and other issues.
Tips to protect yourself against cyberbullying
With the tremendous rise of smartphones and social media, cyberbullies now have an ever-increasing variety of ways to harass their victims. By adopting safe cybersecurity practices and avoiding sharing sensitive data, teenagers can have a more private and enjoyable experience online. Here are a few tips:
- Customize privacy settings.
Go through the privacy settings on all of your social media accounts. Most social media platforms allow you to restrict your profile, photos, and other personal information you share online so that only your followers or friends can see them.
- Avoid sharing personal information.
One of the easiest ways to secure your internet safety is to maintain a low profile. Whether you use social media for work or pleasure, you should never reveal your phone number, location, or address on these networks.
- Do not interact.
Walking away from online conversations is much easier than in the real world. You can permanently block the person harassing you, turn off notifications, close the browser tab and leave. If the interaction is causing you too many negative emotions, just walk away and do not engage with the bully.
- Keep your data secure.
Some bullies might try to hijack your social media accounts and post insulting posts and comments. To avoid that, make sure to set up strong passwords for all of your accounts, and do not forget to log out of public computers. | <urn:uuid:2ab39d37-7051-46cf-9257-660136a6ad27> | CC-MAIN-2022-40 | https://atlasvpn.com/blog/over-45-of-gen-zers-are-victims-of-cyberbullying | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00613.warc.gz | en | 0.947504 | 689 | 2.828125 | 3 |
Many of us are familiar with a concept known as Security by Obscurity. The term has negative connotations within the infosec community—usually for the wrong reason. There's little debate about whether security by obscurity is bad; it's bad because it means the secret being hidden is the key to the entire system's security.
When added to a system that already has decent controls in place, however, obscurity not only doesn’t hurt you but can be a strong addition to an overall security posture.
The key determination for whether obscurity is good or bad reduces to whether it’s being used a layer on top of good security, or as a replacement for it. The former is good. The latter is bad.
An example of security by obscurity is when someone has an expensive house outfitted with the latest lock system, but the way you open the lock is simply by jiggling the handle. So if you don’t know to do that, it’s pretty secure, but once you know it’s trivial to bypass.
That’s security by obscurity: if the secret ever gets out, it’s game over. The concept comes from cryptography, where it’s utterly sacrilegious to base the security of a cryptographic system on the secrecy of the algorithm.
A powerful example of where obscurity and is used to improve security is camouflage. Consider an armored tank such as the M-1. The tank is equipped with some of the most advanced armor ever created, and has been shown repeatedly to be effective in actual real-world battle.
So, given this highly effective armor, would the danger to the tank somehow increase if it were to be painted the same color as its surroundings? Or how about in the future when we can make the tank completely invisible? Did we reduce the effectiveness of the armor? No, we didn’t. Making something harder to see does not make it easier to attack if or when it is discovered. This is a fallacy that simply must end.
OPSEC is an even better example because nobody serious in infosec doubts its legitimacy. But what is OPSEC? Wikipedia defines it as:
- A process that identifies critical information to determine if friendly actions can be observed by enemy intelligence, determines if information obtained by adversaries could be interpreted to be useful to them, and then executes selected measures that eliminate or reduce adversary exploitation of friendly critical information.
So basically, protecting information that can be used by an enemy. Like, where you are, for example, or what you’re doing. There are lots of examples:
- There are usually one or more decoy limos and helicopters flying next to wherever the president is, and the reason for this is so that the enemy is not sure which to attack.
- When you do executive protection or military maneuvers, you generally want to keep your movement plans as private as possible to avoid giving the enemy an advantage.
- People are encouraged to take random routes to and from locations that are unsafe so that potential attackers won’t know exactly where to attack you.
These are all about controlling and restricting information. Or, put another way, obscuring it. And if it was such a bad practice it wouldn’t be practiced everyday by the militaries of the world, the secret service, executive protection, and anyone else who knows basic security operations.
When the goal is to reduce the number of successful attacks, starting with solid, tested security and adding obscurity as a layer does yield an overall benefit to the security posture. Camouflage accomplishes this on the battlefield, decoys accomplish this when traveling with VIPs, and port knocking / single packet authorization (PK/SPA) accomplishes this when protecting hardened services.
Of course, being scientific types, we like to see data. In that vein I decided to do some testing of the idea using the SSH daemon (full results here).
I configured my SSH daemon to listen on port 24 in addition to its regular port of 22 so I could see the difference in attempts to connect to each (the connections are usually password guessing attempts). My expected result is far fewer attempts to access SSH on port 24 than port 22, which I equate to less risk to my, or any, SSH daemon.
Setup for the testing was easy: I added a Port 24 line to my sshd_config file, and then added some logging to my firewall rules for ports 22 and 24.
I ran with this alternate port configuration for a single weekend, and received over eighteen thousand (18,000) connections to port 22, and five (5) to port 24.
That's 18,000 to 5.
Let’s say that there’s a new zero day out for OpenSSH that’s owning boxes with impunity. Is anyone willing to argue that someone unleashing such an attack would be equally likely to launch it against non-standard port vs. port 22? If not, then your risk goes down by not being there, it’s that simple.
Another foundational way to look at this is through the lens of risk, whereby it can be calculated as:
risk = probability X impact
This means you lower risk (and increase security) by doing one of two things:
- Reducing the probability of being attacked, or…
- Reducing the impact if you are attacked.
Adding armor, or getting a better lock, or learning self-defense, are all examples of reducing the impact of an attack. On the other side, hiding your SSH port, rotating your travel plans, and using decoy vehicles are examples of reducing your chances of being hit.
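To put rough, purely illustrative numbers on it, using the port-22 versus port-24 observation above as a stand-in for likelihood (the probabilities themselves are assumptions, not measurements):

```typescript
// Toy risk calculation: risk = probability x impact.
// The probabilities are illustrative, loosely scaled from the
// 18,000-vs-5 connection counts observed in the weekend test.
const impactOfCompromise = 100;           // arbitrary "cost" units, unchanged by the port move
const probabilityOnPort22 = 0.10;         // assumed chance of a successful attack on the default port
const probabilityOnPort24 = probabilityOnPort22 * (5 / 18000); // same attackers, far fewer attempts

const riskDefaultPort = probabilityOnPort22 * impactOfCompromise;
const riskObscuredPort = probabilityOnPort24 * impactOfCompromise;

console.log(riskDefaultPort.toFixed(4));  // 10.0000
console.log(riskObscuredPort.toFixed(4)); // ~0.0028, same impact but much lower probability
```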
The key point is that both methods improve security. The question is really which should you focus on at any given point. Is adding obscurity the best use of my resources given the controls I have in place, or would I be better off adding a different (non-obscurity-based) control?
That's a fair question, and perhaps if you have the ability to go from passwords to keys, for example, that's likely to be more effective than moving your port. But at some point of diminishing returns for impact reduction, it is likely to become a good idea to reduce likelihood as well.
- Security through obscurity is bad because it substitutes real security for secrecy in such a way that if someone learns the trick they compromise the system.
- Obscurity can be extremely valuable when added to actual security as an additional way to lower the chances of a successful attack, e.g., camouflage, OPSEC, etc.
- The key question to ask is whether you’re better served by adding additional impact reduction (armor, locks, etc.), or if you’re better off adding more probability reduction (hiding, obscuring, etc.).
Most people who instinctively go to “obscurity is bad” are simply regurgitating something they heard a long time ago and think makes them sound smart.
Don’t listen to them. Think through the ideas yourself. | <urn:uuid:b0eb9fed-5cd6-483a-9f8e-15324301cd71> | CC-MAIN-2022-40 | https://danielmiessler.com/study/security-by-obscurity/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00013.warc.gz | en | 0.945968 | 1,475 | 2.828125 | 3 |
The health effects of RF energy have been a concern voiced by many in recent years. Questions have been raised about the use of cell phones and the introduction of 5G. Now the military has raised concerns about the cognitive effects of radio frequency energy on pilots.
U.S. Military Concerns About RF Noise Exposure
A recent article in Military & Aerospace Electronics reported that the U.S. Defense Advanced Research Projects Agency (DARPA) is seeking help to determine whether RF emissions affect human cognitive processes. The agency is also looking, should the study prove positive, for ways to mitigate those effects.
This follows reports from pilots of minor cognitive performance issues during flight. Because a cockpit is flooded with RF noise, many experts believe this energy may cause spatial disorientation, memory lapses, misprioritization, and complacency. This is a very real concern because spatial disorientation is a leading cause of accidents resulting in loss of life.
Findings from this study should have an impact on commercial applications, where RF environments are becoming increasingly active. This would include commercial pilots and perhaps even motor vehicles.
Previous Concerns About the Health Effects of RF Energy
Most concerns about health and electromagnetic energy have focused on ionizing radiation. Ionizing radiation has enough energy to break bonds between molecules and ionize atoms. This type of activity requires large amounts of radiant energy.
Typical sources would include X-rays, cosmic rays, and radon. Exposure to these sources of radiation can result in cancer risks.
Non-ionizing radiation sources include radar, microwave ovens, cell phones, and the electronic devices that we encounter in our daily lives. While this type of radiation cannot directly damage DNA, there have been concerns about RF absorption causing heating in cells and tissues. Studies conducted by the World Health Organization (WHO) have not been conclusive. While there has been some evidence of increased gliomas in heavy users, there was inadequate evidence to draw conclusions about other types of cancer.
Low Frequency Electromagnetic Fields (EMF)
Low frequency EMF is generated from source such as power lines. There has been some evidence suggesting a link between exposure and childhood leukemia. While evidence suggests that exposure is possibly carcinogenic, again no conclusive evidence was found.
Guidelines on the Health Effects of RF Energy
Exposure standards and guidelines have been developed by various countries around the world. While the U.S. does not currently have a mandatory standard for exposure limits, the Federal Communications Commission (FCC) has adopted safety guidelines for evaluating RF exposure. The Occupational Safety and Health Administration (OSHA) did release a standard but later deemed it advisory. Perhaps the most substantial and widely used guidelines are those put forth by the American National Standards Institute (ANSI) and the Institute of Electrical and Electronics Engineers (IEEE).
Where are We Headed?
Devices that create radio frequency energy are proliferating at an ever-accelerating rate. Radar, lidar, and the Internet of Things are creating a very dense radio energy environment, so much so that devices are having coexistence performance issues.
We are truly treading into uncharted territory. While existing research on the health effects of RF has dealt with physical effects, the effects of RF on the electrochemical activity of neurological processes have not been explored.
New findings may well lead to required changes in the way we interact with devices in our lives. If electromagnetic emissions do indeed affect cognitive processes, then new standards for safety will need to be created. As a result, product design constraints and changes in required test and evaluation for product certifications will arise. This could very well be a subject to monitor closely.
Modern criminals around the world use the internet to plan and prepare their illegal activity, routinely having more than one account on various social media sites in order to keep their true activity hidden from law enforcement agencies.
While social media contains open-source intelligence that can provide invaluable insights into the views and activities of suspects, the information is spread across a large number of platforms, so collecting and analyzing the right data is time-consuming and authorities are unlikely to prevent crimes from occurring.
By using advanced investigation tools and easy-to-use artificial intelligence web intelligence platforms like Cobwebs Technologies’ Tangles, which can scour all levels of the internet and the various social media platforms, authorities are able to obtain precise and unrivaled web intelligence in the open and dark web in a fraction of the time it used to take.
While some of the posts on social media platforms might seem benign to the human eye, online investigation tools and media intelligence platforms like Tangles can provide valuable intelligence in identifying suspects and concrete threats.
The automated web intelligence capabilities allow authorities to locate suspects with limited public information as well as locate other individuals in the suspect’s online network. Online investigation tools can also carry out deep target profiling, extract the real identities of virtual personalities, and map group connections tied to the suspect.
Using the latest media intelligence platforms with advanced artificial intelligence capabilities, Cobwebs Technologies’ Tangles platform can also search for information about possible crimes before they happen by monitoring for keywords in different languages, locations, or events and generating alerts to notify authorities of potential threats.
With capabilities like Natural Language Processing (NLP), Tangles is also able to understand the meaning behind a suspect’s posts and perform sentiment analysis to map out the suspects feelings towards topics.
Advanced web intelligence tools can also detect social changes in a suspect’s life.
With seamless integration of new data sources, the web intelligence platform is also used to provide threat intelligence: real-time content monitoring can scan images, videos, and text to provide unmatched situational awareness.
While some criminals might hide from law enforcement intelligence behind the dark web’s technological veil of anonymity, the automated AI web intelligence and machine learning platform can also carry out face detection screening, detecting individual faces and attributes in images across the different layers of the web as well as generate alerts for faces that appear to be a match.
Online investigation tools with capabilities which use advanced algorithms to identify and mitigate threats allow authorities to remain one step ahead of criminals who keep adapting in order to circumvent law enforcement agencies.
With one click, authorities using the online investigation tools can easily expand a single lead into a complete and efficient end-to-end investigation. | <urn:uuid:a884ceb6-6d6b-4e61-a3c4-17a187cb0836> | CC-MAIN-2022-40 | https://cobwebs.com/using-web-intelligence-for-criminal-investigations/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00013.warc.gz | en | 0.911982 | 548 | 2.703125 | 3 |
Accurate weather forecasting has, by and large, been situated squarely in the domain of high-performance computing – just this week, the UK announced a nearly $1.6 billion investment in the world’s largest supercomputer for weather and climate. Now, researchers at Johannes Gutenberg University Mainz and Università della Svizzera italiana are aiming to challenge that status quo with a new algorithm that allows PCs to run tasks that used to require supercomputers.
The algorithm is based on a concept called scalable probabilistic approximation, or SPA, and took many years to develop. The SPA algorithm is able to take just a few dozen components of a system and analyze those elements to predict future behavior with strong accuracy. “For example, using the SPA algorithm we could make a data-based forecast of surface temperatures in Europe for the day ahead and have a prediction error of only 0.75 degrees Celsius,” said Susanne Gerber, co-author of the research and a bioinformatics specialist at Johannes Gutenberg University Mainz.
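The published method is considerably more sophisticated than this, but a toy sketch of the general flavor (discretize a series into a handful of states, learn transition probabilities between them, and forecast from those) might look like the following; the numbers and binning scheme are invented for illustration and this is not the actual SPA algorithm:

```typescript
// Toy "discretize and predict" sketch: NOT the SPA algorithm itself.
const temps = [8.1, 9.0, 10.4, 11.2, 10.8, 9.5, 8.7, 9.9, 11.0, 12.1, 11.4, 10.2];

// 1. Discretize into a few coarse states (bins).
const NUM_BINS = 3;
const min = Math.min(...temps);
const max = Math.max(...temps);
const binOf = (t: number) =>
  Math.min(NUM_BINS - 1, Math.floor(((t - min) / (max - min)) * NUM_BINS));
const states = temps.map(binOf);

// 2. Count transitions between consecutive states.
const counts = Array.from({ length: NUM_BINS }, () => new Array(NUM_BINS).fill(0));
for (let i = 0; i + 1 < states.length; i++) counts[states[i]][states[i + 1]] += 1;

// 3. Turn counts into transition probabilities.
const probs = counts.map(row => {
  const total = row.reduce((a, b) => a + b, 0) || 1;
  return row.map(c => c / total);
});

// 4. Forecast: from today's state, report the most likely next state.
const today = states[states.length - 1];
const next = probs[today].indexOf(Math.max(...probs[today]));
console.log(`Most likely next state: bin ${next} (of ${NUM_BINS})`);
```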
The researchers specifically designed the algorithm to be interpretable, in contrast to existing machine learning methods. “Many machine learning methods, such as the very popular deep learning, are very successful, but work like a black box, which means that we don’t know exactly what is going on,” Gerber said. “We wanted to understand how artificial intelligence works and gain a better understanding of the connections involved.”
The real advantage of the algorithm, of course, is its performance requirements. “This method enables us to carry out tasks on a standard PC that previously would have required a supercomputer,” said Illia Horenko, another co-author and a computer expert at Università della Svizzera italiana. For example, in Gerber’s weather prediction example, the algorithm produces a result with an error rate 40% better than the computer systems used by many weather services – all while running on a conventional PC at a cost that is lower by five to six orders of magnitude.
The algorithm has applications in a wide range of sectors, ranging from weather to breast cancer diagnosis to neuroscience. For biological applications, the algorithm is broadly useful in situations where large numbers of cells need to be sorted. “What is particularly useful about the result is that we can then get an understanding of what characteristics were used to sort the cells,” said Gerber.
“The SPA algorithm can be applied in a number of fields, from the Lorenz model to the molecular dynamics of amino acids in water,” said Horenko. “The process is easier and cheaper and the results are also better compared to those produced by the current state-of-the-art supercomputers.”
About the research
The research discussed in this article was published as “Low-cost scalable discretization, prediction, and feature selection for complex systems” in the January 2020 issue of Science Advances. It was written by Susanne Gerber, L. Pospisil, M. Navandar and Illia Horenko and can be accessed at this link.
To read the release from Johannes Gutenberg University Mainz discussing this research, click here. | <urn:uuid:b1845729-6a1a-4b8a-b173-3e967d4a1033> | CC-MAIN-2022-40 | https://www.hpcwire.com/2020/02/19/new-algorithm-allows-pcs-to-challenge-hpc-in-weather-forecasting/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00214.warc.gz | en | 0.949423 | 678 | 3.578125 | 4 |
Emails are some of the most common forms of communication that a business uses for internal and external conversations, and consequently are some of the favorite targets of cybercriminals and thieves. Even internal emails aren’t safe from this threat: depending on the type of the industry and the potential data that can be mined from leaked or hacked emails, methods to illegally acquire emails have only gotten more sophisticated over the years.
But which industries are more vulnerable (or more attractive) to email hacking? Overall, industries that hold large amounts of data and share it via email, such as financial services, the medical field, and legal practices, need an extensive suite of services to encrypt their emails and prevent outside access or internal leaks. By understanding how these particular industries are vulnerable to attack, it becomes easier for you to minimize risks.
Three Industries That Need To Protect Their Emails
Email encryption can take a variety of forms with multiple layers of protection, but the barest requirement is that it has to be able to disguise, encrypt, hide, or scramble an email from anyone aside from the sender and receiver. This way, even if the email is somehow plucked from its transfer between sender and recipient, it becomes harder to pry sensitive information from its contents.
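As a bare-bones illustration of that requirement, the sketch below uses Node.js's built-in crypto module: the sender encrypts a message with the recipient's public key, so only the holder of the matching private key can read it. Real email encryption products (S/MIME, PGP, or gateway services) wrap this idea in key management, signing, and policy, but the core property is the same:

```typescript
import { generateKeyPairSync, publicEncrypt, privateDecrypt } from "crypto";

// The recipient generates a key pair; the public key can be shared freely.
const { publicKey, privateKey } = generateKeyPairSync("rsa", { modulusLength: 2048 });

// Sender: encrypt the email body with the recipient's PUBLIC key.
const ciphertext = publicEncrypt(publicKey, Buffer.from("Q3 audit figures attached."));

// Anyone intercepting the message in transit sees only unreadable bytes.
console.log(ciphertext.toString("base64").slice(0, 40) + "...");

// Recipient: only the matching PRIVATE key can recover the plaintext.
const plaintext = privateDecrypt(privateKey, ciphertext).toString("utf8");
console.log(plaintext); // "Q3 audit figures attached."
```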
There are three industries that should always factor in emails in their IT security:
Banks, financial services, and other money-related businesses contain the majority of the world’s currencies and legal tender. As a result of the interconnected nature of financial transactions and the relative sizes of these institutions, they’re some of the most vulnerable to email-based cyberattacks.
While most monetary transactions aren’t usually done through email, there can be an astonishingly large amount of extraneous data that hackers can seize and interpolate financial details from. While the most popular method of doing this is by phishing the clients of these institutions, another way of gaining access to emails is by direct attacks on industry servers.
One reason why these attacks can be particularly effective is that while financial institutions have steadily grown over the years, the technology used by these institutions has been slower to keep up. Considering the demand for more accessible financial services using channels like mobile devices, a bank has multiple potential points of entry for criminals to break in and gain access to servers – which in turn, gives them access to data and emails.
Once inside, attackers have a variety of methods at their disposal to disrupt services like DDOS attacks and ransomware. Leaked emails can give them authentication codes, company databases, and even locations of cash deposits in your own network. By making email encryption mandatory for internal and external communications, you can effectively close this method of entry from criminals.
The medical sector has arguably one of the largest deposits of personal data today – and extremely sensitive information like this is a prime target for many cybercriminals. Because of the rapid movement of staff, patients, and other personnel through the medical sector, social engineering and email-based cyberattacks on medical facilities, data repositories, and other locations can be frighteningly easy.
Given that most patients and doctors prefer to communicate by email, there’s a veritable data mine of information that hackers can glean from simply piggybacking the email of a medical institution. Adding to the fact that medical institutes regularly communicate with one another or across different sectors, the emails of the medical sector can prove to be extremely valuable in the right hands.
Unlike financial services, it's often social engineering or simple carelessness that makes email-based attacks on this sector so effective. Nurses and doctors, in particular, may not always have the presence of mind to conduct their emails with security in mind at the end of their shifts, and the sheer volume of emails that a medical institute receives in a single day makes manual encryption or close attention to security protocols difficult.
This is why one of the best solutions for email encryption for the medical sector is to partner with a provider that can automate the email encryption process, and provide additional protections to cover any potential gaps in email security. With the right provider, you’ll be able to communicate securely without the added hassle of maintaining an IT system to protect your emails.
As intermediaries in disputes, consultants with high-profile and mundane cases, or advisors on sensitive laws, the legal sector goes through plenty of emails in its daily operations. Given that clients are not always careful about protecting sensitive information in their emails to legal practices, it falls on the legal sector itself to take steps to prevent any email leaks.
This is especially crucial if the practice or organization has extensive communication and/or business with other legal practices. Since the networks of the legal sector are closely linked with one another, the overall security of a network of firms will always depend on the security of the weakest one in their network. Given the multiple methods of entry that cybercriminals can use to gain access to a legal practice’s server, consistent security across a practice’s entire IT system is a must-have.
One particular vulnerability that attackers can exploit is the varying degrees of software and hardware standards in the legal industry. While plenty of legal practices and businesses have transitioned to newer methods of keeping information digitally, their security practices have lagged behind. This means that it’s technically possible for an attacker to gain access to more secure servers by infiltrating through less secure channels, either via outdated software or old hardware. With such an interconnected IT system, exploits are extremely easy to find.
The best way a legal practice can avoid this is by ensuring that their protection strategies – not just email encryption – are up to date. This can be done by making sure that any software is consistently up to date so they have access to all the latest security features, and replacing any outdated hardware once the needs of the business or the practice render it obsolete. Practicing email etiquette is also an effective way of making sure that emails are more likely to stay secure.
While these practices don’t represent all of the businesses that are vulnerable to email attacks, they are industries that can stand to benefit from extensive email encryption. If your business, company, or organization belongs to these industries and doesn’t have email encryption in place, we recommend going through the free trial of the Zix email encryption software.
With features like content filtering and data loss prevention, it becomes easier for you to send and receive emails without the fear of them being intercepted and stolen. Real-time protection and automated processes make the process of sending a secure email as easy as possible, with no hassle to either sender or recipient.
Encrypt Your Emails And Secure Your Data With Abacus Managed IT Services
Email encryption is an often-overlooked part of IT security that can play a huge role in determining how well-protected a business's IT systems are. While there are some internal changes that a business can make to how it handles its emails, one of the best ways to ensure that they're well-protected against internal leaks and outside access is by using a service that can securely encrypt them from end to end. And while some industries may require this type of security more than others, email encryption should always be part of a company's best practices.
Abacus IT Managed Services has extensive experience in end-to-end encryption of company communications, including emails. Our services are specialized for banks and other companies in the financial sector, helping them improve their IT infrastructure to make sure that their data is secure. Contact us today for more information about our services. | <urn:uuid:8cd3d207-2a86-495a-83c3-c93fd077633a> | CC-MAIN-2022-40 | https://goabacus.com/3-industries-that-need-email-encryption/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00214.warc.gz | en | 0.94408 | 1,546 | 2.53125 | 3 |
The new 5G technology has been a significant advance in the way we communicate, but are there really risks in this innovative network?
The new 5G mobile technology promises to be innovative in advancing communication, data speed, and connectivity, as well as interconnecting our homes and devices in real time; it will also revolutionize many areas of life such as medicine, work, education, and leisure. However, we know that every new technology attracts a series of risks that must be combated, because they could be exploited for much less pleasant purposes; in this case we are talking about the vulnerabilities that have been found in 5G technology, which could represent the other side of this innovative breakthrough.
What is 5G technology?
This is the fifth generation of the mobile networks we normally use on our devices and smartphones. It began with 1G, which only allowed us to make and receive calls; next came 2G, which introduced text messages; then the telecommunications companies went a step further and added internet connectivity with 3G; after that, 4G brought broadband connections and the speed to play video in real time.
Finally, 5G will allow us to browse at up to 10 Gbps, roughly ten times faster than what we were used to, which opens up enormous possibilities for how we use our mobile devices.
What are the benefits of 5G technology?
In addition to faster browsing on our phones, 5G technology can benefit us in several ways. Below is a series of benefits that the arrival of the 5G network will bring:
- Lower latency: latency is the time it takes for a network to respond after a link is clicked; this waiting time is normally around 20 milliseconds, but with 5G technology it will drop to about 1 millisecond, a very significant improvement in response time that will let us interact with the network in real time.
- Increase in the number of connected devices: many of the electronic devices used both in our homes and at the urban level will be able to connect to this new network, making it easier to control them from our mobile devices in real time. Turning on the television, or starting the coffee machine when we wake up, could all be possible with 5G technology.
- Greater coverage: another important benefit of the 5G network is that coverage remains accessible regardless of crowding; people will keep their usual coverage whether they are at concerts, events or any other situation where a large number of people are concentrated, and browsing speed and the network's other characteristics will remain uninterrupted.
- Lower cost of devices: the cost of building and running smartphones keeps rising because of the processing capacity needed to support each new model's upgrades. However, the 5G network can help bring costs down by moving data at a much higher speed, which reduces processing requirements and, in turn, the cost of the devices.
Are there any disadvantages?
Well, yes; like every innovation, it has weak points that can interfere with the technology's effectiveness. These disadvantages do not mean the technology is unusable; rather, they are flaws that must be resolved along the way. Some of these disadvantages are:
- The launch time: this is a temporary disadvantage, because the 5G network is not expected to be fully accessible until 2024 or perhaps a little later, due to the work required to deliver broad coverage regardless of geography, which implies greater cost and effort.
- New electronic devices will have to be acquired: because of 5G's capacity, equipment with certain characteristics is required to support this new technology; new devices are being adapted to 5G, and in time devices that cannot support it will become obsolete.
- The technological gap may widen in some places: it is almost impossible for 5G networks to cover absolutely every corner of the world, so communication conditions are likely to lag in the places it cannot reach, creating a technological imbalance in parts of the world where extending these advances is geographically complicated.
Vulnerabilities of 5G networks.
From the moment the launch of this mobile network was announced, both users and large companies have been concerned about the vulnerabilities the 5G network may have. Previous networks such as 3G and 4G had vulnerabilities that this new technology is expected not to inherit but, on the contrary, to resolve. The European Union's cybersecurity agency, ENISA, carried out a study entitled "ENISA THREAT LANDSCAPE FOR 5G NETWORKS", which identified a series of vulnerabilities and threats posed by this new network.
Some of the most important threats and vulnerabilities found were:
- Physical attacks (PA): these involve attacking, destroying, stealing or altering the physical elements that underpin 5G networks, such as their infrastructure or hardware.
- Natural disasters: this vulnerability is also associated with the destruction of hardware that can affect connectivity, with the difference that the destruction is not deliberate but the result of a natural disaster such as an earthquake, flood or fire.
- False access network node: this is an access vulnerability in which a rogue base station (gNB) poses as a legitimate one in order to gain access and be used illegally to execute man-in-the-middle attacks or manipulate data traffic on the network.
- Session hijacking: a threat in which attackers take over a user's session or credentials and use them to steal sensitive information and to attack other devices interconnected by the network.
- Malicious code implementation: this is a generic threat, but on an advanced network it becomes a large-scale vulnerability, given the number of users the network will serve and the number of people and companies that could be affected if it is not addressed. In the 5G context it typically takes the form of an unauthorized or illicit VNF (virtualized network function) being installed and registered on the core network to expose malicious APIs.
The future of telecommunications keeps advancing, and while 5G is a huge advantage for society and mobile communications, it has also created controversy around the world. It is therefore essential that the creators of this network take into account the flaws and resulting threats of previous mobile generations so they are not inherited by this new one.
Companies must follow password policy best practices in order to adequately protect private, sensitive, and personal communication and data. Passwords are used by system end-users as a first line of defense to prevent unauthorized users from accessing protected systems and data. As a result, proper password policies and procedures must be developed in order to address security issues caused by bad practices and weak passwords.
Password policies are a set of guidelines designed to improve computer security in the face of growing cyber threats. To ensure correct use, the policies urge system users to create strong, dependable passwords and store them securely. It is the responsibility of every organization to set strong password policies, manage them, and update them as needed.
Importance of Password Policy Best Practices
According to a recent Verizon Data Breach Investigations Report, hackers take advantage of any weakness in password policy. The report notes that complex password restrictions which do more harm than good are a leading source of cyber-attacks and data breaches, and it identifies stolen credentials (usernames and passwords) and phishing attacks as the main tactics for penetrating a secured system.
As if bad password policies weren't bad enough, the 2019 State of Password and Authentication Security Behaviors report uncovered some worrying facts about employee password protection. It found that 51 percent of those polled use the same password for both personal and work accounts, while 68 percent admitted to sharing sensitive passwords with coworkers. More concerning still, 57 percent of respondents who had experienced a phishing attack admitted they had not changed their password habits afterwards. These figures show why organizations of all sizes and industries need to follow password policy best practices.
Current Password Policy Standards
Passwords were designed to solve an authentication problem, but they have become a major source of security issues. The majority of users continue to use weak, easy-to-guess passwords that they reuse across many accounts. Password policies, however, change as new security requirements emerge, and professionals and regulatory agencies have focused heavily on what constitutes best password practice.
National Institute of Standards and Technology (NIST)
NIST creates and updates information security principles and standards for all federal agencies, but they can also be used by businesses. NIST Special Publication (SP) 800-63B (Digital Identity Guidelines – Authentication and Lifecycle Management) addresses password policy concerns. The publication outlines a revised approach to password security. For example, it encourages users to create memorized secrets – passwords that are easy to remember but difficult to guess – and it drops several of the complexity requirements that were recommended in the past. System-generated passwords must be at least six characters long, while user-chosen passwords must be at least eight characters long.
Furthermore, the NIST publication advises that, before accepting a new password, systems should check it against a list of commonly used, compromised, or expected passwords. Dictionary words, passwords exposed in previous breaches, sequential or repetitive passwords (e.g., 1234qwerty), and context-specific phrases are among the passwords that should be rejected by such a check (a sketch of this check appears after the list below). The following are some other NIST password policy best practices:
- To make using password managers easier, allow the paste functionality in the password entry field.
- Store a salted hash of each password rather than the password itself (see the sketch after this list).
- Give users the option to display the password as they type it, instead of obscuring it with dots or asterisks, to reduce entry errors.
- Add a second authentication factor.
- Use authenticated protected channels and approved encryption when requesting memorized secrets.
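As a rough illustration of two of these recommendations – rejecting known-bad passwords and storing only a salted hash – the sketch below uses Python's standard library. The blocklist contents, the minimum length, and the PBKDF2 iteration count are illustrative assumptions, not NIST-mandated values; NIST leaves the specific hashing scheme and blocklist to the implementer.

```python
import hashlib
import hmac
import os

# Hypothetical blocklist; a real system would load a much larger list of
# commonly used and breached passwords.
BANNED_PASSWORDS = {"password", "12345678", "1234qwerty", "letmein"}

def is_acceptable(candidate: str, min_length: int = 8) -> bool:
    """Reject candidates that are too short or appear on the blocklist."""
    return len(candidate) >= min_length and candidate.lower() not in BANNED_PASSWORDS

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, hash); only these values are stored, never the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    """Recompute the salted hash and compare in constant time."""
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(digest, stored)
```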
Department of Homeland Security (DHS) recommendations
The Department of Homeland Security has published a card to help users generate strong passwords and protect their systems and information from internet attacks. To help limit the risk of a security incident, the card contains simple rules, some of which are similar to the NIST password standards. The tips include:
- Make passwords that are at least eight characters long.
- Use a password with a mix of capital and lowercase letters, as well as punctuation marks.
- When creating passwords, avoid using common words or personal information.
- Use distinct passwords for each account.
Recommendations for Password Policy from Microsoft
Microsoft has developed password policy recommendations for both end-users and administrators, based on data gathered over the years from tracked threats such as phishing, bots, trojans, and worms. Microsoft also emphasizes the importance of regular employee training so that all system end-users can recognize the latest security threats and apply password policy changes correctly. The Microsoft password policy model recommends the following best practices for access and identity management:
- Require passwords to be at least eight characters long.
- Do not require users to include special characters such as *&(%$.
- Do not force periodic password resets on user accounts.
- Remind system users of the dangers of reusing passwords.
- Enforce multi-factor authentication.
Recommendations for Password Policy Best Practices
To build a robust password policy, system administrators in all enterprises should consider the following suggestions:
Make Multi-Factor Authentication a Requirement
Multi-factor authentication (MFA) protects data and information systems by requiring users to prove their identity with additional methods. It is a highly effective approach that requires users to submit a correct username and password along with at least one additional form of identification – for example, a one-time code sent to or generated on a mobile device, or confirmation of a registered biometric.
MFA prevents individuals who lack the necessary access privileges from reaching sensitive data and IT infrastructure. It also stops someone with a stolen credential from accessing protected assets.
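One common form of second factor is a time-based one-time password (TOTP) generated by an authenticator app. The sketch below, based on the standard TOTP construction (RFC 6238) and Python's standard library, shows roughly how a server-side check of such a code might look; the Base32 secret is assumed to have been shared with the user's authenticator at enrolment. It is a simplified illustration – real deployments also allow for clock drift and rate-limit attempts.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Current time-based one-time password (RFC 6238, HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval           # number of 30-second steps so far
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_second_factor(submitted_code: str, secret_b32: str) -> bool:
    """Accept the login only if the submitted code matches the expected TOTP."""
    return hmac.compare_digest(submitted_code, totp(secret_b32))
```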
Implement a Password Age Policy
This policy specifies the minimum time a password must remain in place before a user is allowed to change it. A minimum password age is necessary because it stops users from rapidly cycling through passwords in order to revert to an old favorite after a forced change. The policy should typically specify a period of three to seven days, which gives users enough flexibility to change their passwords while preventing them from immediately reverting to previous ones.
However, system administrators should be aware that passwords can be compromised. A minimum-age policy can prevent users from immediately changing a hacked password, so administrators should be available to override it and make the necessary changes.
Use a Passphrase
Passphrases offer a higher level of security than single-word passwords. Consider the sentence "Every Sunday, I Enjoy Spending Time At The Zoo." Taking the first letter of each word produces the strong password ESIESTATZ. Alternatively, using the complete sentence as a passphrase, with a mix of capital and lowercase letters, further reduces the chances of it being cracked. A passphrase is simple to remember but provides better security.
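As a small, purely illustrative sketch, the function below derives an acronym-style password from a sentence in the way described above; the sentence and the resulting ESIESTATZ output are simply the example used in this section.

```python
import string

def sentence_to_password(sentence: str) -> str:
    """Take the first letter of each word, keeping its original capitalization."""
    cleaned = sentence.translate(str.maketrans("", "", string.punctuation))
    return "".join(word[0] for word in cleaned.split())

print(sentence_to_password("Every Sunday, I Enjoy Spending Time At The Zoo."))  # ESIESTATZ
```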
Enforce a Password History Policy
When asked to create new passwords, most people reuse passwords they have already used. Although this is common practice, organizations should adopt a password history policy that controls how soon a user can reuse an old password. A good policy has the system remember at least ten previously used passwords; by blocking reuse, it stops users from rotating through a handful of favorite passwords. Hackers can use brute-force attacks to break into systems protected by common passwords. Although some users may find ways to circumvent a password history policy, enforcing a minimum password age alongside it is an effective preventative measure.
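A minimal sketch of how such a check might work is shown below, reusing the salted-hash approach from the NIST section. The history depth of ten and the storage format (a list of salt/hash pairs, newest first) are illustrative assumptions, not requirements of any particular standard.

```python
import hashlib
import hmac
import os

HISTORY_DEPTH = 10  # remember at least the last ten passwords, as suggested above

def salted_hash(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)

def violates_history(new_password: str, history: list[tuple[bytes, bytes]]) -> bool:
    """history holds (salt, hash) pairs for previous passwords, newest first."""
    return any(
        hmac.compare_digest(salted_hash(new_password, salt), old_hash)
        for salt, old_hash in history[:HISTORY_DEPTH]
    )

def record_password(new_password: str, history: list[tuple[bytes, bytes]]) -> None:
    """Store the new password's salt and hash, trimming history to the policy depth."""
    salt = os.urandom(16)
    history.insert(0, (salt, salted_hash(new_password, salt)))
    del history[HISTORY_DEPTH:]
```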
Create Unique Passwords to Protect Different Accounts
Many users succumb to the temptation of using the same password for many accounts so that they do not have to keep track of which password belongs to which account. This is risky because a malicious actor who breaks into one account gains access to all the others. Using a unique password for each account adds a layer of protection to every protected account. When safeguarding multiple systems, it is also important not to recycle old passwords: reusing passwords across accounts makes it much easier for hackers to compromise information and information systems.
Immediately Reset Passwords No Longer in Use
Because of their inside knowledge, disgruntled employees can become a company's biggest threat. System administrators must therefore reset the passwords of accounts belonging to former employees, who might otherwise use their old credentials for retribution, monetary gain, or continued access to vital information. Companies should give IT and HR departments the authority to intervene as soon as an employee leaves, and should log those actions in accordance with the relevant password policies.
Always Log Out
Employees should be required to log out of their laptops whenever they leave their workstations. To prevent insider threats and hackers from obtaining personal information, employees must sign out of all accounts that aren't in use. System administrators should configure computers to lock or sign out after a set period of inactivity to guarantee that everyone follows the policy. Users should also revoke rights granted to third-party apps that are linked to the main account, since hackers can reach the main account by attacking applications with weaker protection.
Clean Desk Policy
Keeping a clean desk is one of the most effective password policy best practices. Under a clean desk policy, users must ensure that their desks and workstations are free of physical items carrying sensitive information, such as written-down passwords. Some users prefer to write passwords on a piece of paper so they won't forget them, but they may end up leaving those notes in plain sight, giving anyone easy access. To avoid this, users must clear their desks before leaving.
Secure Emails and Mobile Phones
Malicious actors can use mobile phones and email accounts to reset the passwords of associated accounts. Most accounts have a "lost password" feature that sends a unique link or code to a registered device or email address so the user can create a new password. Anyone with access to those devices or email accounts can therefore change passwords at any time and retain access. Securing the devices themselves with strong passwords and biometric protection, such as fingerprints, closes this avenue.
Utilize a Password Manager
Professionals and businesses are increasingly prioritizing password manager software. Password management programs like Zoho Vault and LastPass are useful for keeping track of passwords and ensuring that they are secure: users only need to remember a master password to access the other passwords stored in the manager. Password managers are also advantageous because they suggest strong passwords for different accounts and can sign a user in automatically. Using a password manager to create and store passwords is strongly recommended whenever possible.
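Conceptually, a password manager's generator simply draws characters from a large alphabet using a cryptographically secure random source, along the lines of the sketch below. The alphabet and length are arbitrary illustrative choices; actual products apply their own rules.

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 16) -> str:
    """Build a random password from a cryptographically secure generator."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())  # e.g. 'k#Vq2}w...' - different on every run
```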
Practices to Avoid
In terms of password security and management, password policy best practices rule out the following methods:
Using Dictionary Terms: When creating a password, users must avoid words found in a dictionary. Passwords formed from dictionary words – whether a single word or a combination of words – are vulnerable to dictionary attacks.
Personal Names as Passwords: Passwords based on personal names or place names are weak and insecure. Hackers can scan a target's social media profile for key personal information, such as family members' names and frequented locations, and use it to crack a password. Minor variations on personal information add little security, because attackers can patiently try all the likely letter and word combinations until they find the correct password.
Reusing Passwords: Security experts emphasize the dangers of reusing old passwords, whether on the same account or across many accounts. Users must create new passwords, since reuse raises the risk that hostile actors or insider threats will crack them.
Using Letter Strings: Users can be confident that keyboard strings such as qwertyuiop or mnbvcxz have already been added to password dictionaries. Keyboard-pattern passwords are straightforward to crack.
Revealing Passwords: Users should refrain from sharing their passwords with coworkers. Shared passwords can not only be misused, but also intercepted if they are sent across insecure networks.