The latest ‘Economic impact of digital inclusion in the UK’ report, produced in partnership with Capita and the social change charity Good Things Foundation, has found that every £1 invested in building the essential digital skills of digitally excluded people contributes £9.48 to the UK economy.
The Centre for Economics and Business Research (Cebr) conducted the research, building on reports in 2015 and 2018, to understand more about the economic impact of investing in interventions to help digitally excluded people build their basic digital skills.
The report comes at a time when the cost of living continues to rise and a recession is looming. Two million UK households already struggle to afford internet access today, and this will only worsen, posing a greater risk to those who already face inequality and adding pressure on their health, educational attainment, and work prospects.
A narrower, but deeper digital divide
The latest findings show that significant progress has been made to close the digital divide, since the previous reports in 2015 and 2018. Cebr estimates the number of people without basic digital skills in the UK has fallen from 12.4 million at the end of 2019 to an estimated 10.6 million by the end of 2022.
This reflects the hard work of citizens, communities, and the private, public and voluntary sectors, especially during the pandemic. However, more needs to be done, as although the digital divide may have narrowed, it has also deepened. Without further intervention in building basic digital skills, 5.8 million people are estimated to remain digitally excluded by the end of 2032, and 3.7 million of them will be aged 75 years or older.
From 2023 to 2032, 470,000 people are expected to gain basic digital skills without intervention each year. Assuming that 750,000 will still lack or have lost their digital skills by the end of the 10-year period, an estimated 508,000 will still need extra support annually. Over the ten-year period, the estimated total cost of providing this support reaches £1,443 million, and the economic benefits accrue to £13,683 million.
Improving lives and local government services by enhancing digital skills
A lack of digital skills negatively impacts a person’s life, leading to greater social isolation and less access to employment. It can also mean they lack a voice and visibility, as government services and democracy increasingly move online. By increasing digital skills, more people can confidently interact with these services which can save them time and money. The report found that interacting with government and financial services online is estimated to have a value of £3,906 million. People can also save an estimated £3,480 million by shopping online.
Of course, these benefits cannot be achieved without significant investment, but encouragingly value for money from investing in digital skills remains very high. Benefits to the government are estimated to be £1,355 million through efficiency savings alone, plus £483 million in increased tax revenue. The NHS is expected to save an additional £899 million.
It’s vital we ensure a level playing field and give all citizens the same opportunities to interact and contribute to their communities and society. To do this we need to help people within local government to do what they do best – build relationships with citizens. And importantly, it’s technology that can enable this to happen more frequently and more broadly. By using AI, automation & robotics, local authorities can transform their back-office processes and free more time for their people to support citizens as best they can.
Automation, artificial intelligence and process improvement can all drastically reduce the need for human involvement in transactional tasks and free up to 40% capacity by creating bots to deal with repeatable tasks.
The impact of the coronavirus pandemic on digital skills
The coronavirus pandemic accelerated the adoption and application of digital technology which has been transformative for both people and businesses. This contributed to a fall in the number of people who require additional assistance to gain essential digital skills over the entire ten-year appraisal period, from 6.9 million in 2018 to 5.1 million.
During the pandemic many younger people improved their digital skills, as they had to adapt to online learning, and this significantly reduced the number of young people deemed to be without essential digital skills for life.
However, the number of people aged 75 and over without essential digital skills for life increased by 11% between 2019 and 2021. So, although digital exclusion has reduced overall, the divide itself has worsened, with the most vulnerable lagging further behind.
When it comes to employment, government organisations and businesses are keenly aware of the digital skills gap when trying to recruit skilled people to fill roles. Our findings show that supporting working-age adults to improve their digital skills and find employment would generate £2,719 million for corporates by enabling them to fill vacancies, and provide a total of £586 million in increased earnings, which can contribute to UK growth.
Since the pandemic, businesses are much more likely to embrace hybrid and flexible ways of working, enabling more employees to work from home. However, without sufficient digital skills, workers will have limited scope to take full advantage of this trend.
Improved digital skills for all
Achieving a digitally included society will not happen without strategic, coordinated action targeted at the people and places where need is greatest. We need to see digital inclusion strategies at all levels - from county councils to combined authorities. And Cebr’s analysis suggests that the most challenging stretch of the country’s digital inclusion journey lies ahead. If we are to achieve an inclusive recovery and ensure everyone has the opportunity to benefit from the digital world - we have to step up to this challenge.
Download a copy of the ‘Economic impact of digital inclusion in the UK’ report
Download the report
Since I posted the article about malware using the 0x33 segment selector to execute 64-bit code in a 32-bit (WOW64) process, a few people have asked me how the segment selector actually works deep down (a lot of people think it’s software based). For those who haven’t read the previous article, I suggest you read it first: http://www.malwaretech.com/2013/06/rise-of-dual-architecture-usermode.html
Global Descriptor Table
The global descriptor table (GDT) is a structure used by x86 and x86_64 CPUs, the structure resides in memory, consists of multiple 8-byte descriptors, and is pointed to by the GDT register. Although GDT entries can be segment descriptors, call gates, task state segments, or LDT descriptors; we will focus only on segment descriptors as they are relevant to this article.
A segment descriptor uses a ridiculous layout for backwards compatibility reasons. There is a 4-byte segment base address which is stored at bytes 3, 4, 5 and 8; the segment limit is 2 and a half bytes, stored at bytes 1, 2 and half of 7; the descriptor flags are the other half of the 7th byte, and the access flags are byte 6. That’s probably pretty confusing, so I’ve made an example image.
(Image: a segment descriptor. Fragmentation is cool now.)
The only part of the segment descriptor that is relevant for this article is the “Flags” part, which is a total of 4 bits:
- Granularity (if 0, the segment limit is in 1 Byte blocks; if 1, the segment limit is in 4 Kilobyte blocks).
- D/B bit (If 0, the segment is 16-bit; if 1, the segment is 32-bit).
- L Bit (If 0, the D/B bit is used; if 1, the segment is 64-bit and D/B bit must be 0).
- Doesn’t appear to be used.
(Image: the 4-bit “Flags” part of the segment descriptor.)
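To make that layout concrete, here is a minimal Python sketch (my own illustration, not from the original article) that unpacks a raw descriptor, read as a little-endian 64-bit integer, into the fields described above. The example value is a hypothetical ring-3 flat 32-bit code segment:

```python
def parse_descriptor(desc: int) -> dict:
    """Unpack a raw GDT segment descriptor (8 bytes, read as a little-endian integer)."""
    limit = (desc & 0xFFFF) | (((desc >> 48) & 0xF) << 16)            # bytes 1-2 plus half of byte 7
    base = ((desc >> 16) & 0xFFFFFF) | (((desc >> 56) & 0xFF) << 24)  # bytes 3-5 plus byte 8
    access = (desc >> 40) & 0xFF                                      # byte 6
    flags = (desc >> 52) & 0xF                                        # other half of byte 7
    return {
        "base": hex(base),
        "limit": hex(limit),
        "access": hex(access),
        "G": (flags >> 3) & 1,    # granularity: 1 = limit counted in 4 KB blocks
        "D/B": (flags >> 2) & 1,  # 1 = 32-bit segment
        "L": (flags >> 1) & 1,    # 1 = 64-bit segment (D/B must then be 0)
    }

# Hypothetical ring-3 flat 32-bit code segment: base 0, limit 0xFFFFF, G=1, D/B=1, L=0
print(parse_descriptor(0x00CFFB000000FFFF))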
In real mode registers are 16-bit, which means that the CPU should only be able to address 2^16 bytes (64 KB) of memory, but that’s not the case. The CPU forms special 20-bit addresses, which allows it to address 2^20 bytes (1 MB), but how is that achieved? The CPU has segment registers, which are also 16-bit; a segment register is multiplied by 16 (shifted left 4 bits) and then added to the offset in order to give a 20-bit address, allowing the whole 1 MB of memory to be accessed.
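As a quick worked example of that real-mode calculation (the values here are arbitrary):

```python
segment = 0xF000                    # 16-bit segment register
offset = 0xFFF0                     # 16-bit offset
physical = (segment << 4) + offset  # shift left 4 bits (multiply by 16), then add the offset
print(hex(physical))                # 0xffff0 -> a 20-bit address within the 1 MB range
```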
Protected mode segmentation is significantly different: the segment register is not actually a segment at all, it’s a selector which is split up into 3 parts: Selector, TL (Descriptor Table), and RPL (Request Privilege Level):
- Segment Selector (13 bits) specifies which GDT/LDT descriptor to use, 0 for 1st, 1 for 2nd, etc.
- TL (1 bit) specifies which descriptor table should be used (0 for GDT, 1 for LDT).
- RPL (2 bits) specifies which CPU protection ring is currently being used (ring 0, 1, 2, or 3). This is how the CPU keeps track of which privilege level the current operation is executing at.
Segment selector format
When you switch into 64-bit mode by doing a “CALL 0x33:Address” or “JMP 0x33:Address”, you’re not actually changing the code segment to 0x33, you’re only changing the segment selector. The segment selector for 32-bit code is 0x23, so by changing the selector to 0x33, you’re not modifying the TL or RPL, only changing the selector part from 4 to 6 (If you’re wondering how the selectors are 4 and 6 not 0x23 and 0x33, it’s because the low 3 bits are for the TL and RPL so 0x23 (00100011) is actually RPL = 3, TL = 0, Selector = 4 and 0x33 (00110011) is actually RPL = 3, TL = 0, Selector = 6).
(Image: a visual representation of the above.)
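The same breakdown is easy to verify with a few lines of Python (a throwaway sketch using the field layout described above):

```python
def decode_selector(sel: int) -> dict:
    return {
        "RPL": sel & 0b11,      # low 2 bits: privilege level
        "TL": (sel >> 2) & 1,   # next bit: 0 = GDT, 1 = LDT
        "Selector": sel >> 3,   # remaining 13 bits: descriptor index
    }

print(decode_selector(0x23))  # {'RPL': 3, 'TL': 0, 'Selector': 4} -> 32-bit code descriptor
print(decode_selector(0x33))  # {'RPL': 3, 'TL': 0, 'Selector': 6} -> 64-bit code descriptor
```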
So, changing the code segment register doesn’t necessarily mean you’re changing segments like it would in real mode; it totally depends on what the selector’s corresponding descriptor says. As we know, 0000000000100 is binary for 4 and 0000000000110 is binary for 6, so we need to pull GDT entries 4 and 6.
Here we can see the only difference between the entries for the 32-bit and 64-bit selectors is the “Limit” and “Flags” fields. The limit is easily explained: it’s 0 for the 64-bit entry because there is no limit, and the 32-bit limit is 0xFFFFF because the granularity bit is set, making the limit (0xFFFFF * 4KB) AKA 4GB (the maximum addressable space using 32-bit registers). To understand the difference in the Flags field, we’ll have to view the individual bits.
Here we can see the Granularity and D/B bits are set for entry 4, but for entry 6 they’re not. Entry 6 also has the L bit set, why? The L bit means the CPU should be in 64-bit mode when this segment descriptor is being used, thus the D/B bit must be 0 as it is not in 16-bit or 32-bit mode. The granularity bit is 0 because the descriptor has no limit set as we showed earlier, so the limit granularity is irrelevant. So there you have it: both segment descriptors point to exactly the same address, the only difference is that when the 0x33 (64-bit) selector is set, the CPU will execute code in 64-bit mode. The selector is not magic and doesn’t tell Windows how to interpret the code; it actually makes use of a CPU feature that allows the CPU to easily be switched between x86 and x64 mode using the GDT.
If you’re interested in how to dump GDT entries, you need to set up a virtual machine and remotely debug it with windbg (you can’t use the local kernel debugger). Once you’re connected remotely you can do "DG 0x23" to dump the entry for segment selector 0x23 and it will output it in pretty text. If you want to get the raw bytes for the entry, you’ll need to do "r gdtr" to get the address of the GDT from the GDT register, then you’ll need to do "dq (GDT_Address+(0x8*SELECTOR)) L1", example: "dq (fffff80000b95000+(0x08*4)) L1".
Surveillance and security companies use many different forms of technology and methods to improve society’s wellbeing and security. Technology within these sectors is evolving, improving, and growing every single day. However, they can also cross a line for personal space and privacy without proper regulations or roles. Whether it is physical security with vehicles, borders, government facilities, or other spots, or technological and personal safeties, we have to know how much artificial intelligence is affecting us. There is a lot of thought that goes into surveillance and security technology, including the implementation of AI technology. Read on to learn how artificial intelligence is affecting surveillance and security companies today.
What Exactly is Artificial Intelligence and How Does It Affect Security Technology?
A lot of technology has now become robust and complementary enough to drive insane advances in surveillance and security monitoring, including the newfound growth in machine learning and the onset of deep learning. Cloud computing and online data collection, together with a brand new generation of microchips and computer hardware, all help improve the overall function of AI algorithms. A lot of vendors and security technology businesses are selling facial recognition capabilities. We at Gatekeeper alone have many forms of AI technology that are more than capable of facial recognition. A lot of our technology that implements it is used by law enforcement agencies to identify drivers, and to flag potential terrorist threats, by scanning personal identification information in real time.
Advantages and Disadvantages of AI Security Tech
When handled properly, AI can help figure out where police forces should be deployed. It is also very effective at monitoring and can help direct police or other agencies to where trouble could be located. However, artificial intelligence can be quite vulnerable in a few particular situations. One concern is that criminals could hack into certain surveillance systems themselves, and many don’t know how to fight against these kinds of attacks properly. Fingerprints and DNA samples can be collected in person, but a hacker online will leave no trace. This makes it a bit tough for security technology developers to trace back who the hacker is.
There has to be a more educated understanding of AI and its implementation with security and surveillance technology going forward, but for now, its role is minimal and effective. Give Gatekeeper Security a call if you wish to learn more!
Groundbreaking Technologies with Gatekeeper
Gatekeeper Security’s suite of intelligent optical technologies provides security personnel with the tool to detect today’s threats. Our systems help those in the energy, transportation, commercial, and government sectors protect their people and their valuables by detecting threats in time to act. From automatic under vehicle inspection systems, automatic license plate reader systems, to on the move automatic vehicle occupant identifier, we offer full 360-degree vehicle scanning to ensure any threat is found. Throughout 37 countries around the globe, Gatekeeper Security’s technology is trusted to help protect critical infrastructure. Follow us on Facebook and LinkedIn for updates about our technology and company.
Not so long ago, radio networks relied heavily on telephone lines for remote broadcasts and transportation of program audio. In contrast, radio stations today have become highly computerized, consolidated, and digital. Similar trends can be observed in audio transmission everywhere.
In the last few years, audio technology has shifted towards more digital options like T1 audio lines. These lines are capable of sending audio packets digitally from one point to another. They function and work very differently from traditional audio transmission processes.
Let’s explore T1 lines and their benefits in audio transmission.
T1 lines belong to the T-carrier systems group developed in the 1960s by Bell Labs. They were the first version, hence the name T1 or Transmission System 1.
Half a century later, T1 lines are still relevant because of their ability to transmit both voice and data faster than standard telephone lines. With a speed of 1.544 Mbps, each T1 line delivers high-speed connections that are point to point and dedicated in nature.
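The 1.544 Mbps figure falls straight out of the classic T1 frame format: 24 channels of 8 bits each, plus one framing bit, sent 8,000 times per second. As a quick sanity check:

```python
channels, bits_per_channel, framing_bits = 24, 8, 1
frames_per_second = 8000
line_rate = (channels * bits_per_channel + framing_bits) * frames_per_second
print(line_rate)  # 1544000 bits per second, i.e. 1.544 Mbps
```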
Typically, T1 lines can be delivered as channelized and unchannelized services. Subsequent innovations have led to the introduction of T2 and T3 lines. However, only T1 and T3 lines are commercially used today.
Here are four reasons why you should be transmitting audio on T1 lines.
Even T1 transmission over copper lines is wide enough to send FM stereo and digital radio programs over long distances. The audio quality doesn’t degrade because the digital signals are regenerated as they move along the path.
T1 lines are bi-directional and full-duplex. That means you can send remote pickup programs, transmitter data, telephone lines, or satellite audio in the opposite direction at the same time.
Other audio transmission technologies, such as satellite communication or equalized analog phone lines, are very expensive. In comparison to these, T1 lines are a cheaper option.
Line of sight (LoS) is a type of propagation that can transmit and receive data only where transmit and receive stations are in view of each other without any obstacle between them. 950 MHz STL frequencies, FM radio, and satellite transmission all require line-of-sight communication, but T1 lines do not require this. You don’t need line of sight transmissions or locations near each other in T1 lines.
The provisioning and upkeep of most traditional equipment and technologies are becoming increasingly difficult. Interestingly, T1 lines have not just survived but have also thrived despite other technological advancements. Service providers are offering T1 services at lower rates, and the services are available almost all over the world. If you want to make the switch to T1 lines for audio transmission or explore other options, contact us directly for a free consultation today!
Bad actors are constantly raising the ante on email scams. According to Microsoft, “phishers have been quietly retaliating, evolving their techniques to try and evade protections. In 2019, we saw phishing attacks reach new levels of creativity and sophistication.”
To keep pace with these evasive attacks, threat protection software has to adapt, and machine-learning algorithms can be a powerful way to keep pace.
A Learning Computer
Machine learning-driven detection capabilities include:
Sender Behavior Analysis: detects imposter or spoofed emails, using header analysis, cousin or look-alike domain detection, as well as natural language processing to determine whether the language in the body of an email might be indicative of social engineering.
URL Behavior Analysis: protects users from credential theft by extracting URLs from emails and examining the destination web page for evidence that it might be a phishing site. Underlying technologies should be built specifically to detect evasive phishing tactics. For example, automatically access suspect sites from multiple source IP addresses and emulate different browsers to observe how the site renders in different environments.
Mailbox Behavior Analysis: profiles mailbox activity to create a baseline of trusted behaviors and relationships. Who sends emails to whom and at what time of day? What volumes? What do the contents look like? And many others. Mailboxes are then continuously monitored for anomalous behaviors and predictive analytics are used to detect threats. For example, if an executive never sends emails to a finance cloud, and then suddenly he does, late on a Friday evening, requesting a money transfer, this behavior will be an anomaly, indicating a possible BEC attack.
Incident Analysis: Enables rapid investigation, containment, response and remediation of threats. Incidents are created whenever an email contravenes a security policy or is reported by the user. Look for automation here too, including clear display of detailed forensic data per incident and automatic aggregation of similar incidents into a single case that can be remediated in one fell swoop.
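As a rough illustration of the behavioural baselining described under mailbox behavior analysis above (a toy Python sketch, not how any particular vendor implements it), a profile of who mails whom and at what hour can be recorded and then used to flag deviations:

```python
from collections import defaultdict

# Toy baseline: for each sender, remember which recipients and send hours are "normal".
baseline = defaultdict(lambda: {"recipients": set(), "hours": set()})

def observe(sender: str, recipient: str, hour: int) -> None:
    baseline[sender]["recipients"].add(recipient)
    baseline[sender]["hours"].add(hour)

def is_anomalous(sender: str, recipient: str, hour: int) -> bool:
    profile = baseline[sender]
    return recipient not in profile["recipients"] or hour not in profile["hours"]

# Learn from historical traffic, then score a new message.
observe("exec@example.com", "assistant@example.com", 9)
print(is_anomalous("exec@example.com", "payments@example.com", 23))  # True -> worth a closer look
```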
Employee Insights Are Valuable
Your employees’ “gut feelings” are incredibly valuable and can help you crowdsource threats; learning how to identify phishing links can also be helpful. But companies rarely leverage this unique threat intelligence, and these insights usually languish inside IT’s ticket queue.
Cyren Inbox Security includes a simple-to-install and easy-to-use Outlook plugin that helps Microsoft 365 users identify phishing attacks and provide critical feedback to the intelligence engine. They’re one click away from flagging an email as suspicious and telling Cyren to search for lookalike emails in the system. Over time, the engine gets smarter, enriched by employees’ instincts and critical thinking.
To learn more about Cyren Inbox Security and start a 30-day trial, visit https://www.cyren.com/inbox-security-free-trial
HTML and CSS Interview Questions
Why do we use html?
What is the extension for any html page?
What is the difference between html4 and html5?
What are block level elements?
Provide some examples for block level elements.
What are inline elements?
Provide some examples for inline level elements.
What is a comment in html?
How can we connect different or the same documents in html?
What is the required attribute?
Give the structure used to write an html table.
What are rowspan and colspan in a table?
What are tags and attributes in html?
Explain html lists with examples
Write html code to make a form with two text fields, one checkbox and a submit button.
What is the 'audio' element? How do we define audio in html?
What is the 'video' element? How do we define video in html?
Which tag is used to define quotations in html?
What is the <pre> tag in html?
What is the purpose of the <del> tag?
Which html element is used to draw graphics on a webpage?
Is there any tag in html with which we can make a progress bar?
What is the marquee tag?
Write html code to make a navigation bar.
Why do we use css?
What are css frameworks?
Name any two css frameworks.
What is the concept of responsiveness in css?
How can we make a website responsive without using any css library?
How can we integrate a css file in an html document?
Which css property is used to change the transparency?
What is z-index?
What is the similarity between “display:none” and “visibility:hidden”?
What is the difference between “display:none” and “visibility:hidden”?
Why do we use "!important"?
What is css flexbox?
Write some of the font attributes.
What is the difference between ‘class’ and ‘Id’?
Which property is used to manage the scrolling of a background image?
Why do we use @font-face?
What is the difference between ‘capitalize’, ‘uppercase’ and ‘lowercase’ in the text-transform property?
How can we hide the bullets of <ul>?
How is the universal selector written in css?
What does "margin: 0 auto" mean?
How can we add a shadow to elements?
What is the purpose of using ‘position:fixed’?
What are the basic components of the css box model?
Provide some different ways to define a color.
Which css property is used to move, rotate, scale elements?
As consumer IoT devices continue to proliferate, a large part of the domestic population is at risk of having their home network security compromised if steps are not taken to secure their IoT devices. Vulnerable connected devices can also expose sensitive data that is collected—from health information to personally identifiable information (PII).
Dellfer for Consumer IoT
Dellfer takes a unique approach to protecting IoT devices, such as smartwatches, electronics, television systems, virtual reality, and health tracking devices. Conceptually, it is simple. Dellfer essentially takes a fingerprint of the software used to run an IoT device, then sets up detection mechanisms that trigger defenses if any changes appear. For instance, if malware is injected into the software, Dellfer detects it and quarantines it. Or, if the software is altered to behave differently, Dellfer identifies the source of the issue and neutralizes it.
Consumer IoT Threats
According to the National Institute of Standards and Technology (NIST) Report on International IoT Cybersecurity Standardization:
“Without adequate cybersecurity safeguards, even inexpensive, consumer IoT components with limited functionalities may be exploited to threaten confidentiality, integrity, availability of consumer data and services, consumer privacy and safety, and other systems on the Internet.”
Issues with Consumer IoT Security
According to Einaras von Gravrock from the Forbes Business Council:
“The largest issue in IoT security is that many consumer IoT devices are manufactured without some basic security considerations. There are too few incentives and not enough pressure from consumers to create devices that are secure by design.”
Risks for Consumer IoT
- Security researchers at Kaspersky say there were 1.5 billion attacks against IoT devices during the first half of 2021.
- IoT products in the home can be exposed to more than 12,000 hacking attempts in a single week.
- Over 50% of connected devices in a typical hospital have critical risks.
- More than 1.5 billion attacks have occurred against IoT devices in the first six months of 2021.
The cloud has rapidly become an essential part of business through the many benefits it offers to companies connected to, and dependent upon, the digital landscape. Its cost effectiveness and scalability are well documented, and a multitude of services are now delivered exclusively to users via the cloud; from Software as a Service (Saas), Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). For the difference between these services, read our previous blog post ‘Cloud Services Explained: SaaS, PaaS and IaaS’.
This cloud adoption has seen a dramatic rise in the cloud’s global market value; last year that figure reached $325.1 billion. But hot on the cloud’s heels is Artificial Intelligence (AI). The global AI market is projected to hit $190 billion by 2025, making it one of the fastest growing industries in the world.
That being said, these industries are not mutually exclusive. Both cloud computing and artificial intelligence support each other’s growth and technological advancements. This blog post will explore the relationship between the cloud and AI and investigate the immense potential artificial intelligence has to shape future technologies.
The concept of creating an artificial brain, capable of learning and making decisions, can be traced to the 1940’s and Alan Turing’s development of the World War II machine ‘The Bombe’, used to break the German Enigma code. This kickstarted research into the newly classified term ‘artificial intelligence’ and scientists started to develop systems capable of communicating and solving equations.
However, by the 1970’s a lack of progress, largely due to computing technologies lacking the required advancements, saw the AI industry lose funding and interest dropped off. This trend continued until the 1990’s when computing technologies advanced. In 1997, IBM’s ‘Deep Blue’ beat world chess champion Gary Kasparov, generating international headlines.
Basic ‘expert systems’ were developed in the 1980’s, capable of answering questions based on catalogued data set within a defined area, such as screening for bank loans or assisting with medical advice. But as the popularity of desktop computers rose and took over these processes, the focus of AI’s purpose shifted into creating ‘intelligent agents’ capable of advanced communication. Many examples of these can be seen today, from Apple’s Siri to Amazon’s Alexa.
Through cloud computing and Big Data, AI is able to analyse and learn from human knowledge and behaviour like never before, leading to big advancements in the industry and the capabilities of its technology. Its decision-making abilities and ways in which it can assist us are constantly evolving and progressing.
It’s important to understand the differences between the two commonly divided categories of AI to understand why it still has a long way to go.
Narrow AI refers to systems which perform a particular task, generally based upon large amounts of data to which an algorithm is applied, such as a self-driving car.
General AI refers to systems that are capable of thinking and learning for themselves, without any input or pre-planned training from humans.
Although Narrow AI is making considerable progress, General AI is reliant on our understanding of how our own brains work, and currently that level of knowledge simply isn’t deep enough. However, that’s not to say General AI and the idea of creating an artificial human brain isn’t possible; but most experts agree this is many years away. In terms of raw brain power and unfocussed intelligence, robots are behind rats. So don’t worry, the possibility of a machine uprising is still a long way off!
As mentioned earlier, the cloud serves as a fundamental component of AI’s usability. The data required for AI to function and make real-time decisions would not be available, at least not quickly enough, if not for cloud technology. IDG network contributor, Gary Eastwood, nicely summarises the relationship between cloud and AI, “the many, disparate servers which are part of cloud technology hold the data which an AI can access and use to make decisions and learn things like how to hold a conversation. But as the AI learns this, it can impart this new data back to the cloud, which can thus help other AIs learn as well.”
In this regard, cloud computing is really one of the cores of anything that artificial intelligence achieves; at least for the time being, anyway. In the future, it’s predicted the two will eventually merge into one seamless technology, complimenting and supporting each other. One thing is for certain, though. As technology around artificial intelligence advances, we will see a prolific increase in streamlined, automised processes, removing the possibilities of human error. As all of these AI-based solutions are supported by the cloud, AI will help grow and strengthen cloud computing’s status throughout industries and the connected digital world.
We hope you’ve enjoyed this blog post. As always, if you have any questions about anything on the blog or any of Secura’s services, please feel free to get in touch.
Image credit: bygermina/Shutterstock.com
Matthew is Secura's content specialist, producing gripping, emotionally complex, edge of your seat, cloud hosting articles and videos.
As the spectacle and competitive atmosphere of the Rio Olympic Games have drawn the world’s attention, hackers who use social engineering are inching closer to our private information. Although our systems may be prepared for the likes of malware and worms, social engineering is a different beast of its own. Used effectively, social engineering lets hackers manipulate people into disclosing personal information, rendering security systems useless. So how exactly do they go about doing this? Below are some of the most commonly used social engineering tactics you should be aware of.
Phishing scams are perhaps the most common type of social engineering attack. Usually seen as links embedded in email messages, these scams lead potential victims into seemingly trustworthy web pages, where they are prompted to fill in their name, address, login information, social security number, and credit card number.
Phishing emails often appear to come from reputable sources, which makes the embedded link even more compelling to click on. Sometimes phishing emails masquerade as government agencies urging you to fill up a personal survey, and other times phishing scams pose as false banking sites. In fact earlier this year, fraudulent Olympics-themed emails redirected potential victims to fake ticketing services, where they would eventually input their personal and financial information. This led to several cases of stolen identities.
What’s the best way to infiltrate your business? Through your office’s front door, of course! Scam artists can simply befriend an employee near the entrance of the building and ask them to hold the door, thereby gaining access into a restricted area. From here, they can steal valuable company secrets and wreak havoc on your IT infrastructure. Though larger enterprises with sophisticated surveillance systems are prepared for these attacks, small- to mid-sized companies are less so.
Quid pro quo
Similar to phishing, quid pro quo attacks offer appealing services or goods in exchange for highly sensitive information. For example, an attacker may offer potential targets free tickets to attend the Olympic games in exchange for their login credentials. Chances are if the offer sounds too good to be true, it probably is.
Pretexting is another form of social engineering whereby an attacker fabricates a scenario to convince a potential victim into providing access to sensitive data and systems. These types of attacks involve scammers who request personal information from their targets in order to verify their identity. Attackers will usually impersonate co-workers, police, tax authorities, or IT auditors in order to gain their targets’ trust and trick them into divulging company secrets.
The unfortunate reality is that fraudsters and their social engineering tactics are becoming more sophisticated. And with the Olympics underway, individuals and businesses alike should prepare for the oncoming wave of social engineering attacks that threaten our sensitive information. Nevertheless, the best way to avoid these scams is knowing what they are and being critical of every email, pop-up ad, and embedded link that you encounter in the internet.
To find out how you can further protect your business from social engineering attacks, contact us today.
Sharing economy apps make use of mobile voice calls, but users don’t like sharing their private number with strangers. Solution: telcos can provide disposable cloud numbers for such apps.
The seamless transition between digital apps and mobile telephony has increased customer convenience. For instance, many ride-hailing platforms like Uber, Didi Chuxing in China or Gogovan in Hong Kong work on 3G or 4G networks but allow drivers and passengers to communicate over the phone once the order has been placed. This operation takes place over the phone network, as it is more reliable and ubiquitous, especially in markets where 3G networks are not universal.
Most users would not give a second thought to the mechanics described above. However, given increasing concerns related to privacy and safety of confidential data, users are reluctant to reveal their private number to unknown drivers, passengers or delivery people. It is the responsibility of the service providers who wish to grow and benefit from the sharing economy to find feasible solutions that provide adequate safeguards, without hampering the user experience.
Which brings us to ‘cloud’ mobile numbers.
Companies in the digital sharing economy can use disposable numbers – or cloud numbers – to resolve concerns about safety and privacy. With this model, the user’s private number is not revealed to the driver, neither does s/he have access to the driver’s private number.
Cloud numbers are number ranges originally allocated by the regulator to individual mobile operators but have not been assigned to individual subscribers. These number ranges can be ‘acquired’ by digital service platforms to support anonymized communications.
The platform assigns these cloud mobile numbers temporarily to the platform’s users just for the duration of the transaction – from the time a driver is allocated till the trip is marked as ‘complete’. Both driver and passenger get a cloud mobile phone number, and this gives them a reliable way of reaching each other (even without data coverage), but with their privacy intact. After the trip, the numbers are ‘released’ and can be re-allocated to another transaction. Importantly, neither party can reach the other again through these numbers.
As more people participate in the ‘gig economy’ and use shared services, there is a great need to connect involved parties safely, reliably and for the short term. For companies like Uber, Airbnb or Task Rabbit, providing a secure communication channel linking clients to drivers, tradespeople, house owners or professional service providers is an important way to keep customers safe and loyal to their platform.
Cloud numbers enable this without the need for heavy investment or technical know-how on the part of the digital platform. Moreover, they have many other applications beyond ride-sharing. For instance, they can also be used in call centers or conference calling. With cloud numbers, companies can provide customers with local phone numbers to call the service center, no matter where in the world the customer or the call center are located.
Hassle-free expansion route for Asia’s startups
While using cloud numbers may seem straightforward enough, the process involved in acquiring them is extremely complex, requiring deep and extensive relationships with mobile and telecoms operators. After acquiring these number ranges there is a need for extremely deep telecom expertise to set up the systems that provision and de-provision these numbers smoothly.
Many of Asia’s digital economy startups and platforms are on a rapid regional expansion trajectory. The task of delivering the same service standards everywhere around the region and even the globe makes the problem even more complex.
Digital companies and shared economy service providers located in markets from Singapore to the Philippines can enjoy the advantages of cloud numbers by partnering with wholesale telecoms specialists. Building off of a global footprint, such players are the perfect conduit to offer cloud mobile number services at cost effective rates, often with a customizable, pay-as-you-scale model.
The cloud has fundamentally changed communications. It’s helping Asia’s startups drive efficiencies, reduce costs, tap into new market opportunities, and is a growth catalyst for digital service providers, enabling them to create a global footprint. Cloud numbers are the fundamental building blocks of cloud communications and have a range of benefits from brand visibility, collaboration and customer service.
Virtual private networks are essential tools for anyone who values their privacy and security online. Whether you want to prevent your ISP from looking at what you’re doing online or you want to ensure your connection is secure when using public Wi-Fi, VPNs have a lot to offer.
What is a VPN?
Usually, when a device connects to the internet, it directly communicates with online servers. So, for example, if you load a website on your laptop, your internet browser converts the URL into an IP address and then connects to the server associated with that IP address.
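Those two steps (name resolution, then a direct connection) can be reproduced in a few lines of Python; example.com and port 80 are placeholders:

```python
import socket

# Step 1: resolve the hostname to an IP address, as a browser would.
ip = socket.gethostbyname("example.com")
print(ip)

# Step 2: connect straight to that server. Without a VPN, both this connection
# and the lookup above are visible to whoever runs the network you are on.
with socket.create_connection((ip, 80), timeout=5) as conn:
    print(conn.getpeername())
```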
A VPN consists of an encrypted communications tunnel and an intermediary server. When you connect to the internet through a VPN, your device sends encrypted instructions to a VPN server. It is then this server that connects to and communicates with the wider internet. The result is that the websites and online services you connect to see the IP address of the VPN server you use, not that of your device.
What are the benefits of using a VPN?
VPNs greatly improve users’ online privacy and security. All your communications with the VPN server are encrypted, so they can’t be intercepted and read by anyone else. Public Wi-Fi networks often feature little to no security, and intercepting traffic passing through them is trivial for anyone with the right tools.
Another advantage of using a VPN is the ability to change your apparent location. For example, if you connect to Netflix from the UK via a VPN server located in the USA, Netflix will think you are connecting from the USA. VPNs can therefore enable users to circumvent geographical region blocking and access content and services that usually aren’t available to users in their country.
VPNs are also essential tools for businesses. Many businesses require their remote working staff to connect to their corporate network through a VPN. This requirement ensures that any sensitive data passed between their staff’s devices and their corporate servers is secure and encrypted.
Site-to-site VPNs
Also known as network-based VPNs, site-to-site VPNs connect two networks together. Many organisations utilise site-to-site VPNs to create a secure internet connection for private traffic. For example, businesses that want to connect offices in different geographic locations can link their networks with a site-to-site VPN. These networks can similarly be linked with the main corporate network, meaning it all behaves as one contiguous network.
Organisations using site-to-site VPNs need to configure their own endpoints, the devices responsible for encapsulating and de-encapsulating the encrypted data that passes through the network. They also need to decide what authentication method to use and set the rules governing how traffic flows through the encryption tunnel.
Client-based VPNs
Client-based VPNs are the type that most people are familiar with. These VPNs enable users to connect to remote networks via an application on their device. This application automatically establishes an encrypted communications tunnel with the VPN server and handles all the configurations. All the user needs to do is launch the application on their device and log in using their credentials, making this the most user-friendly type of VPN.
Client-based VPNs account for the vast majority of VPNs people use on their personal devices, but they are less common in corporate contexts, where businesses often want a greater degree of control over who can access what and how they access it.
VPNs provide a simple way of enhancing privacy and security for internet users and keeping corporate networks safe. Anyone who regularly connects to the internet via public Wi-Fi networks should consider installing a VPN to protect their data, especially if they are connecting to a business network or sending other sensitive personal information over the network.
Do you have any VPN requirements? Please feel free to contact us and we can advise.
Malware vs. Virus vs. Worm: An Overview
Malware, viruses, and worms are all cyber security threats. While they are each different things, the threats they pose intersect in important ways.
Malware is a general term that encompasses all software designed to do harm. You can compare the term “malware” to the term “vehicle.” All software-based threats are malware, just like all cars and trucks are vehicles.
However, similar to vehicles, there are many different kinds of malware. In other words, you can have a car, an SUV, and a truck, and you would have three vehicles. But not every vehicle is a car, a truck, or an SUV. Similarly, viruses and worms are both malware, but not all malware is a virus or a worm.
Viruses can be spread from one computer to another inside files. For the virus to be activated, someone has to trigger it with an external action. For example, a virus can be embedded inside a spreadsheet. If you download the spreadsheet, your computer will not necessarily be infected. The virus gets activated once you open the spreadsheet.
With a worm, there is no need for the victim to open up any files or even click on anything. The worm can both run and spread itself to other computers. Because a worm has the ability to automatically propagate itself, you can get a worm in your computer just because it is on the same network as another infected device.
Comparative Analysis of Malware, Virus, and Worm
All worms and viruses are malware, but there are significant differences between worms and viruses. Malware, being a general term, can also include many other threats. However, a worm behaves in a very specific way, making it significantly different than a virus.
A worm can replicate and spread itself from one computer to another. On the other hand, a virus cannot self-replicate, and it needs to be sent by a user or software to travel between two different computers.
Malware, Virus or Worm: What Is More Dangerous?
While it is difficult to say which is the most dangerous, the following is generally true.
Malware vs. Worm vs. Virus
In a comparison of malware vs. worm, malware is more dangerous because it encompasses both worms and all other software-based threats, such as spyware, ransomware, and Trojans. The same can be said of the malware vs. virus conversation. Trying to ascertain which is more dangerous—malware, viruses, or worms—is like trying to figure out which is better at transporting people: vehicles, cars, or trucks.
Virus vs. Worm
On the other hand, the "virus vs. worm" discussion is a little more nuanced. Both viruses and worms can do significant damage to your computer, but the ways in which they spread and are activated can make one a more significant danger than the other. In many cases, it depends on how your network is structured.
Why a Worm is Dangerous
If your network consists of many computers connected to each other in a ring formation, then a worm may be a bigger threat than a virus. The same could be said of a network set up in a hub formation with a server in the middle that serves all the computers in the network, particularly if the server does not have adequate antimalware defenses.
In these kinds of architectures, a worm, once introduced to one computer, can replicate itself and spread to the other computers in the network. This can give one worm the power to infect the entire network. If a virus is introduced to an unprotected hub-and-spoke network or a ring network, users will still have to send the virus to each other and then open the file for each computer in the network to get infected.
Why a Virus is Just as Dangerous
On the surface, a worm, which is also referred to as a worm virus, will appear more dangerous than a virus, but because computers within an organization's network interact with the internet often more than they do with each other, viruses can be just as dangerous. For example, a single website that several users visit can download a virus to their computers, and when they open the file containing the virus, all of them can get infected.
In many situations, a worm's functionality can also work against itself. Because the worm is designed to spread from one computer to another, it risks the chance of exposing itself with each lateral move. If, for example, a worm has to go through a firewall as it tries to go from one computer to the next, the firewall may detect it. At that point, system administrators can use relatively basic forensic analysis to figure out where the worm came from.
This is not the case with viruses. Several users can download the same or different viruses, and figuring out where they came from, especially if they did not come from the same emails or websites, can present a significant challenge.
Therefore, the difference between malware and a virus is not as much of a factor as is the difference between a virus and a worm. The same can be said of the difference between malware and worm because malware encompasses worms.
How To Protect Devices from Malware, Viruses, and Worms
There are several ways to protect your computer from threats like viruses, worms, and other malware:
How Fortinet Can Help
With the FortiGate next-generation firewall (NGFW), your organization is protected from worms, viruses, and other kinds of malware. The FortiGate NGFW uses deep packet inspection (DPI) to detect and mitigate data packets that contain threats, as well as machine learning algorithms that can detect zero-day attacks based on their behavior.
The FortiGate NGFW integrates with the Fortinet Security Fabric and can process all incoming and outgoing data, ensuring all devices on your network are thoroughly protected.
What is the difference between malware and a virus?
All viruses are malware, but malware can also include threats like spyware, ransomware, and worms.
What is the difference between malware and a worm?
All worms are malware, but malware can also encompass threats like Trojans, spyware, ransomware, and viruses.
What is the difference between a virus and a worm?
A worm can self-replicate and spread to other computers, while a virus cannot. A virus needs to be sent from one computer to another by a user or via software.
Feature Engineering as a Core of Machine Learning Business Value
“How can we ensure the success of our machine learning project?”
Your business has asked this question. Your competitors have asked this question. Unfortunately, there is no straightforward answer, but a good candidate could be feature engineering.
Some data scientists describe it as a bottleneck, others as a superpower. Either way, all of them agree that it is a vital part of any successful machine learning project. Let’s dig in to see what business value feature engineering has to offer.
What is Feature Engineering
Feature engineering is the process of identifying features with predictive power from raw data sets. The role this process plays in a machine learning project is irreplaceable; it has a direct impact on how good the final predictions will be.
Some even call feature engineering an art. And truth be told, they are not far off in their definition. As we’ll illustrate later, feature engineering takes a lot of domain knowledge and machine learning skills, but ultimately it’s tied together by creativity.
If done correctly, feature engineering increases the predictive power of machine learning algorithms, and can even offset the adverse effects of a poorly chosen model or wrong parameters. But let’s not jump ahead. Before understanding the how let’s look at the why.
Why Feature Engineering is Important
The success of a machine learning project can be boiled down to two things: well-chosen algorithm and good data. To pick a suitable algorithm can be difficult but not impossible. Many different algorithms can solve the same problem with equal success.
The problem of data, on the other hand, might be more difficult to deal with. Many data sets come in raw formats that have little or no predictive value. The data is also often unstructured, stored in multiple formats, contains missing values, etc.
Algorithms need structure. Feature engineering is the tool that creates or reveals that structure. The whole process of feature engineering is based on answering the question of “How do you get the most out of your data?”
The quality and quantity of your engineered features will influence the results of your predictive model. Poorly made features will naturally produce poor results.
On the other hand, well-executed feature engineering can weigh up for other subpar factors of your machine learning project:
- Wrong models: even if you choose an algorithm that is not entirely optimal for your machine learning problem, you can still get good results. Feature engineering means well-structured data, which most models can pick up on.
- Wrong parameters: the same can be said about parameters. You do not need to work as hard to optimize parameters.
- Simpler models: with well-executed feature engineering, you can also get away with using less complex models that are faster to run, easier to understand and easier to maintain.
All these 3 points entail better, more robust results and shorter time-to-market for your machine learning project.
How Does Feature Engineering Work
Let’s look at a real-life example taken from a public Kaggle competition.
The primary goal of the competition is to predict which customers will return to make a purchase. The teams are given a data set of customer purchasing data gathered over a year. The set contains around 350 million rows, with features such as customer ID, date of purchase, manufacturer, brand, product price and product quantity.
Although this dataset contains a lot of information about each customer and their shopping habits, on their own, each of these features holds little predictive value. What can we do about that? For one, we can combine some of them to form new features. For instance, if put together, the features "Manufacturer", "Date of purchase" and "Customer ID" can tell us who buys from the same manufacturer more than once.
This new feature can be called "Has bought from manufacturer x". It can be a simple 1 or 0 (yes or no), or we can split it into multiple new features that include time periods. Another useful feature we can generate is "Total amount spent", which can be made by combining "Customer ID", "Product price", "Product quantity", and "Brand". Some customers might not shop that often, but they spend a lot of money once they do, and that can also signal customer loyalty.
Now that we have our two features, we want to check their predictive power. This means running our model to see if it produces satisfactory results. If not, we will have to go back to the data sets to look for more features.
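One simple way to run that check - sketched below on synthetic stand-in data rather than the competition set - is to compare cross-validated scores with and without an engineered column:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: X_base plays the role of the raw columns, and we append
# one "engineered" column (a noisy copy of the target) to mimic a feature
# with genuine predictive power.
X_base, y = make_classification(n_samples=2000, n_features=8, n_informative=3, random_state=0)
engineered = (y + np.random.default_rng(0).normal(0, 0.5, size=y.shape)).reshape(-1, 1)
X_new = np.hstack([X_base, engineered])

model = RandomForestClassifier(n_estimators=200, random_state=0)
print("AUC, raw features only:      %.3f" % cross_val_score(model, X_base, y, cv=5, scoring="roc_auc").mean())
print("AUC, with engineered column: %.3f" % cross_val_score(model, X_new, y, cv=5, scoring="roc_auc").mean())
```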
This is the process of feature engineering. The basic steps can be narrowed down to:
- Brainstorming features
- Creating features
- Checking if the model produces satisfactory results with the chosen features
- If not, go back to step 1
Domain knowledge and creativity are mostly required in steps 1 and 2. Patience is needed for steps 3 and 4. It's a lengthy process.
Up to 70% of the time in a machine learning project can be spent on feature engineering. That number might be lower if the data contains a lot of raw features (obtained directly from the dataset with no extra data manipulation or engineering), or it might be higher if the data scientist needs to work on deriving features.
The data scientist working with feature engineering has to consider the underlying machine learning problem when designing features.
The key questions that a data scientist needs to ask herself during the feature engineering process in order to do it well are:
- What are the essential properties of the problem we’re trying to solve?
- How do those properties interact with each other?
- How will those properties interact with the inherent strengths and limitations of our model?
- How can we augment our dataset so as to enhance the predictive performance of the AI?
Automated Feature Engineering
The time-consuming aspect of feature engineering makes it a perfect candidate for automation. There have been a number of attempts to automate feature engineering. Most of them can:
- generate new features automatically
- know which algorithms call for feature engineering
- understand what types of feature engineering work best with what algorithm
- systematically compare engineered features and algorithms to find the best match
Such systems are more efficient and repeatable than manual feature engineering. By spending less time on steps 1 and 2, you can build better predictive models faster. The results of automated feature engineering are also much less error-prone.
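As a rough illustration only (dedicated automated feature engineering tools are far more sophisticated), the sketch below auto-generates candidate features from column pairs and ranks them by a univariate score; the data and column names are made up:

```python
import numpy as np
import pandas as pd
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "price":    rng.gamma(2.0, 10.0, 500),
    "quantity": rng.integers(1, 5, 500),
    "visits":   rng.integers(1, 20, 500),
})
y = (df["price"] * df["quantity"] + rng.normal(0, 5, 500) > 40).astype(int)

# Naive automated feature generation: try products and ratios of every column
# pair, then rank all candidates by mutual information with the target.
candidates = df.copy()
cols = list(df.columns)
for i, a in enumerate(cols):
    for b in cols[i + 1:]:
        candidates[f"{a}_x_{b}"] = df[a] * df[b]
        candidates[f"{a}_per_{b}"] = df[a] / (df[b] + 1e-9)

scores = pd.Series(mutual_info_classif(candidates, y, random_state=0), index=candidates.columns)
print(scores.sort_values(ascending=False).head())
```

In practice, a domain expert would still review the top-ranked candidates before any of them are used to train the final model.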
However, automated feature engineering is not intended to replace data scientists, but rather to assist them.
Automated feature engineering does, on average, find more features. But what features should be included in the training of the final model should be determined by domain experts.
Domain experts are often better than machines at suggesting patterns that hold predictive power. For example, a tool can recommend around 40,000 features, but domain experts will pick only the 100 that are relevant.
Over the last half century, pundits and prognosticators have waxed poetic about the potential of virtual reality (VR). Unfortunately, the technology hasn’t quite lived up to the hype.
But, now, thanks to enormous advances in digital systems, VR finally appears ready for liftoff. Market research site Statista predicts that there will be 171 million active users of virtual reality by 2018.
One of the more interesting projects taking shape: an immersive virtual reality platform that imagines life on Mars, including weather, buildings, vehicles, farms and clothing. The HP Mars Home Planet initiative was launched by HP with a group of partners, including NVIDIA, Autodesk and Fusion. The system uses a wearable VR PC—dubbed the HP Z Backpack—with special goggles and other gear to create an ultra-realistic Martian experience.
According to an HP press release, “The HP Mars Home Planet project advances work initially done for Mars 2030, a virtual reality experience created by Fusion with the National Aeronautics and Space Administration (NASA). Now, HP and its partners are uniting engineers, architects, designers, artists and students to imagine, design and experience humanity’s future on Mars through VR.”
The virtual reality world is based on an actual site on Mars, Mawrth Vallis (which means “Mars Valley” in Welsh). It depicts what one million people living and working on the red planet would look like. Yet, the VR environment isn’t just fun and games. The project will generate data that NASA and others might use to design future habitats and environments on Mars.
“The goal of the project is to engage creative thinkers to solve some of the challenges of urbanization on the red planet,” HP noted.
Participants will use Autodesk software on HP Z workstations with NVIDIA Quadro graphics to create the transportation and infrastructure framework. They will also develop 3D models and renderings that ultimately produce the virtual reality experience of life on Mars, including vehicles, buildings and entire cities. An advisory board includes leading experts from Technicolor Experience Center, Twentieth Century Fox, Paramount Pictures and the University of Arizona.
Make no mistake, immersive multimedia virtual reality is finally taking shape. And while the mission to Mars may seem a bit out there, it’s important for business and IT leaders to tune in.
Over the next decade and beyond, VR is almost certain to impact, if not revolutionize, industries as diverse as retail, travel, medicine and engineering. It will change the way we view the world—and other worlds. | <urn:uuid:219430e7-c0eb-495c-97f5-5ec50f5791de> | CC-MAIN-2022-40 | https://www.baselinemag.com/blogs/how-virtual-reality-will-change-our-world/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00015.warc.gz | en | 0.914343 | 538 | 2.625 | 3 |
What is Malvertising?
What is malvertising? How do malvertising attacks occur, and how can you protect against them?
What is malvertising?
Malvertising, as it sounds, is a portmanteau of ‘malicious’ and ‘advertising’. Malvertising occurs when hackers manage to inject malicious code into an online ad and use a genuine online advertising network to spread the ad across the web. Malvertising often uses malicious code called an ‘exploit kit’, which detects vulnerabilities in the user’s browser or a web app and uses these vulnerabilities to install malware. Once installed, the malware can allow access to the user’s computer, infiltrate a network, steal sensitive data such as financial data, or even use ransomware to encrypt the user’s files and demand payment.
Malvertising can appear in ads provided by even the biggest, most popular and most reputable advertising networks, leading to infected ads appearing on highly trusted websites and tricking users into believing they are legitimate. Sometimes, malvertising triggers a drive-by attack, which requires no user interaction at all - the user only needs to navigate to the page with the malvertising on it, and the malware is downloaded automatically.
Due to the sophisticated ways in which hackers hide the malicious intentions of their ads, malvertising has appeared on some of the world’s most well-known websites, giving even the most careful user a false sense of security.
As online advertisements are so popular, a very large number of ads are submitted to ad networks, making it hard to detect every ad containing malicious code. In addition, many times the ads being displayed on a particular page are changed very frequently, so two users visiting the same page may not see the same ads, and only one may become the victim of malvertising. This makes it very difficult to track the culprit.
Is malvertising the same as adware?
Malvertising and adware are both malicious, and contain ads, but that’s where the similarity ends. Adware is usually installed without a user’s knowledge, or bundled with other software, and runs on the user’s computer. The adware will display ads directly to that particular user. In contrast, malvertising exists on live web pages, and malicious ads are shown to a wide audience. In order to become a victim of malvertising, the user must visit a particular page, or click on the ad.
Who does malvertising affect?
• Website owners - if users visit a trusted website, such as that of a well-known retailer or service provider, and become infected with malware, the owner’s reputation will be badly damaged by security concerns that could drive people away.
• Advertising networks - advertising networks may lose customers if they are found to have been displaying malicious ads.
• Users - of course, if a user becomes a victim of a malvertising campaign, this can lead to their device becoming compromised. The hacker could then steal financial data, such as bank or credit card details, and use it to withdraw money or make transactions. The malicious code could also lead to a ransomware situation, whereby the computer is locked, or data is encrypted, and the user is pressured into paying a ransom to release it.
How does malvertising work?
The first step in the malvertising process is that the hackers create an infected ad. Then, they use an ad network to buy advertising spaces on websites. The hackers provide the network with the infected ads, which are then displayed in the spaces they bought. Sometimes there are numerous parties involved, such as different servers for different types of ads, creating an opportunity for cybercriminals to find a way to infiltrate and inject malicious code into existing ads.
Once a user visits a web page with one of these malicious ads, or clicks directly on the ad, one of the following things could happen, depending on the type of malware with which the ad is infected:
The malicious ad could redirect the user to a malicious website that is completely different from the one appearing in the ad, sometimes through numerous redirects to successfully avoid detection by the advertising network.
The ad could redirect the user to a fake version of the legitimate site that appears in the ad, to carry out a phishing attack and gather valuable user data.
Malicious code could run, which begins the automatic download and installation of malware onto the user’s device, whether it’s a desktop, laptop, or mobile.
What does malvertising look like?
There are no hard and fast rules for identifying malicious ads, as they can look just like legitimate online advertising. With that being said, here are some particularly suspicious things to look out for:
• Pop-up ads - these are notoriously sketchy, especially ones that encourage software downloads to ‘protect your computer’.
• Website ad banners - sometimes these banners promise rewards or special offers. Think before you click - if it’s too good to be true, it could well be malicious.
• Ads with a fake button - some ads have a fake close button or OK button, when really clicking on the ad can launch a malware download.
• Any ad provided by a third party on a website - sadly, no ad network is completely immune, and therefore, no website ad can be trusted completely.
• Text ads inside content - sometimes ads are just text, often containing hyperlinks. Clicking on these links could also trigger malicious code to run.
Preventing malvertising attacks
Here are some tips on how you can prevent malvertising attacks:
Tips for everyone
• Keep your software up-to-date, especially applications that access the web, including browser extensions. Every application should have the latest security patches to ensure vulnerabilities used by browser exploits are kept to a minimum, decreasing the chance that a malvertising campaign will be able to successfully breach your device.
• Use ad blockers that will prevent ads from displaying, thus stopping malicious code from launching a malware download.
• Use security software such as antivirus and firewalls. Remember to keep them all up-to-date, to protect against the latest threats.
Tips for organizations
• Educate users about cyber threats like malvertising, and provide guidance for how to browse the web safely, including not clicking on suspicious links or ads.
• Use an advanced security solution to make browsing the web safer for all users. Remote browser isolation (RBI), for example, allows users to browse the web as normal, while running all active code in an isolated container in the cloud, away from the endpoint. Only safe rendering data is sent to the user’s browser, where they interact with it just as they would with the actual website -- only without risk. The container is destroyed once the user stops browsing, along with any malicious code, so it can never reach the end user’s computer at all.
Tips for website owners
• Work only with reputable online third-party ad vendors, especially ones that are known to take proactive steps and precautions to prevent malvertising.
As quantum capabilities keep growing, it is important to ask how scaling will affect both individual devices and large scale quantum networks. On the micro scale, single devices will have to consist of multiple quantum “cores” which connect together to become one large device, whereas on the macro scale many devices will come together in unison to provide one effectively massive quantum computer. In both of these cases, distribution of tasks on subunits will be of utmost importance to get the most out of the quantum capability we have created. For Aliro, part of our research is focused on this so-called “distributed quantum computing.” In short, this amounts to finding ways of connecting multiple quantum units together to compile and/or implement a quantum algorithm.
The first question that comes to mind is why are multiple subunits necessary? Why can’t we just build larger individual devices? The answer to this comes from the hardware implementation details, especially when we look at trapped-ion or even potentially hybrid devices. Although ion traps can fit many ions, the necessity to address individual ions with lasers only allows for so many to fit in a given trap. Unless someone were to build an ion trap the size of, say, a soccer stadium, it makes much more sense to hook up multiple traps together using cross-device entanglement. This is not an entirely new concept, and has been discussed at length by the folks at IonQ, a trapped-ion hardware maker, and University of Maryland (see here), and we can anticipate this sort of structure will dominate many types of hardware in the mid to long term. For instance, if we wanted to create a device that uses both superconducting loops and ions (this could take advantage of the various pros/cons of each, like longer coherence times for ions or faster gate times for superconducting loops), we would already need to consider this distributed quantum computing question in some detail. Past these subunits on a single device, when we take a further step out and think about networks of devices we can immediately see that a similar problem arises, and the solutions will be the key towards building large computing setups in the future.
When it comes to designing algorithms for distributed computing there is actually some precedent to go off. As early as 2004 there was a paper describing how to implement Shor’s algorithm for factoring large numbers using a distributed setup, which expectedly has an emphasis on non-local interactions. In general, we will want to avoid any multi-qubit gates that consist of qubits across devices as those will be the toughest to losslessly implement, but inevitably we must communicate across devices. Thus, there is plenty of space here for compilation solutions as well as designing hardware that can use non-local entanglement efficiently. Quantum networks hope to achieve this, which would allow us to not necessarily need multiple devices in a single room (and not need individual groups to build multiple devices!), but instead collaboratively compute with the help of others.
At Aliro, our main focus is to enable distributed quantum computing via networks of quantum nodes; this is one of the main goals of quantum networks after all. We hope to do this by using hardware-aware compilations and designing networks such that we can efficiently and optimally allow devices to talk to each other with the end goal of finishing a particular computation. In addition, we believe other researchers can take advantage of these networks to start to design algorithms and protocols that will make effectively massive quantum devices possible in the near future, which gives us the inspiration to deploy these capabilities to the community. | <urn:uuid:9bc5a469-630e-4478-a0d6-30854a5d86ea> | CC-MAIN-2022-40 | https://www.aliroquantum.com/blog/quantum-network-applications-distributed-quantum-computing | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00215.warc.gz | en | 0.945234 | 728 | 2.546875 | 3 |
Hi and welcome to this CertificationKits CCNA training video on EIGRP concepts. What we are going to be discussing in this video is the three tables that EIGRP uses - the neighbor table, the topology table and the routing table - and how those work together; how EIGRP goes about finding neighbors to exchange routing information with; loop avoidance, with a successor and feasible successor route stored in the topology table; as well as DUAL, the diffusing update algorithm that EIGRP uses.
Now EIGRP is a hybrid protocol, meaning it has some characteristics of both link state and distance vector routing protocols. The big drawback to EIGRP is that it is a proprietary protocol; Cisco only. So if you have an all-Cisco environment, it's going to work great: easy to configure, less overhead than OSPF, more scalability than RIP. However, if you are in a multi-vendor environment you are not going to be able to use it. The first thing I want to talk about is how EIGRP goes about populating its routing table. It keeps three tables: a topology table, a routing table and a neighbor table. Now the first table it's going to want to populate is the neighbor table; this is number one, that's got to come first. Now, what EIGRP does is send hello packets out every five seconds on LAN interfaces and on point-to-point WAN links. So this should be five seconds going here on the point-to-point WAN link, as well as every five seconds going out its LAN interface. Now there aren't any other routers coming off this Ethernet0 interface. So if we wanted to, we could prevent EIGRP from sending information out of the Ethernet0 interface using the passive-interface command. What it does is prevent EIGRP from establishing neighbor relationships on Ethernet0. It would still listen for information, but it's not going to establish any neighbor relationships, so it won't exchange any information, anything like that. We would like that because there aren't any routers on this side of router A. We'll call this router A, call this router B and we'll call this router C.
Now, what will happen is, if this was on a multipoint WAN link, like a frame-relay link, it would send hellos out every 60 seconds. So it would slow them down a little bit in a frame-relay environment that was a multipoint WAN link. So here is a router connected through a frame-relay environment; multipoint means one router connects into multiple other routers. In that environment it would send hellos out every 60 seconds. So, what happens is the hellos allow EIGRP to establish relationships, and what they're looking for basically is IP connectivity - the .1 and .2 IP addresses here, so 126.96.36.199 and 188.8.131.52 - and the same subnet mask; that's got to match. And then the big thing is the autonomous system number; we'll say the autonomous system number is 100.
As said in an earlier CCNA video, the autonomous system number is simply an administrative grouping of routers under common control, so if we had a bunch of routers on the public domain we would have what's called our own autonomous system, and we would actually get an assigned autonomous system number. In a private environment EIGRP still needs to know which routers it wants to share information with. So if I were to give router A an autonomous system number of 100 and router B an autonomous system number of 110 - anything other than 100 - they would not exchange information. So it allows me to control which routers my EIGRP router is going to share information with. So in here, if I want them all to talk, everybody has to have an autonomous system number of 100. They will establish each other as neighbors, so the neighbor table gets populated, and then they will do a full topology exchange, sending update messages to a multicast address. So the first thing they do after becoming neighbors is a full topology exchange. They'll look for the best cost path and then what's called the feasible successor, which would be the second best cost path, and we'll talk about that in a moment. But the key thing is: they establish the neighbor relationship, they do a full topology table exchange - that's the second step right there - and then, based on that topology information, they put the best cost path in their routing table; that's the third step. But the big difference between OSPF and EIGRP is this topology table: EIGRP only keeps minimal information in the topology table, as opposed to everything.
Let's take a look at what information actually gets put into the topology table. I've brought up this amazing CCNA slide here that allows us to go in and take a look at how information gets put into the topology table. So, the first thing that's going to happen is we're going to configure EIGRP on all the routers with the same autonomous system number and make sure all the IP addresses are functioning. What will happen is the neighbor relationships will start being established with all that hello traffic that goes back and forth - we've got all the hellos going out every five seconds, everybody chatting with each other, saying yeah, I'd like to be your neighbor - and they're going to start sharing information. What I want to look at is what will be entered into the topology table for Palaestra1 over here for this particular subnet, 184.108.40.206.
On CCNA slide 16, this subnet is all the way over here. How is that information going to get entered into Palaestra1's topology table when there are two different paths Palaestra1 can take? He could take serial0, or serial1. Now, just because both paths are there does not mean they'll both get entered into the topology table. What happens is Palaestra5, once those neighbor relationships are established, sends the information about 220.127.116.11 off to Palaestra3 and Palaestra4. They pass that information along, with their topology tables getting populated, and they'll follow the same pattern that Palaestra1 did, but I just want to focus on Palaestra1 here. I'm not worried about the details as far as two, three and four go.
So eventually, through serial0, a path gets to Palaestra1 for subnet 18.104.22.168, and it has a cost of 10 plus 5 plus 10. So its cost is 25 - now that's through serial0. Through serial1, it has a path to the same subnet (20.10.0.0) and its cost is 20. So it has to determine whether or not it's safe to put both paths in the topology table; remember, this is just the next stop on the way to the routing table. Only the best cost path will get into the routing table. So he looks at the information and he says, okay, I have got a path through serial1 with a cost of 20 to get to subnet 22.214.171.124; that's my best cost path, that is definitely going to go into my topology table, and that will be called my successor route. The successor route is the best cost path - successor equals best.
Now the big question is whether or not this path would be able to be entered into the topology table as a feasible successor. So, feasible successor equals backup. Now, EIGRP wants to avoid loops, but it doesn't want to keep all the information that OSPF keeps in its topology table. So to avoid loops - or to guarantee EIGRP that there are no loops - there's what's called a feasibility condition that a backup path has to meet before the router will put that information into its topology table. What that feasibility condition comes down to is a question about the next hop router for this path - remember, we are evaluating the serial0 path right now. Palaestra1's best cost path to get there is 20. He looks at Palaestra2 and sees what Palaestra2's own cost to get to the same subnet is, and Palaestra2's cost is 15.
Since Palaestra2 has a cost of 15, and that's lower than Palaestra1's best cost path, Palaestra1 is guaranteed that when he sends something to Palaestra2 to go to subnet 126.96.36.199, Palaestra2 is not going to send it back to him in hopes that it will go around that direction. So since that cost is lower, he'll go ahead and put that into the topology table as a feasible successor route, because this path has met the feasibility condition - it's safe. Let's look at the numbers if it doesn't meet the feasibility condition.
I've cleaned up the CCNA slide here. Now we're going to take a look at what's going to happen if we use different numbers for the paths. (I cleaned up the CCNA slide too much - it's supposed to say Palaestra4 right here.) So our subnet over here is still 188.8.131.52; let's see what happens as this information gets shared with different costs now: instead of 10 this is now 15, and instead of 5 that is now 10. So Palaestra5 will send this information down, and as it goes along the line all the numbers get added up. So when it hits Palaestra1, it's going to see two paths: a serial0 path to subnet 184.108.40.206 with a cost of 35, and a serial1 path to the same subnet with a cost of 20.
So right off the bat, when he's getting this information he's going to see his best cost path and go, okay, you are my best path, I like you. You are going to go into my topology table and you will be my successor. So that's the best path to the destination. For the second one he's got to evaluate this cost of 35. He's looking at it and he doesn't know, so what he's going to do is check Palaestra2's cost to get there. And Palaestra2's cost is 25. So what Palaestra1 is thinking - he can't really see the big picture here - he's just thinking, hey, if Palaestra2 has a cost of 25 to get to network 220.127.116.11 and my best cost path is 20, how do I know that Palaestra2's only way to get there isn't back through me? What he's thinking is that all of this over here may not exist, and that this path has a cost of five. So what he's looking at is: okay Palaestra2, I'm getting information back here, and how do I know that if I send something to him, I am not his path back to the destination? He can't know for sure with the information that EIGRP looks at. Even though all of that does exist over there - and I have cleaned up the slide again so you can see it, so this does exist, I mean there is another path - Palaestra1 can't be guaranteed that there's another path, because this cost is 25. So he doesn't know for sure that Palaestra2, when he sends it there, isn't just going to send it right back around this way. So since he can't be guaranteed of that, he's not going to put that information into the topology table as a feasible successor, so there will not be a feasible successor to that destination.
If this path were to go down - let's say this path goes down and his favorite path is no longer available to him - then he might be singing a different tune: he's going to start using his DUAL diffusing update algorithm. What he is doing is querying his other options now; he's going to start asking around to find out if there is another path. And when asking around he will find out that there is another path and he can still get there, and he will put that in his topology table now as 20.10 with the cost of 35, and this will be his new successor, but it will not be used immediately. The problem, if that path had been in there as a feasible successor, is that when the successor path went down like it did, he would have immediately used the other path without checking first. Without a feasible successor in the table, DUAL does the checking: it makes sure that there is a good path to the destination, and then he'll use it. So it just gives them an extra step in there to guarantee it's not creating a routing loop.
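To tie the idea together, here is a small illustrative sketch in Python (not router configuration) of the successor and feasible successor selection described above; the metric values mirror the first worked example where the video gives them, and the next-hop labels are just placeholders:

```python
# Candidate paths to one destination subnet, as this router sees them.
# "advertised_distance" is the cost the next-hop router reports for itself;
# "total_distance" is the full cost from this router via that next hop.
paths = [
    {"next_hop": "serial1 neighbor", "advertised_distance": 10, "total_distance": 20},
    {"next_hop": "Palaestra2",       "advertised_distance": 15, "total_distance": 25},
]

# Successor = the lowest total cost to the destination.
successor = min(paths, key=lambda p: p["total_distance"])
best_cost = successor["total_distance"]

# Feasibility condition: a backup is loop-free only if its next hop's own
# (advertised) distance is lower than this router's best cost to the subnet.
feasible_successors = [
    p for p in paths
    if p is not successor and p["advertised_distance"] < best_cost
]

print("Successor:", successor["next_hop"], "with cost", best_cost)
print("Feasible successors:", [p["next_hop"] for p in feasible_successors])
```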
Now we're going to go in and take a look at a show command - the show ip eigrp topology command - and view this information as the router would show it to you. Here we have the show ip eigrp topology command in action. It shows the codes; P is passive, that is good, everything is up and running, and this is what we're looking at. This is the destination subnet - again, this is the topology table, not the routing table; only the successor route would be entered in the routing table. So 18.104.22.168 is the destination subnet. For this destination subnet there is one successor route, meaning one best cost path to the destination. The feasible distance is 2692856. This is an important number. This is the best cost path, or the best cost, to the destination. And it says via 172.16.20.2, and notice this number right here matches the feasible distance. So it's saying, okay, this number is the best cost path, or the best cost period, to this destination subnet 172.16.2.0 - or 22.0 - through next hop router 20.2. The second number is the next hop router's cost, so it throws the next hop router's cost in there.
There is an additional route; this is the feasible successor. It's entered into the topology table because it meets the feasibility condition: next hop router 172.16.21.2, the destination subnet 22.0, and it would cost 46738176 to get to the 22.0 subnet through this next hop router. The reason it's entered into the topology table is that the next hop router has a cost of 2169856, which is lower than the successor cost of 2692856. So, since this next hop router has a lower cost than the best cost path that this router has to the destination, it's allowed to be entered into the topology table as a feasible successor, because this router knows that router 21.2 will not try to send the packet back through the Palaestra1 router to get to the destination - so it knows there is no routing loop. If that number were higher than 2692856, this information would not get entered into the topology table. It would be ignored, because the next hop router would have a higher cost than this router has to get to the destination.
So, again, what that's telling him is: hey, here's the next hop router B; it's possible that when A sends it to B, B might have to send it back through A, and that creates a routing loop - that's the risk when this number is higher than the best cost that A has. But again, since B's cost is lower, A knows that if he sends it to B, there's no way B is going to send it back; he will send it on to the destination. And we can look at more examples here: 30.0, one successor, feasible distance 2187456. It shows here, okay, this is the cost to the destination, and the next hop router has 281600. Pay attention to this number: when you see this number right here that I am outlining over and over, what that's telling you is that this next hop router is directly connected to the subnet. So here is router A, router B directly connected to the subnet where that machine is, or whatever. So, if A sends it to B, B doesn't have to send it to anybody else; he can send it right to the machine. Very important: 281600 means the next hop router is directly connected.
The last one on the bottom here is the same thing again: one successor to 90.0, through two different routers, 20.2 and 21.2. This is the best cost path, so this is the successor - one successor. This is the feasible successor right here, because its cost is higher than the best cost path, but the next hop router has a lower cost than that cost. You can actually see here that the next hop router is directly connected to the subnet; there's just a really slow link to that next hop router. So, this scenario right here would play out something like this. You've got router A, router B and the destination subnet right there, and the other way there are probably a couple of routers, router C and router D. So, A is actually choosing this path because this link is crawling slow - maybe it's a 56k link or something like that. It's crawling slow, so it takes a lot longer to get there than it does to go over these faster multiple links to get to the destination subnet right here. So, that's looking at this table - it's very important to understand this information. And again, if you see that 281600 you know that the next hop router is directly connected, and there's no routing loop possible then.
There's one other thing I want to talk about - that's why I cleaned up the CCNA slide and gave myself a little working room - DUAL, the diffusing update algorithm. Let's take an example with the 30.0 subnet. So, we've got Palaestra1 right here, and another router, Palaestra2, which is directly connected to this destination subnet of 172.16.30.0. So, he's got a path there and it has a cost of 2187456 - that's just the cost to get there. Now, maybe there is another way to get there, but he's been ignoring that information: maybe through routers Palaestra3 and Palaestra4 he's got another path to get there out of the serial1 interface (this is serial0). So on serial1 he might have been getting information all along, when the updates take place or during the initial update. And this cost might be something like 2936759 - I don't know, I'm just throwing some numbers up there - so it's a big number. And the next hop router, which is Palaestra3, has a cost to get to this destination subnet of 2876321, so that is a bigger number right there. So, what happens is Palaestra3 has a higher cost to get to the destination than Palaestra1 does. So Palaestra1 ignored this path. He saw that path might be a loop: since I have a lower cost than Palaestra3, I can't be guaranteed that if I send him something he's not going to send it back around to me. So he doesn't put it in the topology table; he has just one successor route and no feasible successor - this path does not meet the feasibility condition.
So, what happens when this goes down? DUAL kicks in, takes over and starts querying the routers to make sure there's still an open path to the destination. It basically explores this option that he'd previously been ignoring. He finds out that it's a good path and he puts that in as a successor route. So, DUAL allows the router to query for additional paths when the successor goes down and there's no feasible successor. Now, we've talked about the CCNA EIGRP concepts: the three tables - the neighbor table, which again is where all his friends go, the routers he's going to exchange topology information with; the topology table, which keeps only successor and feasible successor routes (the feasible successor routes have to meet the feasibility condition, and again that's the guarantee of no loops - loops are bad, no loops); and the routing table, which the successor route gets entered into. We covered how the routers go about finding their neighbors, loop avoidance with the successor and feasible successor routes, and DUAL, which kicks into gear if the successor route goes down and there's not a feasible successor route to take. So, I hope you have enjoyed this CertificationKits CCNA training video on EIGRP concepts.
Consumers are confident that the introduction of robotics and artificial intelligence in the workplace will enhance and not destroy their jobs, according to a recent survey from semiconductor company ARM.
In its AI Today, AI Tomorrow global survey of 4,000 consumers, carried out by Northstar Research Partners, more than six out of ten respondents (61 percent) said they believe that an increase in automation and AI would make “society become better”. In particular, 37 percent believe there will be advancements in medicine and science that help humans live longer and healthier lives, and say they are prepared to trust machines to diagnose illnesses.
Those who believe advancements in AI and robotics will lead to fewer jobs for humans are in the minority, with just 30 percent identifying “fewer or different jobs for humans” as the biggest drawback to these technological changes.
Instead, 29 percent of respondents feel that tedious or dangerous tasks will be done by robots, and 11 percent see less chance of human accidents or mistakes. In fact, many companies are already doing this, including General Electric in automating its field services and the use of maintenance drones from enterprise applications company, IFS.
Ripe for disruption?
On a more granular level, survey respondents said they believe that jobs in manufacturing and banking would be most disrupted by new AI technologies, while occupations related to cooking, fire-fighting and farming will continue to be the domain of humans. This was the view of most people surveyed about a robotic future; with those surveyed in Asia responding most positively, followed by the US and then Europe.
“It is encouraging to see the survey results highlighting the optimism and opportunities tied to AI, but we are just scratching the surface of its potential,” said Joyce Kim, vice president of global marketing, brand and communications at ARM.
“The impact of AI on jobs will be disruptive but it can be a manageable and highly positive disruption in terms of opportunities and enhancing our lives. If we increase our investments in STEM and educating the next-generation workforce on AI technologies, we can ensure they are not left behind in the robot economy.” | <urn:uuid:299a14d1-67f5-45cc-be99-7b61df631b9e> | CC-MAIN-2022-40 | https://internetofbusiness.com/arm-ai-robotics-workplace/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00215.warc.gz | en | 0.946905 | 431 | 2.53125 | 3 |
The Department of Basic Education is making significant changes to the school curriculum to boost mathematics, science and technology among learners in the country – and is rolling out equipment and software to support new subjects including robotics and coding.
Responding in a written parliamentary Q&A this week, basic education minister Angie Motshekga provided an update on how schools are being supported in this strategy.
One of the key strategies being used, Motshekga said, is leveraging existing STEM programmes at schools.
The department launched the Dinaledi Schools project in 2005, which was subsequently merged with the Mathematics, Science and Technology Conditional Grant following a review by the DBE in 2015.
The strategic goal of the MST Grant is to increase the number of learners taking mathematics, science and technology subjects, improve the success rates in the subjects, and improve teachers’ capabilities, the department said.
“The grant’s purpose is to provide support and resources to schools, teachers and learners in line with the Curriculum Assessment Policy Statements (CAPS) for the improvement of mathematics, science and technology teaching and learning at selected public schools,” Motshekga said.
Notably, the department’s recent push into new subjects like robotics and coding and vocational training has become a significant part of the project.
According to Motshekga, 485 schools have so far been supplied with subject-specific computer hardware and related software for CAPS tech subjects, including coding and robotics pilot schools.
There have also been 1,256 laboratories supplied with apparatus and consumables for mathematics, science and technology subjects, including coding and robotics kits, she said.
In terms of student support, the department noted that 50,000 learners in the country registered to participate in mathematics, science and technology olympiads/fairs/expos and other events, including support through learner camps and additional learning, teaching and support material such as study guides.
There have also been 1,500 teachers attending specific structured training and orientation in subject content and teaching methodologies on CAPS for electrical, civil and mechanical technology, technical mathematics, and technical sciences.
Over 1,000 teachers and subject advisors have attended targeted and structured training in teaching methodologies and subject content either for mathematics, physical, life, natural and agricultural sciences, technology, computer applications technology, information technology, agricultural management and technology subjects, it said.
The department plans to fully implement coding and robotics as new schools subject for Grade R-3 and 7 students in the 2023 academic year.
A pilot curriculum for these subjects was initially introduced at some schools in the third term of the 2021 academic year, it said. It plans to expand these tech-focused subjects to other grades in subsequent years.
The coding and robotics pilot for Grades 4-6 and for Grade 8 was planned for 2022 and will be followed by a Grade 9 pilot in 2023. The full-scale implementation for Grades 4-6 and Grade 8 is planned for 2024, and Grade 9 in 2025, the department said.
While the department is boosting its support and training for these new technical subjects, experts in the education field have warned that the country is facing a shortage of skilled teachers, mainly because a large percentage of the current workforce is nearing retirement age.
The Department of Basic Education has previously responded to claims of a skills crisis in teaching, saying that the number of new teaching graduates is increasing every year.
“The number of initial teacher education graduates has grown over the last 10 years from an output of about 7,973 in 2010 to 31,799 in 2020,” it said.
The 25,000-graduate mark was reached in 2017, it said, adding that current enrolment trends point to an upward trajectory in graduation numbers.
The output of graduates is favoured towards the Senior/Further Education and Training Phases (SP/FET) – partly because the two qualification pathways allow for SP/FET to qualify through both the Bachelor of Education (BEd) and Post Graduate Certificate in Education (PGCE) while Foundation Phase (FP) is largely limited to BEd pathway, the department said.
The average teacher attrition rate is 15,200 a year – largely due to retirement, but also because of resignations, ill health and death.
The teacher supply in terms of quantity is reasonably adequate, the department said, at least from the analysis of the situation in public education. | <urn:uuid:5cc50413-0995-4f91-8f47-6e0f037d5f35> | CC-MAIN-2022-40 | https://www.businessmayor.com/government-rolling-out-new-subjects-at-schools-in-south-africa-businesstech/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00215.warc.gz | en | 0.959605 | 918 | 2.875 | 3 |
When we fall asleep, our brains are not merely offline, they’re busy organizing new memories–and now, scientists have gotten a glimpse of the process.
Researchers report in the journal Cell Reports on May 5 the first direct evidence that human brains replay waking experiences while asleep, seen in two participants with intracortical microelectrode arrays placed in their brains as part of a brain-computer interface pilot clinical trial.
During sleep, the brain replays neural firing patterns experienced while awake, also known as “offline replay.”
Replay is thought to underlie memory consolidation, the process by which recent memories acquire more permanence in their neural representation.
Scientists have previously observed replay in animals, but the study led by Jean-Baptiste Eichenlaub of Massachusetts General Hospital and Beata Jarosiewicz, formerly Research Assistant Professor at BrainGate, and now Senior Research Scientist at NeuroPace, tested whether the phenomenon happens in human brains as well.
The team asked the two participants to take a nap before and after playing a sequence-copying game, which is similar to the 80s hit game Simon. The video game had four color panels that lit up in different sequences for the players to repeat.
But instead of moving their arms, the participants played the game with their minds–imagining moving the cursor with their hands to different targets one by one, hitting the correct colors in the correct order as quickly as possible.
While the participants rested, played the game, and then rested again, the researchers recorded the spiking activity of large groups of individual neurons in their brains through an implanted multi-electrode array.
“There aren’t a lot of scenarios in which a person would have a multi-electrode array placed in their brain, where the electrodes are tiny enough to be able to detect the firing activity of individual neurons,” says co-first author Jarosiewicz.
Electrodes approved for medical indications, like those for treating Parkinson’s disease or epilepsy, are too big to track the spiking activity of single neurons. But the electrode arrays used in the BrainGate pilot clinical trials are the first to allow for such detailed neural recordings in the human brain. “That’s why this study is unprecedented,” she says.
BrainGate is an academic research consortium spanning Brown University, Massachusetts General Hospital, Case Western Reserve University, and Stanford University. Researchers at BrainGate are working to develop chronically implanted brain-computer interfaces to help people with severe motor disabilities regain communication and control by using their brain signals to move computer cursors, robotic arms, and other assistive devices.
In this study, the team observed the same neuronal firing patterns during both the gaming period and the post-game rest period. In other words, it’s as though the participants kept playing the Simon game after they were asleep, replaying the same patterns in their brain at a neuronal level. The findings provided direct evidence of learning-related replay in the human brain.
“This is the first piece of direct evidence that in humans, we also see replay during rest following learning that might help to consolidate those memories,” said Jarosiewicz. “All the replay-related memory consolidation mechanisms that we’ve studied in animals for all these decades might actually generalize to humans as well.”
The findings also open up more questions and future topics of study for researchers who want to understand the underlying mechanism by which replay enables memory consolidation.
The next step is to find evidence that replay actually has a causal role in the memory consolidation process. One way to do that would be to test whether there’s a relationship between the strength of the replay and the strength of post-nap memory recall.
Although scientists don’t fully understand how learning and memory consolidation work, a cascade of animal and human studies has shown that sleep plays a vital role. Getting a good night’s sleep “before a test and before important interviews” is beneficial for good cognitive performance, said Jarosiewicz. “We have good scientific evidence that sleep is very important in these processes.”
Funding: This work was supported by the U.S. Office of Naval Research, NINDS, NIDCD, the Department of Veterans Affairs, the Fyssen Foundation, and the Center for Neurorestoration and Neurotechnology from the United States (U.S.) Department of Veterans Affairs, Rehabilitation Research and Development Service. The authors would like to thank participants T9 and T10 and their families and caregivers.
WHAT ARE DREAMING AND MIND WANDERING?
“Dreaming” is usually understood as subjective mental experiences during sleep. Although most famously (and strongly) associated with REM sleep (Aserinsky and Kleitman, 1953; Dement and Kleitman, 1957), dream-like thought is also reported during other sleep stages (see Methods).
For several reasons, by “dreaming” we will generally be referring to subjective reports drawn from REM sleep: for one thing, the majority of “dream” reports have been elicited from REM sleep-stage laboratory awakenings; further, only REM sleep shows a particularly strong correlation with dream mentation (around 80% of awakenings from REM sleep result in dream reports: Hobson et al., 2000). For the purposes of the present paper, then, “dreaming” refers to mentation reports from REM sleep.
“Undirected” thought is a similarly complex construct, and can be divided into several different categories (Christoff, 2012). “Mind wandering” (MW) and “stimulus-independent thought” (SIT), for instance, are typically defined as thinking that deviates from a particular task a subject is meant to be completing (McGuire et al., 1996; Mason et al., 2007; Christoff et al., 2009).
“Spontaneous thought,” on the other hand, is characterized rather by its undirected, effortless nature—more akin to the everyday concept of “daydreaming” (Singer, 1966; Klinger, 1990; Christoff, 2012); no particular task, or deviation from it, is required.
Subtle differences are apparent: MW, for example, might be initiated deliberately (as when a subject decides to “tune out” during a boring task) rather than being “spontaneous.”
Nonetheless, these terms are often used interchangeably or with only minimal definition. Fluidity of terminology seems inevitable, however, in a relatively young field of inquiry (Christoff, 2012); moreover, the subjective content and neural basis of these states appear highly similar (compare, e.g., Singer and McCraven, 1961; Christoff et al., 2004, 2009; Stawarczyk et al., 2011).
We therefore use these terms relatively interchangeably throughout this paper. MW, spontaneous thought, or daydreaming, then, all refer to subjective reports of undirected thoughts during wakefulness (whether deviating from, or in the complete absence of, a task).
THE DEFAULT MODE NETWORK (DMN) AND REM SLEEP
Though specific neural correlates of both daydreaming and dreaming remain somewhat elusive, these mental states, and their associated subjective content, are strongly correlated with the “resting state” and REM sleep, respectively (Aserinsky and Kleitman, 1953; Dement and Kleitman, 1957; Maquet et al., 1996; Mason et al., 2007; Christoff et al., 2009; Andrews-Hanna et al., 2010; Vanhaudenhuyse et al., 2010; Christoff, 2012; Hasenkamp et al., 2012).
The default mode network (DMN) was discovered somewhat serendipitously as a pattern of brain deactivations associated with the difference between brain activity during a quiet, resting state (the typical baseline condition for early fMRI studies) and a goal-oriented, directed task (Raichle et al., 2001).
Particular regions were consistently more active during “rest” than during goal-directed tasks of many kinds, suggesting a “default mode” network of regions active when a subject was “doing nothing” (Raichle et al., 2001; see Table 3 and Figure 2 for core regions of the DMN).
It quickly became clear, however, that physical “rest” by no means implied mental inactivity. With no explicit task, subjects almost immediately engaged in spontaneous thought, including daydreaming, planning for the future, recalling memories, and so on (Gusnard et al., 2001).
Subsequent research has tied the subjective experience of MW to core DMN regions (Christoff et al., 2004, 2009; Mason et al., 2007; Andrews-Hanna et al., 2010; Vanhaudenhuyse et al., 2010; Hasenkamp et al., 2012).
Although regions beyond the DMN appear to also be recruited during MW (e.g., Christoff et al., 2009), the DMN still remains the most commonly used neural proxy for spontaneous thought (see also Methods).
REM sleep is initiated by a network of cells in the pons and nearby portions of the midbrain (Siegel, 2011), but involves a widespread recruitment of higher cortical brain regions (see our meta-analytic results, below, for regions of this theoretical REM network: Table 2 and Figure 1).
REM sleep recurs, in increasingly lengthy periods, approximately every 90 mins throughout the sleep cycle, overall constituting about 1.5–2 h of an average night of sleep. Whereas non-REM (NREM) sleep stages are generally characterized by deactivation of many regions as compared to wakefulness (e.g., Kaufmann et al., 2006), REM is unique in that many brain regions are clearly more active than during wakefulness (Table 2, Figure 1). REM also appears to be the most active state from the subjective point of view, with longer, more emotional, and more frequent dream mentation in REM than any other sleep stage (Hobson et al., 2000). REM therefore appears to be by far the best neural marker of dreaming, though it nonetheless remains problematic (see Methods).
SUBJECTIVE AND NEURAL SIMILARITIES BETWEEN DREAMING AND MIND WANDERING
A number of similarities in the subjective experience of dreaming and MW have previously been noted (see Section First-person Reports of Content from Mind Wandering and Dreaming for a detailed overview).
The possibility that the neural substrate of the DMN might be involved in, or overlap with, that of dreaming/REM sleep has also been raised (Fosse and Domhoff, 2007; Pace-Schott, 2007, 2011; Ioannides et al., 2009; Nir and Tononi, 2010), but these comparisons too have remained qualitative: a quantitative meta-analysis has yet to be applied to the question of the similarity in neural substrates between DMN/MW and REM sleep/dreaming.
While major reviews and meta-analyses of the DMN have allowed for a tentative consensus regarding its neural basis (e.g., Buckner et al., 2008), a meta-analytic evaluation of brain activity during REM sleep has yet to be undertaken, making a direct comparison between brain activity in the two states difficult. The execution of such a meta-analysis of REM sleep was therefore a major goal of the present review.
FIRST-PERSON REPORTS OF CONTENT FROM MIND WANDERING AND DREAMING
Similarities in subjective content have been noted since the beginning of such research. For instance, the dreamlike nature of relaxed waking thought was documented in two early studies of what is now called MW, which were carried out in a sleep laboratory using EEG to monitor wakefulness. In both studies, participants were randomly asked to report anything that was going through their minds at the time of the probe. In the first study, Foulkes and Scott (1973) found that 24% of thoughts could be categorized as visual, dramatic, and dreamlike. In a replication study, Foulkes and Fleisher (1975) discovered that 19% of reports were dreamlike.
The qualitative characteristics of dreaming have been intensively studied over the past century, yielding a considerable body of research from which some firm conclusions can be drawn regarding subjective content.
Though qualitative data on the content of MW is not nearly as comprehensive, a tentative overview is nonetheless possible. Although a comprehensive review of the literature is beyond the scope of this article, we highlight consistent findings regarding the subjective content of dreaming and MW.
We focus on similarities in subject matter across several key areas, including sensory, emotional, fanciful, mnemonic, motivational, and social aspects, as well as addressing the presence or absence of cognitive control and metacognition. Various disparities and inconsistencies are addressed here, as well as in the Discussion.
The broadest similarity between dreaming and MW is perhaps also the most basic: the sensory building blocks of spontaneous thought in both waking and dreaming are overwhelmingly visual and auditory (though experiences in other sensory modalities are by no means precluded).
The largely audiovisual nature of dreaming was noted over two millennia ago by Artemidorus in his Oneirocritica (Harris-McCoy, 2012) and has often been replicated in contemporary research. For instance, a recent review of dream content (Schredl, 2010), based on more than 4000 dream reports from both laboratory awakenings and home dream diaries, found that visual content was present in 100%, and auditory content in 57%, of all reports (Table 1).
Other sense modalities (tactile, olfactory, gustatory, and nociceptive experiences), by contrast, were present in 1% or less of all reports. Indeed, the next most prominent modality after vision and audition was the vestibular sense: 8% of reports contained experiences of flying, floating, acceleration, etc. (Schredl, 2010).
Intriguingly, a comparison with studies of dream reports from more than a century ago shows a very similar trend: in the late nineteenth century, dream reports also almost always featured visual elements, followed by auditory imagery as the next most dominant aspect, and with the remaining senses accounting for very small percentages (∼1–7%) (Schwartz, 2000). This suggests that the sensory aspects of dreaming may be consistent cross-culturally (or at least, cross-temporally).
The apparent predominance of audio-visual content in dreams may underestimate other sensory modalities, however. A number of studies sampling other sensory data revealed that, when prompted specifically for sensations such as pain (Nielsen et al., 1993; Raymond et al., 2002; Solomonova et al., 2008) or bodily orienting movements (Solomonova et al., 2008), participants often reported more information. To our knowledge, similar targeted sensory-content probes have not yet been undertaken during MW, precluding a more detailed comparison.
Content findings from mind wandering are not usually directly comparable, since MW researchers have tended to focus on the intensity (rather than the prevalence) of audiovisual imagery, but available evidence suggests similar trends.
For example, factor analysis of nearly 1500 experience reports found that visual and auditory intensity are two of eight dimensions significantly characterizing spontaneous thoughts (Klinger and Cox, 1987).
A more recent study similarly found a very high prevalence of self-reported visual and auditory imagery during spontaneous thoughts (mean ratings of 4.22 and 4.02, respectively, on a 7-point Likert scale) (Stawarczyk et al., 2011).
Along these lines, a recent review concluded that the average spontaneous thought is moderately visual, contains at least some sound, and is very likely (74% of reports) to contain some form of interior monolog or “self-talk” (Klinger, 2008).
POSITIVE AND NEGATIVE EMOTIONALITY
It appears that most dreams (∼70–75% or more in adults) contain some emotion, though affect in dreams may not always be particularly strong, or appropriate to the context (see Domhoff, 2011, for a discussion). A number of studies have found a relative predominance of negative emotions in dreams, particularly when dreams are scored by judges rather than by dreamers (see Schredl, 2010, for a review).
Other studies, however, have found a balance of emotions in REM sleep dream reports, and one study (Fosse et al., 2001) found that joy/elation was in fact the most frequently reported emotion. An interesting study directly compared self-reports of dreaming vs. waking events, finding that negative emotion (particularly fear) was more prevalent during dreaming, and positive emotions more common in waking (Nielsen et al., 1991).
It may be, however, that more intense and negatively toned dreams are better remembered, and thus over-reported. Additionally, sampling techniques (e.g., laboratory awakenings vs. home dream journals) may contribute to differences in findings. Irrespective of these differences and methodological limitations, however, it is evident that both positive and negative emotions are ubiquitous during dreaming.
Though not yet extensively studied, emotion appears to be similarly ubiquitous during MW. One recent study, for instance, involving thousands of reports, found that the majority (69%) of spontaneous thought reports involved emotion (positive emotion in 42.5% of reports, negative emotion in 26.5%), whereas only 31% of reports were reported to be emotionally neutral (Killingsworth and Gilbert, 2010).
Though data are generally lacking, it is interesting to note that, in contrast to dreaming, positive emotion appears to predominate during waking MW, and that many more waking spontaneous thoughts appear to be characterized by relatively flat (neutral) affect.
Also of interest is that the temporal focus of MW content appears to be more directed toward the past when negative mood has been experimentally induced (Smallwood and O’Connor, 2011).
IMPLAUSIBILITY AND BIZARRENESS
Though the typical spontaneous thought or dream is a relatively plausible simulation or elucidation of past memories, current events, or future plans, generally in line with the current concerns of the subject (see "Motivational Aspects," below), nonetheless implausible and bizarre elements are common to both states—though their precise frequency remains disputed (Snyder, 1970; Dorus et al., 1971; Zadra and Domhoff, 2011). Examples are physically impossible or socially unlikely situations, fanciful locales and characters, large discontinuities of time and/or space, and so on.
Depending on scoring criteria, it has been estimated that between 32% (Schredl, 2010) and 71% (Stenstrom, 2006) of dream reports feature bizarre or impossible elements. Despite widely varying estimates, however, there is general agreement that bizarre, incongruous or impossible elements are features of at least a substantial proportion of dreams. Differences in precise estimates are likely due to differing scoring procedures, as well as differences between dreamer- or judge-rated scores.
Though many MW episodes contain relatively realistic simulations of plausible events in the external world, nonetheless a substantial number (∼20% of reports) contain elements that are bizarre, implausible, or fanciful (defined as "departing substantially from physical or social reality") (Klinger and Cox, 1987; Kroll-Mensing, 1992; Klinger, 2008).
A more recent study has provided a general replication of earlier results: analyzing thousands of thoughts reported by 124 subjects, Kane et al. (2007) found that the average thought during MW contained a moderate level of fantasy (a mean of 3.77 on a 7-point scale).
In a rare study examining both waking fantasy and dream reports in the same 12 subjects, Williams et al. (1992) found that bizarre elements were about twice as prevalent in dreams vs. waking spontaneous thought.
In a similar vein, dream and daydream bizarreness have been studied in relation to "thick" vs. "thin" boundaries (Kunzendorf et al., 1997): though thin boundary personality was associated with more bizarre dreams and daydreams than thick boundary, dreams were scored more bizarre than daydreams across both personality types.
MNEMONIC FEATURES: CONTRIBUTIONS OF EPISODIC AND SEMANTIC MEMORY
Both dreaming and MW draw on episodic and semantic memory sources as building blocks for novel subjective experiences. In this section we discuss the prevalence of past-oriented thoughts during both wakefulness and dreaming, and the potential contributions of both episodic and semantic memory to these states.
There is an intriguing literature suggesting that sleep, especially NREM sleep, may have a role in memory consolidation (Walker and Stickgold, 2006; Born and Wilhelm, 2012), including specific roles for REM sleep in consolidation of procedural (Smith et al., 2004) and emotional episodic (Nishida et al., 2009; Groch et al., 2013) memories.
A dynamic model of sleep-dependent memory consolidation and reconsolidation has recently been proposed, suggesting a complex relationship between sleep stages, memory types and their contribution to cognitive stability, flexibility and brain plasticity (Walker and Stickgold, 2006, 2010).
It is now well documented that dream content borrows from both temporally proximal and distal memories (Nielsen and Stenstrom, 2005). The most proximal memories (those from the previous day) are generally known as “day residue” (Freud, 1908), whereas the recurrence of elements 5–7 days following an experience is referred to as the “dream-lag” effect (Nielsen and Powell, 1989).
Personally relevant and emotionally salient events appear to manifest themselves in dream content as day residue and dream lag effects, but can also surface many years after initial encoding (Grenier et al., 2005).
The presence of emotional and personally relevant content in dreams may be related to the fact that emotional and impactful events are preferentially consolidated in memory (McGaugh et al., 2002; Nishida et al., 2009). While dreaming contains clear episodic autobiographical elements, memories only rarely get "replayed" in dream content
(∼1–2% of reports: Fosse et al., 2003).
MW appears to involve roughly equal percentages of thoughts about the past and future (Fransson, 2006), though some studies suggest a "prospective bias" toward future-oriented thoughts (Smallwood et al., 2009; Andrews-Hanna et al., 2010; Stawarczyk et al., 2011), and also a past-bias inducible by negative mood (Smallwood and O'Connor, 2011).
Overall, however, it is clear that memories, particularly episodic ones, play a large role in spontaneous thought. Many studies have reported a high prevalence (∼20% or more of reports) of past-focused MW (Fransson, 2006; Smallwood et al., 2009; Andrews-Hanna et al., 2010; Smallwood et al., 2011).
Indeed, one of the first studies to explore “resting state” activity using PET noted the similarities between such activity and episodic memory recall, as well as the fact that subjective reports of “rest” actually involved a large amount of past recollection and future planning (Andreasen et al., 1995).
Similar to dreaming, memories incorporated in waking MW tend to be of emotional and personally relevant material, and are often related to people’s current concerns (see section below on “Motivational Aspects”).
In summary, dreaming and MW both contain specific traceable episodic and semantic memory sources, but very rarely reproduce memories in their entirety. Rather, memories tend to reappear in novel, re-contextualized thoughts and scenarios (Nielsen and Stenstrom, 2005).
MOTIVATIONAL ASPECTS: CURRENT CONCERNS
Reports from both dreaming and MW show a strong proclivity to reflect the ongoing concerns of subjects, as well as elements of anticipating and planning for the future.
A wealth of data supports the notion that dreaming reflects ongoing waking concerns, desires, and experiences, in line with the "continuity hypothesis" of dreaming and waking mental activity (see, e.g., Domhoff, 1996, Ch. 8). For example, transient stressful situations, such as divorce (Cartwright et al., 1984) and grief (Kuiken et al., 2008), are also often present in dream reports in a general form.
Although dream content is often found to be thematically and emotionally consistent with the waking state of the dreamer, certain activities prevalent in waking are only rarely found in dreams. These include cognitive activities such as reading, writing, and using a phone or a computer (Schredl, 2000).
Similar to dreaming, the content of waking MW also centers heavily on subjects' current concerns (Klinger and Cox, 1987; Klinger, 2008; Andrews-Hanna, 2012).
Further, when the temporal focus of MW is examined, a large percentage (∼40% in one recent study: Andrews-Hanna et al., 2010) of spontaneous thoughts center around the present time ±1 day, supporting the notion that MW strongly involves current concerns and experiences. Future-oriented thought is also incredibly common during MW (Smallwood et al., 2009; Andrews-Hanna et al., 2010; Stawarczyk et al., 2011), further supporting a role for MW in future-planning and potentially problem-solving.
Intriguingly, in one of the few neuroimaging studies to directly examine periods of MW, MW was associated with activations not only in the DMN but also in key executive prefrontal areas, including the dorsal anterior cingulate cortex and dorsolateral prefrontal cortex (Christoff et al., 2009). Such results are consistent with the prevalence of current concerns and unresolved issues in first-person content reports, and may reflect an ongoing (if unconscious) effort to address them (Christoff et al., 2009; see Discussion).
IMAGINED SOCIAL INTERACTION
Similar to waking life, dreaming is nearly always organized around interactions with others. Most dreams include other characters in some kind of relationship with the dreamer, or a generalized social situation (Hall and Van de Castle, 1966; Nielsen et al., 2003; Schredl et al., 2004; Zadra and Domhoff, 2011).
Social interactions in dreams follow a multitude of patterns, including threatening (Valli et al., 2005) and otherwise emotionally-charged situations (Cartwright et al., 1984). Occasionally, recognizable dream characters may change appearance or appear as a generalized entity, fused with features of other individuals.
Also of interest is the prevalence of "mentalizing" or use of "theory of mind" in dreaming—i.e., thinking about others' thoughts, emotions and motivations (even though the "others" are of course merely imagined) (McNamara et al., 2007). In general, meaningful interactions with others may be one of the key factors guiding the progression of the dream narrative.
First-person reports of MW often involve imagined social interactions with others, as well as thoughts about the intentions and beliefs of other people (Klinger, 2008). This has led to the general notion that "mentalizing" (i.e., thinking about the thoughts and minds of others) and the consideration of hypothetical social situations may be key components of spontaneous thought (Buckner et al., 2008; Andrews-Hanna, 2012). Supporting this idea, numerous studies have found that brain activity underlying "theory of mind" and mentalizing overlaps significantly with DMN regions (see Buckner et al., 2008, for a review).
COGNITIVE CONTROL AND METACOGNITION
A singular aspect of dreams is the seemingly total lack of metacognitive awareness in the dream state. One experiences a complex simulation of oft-bizarre experiences, but without the overt
capacity to reflect on the bizarre state of affairs the mind and body are actually in (see, e.g., Rechtschaffen, 1978). Intriguingly, it appears that well-trained, or talented, individuals can develop metacognitive awareness of the dream state, becoming "lucid" in the dream and sometimes even directing its course and content (Dresler et al., 2012).
The exceptional nature of "lucid" dreaming, however, serves to prove the rule of the general lack of control and metacognitive awareness in ordinary dreaming, a characteristic likely attributable to the deactivation of numerous prefrontal cortical regions during REM sleep (see our results in Table 2 and Figure 1; also Hobson et al., 2000; Muzur et al., 2002).
A lack of explicit goals, and an unawareness that one is even daydreaming or has deviated from the task at hand, are typical of MW (Schooler et al., 2011). But although MW tends to be less characterized by intentional thought and self-reflective awareness, this is not always the case.
A recent study from our group, for instance, found that subjects who were probed at random intervals reported being unaware that they had been mind wandering about half (45%) of the time.
One's impression of the "controllability" of a segment of MW also varies widely, from a sense of being able to end it at any time, to being completely absorbed in and swept along by a daydream (Klinger, 1978, 2008; Klinger and Cox, 1987; Kroll-Mensing, 1992; Klinger and Kroll-Mensing, 1995).
Collectively, these results suggest that cognitive control and metacognitive awareness in MW lie somewhere between the relative lucidity and self-reflectiveness of normal waking thought and behavior, and the near-total lack of control and metacognitive nescience characteristic of regular (i.e., non-lucid) dreams. See the Discussion for an elaboration of this theme.
What is a VPN? A VPN, or “Virtual Private Network” is a technology that creates an encrypted, private connection on a public network so that data can be sent and received with an extra layer of security.
But how does a VPN work and why should you worry about personally using one? If you’re concerned about your online privacy and security, this is an important question to answer.
Be sure to subscribe to the All Things Secured YouTube channel!
It is estimated that there are close to 4.57 billion people accessing the internet globally each month. That number continues to climb at a crazy rate.
Like it or not, the internet is here to stay.
With this increase in global internet literacy, it seems logical that online security would be a top priority for users…and yet this doesn’t always seem to be the case.
As you can see in this insightful online security infographic, even though more than 50% of consumers recognize a need for a VPN, less than 30% actually use one.
Strangely, the highest adoption rates come from some unlikely countries. You can see the comparison between VPN usage in the USA and UK and other countries.
Our ignorance of VPNs reflects our broader lack of understanding of internet security. Downloading an antivirus program or just using a good password manager app and hoping for the best is the common approach to internet security.
In this article, we’re going to cover the following topics.
Understanding what a VPN is, how to use one and even which are the best personal VPNs is becoming more important with each passing day.
Note: Some of the links in this article may be affiliate links, which means that at no extra cost to you, I may be compensated if you choose to use one of the services listed. I only recommend what I personally have used, and I appreciate your support!
What is a VPN or “Virtual Private Network”
Let’s begin by giving a simple answer to the question “What is a VPN?”, one that even non-tech savvy people can understand.
A VPN, or Virtual Private Network, is a secured, personal express lane on the internet highway. Instead of having your internet activity travel on public roads like everybody else, a VPN connects you to your destination via your own personal tunnel (encryption) that nobody else can use or see, giving you a more secure and private experience.
Now I’ve already explained this earlier, but let’s start here with what the letters “VPN” stand for.
- V – Virtual
- P – Private
- N – Network
When we access the internet on our phone, tablet or computer, we are entering a massive public network that connects the entire world.
As confusing as this sounds, you can think of the internet like the highway system that connects different cities across the country. You travel with hundreds of other vehicles on public roads and they can see exactly where you’re going.
But what if you had the ability to drive through your own private tunnel instead of on the highway?
You’d probably take that, wouldn’t you?
I know I would. And that’s exactly what a VPN does for you.
On the internet highway, a VPN creates a dedicated lane that is encrypted so that nobody else can see what you’re doing, where you’re going or where you’re coming from.
Although it can be used for a number of different purposes (bypassing censorship, privacy, security over public networks, etc.), the main function of a VPN is encryption.
The VPN server is like your identical twin in another location. Connecting to a server in another country makes it seem as if your location is in that other country.
How Does a VPN Work?
A virtual private network operates by giving you encrypted access to a computer server operated by the VPN provider. As mentioned above, this connection allows you to browse the internet in “stealth mode” by building a secure tunnel between you and the internet.
If you want another analogy, you can think of the internet as a cloud. The secure tunnel is the secure connection between you and the remote server.
When you first download and open a VPN client (software that helps you connect to a server), you'll have the option to choose from one of hundreds or thousands of servers.
These servers are located all across the globe, as you can see in this example from the NordVPN software.
When a selection is made, the encrypted tunnel is created between your location and the location of the server. There are a number of ways to build this tunnel, known as “VPN connection protocols”. Some of these protocols offer greater security while others offer more speed (more on this below).
When connected to the server, you are now accessing the internet from the location of the server, not your physical location.
Benefits of Encryption & Location Masking
Perhaps you understand the analogy of the internet like a highway system or a cloud, but you still don’t understand why you would need one.
Don’t worry, you’re not alone.
Thankfully, the benefits are easier to understand than you might imagine. Let’s look at a few of the most important benefits:
- Internet Privacy: When you connect to the internet, especially on a public network (i.e. airport or coffee shop WiFi), there is a risk when transmitting your internet data. While using https websites already encrypts your data, using a VPN adds an extra layer of protection and a safety net in case you access an http website by accident.
- Location Spoofing: The ability to access the internet from what appears to be a different location is particularly useful for content that is geo-blocked or geo-restricted. Let's say you're traveling to Asia for a trip but you still want to watch your favorite movies or TV shows. In this case, a virtual private network will allow you to stream Netflix even in a country like China, where access to the service is blocked.
- Anonymity: Did you know that when you connect to the internet, your computer is assigned a number (known as an IP address) that provides a lot of information about your location? Using a VPN hides this IP address, thus providing you with more anonymity as you surf the internet.
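If you're curious what address websites actually see, you can check your public IP before and after connecting to a VPN server. Below is a minimal Python sketch that queries a public IP-echo service (api.ipify.org is used here as an example endpoint; any similar service would work). Run it with the VPN off and then on, and the reported address should change to that of the VPN server.

```python
import requests

def public_ip() -> str:
    # Ask a public IP-echo service which address our traffic appears to come from.
    # api.ipify.org is one such service; substitute any equivalent endpoint.
    return requests.get("https://api.ipify.org", timeout=10).text.strip()

if __name__ == "__main__":
    print("IP address websites currently see:", public_ip())
```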
Maybe you won’t be traveling internationally and you don’t care if somebody knows the location of your computer.
However, in this age of government spying and identity theft, if you ever find yourself connecting to a public network…
…a VPN is an extremely important consideration.
Why aren’t more people using a VPN?
At this point, you must be wondering why everyone isn’t using a VPN. The use of a Virtual Private Network isn’t as ubiquitous as it should be because:
- It slows down your internet speed. It takes a lot of processing power to encrypt and reroute internet traffic. This is especially true if you’re using higher encryption. The additional route your data has to travel through another server also results in reduced speeds. The greater the distance between your location and the location of the virtual private network server, the slower the speed.
- Good personal VPNs cost money. While there are a limited few free VPNs that I recommend, I tell people to avoid most of them. Most of the free options either show ads or sell your information, and some do both. Additionally, these free services only hide your IP address; they don't protect your privacy. It is, therefore, better to subscribe to a premium service, which will charge a nominal fee of $2-$5/mo. Nevertheless, the cost is still a deterrent for some people.
If you’re experiencing reduced internet speeds, you’ll want to read through these 5 steps you can take to increase speeds while using a VPN.
Understanding VPN Connection Protocols
The tunnel created between your device and the virtual private network server is built using what is known as a “connection protocol”.
If you need more details, read more about these connection protocols here. Each protocol has its own strengths and weaknesses and not every software gives you the option to choose how you connect.
The bottom line is this: Whether you realize it or not, you have to use a connection protocol to connect to a server. So what options are there to choose from?
Most Common VPN Connection Protocols
Although there are a number of protocols available, we’re going to cover five of the most common connection protocols, and one newer protocol:
- OpenVPN: This is the most secure connection protocol and therefore the most recommended if you have a choice. One reason for this is that it is the only open source VPN connection protocol.
- PPTP: This stands for “point-to-point tunneling protocol”. It is a faster connection but also not quite as secure as OpenVPN.
- L2TP/IPSec: These stand for "Layer 2 Tunneling Protocol" and "IP Security," respectively. It's considered slightly more secure than PPTP but slower than OpenVPN.
- SSTP: This stands for “Secure Socket Tunneling Protocol”. This protocol was developed for the Windows operating system, so it mostly only works on Windows computers.
- IKEv2: This stands for “Internet Key Exchange version 2”. Works best on mobile devices and is considered highly secure.
OpenVPN is the industry standard and the most secure option commonly found in commercial VPN software. In most cases, your provider will default to an OpenVPN connection.
However, there is a newer security protocol called WireGuard that is quickly becoming even more popular than OpenVPN. It’s fast, lightweight and considered to be even more secure than all the older options.
There are some companies like Mozilla that only offer WireGuard as a connection option, and a growing number of VPNs now offer WireGuard. In most commercial services, though, you'll probably have a choice between a few different connection protocols.
Brief History of Virtual Private Networks
The history of the Virtual Private Network dates back to 1996 when Gurdeep Singh-Pall invented the Point-to-Point Tunneling Protocol (PPTP).
His invention was motivated by the desire to find a way for people to work remotely on company tasks over a secure internet connection (can you imagine a world before remote work?!).
In its early days, a virtual private network was exclusively used by corporations primarily for remote working purposes.
Virtual Private Networks have been around for a long time and are widely used.
Although this security feature has its roots in the corporate world, it has now become more popular among personal users. As hacking and identity theft have become more common, the use of encryption software has risen.
Are VPNs Legal to Use?
One final question that most people tend to ask is whether or not they are legal to use.
A virtual private network can be used for privacy and security, but it can also be used to hide illegal activity. Understanding this is key to answering this question of legality.
A VPN itself is not illegal…but it is often used for illegal activity.
In most countries, we are given the right to basic privacy. This includes a right to use a tool like a virtual private network, as long as we aren’t doing illegal activities (copyright infringement, unethical hacking, etc.).
There are some countries, however, where this is a bit of a grey area. These countries include China, Iraq, and many others in the Middle East.
The simple answer is that yes, it is legal to use. However, there is a more nuanced answer to the legality of VPNs that you can read as well.
How to Install a VPN on Your Device
Thankfully, installing the software on your computer, tablet or phone isn’t as daunting as you might first think.
Sure, there are manual ways to set up a VPN that are difficult to understand, but there are also automatic ways that even a 5th grader could do. I’ve developed a number of helpful tutorials to guide you through the process.
We’re going to cover the two most common places you’ll be installing this software: on your computer and on your phone.
Installing a VPN on Your Computer
To set up a virtual private network on your computer, you’ll want to follow the steps outlined in this video published on the All Things Secured YouTube channel.
As you can see, the process is fairly seamless and takes less than 15 minutes if you’re using a premium service such as the ExpressVPN example.
How to Install a VPN on an iPhone iOS
Installing a virtual private network on an iPhone is also the same as installing on your iPad. And honestly, since we’re talking about apps here, it’s practically the same for Android as well.
There are three primary ways that you can set it up on your iPhone:
- via an App
- via your Internet Browser
Watch this video for an example of each.
In this video, I use the NordVPN app as an example, but the setup process will feel familiar no matter which you decide to use.
Final Thoughts | Securing Your Internet
A VPN is important for your internet security, especially if you travel frequently and use public Wi-Fi. It also allows you to avoid government censorship and utilize online services that are restricted to particular geographical locations.
For recommendations, check out our list of the best VPNs on the market today.
Great strides have been made towards creating highly secure VPNs that provide you with a private and secure internet connection. A service that uses OpenVPN, charges a monthly subscription fee, and guarantees your internet privacy and security is one worth exploring.
You don’t need to be a computer expert in order to access and effectively use this security software.
Stop making excuses. Find a secure VPN provider today. | <urn:uuid:d82dd51b-36cf-4397-a133-5caa7058f038> | CC-MAIN-2022-40 | https://www.allthingssecured.com/vpn/faq/what-is-vpn/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00215.warc.gz | en | 0.936586 | 2,934 | 2.5625 | 3 |
Rapid technological innovations are changing our present and our perspectives for the future.
The innovative technologies such as IoT, machine learning, artificial intelligence, and big data have revolutionized the way organizations conduct business in the digital landscape.
From financial institutes to the automotive sector, industries are increasingly relying on these evolving digital technologies to create value.
These technologies help develop entirely new businesses and revenue streams or deliver a more efficient experience for consumers.
However, these new opportunities bring a radically different set of challenges, which businesses need to mitigate and manage to stay ahead in the data-driven market.
One of the severe challenges of the digital age is the growing cybersecurity risks.
At this juncture, we bring you three technological innovations and trends that will shape the future of digital security.
3 Emerging Technologies That Impact Cybersecurity
- Quantum Computing
Present-day computers store and process information using bits, represented as 0s or 1s. Quantum computers, by contrast, leverage quantum mechanical phenomena such as superposition and entanglement to manipulate, store, and process data. Quantum computing relies on qubits (quantum bits) instead of bits.
These properties allow quantum computers to spur the development of breakthroughs in artificial intelligence, machine learning and robotics, among others.
Despite the ongoing experimental progress since the early 1980s, it is believed that quantum computing is still a rather distant dream. However, scientists have made significant progress in recent years.
- In October 2019, in partnership with NASA, Google AI announced that they had performed a quantum computation that is infeasible on any classical computer.
- Likewise, researchers in UC Santa Barbara used 53 entangled qubits to solve a problem in just 200 seconds that would have taken 10,000 years on a classical supercomputer.
Nevertheless, the developments raised immediate concerns for cybersecurity experts, who claim that quantum computing could easily break the current day encryption practices.
Experts worry that the Public Key Infrastructure (PKI) systems currently in use could easily collapse once public keys become vulnerable to attack by quantum computers.
However, it remains uncertain how the cybersecurity community will address these security risks.
- 5G Technology
5G is the most anticipated technology owing to its lucrative benefits including high-bandwidth, low latencies, network slicing, and high data speeds.
Even though 5G technology is still at an early stage, the pace of development and deployment has accelerated rapidly. However, it also poses greater security challenges.
Extremely high-speed data could make 5G devices and IoT endpoints more susceptible to Distributed Denial of Service (DDoS) attacks.
In fact, according to a recent report, 62% of organizations are concerned that 5G could increase the risk of cyberattacks.
- IT/OT Convergence
The rapid penetration of IoT technology has led to the convergence of two distinct domains of a business, Operational Technology (OT) and Information Technology (IT).
IT/OT convergence is the amalgamation of IT systems used for data-centric computing and OT systems used for monitoring events, processes, and devices. This brings in a host of benefits including reduced operational costs, increased manufacturing output and reduced downtime.
According to Gartner, 50% of OT service providers would collaborate with IT-centric providers for IoT offerings by 2020.
However, this trend brings a need for a new “ITOTSecOps” methodology that explicitly addresses security risks associated with IT and OT systems working together.
With IT/OT convergence, IT security teams often lack visibility across their entire IT and OT infrastructure, as well as control over security policies.
Cybercriminals are leveraging innovative techniques to gain unauthorized access to networks and steal sensitive data.
The new technologies that have just emerged in the market are a boon to cybercriminals. They capitalize on organizations' lack of understanding of how new technologies work and on the security loopholes in those technologies.
So, organizations must stay abreast of the emerging trends and understand how they impact their security posture. | <urn:uuid:bb94506c-b6b0-448b-9334-9d70300170de> | CC-MAIN-2022-40 | https://informationsecuritybuzz.com/articles/top-3-emerging-technologies-that-define-future-cybersecurity/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00215.warc.gz | en | 0.9311 | 817 | 2.96875 | 3 |
Security of IoT: 5 indisputable facts
The Internet of Things (IoT) continues to become more embedded in our everyday lives, adding value to business and augmenting society for human and environmental gain. As we become more dependent on IoT it is important we prioritize cyber security to protect our investments and competitive advantages for businesses and for individuals across the globe.
Sensors and Internet of Things devices are making data collection inevitable and limitless. From emails and pictures through to factories, buildings, cars and turbines, it is predicted that by 2020 over 50 billion connected devices will be in use globally, creating 44 trillion gigabytes of data every year. Cognitive systems will improve our ability to use the information collected from the vast volumes of data to help us make informed decisions. It’s changing the way we live and work for the better. However, if data is simply left to accumulate unmanaged, it can eventually cause more harm than good.
The average person may not necessarily be aware of the risks that come with IoT devices, meaning they don’t always take the best precautions, allowing hackers and cyber criminals to take advantage, making breaches inevitable. It is therefore the responsibility of all stakeholders involved in the IoT ecosystem, from silicon designers, device manufacturers, vendors, solution providers and end users to all take a stance on security for IoT.
As data becomes more accessible, helping us to make crucial decisions, there are five facts about cyber security we need to be aware of to prevent cyber-attacks, loss of data, and a plethora of other issues.
Learn more about IoT security
- To learn more about how IBM can help your organization improve its security environment and take advantage of IoT technology visit our website or to see more on IoT and security, visit here.
- To read further about how to protect and clean out your data for better security, head to this blog.
- Download the infographic here. | <urn:uuid:29b8208e-4507-4cb3-884c-8aa648e812e7> | CC-MAIN-2022-40 | https://www.ibm.com/blogs/internet-of-things/iot-facts-security/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00215.warc.gz | en | 0.92086 | 383 | 2.6875 | 3 |
The Internet Security Research Group (ISRG) originally developed the Automated Certificate Management Environment (ACME) protocol for its public CA, Let’s Encrypt. ACME drives Let’s Encrypt’s entire business model, allowing it to issue 90-day, domain-validated SSL certificates that can be renewed and replaced without the website owner’s intervention.
The objective is to set up an HTTPS server that will automatically obtain trusted certificates without any human intervention.
Table of Contents
The IETF later standardized the Automated Certificate Management Environment (ACME) protocol for automatic certificate management. The ACME protocol provides an efficient way to validate that a certificate requester is authorized for the requested domain and automatically installs the certificates.
This validation is performed by requiring the requester to place a random string (provided by the CA or certificate manager) on the server for verification via HTTP or in a text record of the server’s Domain Name System (DNS) entry. Client programs, such as Certbot, can automatically perform all of the operations needed to request a certificate—minimizing the manual work. Let’s Encrypt, and several other public CAs support public-facing certificates’ automated management by using the ACME protocol. However, public CAs cannot perform ACME validation for certificates installed on systems inside organizational networks. External entities cannot make HTTP or DNS connections to internal systems. The certificate manager can make internal HTTP and DNS connections and be used for ACME-based certificate management on internal networks. A variety of CAs, certificate managers, and clients across a broad set of TLS servers and operating systems support the ACME protocol, which gives it an advantage. A disadvantage of ACME is that there is no primary method for triggering a certificate replacement in response to a certificate event (e.g., CA compromise).
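To make the HTTP-based validation concrete, here is a minimal Python sketch (using the requests library) of roughly what a CA or certificate manager does for an HTTP-01 style check: it fetches the random token from the well-known challenge path on the requester's server and compares the response body to the expected value. The function name and exact comparison are illustrative assumptions rather than a normative implementation.

```python
import requests

def http01_challenge_passes(domain: str, token: str, expected_key_auth: str) -> bool:
    """Fetch the challenge token over plain HTTP and compare it to the expected value."""
    # ACME HTTP-01 challenges are served from the .well-known/acme-challenge/ path.
    url = f"http://{domain}/.well-known/acme-challenge/{token}"
    resp = requests.get(url, timeout=10)
    return resp.status_code == 200 and resp.text.strip() == expected_key_auth
```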
ACME defines an extensible framework for automating the issuance and validation process of these certificates. The servers are allowed to obtain certificates without any human intervention.
ACME Protocol Model
ACME servers run at Certificate Authorities (CAs) and respond to a client's requests if the client is authorized. The client uses the ACME protocol to request certificate management actions. ACME clients are represented by "account key pairs": the private key is used to sign all messages to the server, and the ACME server uses the public key to verify the authenticity of those messages and ensure their integrity.
How ACME Protocol Works
An ACME server needs to be appropriately configured before it can receive requests and install certificates. Steps to set up ACME servers are:
- Setting up a CA: ACME is installed at a CA, so we need to choose a CA for the domain where we want ACME to be available.
- Enter the domain where ACME will be installed
- Choose on which CA it will be installed
- The client contacts the CA and generates an authorized key pair
- The CA issues DNS or HTTP challenges that the client responds to and solves to prove authority and control over the domain.
- The CA also sends a nonce (a random number), which is signed using the client’s private key and sent back to the CA for verification.
This concludes the setting up of ACME. Post-installation, the automation would begin to work. There are a few steps that ACME takes:
- Issuing/Renewing Certificates: ACME has the authority to issue or renew certificates for authorized users. First, the client (or agent) generates a Certificate Signing Request (CSR) and sends it to the CA. Because the CSR is signed by the agent, the CA can confirm that the request is genuine and comes from that agent. After verification, the CA issues the certificate for the domain and returns it to the agent.
- Revocation: As with issuance, the agent signs a revocation request and sends it to the CA. The CA confirms the request’s authenticity and then revokes the certificate, publishing the revocation via CRLs, OCSP, etc., for the rest of the PKI infrastructure.
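As an illustration of the issuance step, the sketch below generates a certificate key pair and a signed CSR using Python's cryptography library (recent versions). The domain name is a placeholder, and a real ACME client would additionally sign its protocol messages with the separate account key.

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Key pair for the certificate itself (distinct from the ACME account key).
cert_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.com")]))
    .add_extension(x509.SubjectAlternativeName([x509.DNSName("example.com")]), critical=False)
    .sign(cert_key, hashes.SHA256())  # the signature lets the CA verify the request is intact
)

# PEM-encoded CSR, ready to be submitted to the CA.
print(csr.public_bytes(serialization.Encoding.PEM).decode())
```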
ACME Protocol Functions
ACME uses various URLs and resources for different management functions it can provide. Some functions include:
- New Nonce
- New Registration
- New Application
- New Authorization
- Revoke Certificate
- Key change
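Each of these functions is exposed as a URL that the client discovers from the CA's directory object. As a hedged example, the snippet below fetches Let's Encrypt's production directory (endpoint current as of this writing) and prints the URLs it advertises; the exact key names follow RFC 8555 and may differ slightly from the labels listed above.

```python
import requests

# Let's Encrypt v2 production directory; other ACME CAs publish their own directory URLs.
DIRECTORY_URL = "https://acme-v02.api.letsencrypt.org/directory"

directory = requests.get(DIRECTORY_URL, timeout=10).json()
for name in ("newNonce", "newAccount", "newOrder", "revokeCert", "keyChange"):
    print(f"{name}: {directory.get(name)}")
```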
ACME provides an automated way to issue certificates and revoke them quickly, without human error. Apart from this, there are a few other advantages to look out for:
- ACME is free, which lets any domain owner get a trusted certificate at no cost.
- As previously stated, the ACME automates the certificate lifecycle with no human error.
- ACME can be used by anyone, which supports uniform protocols for all functions instead of separate APIs.
- ACME is backed by the open-source community, which benefits the ecosystem as a whole and helps security-focused projects grow.
- In case of a compromise, ACME can help quickly mitigate the issue, replace the old certificates with new ones, and switch to a new CA. | <urn:uuid:d430a214-432f-45af-bd0b-9a4074097c0c> | CC-MAIN-2022-40 | https://www.encryptionconsulting.com/education-center/what-is-acme-protocol/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00215.warc.gz | en | 0.88588 | 1,083 | 2.734375 | 3 |
What is an M.2 SSD?
M.2 solid state drives (SSD) are internal circuit boards that embed flash memory chips and controllers on thin circuit boards with a range of lengths and widths. M.2’s small and light form factor enables flash manufacturers to minimize flash storage sizes in laptops and workstations, and more recently in high-performance servers.
Under its original name Next Generation Form Factor (NGFF), M.2 set out to replace mSATA drives, which are thin circuit boards that fit onto a logic board or motherboard. The mSATA standard uses PCIe Mini Card design and connectors to enable SSDs on thin laptops. Some devices still use mSATA, and M.2 has a SATA connector. However, the SATA designs have a top throughput speed of 6.0Gb/s, and M.2 is capable of much faster speeds with faster buses like PCIe x4.
As noted above, the M.2 is a solid state drive; a key fact about this storage technology is that it lacks an enclosure. It’s a small form factor circuit board that is internally mounted in a laptop, desktop, or server. M.2 can connect to multiple types of buses, which are the data pathways between the M.2 and connected computing components. The most common M.2 buses include SATA 3.0 and PCI Express (PCIe); and USB 3.0, which is backwardly compatible with USB 2.0.
The M.2 spec allows for different physical sizes and storage capacities. Speed depends on the type of connected bus: M.2 SATA can support speeds up to 6.0Gb/s, and up to 1GB/s per lane with PCIe x4 and NVMe. The spec also covers two M.2 controllers: AHCI for SATA compatibility, or NVMe for high SSD performance.
Also see: Best Solid State Drives.
What are the types of M.2 Connections?
In server environments, the most popular connection types are SATA and PCIe.
M.2 SATA SSD
- Improves on mSATA standard. M.2 was developed in response to the mSATA standard. M.2 standardized SATA into its new drive interface under SATA 3.2 specifications. The M.2 spec reduced length and width measurements while doubling storage capacity using more flash memory chips.
- Still supports mSATA. Although M.2 SATA is distinctly slower than M.2 PCIe, SATA is widely deployed in enterprise applications and industrial settings. M.2 enables compatibility with SATA in a hot-swappable, very small form factor.
The PCIe spec has been around for a while: standards group PCI-SIG introduced PCIe 1.0a in 2003. Today its most popular version is 4.0, and PCI-SIG has ratified PCIe 5.0.
- PCIe interface standard. The PCIe standard connects high-performance computing components, including M.2 SSDs. Serial connections contain from 1 path to multiple data transmission paths, called lanes. Each lane independently connects the controller to another computing component, so a 4x lane transports data 4 times faster than a single lane.
- PCIe and M.2. M.2 PCIe SSDs use PCIe data transfer lanes to accelerate data transfers. The NVMe controller connected to PCIe 4.0 x4 enables the M.2 SSD to achieve very high performance. The configuration scales performance up to intensive database workloads with high transfer speeds and potentially thousands of processing queues. There is also a distinct performance boost depending on the number of PCIe lanes. An M.2 SSD with an NVMe controller on a PCIe x2 link transfers data at about 15.75Gb/s. But double x2 to x4, and the speed doubles to about 31.5Gb/s.
- Clearing up the confusion. Sometimes an M.2 running PCIe and NVMe may be called an “M.2 PCIe SSD” or an “M.2 NVMe SSD” as if they were different products. They’re usually not, but this takes some explaining. NVMe is a data transfer protocol that uses a PCIe bus. The M.2 spec allows manufacturers to put an NVMe controller and PCIe x4 connector on an M.2 card. However, the NVMe/PCIe combination is not limited to M.2 form factors, and can run on other types of cards. M.2 NVMe and M.2 PCIe usually refer to the same type of SSD: an M.2 form factor running NVMe on a PCIe bus.
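To see how lane count drives those throughput figures, the short Python sketch below multiplies an approximate usable per-lane rate by the number of lanes. The per-lane values are assumptions (effective throughput after encoding overhead, roughly 7.88 Gb/s for PCIe 3.0 and 15.75 Gb/s for PCIe 4.0); real-world drive performance also depends on the controller and the flash itself.

```python
# Approximate usable per-lane throughput in Gb/s after encoding overhead (assumed values).
PER_LANE_GBPS = {"PCIe 3.0": 7.88, "PCIe 4.0": 15.75}

def link_throughput_gbps(generation: str, lanes: int) -> float:
    """Theoretical link throughput: per-lane rate multiplied by lane count."""
    return PER_LANE_GBPS[generation] * lanes

for generation in PER_LANE_GBPS:
    for lanes in (1, 2, 4):
        print(f"{generation} x{lanes}: {link_throughput_gbps(generation, lanes):.2f} Gb/s")
```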
M.2 Connections and their Usage:
|Module Key||Common Interfaces||Typical Usage|
|A||USB 2.0, PCIe x2||Wireless (Wi-Fi, Bluetooth)|
|B||SATA, PCIe x2, USB 2.0 and 3.0||SATA and PCIe x2 interfaces|
|E||PCIe x2, USB 2.0||Wireless|
|M||PCIe x4, SATA||PCIe x4 SSDs|
M.2 Size and Capacity
- M.2 sizes. M.2 SSDs are rectangular. They are 22 millimeters (mm) wide and usually 60 mm or 80 mm long, although there are also 30 mm, 42 mm and 110 mm length cards. Longer length M.2 drives usually hold more flash memory chips for extra capacity than the shorter versions. The card size is identified by a four- or five-digit number. The first two digits are the width and the remaining numbers are the length. For example, a 2260 card is 22 mm wide and 60 mm long. The most common dimensions are 2242, 2260, and 2280.
- Single or double? M.2 drives can also be single or double-sided. Most laptops use the 22 mm width and single-sided boards, whose thin profile fits into ultra-thin laptops. In environments where longer lengths would be an issue, the M.2 spec allows both single-sided and dual-sided versions to fit more components into a smaller length. The maximum thickness is 1.5 mm per side.
- Size = storage capacity. Servers may also use the 22 mm width, but are better fits for longer lengths and/or double-sided boards. These factors hold more chips, which increases M.2 storage capacity. For example, 80 to 110 mm lengths can house up to 8 NAND chips with a typical 1TB in storage capacity, although some manufacturers like Samsung have reached 2TB in capacity on the M.2 form factor.
M.2 Speed and Performance
The M.2 specification is used on SSDs with some fast read write times.
- It’s not the size. M.2 sizes have everything to do with capacity thanks to more flash memory chips, but have little to do with performance speed. M.2 SSD performance depends on the speed of the bus and the number of PCIe paths.
- A world of difference. A SATA M.2 is limited to SATA speeds, with typical SATA SSD read/write speeds coming in around 500-550MB/s. If the M.2 is attached to NVMe and PCIe, write speeds reach as high as 3500MB/s.
Keys and Sockets
- Module keys. M.2 form factors not only have different dimensions and sizes; they also come with different physical connectors. Each type of connector has a different module key, and each key exposes different interfaces. Keys can connect directly to USB 3.0 and SATA, but may need an adapter card for PCIe/NVMe. For example, the Samsung 970 EVO is an M.2 PCIe SSD that can connect directly to an M.2 socket, or it can use an adapter card to fit into a PCIe x4 slot.
- Keying connections. M.2 manufacturers key the SSD so users cannot insert the wrong M.2 card connector into the host’s card sockets. Key types are labeled on the edge connectors of the SSD, and help users to make proper connections. For example, M.2 SATA using AHCI can fit in B- and M-keyed modules. M.2 NVMe drives for PCIe x4 are only M-keyed. Different key configurations exist for different interfaces.
- Sockets. The sockets have their own keys, and are not interchangeable. Socket 1 serves wireless connections, Socket 2 serves other connections such as GPS solutions and cache configurations, and Socket 3 is the drive socket for M.2 SATA or PCIe SSDs.
Published 2 Years Ago on Friday, May 22 2020 By Mounir Jamil
Healthcare AI has made great strides in the past decades. From early AI-based cancer detection to computer-driven research and development, technology is bolstering the healthcare system for everyone’s benefit. However, there is a crucial aspect of the healthcare industry that AI cannot fill, and that is the relationship between doctor and patient.
Over the past years, health organizations, such as the FDA, have given clearances to many AI and other such technologies for use in Medicare, showing a clear shift in global perception towards these tools.
However, though the regulatory side of healthcare AI is pretty much won, there is another challenge. Founder and CEO of diagnostic AI company Cardiologs, Yann Fleureau, said during a HIMSS20 online seminar that the necessary next step is “adoption by the caregiver community”. Like any other technology, he adds, “it needs both extensive research and testing, academic proof-of-concept as well as real-world deployment before it can be relied upon for regular use.”
Fleureau also states that most of the healthcare AI tools, technologies and algorithms are made to extract hard data from large numbers, not to diagnose individual patients. Therein lies the other side of the coin. What does a patient do with all this data without the doctor’s guidance? “There are many decisions and questions in medicine for which there is no right or wrong answer.” says Fleureau.
“There are aspects of healthcare that can never be fully controlled by AI,” says Fleureau. He affirms that “the human-to-human, doctor-to-patient relationship in healthcare will always have a fundamental place.”
In addition, he believes the role of doctors in the near future will not only be to guide the patient, but to bear the responsibility of “transgression rights” towards his patient. In other words, the doctor can override AI, and go against its decision with full authority and with full responsibility.
In short, Fleureau believes there are two core principles that can never be replaced by healthcare AI: transgression rights, and empathy, the trust and care between doctor-and-patient.
Almost two years have passed since the beginning of the COVID-19 pandemic that reshaped the world as we knew it. The virus responsible for the disease not only infected millions of people around the world, but also changed the way people live, work and spend their free time. Despite important progress in delivering vaccines and medicines to those who need them, the pandemic remains one of the most important issues the world still faces. However, new antiviral pills might provide people everywhere with good reason to hope that their lives will soon return to normal.
Companies like Merck and Pfizer have already announced that new antiviral pills to treat COVID-19 will soon be available by prescription, while other major pharmaceutical companies are also working on developing and delivering new drugs. Moreover, the Food and Drug Administration (FDA) is expected to soon authorize the two pills, and to continue to work with the US government and other partners in finding new solutions to address the global crisis. Although vaccines, social distancing measures, and the wearing of face masks are now the best methods available for fighting the pandemic and the virus that caused it, antiviral pills might soon change the game.
Why do we need antiviral pills?
Since the COVID-19 pandemic began almost two years ago, people have discussed the importance of developing and delivering safe and effective vaccines. However, vaccines are designed to protect people before they have been exposed to a virus, which makes them of little or no use for those who have already contracted SARS-CoV-2. Unlike vaccines, antivirals are drugs or treatments that prove effective against viruses, drugs that can help the body fight and eliminate those viruses, reduce the symptoms of a viral infection, and ultimately shorten the length of the disease.
In October, Merck announced that a new antiviral pill seemed to prevent severe disease if administered within days since the COVID-19 symptoms appeared. According to the company, the new antiviral called Molnupiravir lowered the risk of hospitalization or death by half, when compared to placebo. As scientists and doctors around the world started to look at the new pill as a potential game-changer for the pandemic, Pfizer quickly followed with another good news. According to a study, Pfizer’s new antiviral pill Paxlovid also reduced the risk of hospitalization or death by an incredible 89%, when compared to placebo.
Before and after the two new antivirals
According to scientists, only a handful of drugs are actually effective in treating COVID-19. Until the development of the two new antivirals, doctors could only use antiviral monoclonal antibodies to treat patients who are not hospitalized. Moreover, antiviral monoclonal antibodies administration proved to be a difficult thing, because all patients who receive them have to be monitored by a medical professional in a clinic or in an otherwise controlled environment. This makes access to monoclonal antibodies difficult, simply because many patients don’t have access to an administration site nearby. Unlike antiviral monoclonal antibodies, antiviral pills like Molnupiravir and Paxlovid can be used at home.
Maybe even more important may be the fact that some of the new antiviral pills now in development or in testing may prove effective against new variants of the virus that causes COVID-19. Pfizer already announced that not only does Paxlovid have near 90% efficacy in preventing hospitalizations and deaths in high-risk patients, but a recent study also suggests that this new pill is also effective against the new Omicron variant of the virus. Although no pill is yet approved as an effective antiviral treatment for COVID-9 in the US, both Pfizer and Merck have submitted applications for Emergency Use Authorization (EUA) to the FDA recently.
If authorized, the two antiviral pills developed by Merck and Pfizer could be not only the first two oral antiviral drugs used for the treatment of COVID-19, but also two strong weapons to be used in the fight against the pandemic. Although vaccination will probably remain essential for preventing the disease and for slowing the spread of the virus, only an effective antiviral treatment can help those who are already suffering from COVID-19. Furthermore, the two antiviral pills will probably join vaccines, social distancing measures and masks wearing in forming the necessary tools we need to finally stop this global crisis. | <urn:uuid:94ba84a6-a490-4a97-8b6b-1436202e272d> | CC-MAIN-2022-40 | https://biopharmacurated.com/could-antiviral-pills-change-the-covid-19-pandemic/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00415.warc.gz | en | 0.954975 | 914 | 3.1875 | 3 |
A network protocol for data transmission in encrypted form.
SSH is used as a tunnel for other protocols (for example, TCP), which allows almost any content to be sent through it. SSH creates secure channels for password transfer, video streaming, and remote system control. An important feature of the protocol is data compression.
A disadvantage of SSH is the weak level of protection against intruders with root privileges. | <urn:uuid:049cf7b5-33c0-402c-873e-a21d02cc48da> | CC-MAIN-2022-40 | https://encyclopedia.kaspersky.com/glossary/ssh/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00415.warc.gz | en | 0.915539 | 82 | 3.34375 | 3 |
At most any time of the day, there's a distributed denial-of-service (DDOS) attack underway somewhere on the Internet.
Yes, it's still true, despite reports that some ISPs have experienced fewer DDOS attacks overall during the last six months. It's a matter of quality, not quantity: "When DDOSes do occur, they are done with much greater purpose than they used to be," says Rodney Joffe, senior vice president and senior technologist for Neustar, a directory services and clearinghouse provider for Internet industry. "They are usually to obscure what's [really] happening in the background."
Phishing and pharming are more lucrative for cybercriminals, he says. "So they are using DDOS strategically" instead of as the main attack mode, he says.
ISPs consider DDOS attacks -- where an attacker floods network connections, Websites, or systems with packets -- one of their biggest threats. Most of these attacks are being waged by botnets -- some as large as tens of thousands of bot machines, according to a recent survey of ISPs by Arbor Networks. Arbor found an average of 1,200 DDOS attacks each day across 38 ISP networks. On 220 of the last 365 days, there has been at least one DDOS attack of one million packets per second, says Danny McPherson, chief research officer for Arbor Networks. (See Report: Attacks on ISP Nets Intensifying.)
Just like botnets, DDOS attacks have become stealthier and tougher to trace than ever, with layers of bot armies disguising the original source. "Tracing a DDOS is a particularly vexing problem, with the whole notion of obfuscation and onion routing [techniques]," says Steve Bannerman, vice president of marketing and product management for Narus. (See ISPs Try on Anti-Botnet Services Model.)
And finding the origin of the attack is becoming more important than ever. Some DDOSes won't die if you don't really get to the source. "It's critical to ID the source in some cases -- not just because [you] want to know who's behind it, but [you] can't actually stop the attack" until you do, Joffe says.
But finding the source isn't as simple as identifying the IP addresses of the actual bots that sent the packets. "In a large-scale DDOS, you don't initially ID the source, because it's often innocent," he says. "It tells you these 25,000 machines worldwide are the source of this attack, but it's a giant problem to track the owners of all those machines and get them to stop. Almost without exception, they are innocent owners who have no idea -- and would not know how to turn [the attack] off."
There are three main stages of mitigating a DDOS attack. The key is for ISPs to stop the damage, while at the same time carefully peeling back the layers of the attack to be sure they actually get to the root of it.
Stage 1: The First Five Minutes
Like any attack, it's the first few minutes that are the most crucial to minimizing the damage -- and getting the victim organization back online if the attack has overwhelmed its connection. "This requires a well-oiled group to react -- to spot it and push mitigation in place in real-time," Joffe says.
Devices like Arbor Networks's IPS can filter out the bad traffic at the edge. "This allows you to push the attack back upstream through the major backbone providers, where you can once again begin to operate" normally, he says.
It's in this phase that the ISP can trace the direct attackers, usually the clueless, infected bots that launched the packets at the victim. But these decoys are just the first layer of the attack. "How do you contact those 35,000 machine owners somewhere in the world? That would take a few weeks," he says. "But the problem is in the first five minutes."
That's if you can trace the bots at all: Many sophisticated botnet operators hijack so-called "darknet" IP addresses -- the unused IP address space held by ISPs -- to make them more untraceable. "When you try to trace it back, you find the addresses were hijacked, so you don't know who the attacker is," Narus's Bannerman says. So Narus's system monitors traffic for so-called "hijacked prefixes," he says.
Still, the priority of the enterprise under siege isn't identifying the bad guy -- it's ending the attack. "They are less concerned about the source of the attack and taking any other [investigative] actions -- which lead into forensics and legal, which may be futile," says Cecil Adams, senior product manager for Verizon Business, which offers a DDOS service. "They look to us to stop the attack... We make sure all the links are not congested with it."
That means filtering out the malicious traffic, and also working with other ISPs. Verizon can identify if an attack is spoofed, or if it originated from another provider or a third party, Adams says. "Then we can close out the botnets generating the traffic," he says.
Stage 2: The First Hour
Once the attacking packets have been blocked and the victim is recovering, it's time to trace the command and control infrastructure behind the DDOS-attacking botnet. "This is not as easy as it used to be," Neustar's Joffe says. Botnets are increasingly using encrypted links and peer-to-peer connections rather than the more conspicuous Internet Relay Chat (IRC) channels that are often used for nefarious purposes.
"This [stage] requires a lot more resources, cooperation, and knowledge," he says.
It's in this stage that ISPs and researchers look at things from the point of view of the target of the DDOS. Who might be a logical attacker? A competitor? A crime ring that's been waging these attacks regularly?
Neustar lurks in underground chat sites to check for any hints or intelligence on the attacks or who might be behind them. And it tries to track how the bots are getting their orders, and over what communications channel.
"If we can disrupt that particular channel, it may have the ability to stop the attack more easily than trying to shut down 35,000 bot machines," he says. And that means going after the second layer of the command and control infrastructure. "They tend not to use their own machines in any part other than at the initial site to communicate," he says. "Ten or 15 machines actually operate as the controllers... We can contact those owners and ISPs can block those machines. That's more manageable."
Stage 3: The Investigation
Putting a face to the bad guys behind the attack is the stage where most ISPs prefer to defer to law enforcement and security researchers. They cooperate with law enforcement, but must be mindful of NDAs and privacy concerns with their customers.
"A lot of this doesn't get reported," Arbor's McPherson says. "Most of the time, network operators don't want to be party to that [law enforcement investigation]. ISPs typically aren't the actual target, and they [have to protect] customer data. And the victims don't want to report an attack because it could damage their reputation."
Not only that, but there's not one central place to report a DDOS attack, he says.
And even in the aftermath of a DDOS, it can take hours or days to determine the real objective of the attack, which is typically a diversion for a backdoor and a more dangerous targeted attack. "More often than not, you discover that what looked like the target really wasn't the end target," Joffe says.
Have a comment on this story? Please click "Discuss" below. If you'd like to contact Dark Reading's editors directly, send us a message. | <urn:uuid:84a84313-7961-48e9-b512-dcf6edc431a7> | CC-MAIN-2022-40 | https://www.darkreading.com/perimeter/how-to-trace-a-ddos-attack | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00615.warc.gz | en | 0.965636 | 1,640 | 2.640625 | 3 |
Current state-of-art tools putting together quantum physics and artificial intelligence
Data science and machine learning are definitely among the buzz words nowadays. At startup competitions and conferences I have seen too much. AI-trained beer draft to give you the perfect taste, plastic bag with sensor training IoT systems based on machine learning on how much people swing them, toilets with machine learning based system to optimize lights usage. Seriously? Is this innovation?
Either people just put this technology anywhere to sound cool, or machine learning is the “new electricity” and therefore there is nothing cool anymore about it.
About 10–15 years ago, there was another buzz word which was popping up a lot, it had a more esoteric association and it was more difficult to use in everyday application: quantum (the fundamental theory describing nature at the scale of atoms and subatomic particles). Ever seen anyone with a quantum-based coffee machine? Or quantum based-toilet?
Well, maybe things will change as we are on the edge of merging both technologies.
Quantum machine learning is the intersection between quantum computing and AI that might change what the future of computing looks like.
Before we continue, to make things friendly let’s brush up what we are talking about:
- Machine Learning is a subset of artificial intelligence (AI), which allows systems to learn and to improve from experience, without being explicitly programmed. It is generally used for classification tasks.
- Quantum computing:is the use of quantum mechanical phenomena to perform computations.
The rest of this article comprises
- a brief introduction to quantum machine learning,
- a summary of the current used programming languages,
- a summary of the available services.
In classical computers, bits are stored as either a 0 or a 1 in binary notation. Quantum computers use quantum bits — or qubits — which can be both 0 and 1, this is called superimposition. Last year Google and NASA claimed to have achieved quantum supremacy, raising some controversies though. Quantum supremacy means that a quantum computer can perform a single calculation that no conventional computer, even the biggest supercomputer can perform in a reasonable amount of time. Indeed, according to Google, the “Sycamore” is a computer with a 54-qubit processor, which is can perform fast computations.
Machines like Sycamore can speed up simulation of quantum mechanical systems, drug design, the creation of new materials through molecular and atomic maps, the Deutsch Oracle problem and machine learning.
When data points are projected in high dimensions during machine learning tasks, it is hard for classical computers to deal with such large computations (no matter the TensorFlow optimizations and so on). Even if the classical computer can handle it, an extensive amount of computational time is necessary.
In other words, the current computers we use can be sometime slow while doing certain machine learning application compared to quantum systems.
Indeed, superposition and entanglement can come in hand to train properly support vector machine or neural networks to behave similarly to a quantum system.
How we do this in practice can be summarized as
- Log into your IBM, Xanadu or whatever quantum cloud.
- Set up the number of shots (or attempts) your algorithm will take
- Set up the number of qubits the circuit will have (the number of qubits should be equivalent to the number of features in your dataset)
- run the machine learning algorithm having some quantum computing behaviour.
In practice, quantum computers can be used and trained like neural networks, or better neural networks comprises some aspects of quantum physics. More specifically, in photonic hardware, a trained circuit of quantum computer can be used to classify the content of images, by encoding the image into the physical state of the device and taking measurements. If it sounds weird, it is because this topic is weird and difficult to digest. Moreover, the story is bigger than just using quantum computers to solve machine learning problems. Quantum circuits are differentiable, and a quantum computer itself can compute the change (rewrite) in control parameters needed to become better at a given task, pushing further the concept of “learning”.
2. PROGRAMMING LANGUAGES
The most common libraries are Qiskit and Pennylane
Qiskit [quiss-kit] is an open source SDK for working with quantum computers at the level of pulses, circuits and algorithms. It provides tools for creating and manipulating quantum programs and running them on prototype quantum devices. It is available in Python. The way how it works is implemented is having a hidden layer for a neural network using a parameterized quantum circuit, in this way creating a quantum neural networks.
Another popular tool is Pennylane. This is also written in Python and multi-platform. It is also easily integrable with Qiskit. Among the possibilities, this library can perform parameter-shift within a gradient descent optimization, leading to a quantum gradient descent.
There are several services that allow you to perform quantum machine learning. Mostly from 2 big corporations (Google and IBM) and 2 very promising startups (Rigetti and Xanadu).
IBM has launched The IBM Q Experience. It is an online platform that gives users in the general public access to a set of IBM’s prototype quantum processors via the Cloud. The service is complete of a circuit composer, Python support and Qiskit.
Forest by Rigetti Computing, is a toolsuite for quantum computing. It includes a programming language and development tools.
Xanadu is the first phonotic hardware based cloud from a Canadian start-up. The Photonic quantum processors can handle 8-, 12 and 24-qubit chips While the Rigetti’s and IBM’s system use the qubit model, Xanadu is developed a continuous variable hardware. Continuous variables is the area of quantum information science that makes use of physical observables (quantity that can be measured). In simpler words it means that Xanadu cloud measures photons and therefore its hardware is truly photon based.
The field of quantum machine learning is still in its infancy, but already some successful applications have been published and it is expected that will provide more opportunity in the future.
This article was originally published in Medium. | <urn:uuid:6b65cebd-89e3-454d-a30f-eb404c3d9814> | CC-MAIN-2022-40 | https://resources.experfy.com/ai-ml/quantum-machine-learning-next-thing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00615.warc.gz | en | 0.926526 | 1,293 | 3.234375 | 3 |
Agile testing is a software development practice that promotes frequent, automated testing of new code as it is completed and stipulates that defects should be fixed as soon as they are found.
What should I know about agile testing?
Agile testing is a critical component of agile integration, in which development teams strive to maintain a continuous, stable build that is suitable for release at any given time. With agile testing, tests are integral—not ancillary—to the primary software creation process and provide the feedback necessary for developers to iterate the next, incremental build.
What are the benefits of agile testing?
Traditional programming methods treat the role of testing as a gatekeeping function; after a build is completed, it’s tested and the results of the test determine whether the build is ready for release. This is different from the role of testing in agile methodology, wherein testing is performed routinely—from the early stages of development and onward. Testing early and often means the product is always either in a bug-free state or being remediated of just-discovered bugs. This prevents release delays that often occur in the conventional practice as bug fixes are prioritized and addressed.
Moreover, in conventional development, testers are often held accountable for quality assurance, which creates an adversarial relationship between developers, who want to see their code released on time, and testers, who must defend their decisions to delay releases. Adherents of agile testing maintain that joint accountability for software quality creates a healthy collaboration between developers and testers. | <urn:uuid:2c306031-39dc-4f77-96e0-eaae7273d07d> | CC-MAIN-2022-40 | https://www.informatica.com/in/services-and-training/glossary-of-terms/agile-testing-definition.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00615.warc.gz | en | 0.959775 | 307 | 2.6875 | 3 |
Open topic with navigation
The Community component can authenticate users in a number of different ways, either by using a local password, or by communicating with external servers such as LDAP or NT Domain. This section looks at local password use, and describes the various parameters available to control password security and settings. It also describes how passwords are stored in IDOL Server.
To use local authentication, the
autnsecurity library must be configured in the configuration file. This library is the default authentication mechanism for Community. If you want to use another authentication mechanism, you must explicitly state it in the appropriate ACI actions (
UserRead). You can change the default authentication mechanism by using the
DefaultSecurityType configuration parameter.
The following sections describe the configuration parameters that you can use to control passwords and authentication. For more details about these parameters, refer to the IDOL Server Reference.
Community does not store user passwords in plain text. It uses the Blowfish hashing algorithm to generate a hashed string. It then encodes this hash in base64, along with a few other pieces of information such as the salt and salt length. This generated string is stored in the Community component database.
When a user attempts to authenticate their password using Community, the same procedure is applied to the input password, and the resulting string is compared with the stored one. If they are identical, the user is successfully authenticated. The Blowfish algorithm is a one-way process, so it is impossible to deduce the password from the generated string.
PasswordHashSaltLength configuration parameters both affect salt generation. The salt is a small amount of random data added to the hashing process, designed to defend against dictionary attacks and other hack attempts. The
PasswordHashMaxIterations parameter directly alters several characters in the generated salt, while the
PasswordHashSaltLength parameter affects the length of the salt. In most cases, the default values for these parameters are appropriate. Do not change the values of these parameters after users have started storing passwords.
You can define constraints for the passwords that users can set, using the following configuration parameters:
MaxNumPasswordPerUser relates to the password history feature. Community stores the previous
N passwords for each user, and prevents a user from reusing any of these passwords when they change it. This parameter provides the value for
N. If you do not want to store the password history, set the parameter value to
MinPasswordLength enforces the minimum length allowed for passwords. Micro Focus recommends that if you use this feature, you set a value of at least eight.
PasswordStrength implicitly defines the minimum set of characters passwords can be drawn from; for example, lowercase and uppercase letters, numbers, punctuation, and so on. If the value is
N, it means that if an attacker program tries all combinations of characters and can examine 1 million strings per second, it would take at least eN days to finish in the worst case. In general, a value of 1-2 means weak, 3-5 means acceptable, and 5-10 means strong. Micro Focus recommends that you set a minimum value of 3. A higher value means that users must use more sets of characters in their passwords.
You can also control how long a user can use their current password before they must change it:
PasswordChangeDuration specifies the maximum length of time for which a user can keep a password.
KeepPasswordDuration specifies the minimum length of time allowed after a password change before the user can change the password again.
Micro Focus recommends that you do not set
PasswordChangeDuration to be too high, and do not set
KeepPasswordDuration to be too low. Sensible values for these parameters are three months and one month, respectively. To avoid using this feature, set these parameters to
User locking prevents access to a user account. In IDOL Server, a user lock can be triggered in one of the following ways:
A user attempts to authenticate with incorrect credentials more than the number of times specified by the
LoginMaxAttempts configuration parameter, within the amount of time specified by the
LoginExpiryTime configuration parameter.
An administrator explicitly locks a user account by using the
A user account is inactive for the configured
The value to use for the
LoginMaxAttempts parameter depends on your application, because the seriousness of failed authentication varies for different applications. However, Micro Focus recommends that if you use this configuration, you do not set the value too high or too low. For most applications, a value of
3 is sufficient. You can set
-1 to turn off this method of automatic user locking.
You can use the
LoginExpiryTime parameter in conjunction with
LoginMaxAttempts to specify the period of time before the number of unsuccessful authentication attempts is reset. If a user makes
LoginMaxAttempts unsuccessful attempts with the specified
LoginExpiryTime, the account is locked. The count of unsuccessful attempts resets after the expiry time, or whenever the user makes a successful authentication attempt.
In this example, if a user makes two failed authentication attempts in an hour, and than another an hour later, IDOL does not lock the account, because the expiry time has reset.
The correct value to use for
LoginExpiryTime depends on your application. Any value more than a few seconds slows down a potential attacker, but a higher value might be better. You can set
-1 if you do not want to periodically reset the count.
By default, only an administrator can unlock a user account, by using the
UserLock action. You can configure IDOL Server to automatically unlock a user after a specified amount of time, by setting the
LockRemovalDuration parameter. You can also set this parameter to
-1, in which case an administrator must manually unlock the user account.
IDOL Server can automatically lock or delete users who have not authenticated recently. The
InactiveUserDeleteCycleDuration parameters determine how frequently IDOL checks for inactive users. The
InactiveUserDeleteDuration parameters control how long a user must be inactive for before IDOL takes action to lock or delete the user account. The correct value to set for these parameters depends on your application, but higher values are more sensible than lower ones for all these parameters. You can set each parameter to
-1 to turn off inactive user locking and deletion.
If you have set the
LockRemovalDuration parameter, IDOL automatically unlocks users that have been locked by any method, including inactive users. | <urn:uuid:8929a19b-bdab-494b-af58-307e82cc3187> | CC-MAIN-2022-40 | https://www.microfocus.com/documentation/idol/IDOL_11_6/IDOLServer/Guides/html/English/expert/Content/IDOLExpert/Security/User_Passwords.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00615.warc.gz | en | 0.825101 | 1,417 | 2.5625 | 3 |
In the United States, data privacy is hard work—particularly for the American people. But one US Senator believes it shouldn’t have to be.
In June, Democratic Senator Sherrod Brown of Ohio released a discussion draft of a new data privacy bill to improve Americans’ data privacy rights and their relationship with the countless companies that collect, store, and share their personal data. While the proposed federal bill includes data rights for the public and data restrictions for organizations that align with many previous data privacy bills, its primary thrust is somewhat novel: Consent is unmanageable at today’s scale.
Instead of having to click “Yes” to innumerable, unknown data collection practices, Sen. Brown said, Americans should be able to trust that their online privacy remains intact, no clicking necessary.
As the Senator wrote in his opinion piece published in Wired: “Privacy isn’t a right you can click away.”
The Data Accountability and Transparency Act
In mid-June, Sen. Brown introduced the discussion draft of the Data Accountability and Transparency Act (which does not appear to have an official acronym, and which bears a perhaps confusing similarity in title to the 2014 law, the Digital Accountability and Transparency Act).
Broadly, the bill attempts to wrangle better data privacy protections in three ways. First, it grants now-commonly proposed data privacy rights to Americans, including the rights of data access, portability, transparency, deletion, and accuracy and correction. Second, it places new restrictions on how companies and organizations can collect, store, share, and sell Americans’ personal data. The bill’s restrictions are tighter than many other bills, and they include strict rules on how long a company can keep a person’s data. Finally, the bill would create a new data privacy agency that would enforce the rules of the bill and manage consumer complaints.
Buried deeper into the bill though are two proposals that are less common. The bill proposes an outright ban on facial recognition technology, and it extends what is called a “private right of action” to the American public, meaning that, if a company were to violate the data privacy rights of an everyday consumer, that consumer could, on their own, bring legal action against the company.
Frustratingly, that is not how it works today. Instead, Americans must often rely on government agencies or their own state Attorney General to get any legal recourse in the case of, for example, a harmful data breach.
If Americans don’t like the end results of the government’s enforcement attempts? Tough luck. Many Americans faced this unfortunate truth last year, when the US Federal Trade Commission reached a settlement agreement with Equifax, following the credit reporting agency’s enormous data breach which affected 147 million Americans.
Announced with some premature fanfare online, the FTC secured a way for Americans affected by the data breach to apply for up to $125 each. The problem? If every affected American actually opted for a cash repayment, the real money they’d see would be 21 cents. Cents.
That’s what happens for one of the largest data breaches in recent history. But what about for smaller data breaches that don’t get national or statewide attention? That’s where a private right of action might come into play.
As we wrote last year, some privacy experts see a private right of action as the cornerstone to an effective, meaningful data privacy bill. In speaking then with Malwarebytes Labs, Purism founder and chief executive Todd Weaver said:
“If you can’t sue or do anything to go after these companies that are committing these atrocities, where does that leave us?”
For many Americans, it could leave them with a couple of dimes in their pocket.
Casting away consent management in the Data Accountability and Transparency Act
Today, the bargain that most Americans agree to when using various online platforms is tilted against their favor. First, they are told that, to use a certain platform, they must create an account, and in creating that account, they must agree to having their data used in ways that only a lawyer can understand, described to them in a paragraph buried deep in a thousand-page-long end-user license agreement. If a consumer disagrees with the way their data will be used, they are often told they cannot access the platform itself. Better luck next time.
But under the Data Accountability and Transparency Act, there would be no opportunity for a consumer’s data to be used in ways they do not anticipate, because the bill would prohibit many uses of personal data that are not necessary for the basic operation of a company. And the bill’s broad applicability affects many companies today.
Sen. Brown’s bill targets what it calls “data aggregators,” a term that includes any individual, government entity, company, corporation, or organization that collects personal data in a non-insignificant way. Individual people who collect, use, and share personal data for personal reasons, however, are exempt from the bill’s provisions.
The bill’s wide net thus includes all of today’s most popular tech companies, from Facebook to Google to Airbnb to Lyft to Pinterest. It also includes the countless data brokers who help power today’s data economy, packaging Americans’ personal data and online behavior and selling it to the highest bidders.
The restrictions on these companies are concise and firm.
According to the bill, data aggregators “shall not collect, use, or share, or cause to be collected, used, or shared any personal data,” except for “strictly necessary” purposes. Those purposes are laid out in the bill, and they include providing a good, service, or specific feature requested by an individual in an intentional interaction,” engaging in journalism, conducting scientific research, employing workers and paying them, and complying with laws and with legal inquiries. In some cases, the bill allows for delivering advertisements, too.
The purpose of these restrictions, Sen. Brown explained, is to prevent the aftershock of worrying data practices that impact Americans every day. Because invariably, Sen. Brown said, when an American consumer agrees to have their data used in one obvious way, their data actually gets used in an unseen multitude of other ways.
Under the Data Accountability and Transparency Act, that wouldn’t happen, Sen. Brown said.
“For example, signing up for a credit card online won’t give the bank the right to use your data for anything else—not marketing, and certainly not to use that data to sign you up for five more accounts you didn’t ask for (we’re looking at you, Wells Fargo),” Sen. Brown said in Wired. “It’s not only the specific companies you sign away your data to that profit off it—they sell it to other companies you’ve never heard of, without your knowledge.”
Thus, Sen. Brown’s bill proposes a different data ecosystem: Perhaps data, at its outset, should be restricted.
Are data restrictions enough?
Doing away with consent in tomorrow’s data privacy regime is not a unique idea—the Center for Democracy and Technology released its own draft data privacy bill in 2018 that extended a set of digital civil rights that cannot be signed away.
But what if consent were not something to be replaced, but rather something to be built on?
That’s the theory proposed by Electronic Frontier Foundation, said Adam Schwartz, a senior staff attorney for the digital rights nonprofit.
Schwartz said that Sen. Sherrod’s bill follows on a “kind of philosophical view that we see in some corners of the privacy discourse, which is that consent is just too hard—that consumers are being overwhelmed by screens that say ‘Do you consent?’”
Therefore, Schwartz said, for a bill like the Data Accountability and Transparency Act, “in lieu of consent, you see data minimization,”—a term used to describe the set of practices that require companies to only collect what they need, store what is necessary, and share as little as possible when giving the consumer what they asked for.
But instead of ascribing only to data minimization, Schwartz said, EFF takes what he called a “belt-and-suspenders” approach that includes consent. In other words, the more support systems for consumers, the better.
“We concede there are problems with consent—confusing click-throughs, yes—but think that if you do consent plus two other things, it can become meaningful.”
To make a consent model more meaningful, Schwartz said consumers should receive two other protections. First, any screens or agreements that ask for a user’s consent should not include the use of any “dark patterns.” The term describes user-experience design techniques that could push a consumer into a decision that does not benefit themselves. For example, a company could ask for a user’s consent to use their data in myriad, imperceptible ways, and then present the options to the user in two ways: one, with a bright, bold green button, and the other in pale gray, small text.
The practice is popular—and despised—enough to warrant a sort of watchdog Twitter account.
Second, Schwartz said, a consent model should require a ban on “pay for privacy” schemes, in which organizations and companies could retaliate against a consumer who opts into protecting their own privacy. That could mean consumers pay a literal price to exercise their privacy rights, or it could mean withholding a discount or feature that is offered to those who waive their privacy rights.
Sen. Brown’s bill does prohibit “pay for privacy” schemes—a move that we are glad to see, as we have reported on the potential dangers of these frameworks in the past.
Because Congress is attempting—and failing—to properly address the likely immediate homelessness crisis that will kick off this month due to the cratering American economy colliding with the evaporation of eviction protections across the country, an issue like data privacy is probably not top of mind.
That said, the introduction of more data privacy bills over the past two years has pushed the legislative discussion into a more substantial realm. Just a little more than two years ago, data privacy bills took more piece-meal approaches, focusing on the “clarity” of end-user license agreements, for example.
Today, the conversation has advanced to the point that a bill like the Data Accountability and Transparency Act does not seek “clarity,” it seeks to do away with the entire consent infrastructure built around us.
It’s not a bad start. | <urn:uuid:789182e8-b232-42dd-b697-4616dd2c5638> | CC-MAIN-2022-40 | https://www.malwarebytes.com/blog/news/2020/08/data-accountability-and-transparency-act-2020 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00615.warc.gz | en | 0.945552 | 2,253 | 2.546875 | 3 |
It's time to reconsider what you install in your web browser, what mobile information you send, or the types of email you're sending.
Here's why in this edition of cyber security news you need to know.
1. Watch Out For Fraudulent Chrome Extensions
More than 100,000 computers have been infected by a new malware family called NigelThorn, which affects only Chrome users. NigelThorn is “capable of credential theft, cryptomining, click fraud, and other nefarious actions,” says SecurityWeek.
The malware works because of social engineering: friends appear to send links to victims, who are then “redirected to a fake YouTube page that asks them to install a Chrome extension to play the video,” SecurityWeek says. “Once they accept the installation, the malicious extension is added to their browser, and the machine is enrolled in the botnet.”
NigelThorn steals Facebook login credentials and Instagram cookies to spread the link to another unsuspecting friend either through Facebook Messenger or a post on Facebook in which a friend is tagged. The process continues when one of those friends sends the link.
Protect yourself from this malware by avoiding suspicious links and Chrome extensions, and get smarter about security awareness.
2. Your Encrypted Emails May Not Actually Be Encrypted
Sensitive emails may be revealed due to a new set of vulnerabilities in encryption technologies, says a recent report.
“The flaws, collectively dubbed EFAIL by the team of European researchers who discovered it, affect the end-to-end encryption protocols known as OpenPGP and S/MIME,” writes Threatpost.
You may be affected if you use tools like Thunderbird, Apple Mail, and Outlook for your email. However, the Signal service is not affected.
“In a nutshell,” writes Johns Hopkins University Assistant Professor Matthew Green, “if I intercept an encrypted email sent to you, I can modify that email into a new encrypted email that contains custom HTML. In many GUI email clients, this HTML can exfiltrate the plaintext to a remote server. Ouch.”
The Electronic Frontier Foundation has steps you can take to prevent secure emails from leaking.
3. New Attack Threatens Data Corruption
An attack technique called Nethammer can execute code on targeted systems by writing and rewriting memory on dynamic random access memory (DRAM) chips, according to The Hacker News.
Ultimately, this “bit flipping” technique can allow attackers to take control of a victim’s system.
No fix is known as of this writing to fix the issue since software patches cannot fix exploited hardware weaknesses. That leads THN to suspect that the “threat … has potential to cause real, severe damage.”
4. Cell Phone Data Leaked
If you’ve used the website LocationSmart to track your mobile device, snoopers may discover your location.
Phones operating on all major U.S. mobile carriers—AT&T, Sprint, T-Mobile, and Verizon—may have been affected, according to KrebsOnSecurity.
The LocationSmart demo allows users to enter their information, including a phone number, to see the approximate location of their mobile phone. The service texts the phone number, and after receiving consent, texts people their location on a Google Street View map.
However, a security researcher at Carnegie Mellon University soon learned that anyone could track the location of any phone number without authorization.
Krebs reports that it’s unclear what the carriers will do about the breach.
The LocationSmart service is currently offline.
Organizations should have a mobile policy in place, and that policy should address issues related to location. | <urn:uuid:0cb8ec5f-54e7-4fa9-9188-bb8219631d04> | CC-MAIN-2022-40 | https://blog.integrityts.com/4-cyber-security-news-items-you-need-to-know-about | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00615.warc.gz | en | 0.922506 | 769 | 2.890625 | 3 |
Intracranial pressure monitors (ICP) measure pressure in the skull by placing a small probe inside the skull. This probe is attached at the other end to a bedside monitor. The device senses the pressure inside the skull and sends its measurements to a recording device.
The intracranial pressure monitoring is important neurosurgical equipment. The monitoring of intracranial pressure is used in treating severe traumatic brain injury patients. Studies conducted suggest that raised ICP is the commonest cause of death in neurosurgical patients and in head injury patients. Approximately 40% of patients who are admitted in an unconscious state have raised ICP, and in 50% of those die due to raised ICP. Thus intracranial patient monitors help in keeping the pressure levels in normal range. Latest techniques involve telemetric ICP probes inserted in the brain to monitor ICP in several neurosurgical procedures. The telemetric device allows for constant monitoring over a period of weeks which makes the patient completely free to follow daily activities.
Traumatic brain injury is an increasing health problem in Asian countries. Asia has the highest percentage of Traumatic Brain Injury (TBI) patients. Asia accounted for 77% of the world TBI cases as a result of falls while 57% of the world TBI cases due to other unintentional injuries. Rising incidences of neurological disorders, autoimmune and cardiovascular diseases, brain disorders, and sleep disorders, rising awareness regarding neurodegenerative diseases and technological advancements in brain monitoring devices are major drivers slated to propel this market. The market is further driven by factor such as continuous ICP monitoring recommendation for TBIi patients. Continuous monitoring of ICP has been recommended as an essential parameter for making traumatic brain injury therapeutic decisions by The Brain Trauma Foundation (BTF), U.S. and American Association of Neurological Surgeons (AANS). These guidelines have played a pivotal role in escalating the usage and demand of ICP devices market. However, the shortage of skilled technicians to handle these complex devices is a factor limiting the growth of this market.
This report segments the global intracranial pressure monitors market by applications, products, end user, and geographies. The application segments included in this report are traumatic brain injury, intracerebral hemorrhage, meningitis, and other applications. The product segments included in this report are monitors, probes, accessories, kits, and other products. The end user segments included in this report are hospitals, home, and other end users such as laboratories, research institutes, and universities. The geographic segments consist of Asia, Europe, North America, and Rest of the World.
Some of the prominent players operating in this market include DePuy Synthes (U.S.), Focus Medical Group Inc. (U.S.), Headsense Medical Inc. (Israel), Integra Lifesciences Corporation (U.S.), Linet (Netherlands), Medtronic (U.S.), Raumedic (Germany), Sophysa (France), Spiegelberg Gmbh (Germany) and Vittamed (U.S.). | <urn:uuid:fef233f7-a8fe-4f36-b346-9feb89a55327> | CC-MAIN-2022-40 | https://www.marketsandmarkets.com/Market-Reports/intracranial-pressure-monitor-market-181633310.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00615.warc.gz | en | 0.931424 | 632 | 2.71875 | 3 |
"Fake news" is one of the most widely used phrases of our times. Never has there been such focus on the importance of being able to trust and validate the authenticity of shared information. But its lesser-understood counterpart, "deepfake," poses a much more insidious threat to the cybersecurity landscape — far more dangerous than a simple hack or data breach.
Deepfake activity was mostly limited to the artificial intelligence (AI) research community until late 2017, when a Reddit user who went by "Deepfakes" — a portmanteau of "deep learning" and "fake" — started posting digitally altered pornographic videos. This machine learning technique makes it possible to create audio and video of real people saying and doing things they never said or did. But Buzzfeed brought more visibility to Deepfakes and the ability to digitally manipulate content when it created a video that supposedly showed President Barack Obama mocking Donald Trump. In reality, deepfake technology had been used to superimpose President Obama's face onto footage of Jordan Peele, the Hollywood filmmaker.
This is just one example of a new wave of attacks that are growing quickly. They have the potential to cause significant harm to society overall and to organizations within the private and public sectors because they are hard to detect and equally hard to disprove.
The ability to manipulate content in such unprecedented ways generates a fundamental trust problem for consumers and brands, for decision makers and politicians, and for all media as information providers. The emerging era of AI and deep learning technologies will make the creation of deepfakes easier and more "realistic," to an extent where a new perceived reality is created. As a result, the potential to undermine trust and spread misinformation increases like never before.
To date, the industry has been focused on the unauthorized access of data. But the motivation behind and the anatomy of an attack has changed. Instead of stealing information or holding it ransom, a new breed of hackers now attempts to modify data while leaving it in place.
One study from Sonatype, a provider of DevOps-native tools, predicts that, by 2020, 50% of organizations will have suffered damage caused by fraudulent data and software. Companies today must safeguard the chain of custody for every digital asset in order to detect and deter data tampering.
The True Cost of Data Manipulation
There are many scenarios in which altered data can serve cybercriminals better than stolen information. One is financial gain: A competitor could tamper with financial account databases using a simple attack to multiply all the company's account receivables by a small random number. While a seemingly small variability in the data could go unnoticed by a casual observer, it could completely sabotage earnings reporting, which would ruin the company's relationship with its customers, partners, and investors.
Another motivation is changing perception. Nation-states could intercept news reports that are coming from an event and change those reports before they reach their destination. Intrusions that undercut data integrity have the potential to be a powerful arm of propaganda and misinformation by foreign governments.
Data tampering can also have a very real effect on the lives of individuals, especially within the healthcare and pharmaceutical industries. Attackers could alter information about the medications that patients are prescribed, instructions on how and when to take them, or records detailing allergies.
What do organizations need to consider to ensure that their digital assets remain safe from tampering? First, software developers must focus on building trust into every product, process, and transaction by looking more deeply into the enterprise systems and processes that store and exchange data. In the same way that data is backed up, mirrored, or encrypted, it continually needs to be validated to ensure its authenticity. This is especially critical if that data is being used by AI or machine learning applications to run simulations, to interact with consumers or partners, or for mission-critical decision-making and business operations.
The consequences of deepfake attacks are too large to ignore. It's no longer enough to install and maintain security systems in order to know that digital assets have been hacked and potentially stolen. The recent hacks on Marriott and Quora are the latest on the growing list of companies that have had their consumer data exposed. Now, companies also need to be able to validate the authenticity of their data, processes, and transactions.
If they can't, it's toxic. | <urn:uuid:2de92da1-ed70-4073-a3b4-b87984b0d9cf> | CC-MAIN-2022-40 | https://www.darkreading.com/application-security/toxic-data-how-deepfakes-threaten-cybersecurity | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00015.warc.gz | en | 0.952999 | 879 | 2.890625 | 3 |
Machine learning and artificial intelligence have become two of the most important words to reckon; their use has become imperative in almost all industries of today. With the introduction of machine-learning-as-a-service, data science has become a focus of the masses within a short span of time. While machine learning is a part of artificial intelligence, it is widely regarded as the process through which self-iterating algorithms are run to analyze vast spectrums of data, in the absence of around-the-clock the clock human supervision.
As machine learning takes precedence over other technologies, machine learning-as-a-service (MLaaS) has come up to meet the growing demands of data-driven industries. MLaaS is a set of services which are offered to companies so that they can access and obtain the benefits of Machine Learning without having to hire a data scientist to do the necessary footwork. As cloud technology is gaining momentum, more companies are outsourcing their data needs to be able to benefit from the advantages of MLaaS.
Why is Machine Learning so Important?
As discussed above, Machine Learning is all about running algorithms to achieve desired data-driven conclusions. Such models, which are equipped with the knowledge of machine learning, are adept at forecasting trends, creating real-time analyses, and performing accurate predictions based on user data. Given its adaptive nature, machine learning can grow from past mistakes and outcomes, which ordinarily help drive future positive results.
No matter the realm, machine learning can do it all. From fraud detection, to price optimization, to crime prevention, there is no end to the capabilities of this advanced technology. For companies looking at optimizing their day to day services, MLaaS is the best data optimization solution. MLaaS is offered as a Cloud-based service and consists of automatic learning tools, which learn as they go. These options can be used in the Cloud, or even in a more hybrid fashion, as per the need of the hour.
The Current Situation of MLaaS Implementation
If you think that MLaaS options are new entrants in the market, you cannot be further from the truth. The technology is not new; Microsoft, Amazon, Google Cloud and IBM have already been providing customized services to their customers. These tech conglomerates offer an excellent platform to their customers; wherein organizations can then create their personalized machine learning algorithms without having to get into the know-how of the technology.
While a majority of big, financially sound companies are making the most use of these platforms, the trend certainly seems to be changing rapidly. With the entry of new MLaaS companies, even small and medium-sized companies are raking the benefits from machine learning services. Since qualified data scientists are scarce, more companies are beginning to traverse on this path to make their data ends meet.
Benefits of MLaaS for Companies and Organizations
Just like SaaS (Software as a Service), MLaaS is hosted by a vendor, which means outsourcing is going to help you reduce your expenses drastically. Since many organizations do not have the infrastructure or the funds to host their own data storage servers, they seek the help of MLaaS vendors to do their bidding. Storing vast amounts of data can be a costly affair especially for small and medium-sized businesses (SMBs). What can be better than bringing in vendor managed platforms for data management, and letting them do all the data-driven algorithms? For this very reason, many MLaaS companies offer scalable technology, which can help SMBs pick and choose as per their requirements.
With so much riding high on this technology, there is a lot of scope for machine learning to progress within the near future. The capacity for expansion is limitless, which means companies are becoming more competitive in the market. MLaaS helps small and medium-sized business improve their technology, enhance their services, and lower their overall operational costs.
As the list of benefits increases so does the need for these platforms in everyday functioning within these companies. As more businesses begin to seek the services of machine learning, there is an inherent scope of expansion, as companies start to look for greater benefits from their machine learning partners.
Understand How Artificial Intelligence and Machine Learning Can Enhance Your Business
How Your Small Business can Benefit from Machine Learning
The Future of Data Science Lays within Cloud-Based Machine Learning and Artificial Intelligence
Machine Learning’s Impact on Cloud Computing | <urn:uuid:b979536f-2019-4235-b283-0b824f71bde3> | CC-MAIN-2022-40 | https://www.idexcel.com/blog/tag/machine-learning-implementation/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00015.warc.gz | en | 0.959163 | 902 | 2.953125 | 3 |
Keeping your identity safe and secure online is a constant concern for consumers today. Let's say you're minding your own business, searching for instructions on how to build a coffee table or fight weeds in your front lawn.
As you browse from site to site, you innocently click a link that promises to deliver what you want. Only, it’s not a real website. You’ve just joined the other one-third of Americans who have experienced identity theft.
It happens fast. Scammers play on emotions like fear or excitement to convince you to act against your better judgment. All it takes is a click on the wrong link or email attachment to give scammers access to your most sensitive information.
Before you’re even aware, that click opened the door for malware to infect your computer. It’s called phishing, and it can take many forms—ransomware, spyware, adware and viruses—but they all end up compromising your pocketbook, your identity, or both.
According to the FBI, Americans lost nearly $30 million in 2017 to phishing scams. Here’s what you need to know to protect yourself.
What is phishing?
Phishing scams are attempts by scammers to get unsuspecting consumers to reveal sensitive personal information like a social security number, date of birth, or login credentials to a financial or credit card account, which they can use to open lines of credit, make purchases or sell to other criminals.
How does it work?
In a typical email phishing scam, the sender poses to be a legitimate organization, such as an online retailer, government agency or credit card company. The scammer uses one of two main tactics—either offering something that seems too good to be true, or taking advantage of common fears.
How to spot phishing
Phishing emails and texts often look like they came from a company you know and trust, such as a bank, credit card company, online store or online payment website. The Federal Trade Commission recommends consumers be wary of emails from trusted organizations that:
- say they’ve noticed suspicious activity or log-in attempts to an online account.
- claim there’s a problem with your account or your payment information.
- say you must confirm some personal information.
- include a fake invoice.
- want you to click on a link to make a payment.
- say you’re eligible to register for a government refund.
- offer a coupon for free stuff.
In most cases, the differences are in the details. Sharpen your eagle eye and watch for these.
- Use caution if the “From” or “Reply to” email address doesn’t match the supposed sender’s public website address.
- Watch for misspellings and obvious mistakes in grammar. Most legitimate companies employ copywriters and editors who make sure their emails read correctly.
- Don’t open email attachments, even from sources you know, unless you’re expecting them. Be especially suspicious of .exe and .zip file extensions.
- Before clicking on any hyperlinks, hover over the linked word or phrase with your cursor. Wait for the web address to appear. If the address doesn’t match the description, don’t click it.
How to prevent identity theft
- Subscribe to computer security protection from a reputable provider like McAfee, Norton or Kaspersky. These companies constantly scrub the internet for new viruses and scams, and will keep your computer up to date to guard against fraud.
- Make sure your software programs are up to date. Companies will issue periodic updates that fix security holes and vulnerabilities.
- Watch your typing. Typo-squatting, the practice of buying website domains that are a keystroke away from legit sites, and then mimicking those sites, is another way phishers can dupe consumers.
- If your bank, credit card or other financial institution offers multi-factor authentication, use it. That means if you accidentally give up your password, or an algorithm guesses it correctly, there is a second barrier to protect your information. But if you're not on the right website to begin with, even the savviest consumer can be tricked into handing over the confirmation code sent to their cell phone.
- Back up your data. Google Drive and Microsoft OneDrive offer ways to sync your computer to their cloud storage servers, so if ransomware locks up your computer, your files are not lost.
- Remember that the IRS will not call to demand immediate payments or threaten you with arrest for not complying, and it won’t initiate contact by email, text messages or social media channels to request personal or financial information such as PIN numbers, passwords or similar information for credit cards, banks or other financial accounts. | <urn:uuid:8a2a8e56-39dd-4842-8f7f-91f7daca2a9f> | CC-MAIN-2022-40 | https://blog.cspire.com/home-fiber-tv-phone/its-tax-time.-is-your-identity-safe-online | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00015.warc.gz | en | 0.911806 | 992 | 3.109375 | 3 |
It's hard to imagine any business that doesn't use some form of technology these days. The problem is, any computing infrastructure or equipment can be exposed to various kinds of cyberattack. Just last May, the WannaCry ransomware affected more than 10,000 organizations of all sizes in more than 150 countries. The attack caused stoppages in critical services and operations such as the UK's National Health Service and several of Renault's automotive manufacturing plants. Last year, one billion Yahoo users saw their accounts hacked, costing the company dearly.
While these reported incidents involved large organizations, there were many anecdotal accounts of SMEs getting hit as well. Many of these smaller organizations are running on older systems and have little to no protection. Startups often get tied up with the more pressing parts of the business, such as sales and operations, and security gets overlooked as a result. Here are 7 things entrepreneurs need to know about cybersecurity.
1. No such thing as too small
You may think that cybercriminals only target high profile organizations like the incidences we often hear and read about on the news. However, a Ponemon Institute study reports that 55 percent of SMEs experienced some form of cyberattack. If your business uses any computing device or the internet or has a digital presence such as a website or cloud accounts, then you are at risk of cyberattacks. Most attacks are now carried out by automated malicious software and scripts that seek out vulnerable computers and networks regardless of the size and nature of the organization.
According to cloud security provider Indusface, SMEs, which are more at risk due to their limited experience with cybersecurity measures, still have to deal with today's complex threats. Most small businesses have no dedicated IT staff that focuses on such things. This is why it's important for startups to make security a shared responsibility across all members.
2. Threat 1: Data breaches
There are several common cyberattacks that you should be aware of. The first is the data breach. This is when cybercriminals seek to steal your company's data by gaining access to your databases. Personal and financial information is sold on the black market for use in identity theft and fraud. Startups that have websites or apps that gather customer information, such as ecommerce, online support, or CRM, are prime targets for such attacks.
You may think that large organizations that have experienced data breaches such as Sony, Dropbox and LinkedIn survived the data breach fallout so you shouldn’t worry too much about such attacks. However, these major companies have resources and longstanding relationships to weather such issues. Startups don’t fare too well dealing with loss of customer trust and stained reputations. According to the U.S. National Cyber Security Alliance, 60 percent of small businesses fail within six months after suffering from such attacks.
3. Threat 2: Ransomware and malware
Security company Kaspersky identifies ransomware among the top cybersecurity threats to businesses today. Ransomware is a specific type of malware (malicious software) that infects computers (including mobile devices) over a vulnerable network. The ransomware encrypts files on the compromised computer. Users won't be able to access the files unless they get a decryption key by paying ransom to the attackers. Even if you pay the ransom, there's no assurance that attackers will actually honor your payment.
Most ransomware attackers demand between $500 and $1,000 in exchange for your files. Some ransomware, such as Jaff, demands as much as $4,000. Ransom payments are often made in cryptocurrencies like Bitcoin due to the anonymity these methods offer. The major impact to businesses isn't the ransom itself but the disruption to the business. Getting locked out of all your work files can halt your operations indefinitely.
4. Threat 3: DDoS attacks
Distributed denial-of-service (DDoS) attacks render your website or server inaccessible by overwhelming your network with traffic. An hour of downtime from a DDoS attack can cost up to $20,000 for a third of companies. For high-transaction websites such as ecommerce services, this figure can be upwards of $100,000 for every hour.
Small businesses are often left to weather the downtime and absorb lost sales and productivity. Even if not directly targeted, SMEs could still be affected by DDoS attacks on larger infrastructure providers. Last year, thousands of sites and services went down after a massive DDoS attack hit DNS provider Dyn.
5. People are often the weakest link
People are often the weakest link in a security chain. A BakerHostetler report found that most security breaches are caused by human lapses. Many systems are left vulnerable to data breaches and ransomware attacks through phishing where people are tricked into clicking on links and installing malware.
Some can even bring these threats into your infrastructure by carelessly plugging in their own phones, notebooks, and storage devices to your network and computers. Educating yourself and your staff on the best day-to-day security practices would be a worthwhile investment to prevent attacks caused by human error. Have security policies in place that would govern how you and your staff should be using your IT resources.
6. Access control counts
Know to whom you're giving infrastructure access. As a startup, you may be handing out critical infrastructure access unnecessarily: the freelancer you hired to build and maintain your page may still have access to your servers, or the person you let go last week may still have the passcode to void transactions on your POS system.
Today, most administration tools and services allow you to set user roles with corresponding levels of access, so that you can control who gets to do what on your infrastructure. Encourage people to use strong passwords and protect them at all times. Revoke the access of anyone no longer working for your company as soon as they go. Cover yourself legally as well by putting nondisclosure clauses into the agreements you sign with people you involve in the business, to prevent them from leaking credentials.
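A minimal sketch of the role-based idea (the roles, users, and permissions here are invented for illustration):

# Minimal role-based access control (RBAC) sketch
ROLE_PERMISSIONS = {
    "admin":      {"read", "write", "deploy", "manage_users"},
    "developer":  {"read", "write"},
    "contractor": {"read"},
}

users = {"alice": "admin", "bob": "developer", "eve": "contractor"}

def is_allowed(user, action):
    # A user is allowed an action only if they still exist and their role grants it.
    role = users.get(user)
    return role is not None and action in ROLE_PERMISSIONS.get(role, set())

def revoke(user):
    # Remove a departing freelancer or employee the day they leave.
    users.pop(user, None)

revoke("eve")
print(is_allowed("eve", "read"))    # False: access gone as soon as they go
print(is_allowed("bob", "deploy"))  # False: the role does not grant it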
7. Invest in security
As a startup, you may be averse to taking on added expenses. However, cybersecurity is just one of the IT investments you have to make. Besides, there are cost-effective anti-malware and security tools that you can use for your office computers.
In addition, security-as-a-service is now a thing which means you don’t have to make heavy upfront investments on security applications and appliances to protect your network. Instead, you can subscribe to scalable security services such as web application firewalls and DDoS mitigation services for your online infrastructure and applications. Startup cyber security is just among the many realities IT professionals must focus on. Know the risks and put up programs in place that would help you avoid getting hit by cyberattacks down the line. | <urn:uuid:7681eea2-feda-4ce2-9bad-67934d6a610f> | CC-MAIN-2022-40 | https://www.cio.com/article/230228/7-things-startups-need-to-know-about-cybersecurity.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00015.warc.gz | en | 0.956748 | 1,394 | 2.5625 | 3 |
Climate change conspiracies are spreading rapidly during UN's COP26 event
Amplified by bots and influencers, millions of posts on social media networks peddle false ideas about climate change.
Conspiracy theories that promote climate-change skepticism and denial spread rapidly across the internet ahead of the United Nations' ongoing COP26 Climate Change summit in Glasgow, Scotland.
Amplified by bots and influencers, a large volume of climate change denial content spread on social media starting in June, according to researchers at Blackbird.AI. The technology firm's platform uses machine-learning algorithms to scan millions of posts across mainstream social networks — including Twitter, Telegram, fringe sites and others — and, aided by human analysts, identified four major climate denial trends targeting U.S. and European climate-change policy. | <urn:uuid:b762fe9f-2eee-4e49-b10b-9e90f17fdeee> | CC-MAIN-2022-40 | https://blog.danpatterson.com/p/climate-change-conspiracies-are-spreading | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00015.warc.gz | en | 0.876557 | 162 | 3 | 3 |
With television, the Internet, phone calls, and print media, our world is flooded with data. The quantity of data doubles every 24 months — the data equivalent of Moore’s Law.
The amount of worldwide data has grown over 30% per year for the past several years. So much so that it is now measured by the exabyte — 10^18 bytes, or a billion gigabytes.
As a result, we are faced with a new challenge: what should we do with all of the data?
By itself, data is unusable. Distilling data into meaningful information requires finding and manipulating patterns and groupings within the data. And powerful computer systems are required to find these patterns in an ever-increasing pool.
To ensure that today’s computers are able to handle future applications, they will need to increase their processing capabilities at a rate faster than the growth of data.
Defining Tera-Era Workloads
To develop processor architectures capable of delivering tera-level computing, Intel classifies these processing capabilities, or workloads, into three fundamental types: recognition, mining, and synthesis, or RMS.
The RMS model is a good metric for matching processor capabilities with a specific class of applications.
Recognition is the matching of patterns and models of interest to specific application requirements.
Large data sets have thousands, even millions, of patterns; many of which are not relevant to a specific application. To extract significant patterns in the data, a rapid intelligent pattern recognizer is essential.
Mining, the second processing capability, is the use of intelligent methods to distill useful information and relationships from large amounts of data.
This is most relevant when predicting behaviors based upon a collection of well-defined data models. Recognition and mining are closely dependent on and complementary to each other.
Synthesis is the creation of large data sets or virtual worlds based upon the patterns or models of interest.
It also refers to the creation of a summary or conclusion about the analyzed data. Synthesis is often performed in conjunction with recognition and mining.
The RMS model requires enormous algorithmic processing power as well as high I/O bandwidth to move massive quantities of data. Processor architects use different approaches to maximize performance for each workload in the RMS model; balancing and trading-off combinations of factors including the number of transistors on the die, power requirements, and heat dissipation.
These choices result in architectures optimized for specific classes of workload.
The RMS workloads in tera-level computing require several similar, application-independent capabilities:
- Teraflops of processing power.
- High I/O bandwidth.
- Efficient execution and/or adaptation to a specific type of workload.
With tera-levels of performance, it becomes possible to bring these workloads together on one architectural platform, using common kernels. Tera-level computing platforms will use a single optimal architecture for all RMS workloads.
Enabling the Era of Tera
The power of the computing architecture required for tera-level applications is 10-to-100 times the capabilities of today's platforms. The figure below illustrates that while the rate of frequency scaling is slowing, other techniques are actually increasing the rate of overall performance.
Moving forward, we see that performance will be derived from new architectural capabilities such as multi- and many-core architectures as well as frequency scaling.
We can expect the rate of performance improvement to actually improve faster than we’ve seen historically with only frequency scaling.
Recognizing the need to increase today's platform capabilities, Intel is developing a billion-transistor processor. Yet processor improvements in clock speed and transistor count alone will not meet the requirements of tera-level computing over the next 25 years.
A number of the less-friendly laws of physics are more limiting than Moore’s Law. As clock frequencies increase and transistor size decreases, obstacles are developing in key areas:
- Power: Power density is increasing so quickly that scaling the performance of the Pentium processor architecture over the next several years would require tens of thousands of watts per square centimeter (W/cm^2), a power density hotter than the surface of the sun.
- Memory Latency: Memory speeds have not increased as quickly as logic speeds. Memory access with i486 processors required 6-to-8 clocks. Today’s Pentium processors require 224 clocks, about a 28x increase. These wasted clock cycles can negate the benefits of processor frequency increases.
- RC Delay: Resistance-capacitance (RC) delays on chips have become increasingly challenging. As feature size decreases, the delay due to RC is increasing.
In 65nm (nanometer) and smaller nodes, the RC delay across one millimeter of interconnect is actually greater than a clock cycle.
Intel chips are typically in the 10-to-12 millimeter range, where some signals require 15 clock cycles to travel from one corner of the die to the other; again negating many of the benefits of frequency gains.
- Scalar Performance: Experiments with frequency increases of various architectures such as superscalar, CISC (complex instruction set computing), and RISC (reduced instruction set computing) are not encouraging.
As frequency increases, instructions per clock actually trend down, illustrating the limitations of concurrency at the instruction level.
Performance improvements must come primarily from architectural innovations, as monolithic architectures have reached their practical limits.
The New Architectural Paradigm
In the past, mini- and mainframe computers provided many of the architectural ideas used in personal computers today. Now, we are examining other architectures for ways to meet tera-level challenges.
High-performance computers (HPC) deliver teraflop performance at great cost and for very limited niche markets. The industry challenge is to make this level of processing available on platforms as accessible as today’s PC.
The key concept from high-performance computing is to use multiple levels of concurrency and execution units. Instead of a single execution unit, four, eight, 64, or in some cases hundreds of execution units in a multi-core architecture is the only way to achieve tera-level computing capabilities.
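As a loose, software-level illustration of those multiple execution units (a sketch only, not Intel's implementation), the same task can be split across worker processes, one per core:

from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker handles its own slice of the data.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4                                   # e.g. one worker per core
    size = len(data) // n_workers
    chunks = [data[i * size:(i + 1) * size] for i in range(n_workers)]
    with Pool(n_workers) as pool:
        total = sum(pool.map(partial_sum, chunks))  # partial results combined at the end
    print(total)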
Multi-core architectures localize the implementations in each core and effect relationships with the “Nth” level — second and third levels of cache. This creates enormous challenges in platform design.
Multiple cores and multiple levels of cache scale processor performance exponentially, but memory latency, RC interconnect delay, and power issues still remain — so platform-level innovations are needed.
This architecture will include changes from the circuit through the microprocessor(s), platform, and entire software stack.
The SPECint experiments show that microprocessor-level concurrency alone is not sufficient. A massively multi-core architecture with multiple threads of execution on each core with minimal memory latency, RC interconnect delay, and controlled thermal activity is needed to deliver teraflop performance.
The three attributes that will define this new architecture are scalability, adaptability, and programmability.
Scalability is the ability to exploit multiple levels of concurrency based on the resources available and to increase platform performance to meet increasing demands of the RMS workloads.
There are two ways to scale performance. Historically, the industry has “scaled up,” by increasing the capabilities and speed of single processing cores. An example of “scaling up” can be found in the helper thread technology.
Helper threads implement a form of user-level switch-on-event multithreading on a conventional processor without requiring explicit OS or hardware support.
Helper threads improve single thread performance by performing judicious data prefetching when the main thread waits for service of a cache miss.
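By way of analogy only (real helper threads run on spare hardware contexts inside the processor, not in an interpreted language), the idea resembles a background thread that fetches data slightly ahead of the thread that consumes it:

import threading, queue, time

def slow_load(key):
    time.sleep(0.1)               # stand-in for a long-latency memory or I/O access
    return key * 2

keys = list(range(8))
prefetched = queue.Queue(maxsize=4)

def helper():
    # The helper runs ahead, issuing the slow loads before the main loop needs them.
    for k in keys:
        prefetched.put((k, slow_load(k)))

threading.Thread(target=helper, daemon=True).start()

for _ in keys:
    k, value = prefetched.get()   # usually already waiting, so the main loop rarely stalls
    print(k, value)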
Another method of scaling performance is “Scaling out;” adding multiple cores and threads of execution to increase performance. The best-known examples of “scaling out” architectures are today’s high performance computers which have hundreds, if not thousands, of cores.
In today’s platforms, processors are often idle. For server workloads, processors can spend almost half of their total execution time waiting for memory accesses.
Therefore the challenge and opportunity is to use this waiting time in an effective way. Experiments in Intel’s labs showed that helper threads can eliminate up to 30% of cache misses and improve performance of memory intensive workloads on the order of 10%-to-15%.
Adaptability is also an attribute of this new architectural paradigm. An adaptable platform proactively adjusts to workload and application requirements.
The platform must be adaptable to any type of RMS workload. Multi-core architectures not only provide scalability but also the foundation for adaptability.
The following adaptability example uses special purpose processing cores called processing elements to adapt to 802.11a/b/g, Bluetooth, and GPRS.
Each processing element is considered to be a processing core. These processing elements can each be assigned a specific radio algorithm function, such as programmable logic array (PLA) circuits, Viterbi decoders, memory space, and other appropriate functions.
Each processing element may be a digital signal processor (DSP) or an application-specific integrated circuit (ASIC). The platform can be dynamically configured to operate for a workload like 802.11b by meshing a few processing elements.
In another configuration, the platform can be reconfigured to support GPRS or 802.11g or Bluetooth by interconnecting different sets of processing elements.
This type of architecture can support multiple workloads like 802.11a, GPRS, and Bluetooth simultaneously. This is the power of the multi-core micro architecture.
The challenge of bringing high performance computing to the desktop has been in defining parallelizable applications and creating software development environments that understand the underlying architecture.
A programmable system will communicate workload characteristics to the hardware while architectural characteristics will be communicated back up to the applications.
Intel has started down this path with compilers such as those developed for Itanium processors. Much more must be done to take advantage of the new architectural features in these computing platforms.
We are on the cusp of another leap in computing capabilities that will dramatically impact virtually everything in our lives. With the immense amount of data generated by corporate networks, it is necessary to scale computing to match the increasing level.
The solution to the challenge of tera will herald changes perhaps as dramatic as those brought about by the printing press, the automobile, and the Internet.
R.M. Ramanathan has been a technology evangelist and a Marketing Manager in Intel. In his 10 years with Intel he has held various positions, from engineering to marketing and management. Before coming to Intel, Ramanathan was director of engineering for a multinational company in India.
Francis Bruening has been with Intel foreight years, and has bachelors in computer science from Cleveland State University. He has been a SW developer and manger, and is currently a technology marketing manager, promoting and developing the ecosystems necessary for industry adoption of new technologies. | <urn:uuid:3e072b46-3139-4e2b-8da1-47a9a4abda3a> | CC-MAIN-2022-40 | https://cioupdate.com/architecting-for-the-era-of-tera/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00216.warc.gz | en | 0.909692 | 2,323 | 3.140625 | 3 |
While the pandemic provided a funding boost to research supercomputing and life sciences HPC, the next wave of investments in HPC systems and simulation software might come from the rapidly evolving hypersonics space.
We've had supersonic aircraft for decades, but hypersonic flight—moving above Mach 5—is expected to explode into civil and military use over the next five years, with billions of dollars allocated to both classified and commercial programs in that span.
Between now and 2027, the Institute for Defense and Government projects an estimated global total of $127.3 billion will be spent on hypersonic weaponry alone, and Defense Department budget numbers in the U.S. expect $2.5 billion in classified hypersonic work through 2024 alone. Boeing is developing its own hypersonic transportation technology, and startups like Hermeus are pitching a Mach 5 jet that could make the New York-to-Paris jump in 90 minutes.
Supercomputing has been deployed over the last several decades to simulate every aspect of supersonic flight but hypersonics could kick the modeling capabilities needed up quite a notch. According to Valerio Viti, who leads aerospace and defense industry efforts at HPC simulation software company, Ansys, the multi-physics modeling requirements will be pushed to the next level, requiring far more out of scalable software and systems than we’ve seen in both research and commercial HPC spheres.
“The design requires extensive understanding of all the physics involved and their interaction. This can include everything from aerothermodynamics, structural analysis, electromagnetics, sensors, guidance and control systems, and so on,” Viti explains.
While modeling and simulation have been at the core of supersonic testing, ground and flight testing have also complemented simulations. With the hypersonic age, however, ground testing will need ultra-specialized facilities that have limited scale and will be very expensive to develop and run. Flight testing is the most expensive, with test cycles lasting five years or longer. The only way to shorten those cycles is with far more advanced multi-physics simulation.
Simulation for hypersonics goes far beyond just adjusting to the increased thermal, materials, flow, and other physics challenges. Full simulation platforms, systems and software, will also need to model comprehensive hypersonic systems, which means connecting dramatically different codes and tooling to standard modeling and simulation.
As Viti points out from an Ansys perspective, this is everything from the control systems (simulating flight controls using different behaviors/environments), different navigation and guidance systems, and doing all of this in fully virtual simulations of the entire host of componentry via realistic 3D physical models for wargaming, for instance.
Ansys’s own hypersonic modeling and simulation package highlights the complex, interconnected, multi-physics/multi-system nature of the hypersonics simulation problem that lies ahead. With great simulation complexity comes the need for even large investments in HPC infrastructure. Much of the multi-physics modeling still uses traditional HPC tools and software (AI has not infiltrated here yet and from all accounts that’s some time off) and the computational resources to simulate an entire aircraft with specific, new requirements in thermals, power, interconnected systems, and so on will be great. | <urn:uuid:4e08f121-48a1-4166-9a92-68682395e351> | CC-MAIN-2022-40 | https://www.nextplatform.com/2021/06/02/hypersonics-could-fuel-next-wave-of-hpc-investment/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00216.warc.gz | en | 0.923158 | 681 | 2.546875 | 3 |
The word “clickjacking” might conjure an image of some dangerous species lurking in the shadows at night in the jungles of an unexplored continent, or perhaps an image of “carjackers” in the urban jungle. In reality, those descriptions aren’t too far off, except that instead of a jungle, we’re talking about the dense and complex network of the web. So, what is clickjacking, and how can you prevent it?
OWASP offers a good example of a clickjacking attack:
…imagine an attacker who builds a web site that has a button on it that says “click here for a free iPod”. However, on top of that web page, the attacker has loaded an iframe with your mail account, and lined up exactly the “delete all messages” button directly on top of the “free iPod” button. The victim tries to click on the “free iPod” button but instead actually clicked on the invisible “delete all messages” button. In essence, the attacker has “hijacked” the user’s click, hence the name “Clickjacking”.
If you were the manager of the website with the “delete all messages” link that was invisibly framed above, you would want a way to prevent such an attack, right?
Going Head First
In a prior blog post we discussed the importance of using HTTPS on all of your organization’s websites, and the use of an HTTP header called HTTP Strict Transport Security (HSTS) to help ensure that the communications between your website visitors and your servers are safe.
An HTTP header is a bit of communication that gets sent by a server to your browser (Chrome, Firefox, Internet Explorer, or Safari) to help it properly display the page you want to view. HTTP headers offer great opportunities for improving security because the level of effort to implement them is usually low and the protection they offer is strong.
Now let’s look at another HTTP header that can help keep your website from being compromised.
The term “X-Frame-Options” isn’t nearly as exotic-sounding as “clickjacking”. It sounds like a poorly named robot in a bad science fiction movie. Despite its sci-fi name, we recommend you implement X-Frame-Options on your organization’s website, because it virtually guarantees that clickjacking attacks won’t work against it.
Don’t just take our word for it. As noted web application security guru Robert Hansen tweeted at last year’s BlackHat conference:
Quick note: if you aren’t using X-Frame-Options on your website you may want to start. Like, pronto. As in now. #Blackhat
— RSnake (@RSnake) August 1, 2013
There are multiple ways X-Frame-Options can be implemented. The two most popular are X-Frame-Options: DENY and X-Frame-Options: SAMEORIGIN.
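As a rough sketch of the server side (Flask is used here purely as an example framework; most web servers and frameworks have an equivalent setting), the header can be attached to every response:

from flask import Flask

app = Flask(__name__)

@app.after_request
def set_frame_options(response):
    # Forbid any site from rendering our pages inside a frame or iframe.
    response.headers["X-Frame-Options"] = "DENY"   # or "SAMEORIGIN"
    return response

@app.route("/")
def index():
    return "This page refuses to be framed."

if __name__ == "__main__":
    app.run()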
We’ll leave it to the experts at your organization to determine which implementation is best for you. Whatever happens though, if they mention the words “frame busting” or “frame busters”, please remind them that this is 2014. As Troy Hunt put it:
Frame busters are hacks. Nasty, messy hacks of limited efficiency. What we really need is a simpler, more semantic means of specifying how and where a page may be used when it comes to being embedded in a frame and that’s what we have in the X-Frame-Options (XFO) header.
If you need another resource to help guide your team in getting X-Frame-Options up and running, this post by Eric Lawrence is a sound starting point. Lawrence is credited with creating this HTTP header while working at Microsoft.
Tired of reading? This recent video from Google’s Mike West makes XFO crystal clear.
If you have any questions, feel free to contact us. | <urn:uuid:8966f9e5-df19-4ae9-9a54-d9bbb1125a40> | CC-MAIN-2022-40 | https://lookingglasscyber.com/blog/threat-intelligence-insights/x-frame-options-clickjacking/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00216.warc.gz | en | 0.911265 | 871 | 2.703125 | 3 |
Researchers from Nanjing University in China have shown that Android smartphone owners' actual movements and goings-on can be tracked simply by analyzing and compiling the data recorded by the device's orientation sensors and accelerometers. Because these sensors can be read by applications without any special permissions, they leave phones more vulnerable to breaches of user privacy, unlike GPS data, which is harder for malicious applications and attackers to harvest.
The research paper, the test and its startling results.
In a paper aptly named “We Can Track You If You Take the Metro: Tracking Metro Riders Using Accelerometers on Smartphones”, the team of researchers from Nanjing University revealed that they were able to tap into and gather accelerometer readings from users' phones.
In essence, this makes it possible to chart the entire route taken by a commuter. The team built an 'interval classifier' based on semi-supervised machine-learning techniques and merged accelerometer readings with train location data to assess where a commuter was. Tracking malware was then installed on eight volunteers' phones, which gathered and remotely uploaded the accelerometer readings from each phone.
The theory was put to the test on a metro in a major Chinese city. The results were indeed startling. The eight volunteers were easily tracked while traveling four and six stations, with an astonishing accuracy of 89 and 92 percent respectively. The researchers added that the accuracy could be improved further as more stations were covered by the location data.
“We believe this finding is especially threatening for three reasons,” the researchers assessed.
“First, current mobile platforms such as Android allow applications to access accelerometer without requiring any special privileges or explicit user consent, which means it is extremely easy for attackers to create stealthy malware to eavesdrop on the accelerometer. Second, metro is the preferred transportation mean for most people in major cities. This means a malware based on this finding can affect a huge population,” they pointed out.
A good example of a major city and its subway would be the New York City Subway, which has anywhere between 2.5 and 5.5 million subway commuters every single day.
“Last and the most importantly, metro-riding traces can be used to further infer a lot of other private information. For example, if an attacker can trace a smartphone user for a few days, he may be able to infer the user’s daily schedule and living/working areas and thus seriously threaten her physical safety.”
The researchers add that there are a few ways of preventing such attacks:
- Introducing noise into Android sensor readings would scramble the patterns that reveal location-based information.
- Keeping track of applications with high battery usage is to raise a few red flags as in theory, constant pings and requests for data by malicious apps with spike battery usage. | <urn:uuid:2b8090bc-4af5-4fe1-9b8c-045e47eeec44> | CC-MAIN-2022-40 | https://www.lifars.com/2015/05/new-malware-tracks-smartphone-using-commuters-in-the-subway/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00216.warc.gz | en | 0.9539 | 602 | 2.609375 | 3 |
The hash join has two inputs like every other join: the build input (outer table) and the probe input (inner table). The query optimizer assigns these roles so that the smaller of the two inputs is the build input. A variant of the hash join (hash aggregate physical operator) can do duplicate removal and grouping, such as SUM (OrderQty) GROUP BY TerritoryID. These modifications use only one input for both the build and probe roles.
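As a rough sketch of that single-input hash aggregate (toy rows, not SQL Server's implementation):

from collections import defaultdict

orders = [(1, 5), (2, 3), (1, 7), (3, 2)]   # made-up (TerritoryID, OrderQty) rows

totals = defaultdict(int)
for territory_id, qty in orders:
    totals[territory_id] += qty             # one hash table plays both the build and probe roles

print(dict(totals))                          # {1: 12, 2: 3, 3: 2}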
The following query is an example of a hash join, and the graphical execution plan is shown in Figure 1 below:
--Hash Match
SELECT p.Name As ProductName, ps.Name As ProductSubcategoryName
FROM Production.Product p
JOIN Production.ProductSubcategory ps ON p.ProductSubcategoryID = ps.ProductSubcategoryID
ORDER BY p.Name, ps.Name
Figure 1. Sample Execution Plan for Hash Join
As discussed earlier, the hash join first scans or computes the entire build input and then builds a hash table in memory if it fits the memory grant. Each row is inserted into a hash bucket according to the hash value computed for the hash key, so building the hash table needs memory. If the entire build input is smaller than the available memory, all rows can be inserted into the hash table. (You will see what happens if there is not enough memory shortly.) This build phase is followed by the probe phase. The entire probe input (in this example, the Production.Product table) is scanned or computed one row at a time, and for each probe row (from the Production.Product table), the hash key's value is computed, the corresponding hash bucket (the one created from the Production.ProductSubcategory table) is scanned, and the matches are produced. This strategy is called an in-memory hash join.
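A minimal sketch of the same build/probe idea (toy rows, not SQL Server's implementation):

from collections import defaultdict

# Build input (the smaller table): (ProductSubcategoryID, Name)
subcategories = [(1, "Mountain Bikes"), (2, "Road Bikes")]
# Probe input (the larger table): (ProductSubcategoryID, ProductName)
products = [(1, "Mountain-100"), (2, "Road-150"), (1, "Mountain-200")]

# Build phase: hash the smaller input on the join key.
hash_table = defaultdict(list)
for sub_id, sub_name in subcategories:
    hash_table[sub_id].append(sub_name)

# Probe phase: scan the larger input one row at a time and emit matches.
for sub_id, product_name in products:
    for sub_name in hash_table.get(sub_id, []):
        print(product_name, sub_name)

When the build side no longer fits in memory, both inputs are first hash-partitioned into smaller pairs of files and the same build/probe step is applied to each pair, which is the grace hash join described below.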
If you’re talking about the AdventureWorks database running on your laptop with 1GB of RAM, you won’t have the problem of not fitting the hash table in memory. In the real world, however, with millions of rows in a table, there might not be enough memory to fit the hash table. If the build input does not fit in memory, a hash join proceeds in several steps. This is known as a grace hash join. In this hash join strategy, each step has a build phase and a probe phase. Initially, the entire build and probe inputs are consumed and partitioned (using a hash function on the hash keys) into multiple files. Using the hash function on the hash keys guarantees that any two joining records must be in the same pair of files. Therefore, the task of joining two large inputs has been reduced to multiple, but smaller, instances of the same tasks. The hash join is then applied to each pair of partitioned files. If the input is so large that the preceding steps need to be performed many times, multiple partitioning steps and multiple partitioning levels are required. This hash strategy is called a recursive hash join.
SQL Server always starts with an in-memory hash join and changes to other strategies if necessary.
Recursive hash joins (or hash bailouts) cause reduced performance in your server. If you see many Hash Warning events in a trace (the Hash Warning event is under the Errors and Warnings event class), update statistics on the columns that are being joined. You should capture this event if you see that you have many hash joins in your query. This ensures that hash bailouts are not causing performance problems on your server. When appropriate indexes on join columns are missing, the optimizer normally chooses the hash join. | <urn:uuid:fc0bb81b-3248-4ce4-b5ef-047c31c4225d> | CC-MAIN-2022-40 | https://logicalread.com/sql-server-hash-join-w02/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00216.warc.gz | en | 0.890034 | 732 | 2.59375 | 3 |
July 31, 2018 by Siobhan Climer
You probably use it every day. You might even have used it to find this article. But do you really understand what it is and how deep it goes?
Internetwork – A system of interconnected networks, abbreviated as “internet”. The most well-known internetwork is “the Internet”, which is a global computer internetwork that uses standardized communication protocols.
World Wide Web – The World Wide Web, which prefaces most websites in the form of ‘www’, is an information system that connects documents using hypertext links on the Internet.
Surface Web – Indexed and publicly visible websites accessible by the most common search engines, such as Google, that comprise approximately 4% of all online content.
Deep Web – Content that is not directly accessible by common search engines. This might mean content that is intentionally hidden or requires login credentials for access. Databases and unlinked websites are examples of deep web content.
Darknet – A network built on top of the Internet designed specifically for anonymity. Users of darknets require special browsers and tools that give them access.
Dark Web – These are the websites found on a darknet.
Internetworks, The Internet, And The World Wide Web
To understand the question, “What is the darknet?” let us first begin with answering the precursor question: what is the Internet?
An internet (short for ‘internetwork’) is any system for connecting numerous devices. The internet you use is only the most famous internetwork. It is a global system that uses the Internet protocol suite (TCP/IP). During the 1960s, the United States federal government commissioned research into developing a fault-tolerant communications system using computers. Out of this research came the ARPANET, a precursor to the Internet, that served academics and the military. The modern Internet as we know it today didn’t come into existence until the 1990s.
We use the web to access content on the Internet. That's what the 'www' at the beginning of a webpage listing means. 'WWW' stands for the 'World Wide Web' or 'the Web'. The web is simply a way of sharing and accessing information over the Internet. The websites you use are identified by Uniform Resource Locators, or URLs.
The webpages you visit every day – like this one – are text documents formatted using HTML (hypertext markup language) and embedded with multimedia content, like videos, images and audio. Webpages are organized together to form websites, like https://mindsight.wpengine.com. The blog you’re reading now is one part of Mindsight’s larger website, which uses a similar theme, centers around a central topic, and has a shared domain name.
Indexed Content And Web Crawlers
The Internet is indexed. You open a search engine, like Google or Bing, and type in a query. For example, to find this article you might have typed, "what is the darknet?" Search engines then use keywords and metadata – data used to describe other data (like the card catalogue systems in libraries of the late 20th century) – to catalog content on the Internet.
How do search engines find all that information? There might be millions of webpages that are relevant to a specific search query. Search engines, like Google, use web crawlers to comb through the ever-increasing body of web content. Web crawlers are software developed specifically for discovering web content, especially as it changes and new, more relevant material appears every day.
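A toy sketch of the idea (a real crawler adds politeness rules, robots.txt handling, deduplication, and enormous scale):

import re
import urllib.request
from collections import deque

def crawl(start_url, max_pages=10):
    # Breadth-first discovery of pages by following href links.
    seen, frontier = set(), deque([start_url])
    while frontier and len(seen) < max_pages:
        url = frontier.popleft()
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urllib.request.urlopen(url, timeout=5).read().decode("utf-8", "ignore")
        except Exception:
            continue
        # A real engine would index the page here (keywords, metadata), then queue its links.
        for link in re.findall(r'href="(https?://[^"]+)"', html):
            frontier.append(link)
    return seen

print(crawl("https://example.com"))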
The Surface Web
This is the complete collection of indexed web content that is accessible to search engines. And it only comprises approximately 4% of what exists online. The visible websites we use every day make up the surface web.
The Deep Web
The deep web isn’t as mysterious as it sounds; in fact, it’s quite probable you’ve used the deep web at some point. The web content on the deep web isn’t indexed or directly searched by the most common search engines. This includes any web content that requires registration or login credentials, unlinked websites, intentionally hidden websites, or databases. It isn’t surprising that this comprises most of the content available on the Internet.
For example, your local library likely has an online database that requires residents to enter an ID to login and search. Within that database, you can search for books, eBooks, and equipment. The database is intentionally locked down for credentialed users. It is part of the deep web.
What Is The Darknet And The Dark Web?
What is the darknet? The darknet is a shadow network layered on top of the Internet we use every day. It is purposefully hidden, enabling its users' anonymity. In order to use a darknet, you need special browsers and tools that are specifically designed for interfacing with its protocols.
Most darknets – like Tor (The Onion Router), I2P (Invisible Internet Project), Freenet, and DN42 – use encrypted routing protocols that bury users' connections in a series of tunnels. Although each of these darknets has a unique purpose and/or protocol, the gist is that a user can act anonymously and with increased privacy as they access dark websites that might otherwise expose them to civil or criminal liability. But is everyone who uses the darknet a criminal?
Want to find out more about how the darknet works? Check out this article on how it works.
Nefarious Purposes Or Legitimate Anonymity?
Drugs, counterfeit materials, stolen credentials, weapons, hacking, gambling, terrorists, hitmen-for-hire, or explicit materials are all found – in great quantity – on the darknet. Drugs and weapons can be sold and purchased in darknet markets. The Silk Road is the most famous darknet market. In 2011, the Silk Road launched as the first modern dark market for selling illegal drugs. In 2013, Ross Ulbricht was arrested by the FBI and later sentenced to life in prison for founding the marketplace.
But not everything that happens on the darknet is criminal activity. Just as law enforcement might go undercover into areas with high violence, they also utilize the darknet to apprehend criminals or protect government data. Privacy advocates, researchers, private companies, politicians, journalists, and the military all use the darknet to protect sources, investigate crimes, and provide privacy to data and people.
Jamie Bartlett, a journalist and tech blogger for The Telegraph, authored the book The Dark Net: Inside the Digital Underworld. In it, he explores the deep web, darknets, and the dark web in depth (pun intended). And of the dark net, he wrote:
“The dark net is a world of power and freedom: of expression, of creativity, of information, of ideas. Power and freedom endow our creative and our destructive faculties. The dark net magnifies both, making it easier to explore every desire, to act on every dark impulse, to indulge every neurosis.”
For more information on the dark net, check out Darkowl, a great website available on the surface web, that explores the darknet and the dark web.
Contact us today to discuss how to protect your company’s data from dark web hackers.
Like what you read?
Mindsight, a Chicago IT services provider, is an extension of your team. Our culture is built on transparency and trust, and our team is made up of extraordinary people – the kinds of people you would hire. We have one of the largest expert-level engineering teams delivering the full spectrum of IT services and solutions, from cloud to infrastructure, collaboration to contact center. Our highly-certified engineers and process-oriented excellence have certainly been key to our success. But what really sets us apart is our straightforward and honest approach to every conversation, whether it is for an emerging business or global enterprise. Our customers rely on our thought leadership, responsiveness, and dedication to solving their toughest technology challenges.
About The Author
Siobhan Climer, Science and Technology Writer for Mindsight, writes about technology trends in education, healthcare, and business. She previously taught STEM programs in elementary classrooms and museums, and writes extensively about cybersecurity, disaster recovery, cloud services, backups, data storage, network infrastructure, and the contact center. When she’s not writing tech, she’s writing fantasy, gardening, and exploring the world with her twin two-year old daughters. Find her on twitter @techtalksio. | <urn:uuid:d6290e9a-155b-4a83-b2c1-f0cd15cdef58> | CC-MAIN-2022-40 | https://gomindsight.com/insights/blog/what-is-the-darknet-deep-dive/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00216.warc.gz | en | 0.904983 | 1,802 | 3.375 | 3 |
Open Shortest Path First (OSPF) is a routing protocol for Internet Protocol (IP) networks. It uses a link-state routing algorithm and belongs to the group of interior routing protocols, operating within a single autonomous system (AS). It is defined as OSPF Version 2 in RFC 2328 (1998) for IPv4. The following sections provide more information about this protocol.
The updates for IPv6 are specified as OSPF Version 3 in RFC 5340 (2008). OSPF is perhaps the most widely used interior gateway protocol (IGP) in large enterprise networks. IS-IS, another link-state dynamic routing protocol, is more common in large service provider networks. The most widely used exterior gateway protocol is the Border Gateway Protocol (BGP), the principal routing protocol between autonomous systems on the Internet. OSPF is an interior gateway protocol for routing Internet Protocol (IP) packets solely within a single routing domain, such as an autonomous system. It gathers link-state information from available routers and constructs a topology map of the network. The topology is presented as a routing table to the Internet Layer, which routes datagrams based solely on the destination IP address found in IP packets. OSPF supports Internet Protocol Version 4 (IPv4) and Internet Protocol Version 6 (IPv6) networks and features variable-length subnet masking (VLSM) and Classless Inter-Domain Routing (CIDR) addressing models. OSPF detects changes in the topology, such as link failures, and converges on a new loop-free routing structure within seconds. It computes the shortest-path tree for each route using a method based on Dijkstra's algorithm, a shortest-path-first algorithm.
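A compact sketch of that shortest-path-first calculation (a toy topology with made-up link costs, not an OSPF implementation):

import heapq

def shortest_paths(graph, source):
    # Dijkstra's algorithm: cost of the cheapest path from source to every other node.
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > dist.get(node, float("inf")):
            continue                      # stale queue entry
        for neighbor, link_cost in graph[node]:
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(heap, (new_cost, neighbor))
    return dist

# Toy topology: router -> list of (neighbor, interface cost)
topology = {
    "R1": [("R2", 10), ("R3", 1)],
    "R2": [("R1", 10), ("R3", 2)],
    "R3": [("R1", 1), ("R2", 2)],
}
print(shortest_paths(topology, "R1"))     # {'R1': 0, 'R2': 3, 'R3': 1}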
OSPF's decisions when building a routing table are governed by link cost factors (external metrics) associated with each routing interface. Cost factors may be the distance to a router (round-trip time), the data throughput of a link, or link availability and reliability, expressed as simple unitless numbers. This provides a dynamic process of traffic load balancing between routes of equal cost. An OSPF network may be structured, or subdivided, into routing areas to simplify administration and improve traffic and resource utilization. Areas are identified by 32-bit numbers, expressed either simply in decimal or, commonly, in octet-based dotted-decimal notation, familiar from IPv4 address notation. By convention, area 0 (zero), or 0.0.0.0, represents the core or backbone area of an OSPF network. The IDs of other areas may be chosen freely; often, administrators select the IP address of a main router in an area as the area ID. Each additional area must have a direct or virtual connection to the OSPF backbone area. Such connections are maintained by an interconnecting router, known as an area border router (ABR). An ABR keeps separate link-state databases for each area it serves and maintains summarized routes for all areas in the network. OSPF does not use a TCP/IP transport protocol such as UDP or TCP, but encapsulates its data directly in IP datagrams with protocol number 89. This is in contrast to other routing protocols, such as the Routing Information Protocol (RIP) and the Border Gateway Protocol (BGP). OSPF implements its own error detection and correction functions.
The advantage of single-area OSPF is that each router (including a Layer 3 switch such as the WS-C3560X-48T-L) has full knowledge of the topology of the whole network. Because of this, each router will always choose the optimal path to any destination. The downside of single-area OSPF is that all routers must keep track of every detail, creating a larger database of LSAs, with more detail that must go through the SPF calculations, increasing CPU overhead and making each router subject to the impact of an interface flap anywhere in the network. It is possible to set up an OSPF network with multiple areas or with just a single area for the whole network. With a single area, the configuration is somewhat simpler, and every router has full knowledge of all parts of the network, allowing it to always make the optimal routing decision. A source at Cisco once cited the figure of 50 routers per area, and that number has become widespread in discussions of OSPF area size. Note, however, that the routers in use when that statement was made were different from the routers in use now (particularly in terms of memory and CPU power), so 50 is not as meaningful a limit as it used to be. When sizing an OSPF area, it is more important to consider how many interfaces (how many subnets) there are and how stable or unstable the network is; both of these are more significant considerations than having 50 routers.
This is how you can configure OSPFv2.
Place 3 routers and 3 switches on the logical topology map. (You should use 2811 routers and 2960 switches.)
Double-click the current name beneath each icon on the map, then rename the devices as seen in the screenshot.
Click on the icon for each device, then browse to the CLI tab to access the console. Enter global configuration mode on each device and rename it using the hostname command. Make sure the name matches the one on the logical topology map.
When logging into the routers, a prompt may be waiting, asking whether to enter the initial configuration dialog. Simply type n and then hit Enter to skip that setup.
Click on each router and browse to the Physical tab. On that tab, click the power button on the chassis to shut the router down. Next, find the WIC-2T module listed in the left sidebar and click on it. The interface card now appears at the bottom of the screen; click and drag it to an empty slot on the router as seen in the screenshot. When the card is in place, power the router back on by clicking the power button.
You should use the following addressing scheme for the devices. To practice some subnetting, feel free to use your own subnets and networks.
Apply the addressing through the following commands:
Click on the lightning-bolt icon on the bottom left and that will give you a selection of cable types. Straight-through cables are used from switch to router, and serial cables are used from router to router. Click the orange lightning bolt to have the software pick the kind of cable for you.
This is how you do this:
Now verify all the LAN segments. Also, make sure to read about the Packet Tracer software, since it can help you a lot.
This is how it can be done:
Now here is the configuration:
Each OSPF router in the network will have a router ID that uniquely identifies it to the other routers in the network. This router ID can be statically assigned, or it may be dynamically derived from an interface IP address. To statically assign the router ID, use the router-id command from router configuration mode. The router ID is a 32-bit number conventionally represented in IP address format. If the router-id command is not given, the router will derive its ID from the IP address of one of its interfaces in an up/up state. This means, of course, that if you have no up/up interfaces with IP addresses, the OSPF process will not start, as it has no way to determine its router ID. The first criterion the router uses when selecting an interface on which to base its router ID is the interface type: the only distinction it makes is whether or not the interface is a loopback. Loopback interfaces are considered first when deriving the router ID. If there are no loopback interfaces, the router will use an IP address from another up/up interface.
The second criterion is the value of the IP address. If the router has chosen to derive its router ID from a loopback interface and there are multiple loopbacks, it will pick the interface with the highest IP address, e.g. 192.168.1.1 is higher than 10.1.1.1. The same is true if the selection is based on non-loopback interfaces.
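A small sketch of that selection logic (illustrative only; the interface names and addresses are invented):

import ipaddress

def select_router_id(interfaces, configured_id=None):
    # interfaces: list of (name, ip_string, is_up) tuples
    if configured_id:                        # an explicitly configured router-id always wins
        return configured_id
    up = [(name, ipaddress.IPv4Address(ip)) for name, ip, is_up in interfaces if is_up]
    loopbacks = [item for item in up if item[0].lower().startswith("loopback")]
    candidates = loopbacks or up             # prefer loopbacks, else any up/up interface
    if not candidates:
        return None                          # no usable address: the OSPF process cannot start
    return str(max(candidates, key=lambda item: item[1])[1])   # highest IP address wins

interfaces = [("GigabitEthernet0/0", "10.1.1.1", True),
              ("Loopback0", "192.168.1.1", True),
              ("Loopback1", "172.16.0.1", True)]
print(select_router_id(interfaces))   # 192.168.1.1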
You can use the passive-interface command to control the advertisement of routing information. The command enables the suppression of routing updates over some interfaces while allowing updates to be exchanged normally over other interfaces.
With most routing protocols, the passive-interface command restricts outgoing advertisements only. However, when used with Enhanced Interior Gateway Routing Protocol (EIGRP), the effect is slightly different: the passive-interface command in EIGRP suppresses the exchange of hello packets between two routers, which results in the loss of their neighbor relationship. This not only stops routing updates from being advertised, but also suppresses incoming routing updates. The configuration needed to suppress outgoing routing updates while still allowing incoming routing updates to be learned normally from the neighbor is a separate consideration.
An OSPF network can be partitioned into sub-domains called areas. An area is a logical collection of OSPF networks, routers, and links that share the same area identification. A router within an area must maintain a topological database for the area to which it belongs. The router does not have detailed information about network topology outside of its area, which reduces the size of its database. Areas limit the scope of route information distribution. It is not possible to do route update filtering within an area: the link-state database (LSDB) of routers within the same area must be synchronized and exactly identical; however, route summarization and filtering is possible between different areas. The main benefit of creating areas is a reduction in the number of routes to propagate, through the filtering and summarization of routes.
The link-state advertisement (LSA) is the basic communication mechanism of the OSPF routing protocol for the Internet Protocol (IP). It conveys the router's local routing topology to all other local routers in the same OSPF area. OSPF is designed for scalability, so some LSAs are not flooded out on all interfaces, but only on those that belong to the appropriate area. In this way, detailed information can be kept localized, while summary information is flooded to the rest of the network. The original IPv4-only OSPFv2 and the newer IPv6-compatible OSPFv3 have broadly similar LSA types.
Protocols play an important role in networking, so knowing about them is of real importance: they determine how devices connect and exchange information across the internet.
| <urn:uuid:e07246e8-e490-4b2a-86c8-e9794596972b> | CC-MAIN-2022-40 | https://www.examcollection.com/certification-training/ccna-configure-and-verify-ospf.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00216.warc.gz | en | 0.918213 | 2,521 | 3.265625 | 3 |
Series: RACF v2 for z/OS v2.4 Series
RACF – Introduction 2.4
This course introduces the learner to IBM’s RACF security software, explaining how it has evolved, how it is typically used in z/OS, and how it can interact with non-z/OS workloads. It discusses the importance of security, and the types of resources RACF protects. The course then introduces the concept of user and group profiles and describes, from a user perspective, RACF’s interaction with day-to-day user tasks. Examples showing how various users can interact with RACF are also provided.
RACF – Defining and Managing Users 2.4
The “RACF - Defining and Managing Users” course details the skills that are required by a security administrator, programmer, or DBA in using RACF to secure systems and data. It explains how to define and maintain individual users within RACF, using several interfaces.
RACF – Managing RACF Groups and Administrative Authorities 2.4
The “RACF - Managing RACF Groups and Administrative Authorities” course follows on from the “RACF - Defining and Managing Users” course describing how users can be connected to group profiles and can be assigned special privileged access.
RACF – Protecting Data Sets Using RACF 2.4
The “RACF - Protecting Data Sets Using RACF” course describes how RACF is used to define access to z/OS data sets. Information on the profiles used to provide this access is also discussed in detail.
RACF – Protecting General Resources Using RACF
The “RACF - Protecting General Resources Using RACF” course describes how RACF is used to define access to system resources such as DASD and tape volumes, load modules (programs), and terminals. The profiles used to provide access to these items are also discussed in detail.
RACF – RACF and z/OS UNIX 2.4
The “RACF - RACF and z/OS UNIX” course describes the requirements for configuring security in a z/OS UNIX environment using RACF. It covers the creation and use of UID and GID definitions as well as file and directory permission bits and access control lists that are referenced when accessing those z/OS UNIX resources.
RACF – Managing Digital Certificates 2.4
In the “RACF - Managing Digital Certificates” course you will see how encryption keys are used to securely manage data, and the standards that enforce encryption protocols. You will be introduced to various types of certificates and see how data is stored in them. From a z/OS perspective you will see how IBM’s Digital Certificate Access Server (DCAS) provides password-free access to that environment using a certificate. Commands used to generate and manipulate digital certificates and keyrings are discussed in detail.
RACF – For System Programmers 2.4
The “RACF - For System Programmers 2.4” course describes how the RACF database is structured and configured, and the skills needed to ensure that it runs optimally.
RACF – For Auditors 2.4
The “RACF - For Auditors” course describes the various types of data center audits and discusses the role of an internal auditor when performing a RACF audit. It expands this to look at the general steps to ensure that RACF managed security is aligned with both organizational security standards, and external compliance regulations. RACF auditor privileges are discussed in detail describing how audit information is stored and the commands used to request the capture of specific events. The type of data that can be unloaded from SMF, and the RACF database, is explained along with details on how ICETOOL can be used to process this information to create audit reports. | <urn:uuid:3bf31064-e5f7-4329-9159-2b166c17cbac> | CC-MAIN-2022-40 | https://interskill.com/series/racf-v2-for-z-os-v2-4-series/?noredirect=en-US | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00216.warc.gz | en | 0.889429 | 841 | 2.625 | 3 |
It can be difficult to grasp the power of cloud computing, yet the transformational impact it can have on a business is staggering. There are different types of clouds, and there are real differences between the Public Cloud, the Private Cloud, and Hybrid Clouds.
Before we describe each one of these cloud types, let’s look at the components that make up cloud computing.
Cloud computing is a big part of why technology can keep growing at a compounding rate. The “cloud” refers to software and services that run on the Internet instead of on your computer’s hard drive.
Netflix is an example of a service that runs on the Internet. Microsoft 365 is another one. There are many great things about cloud computing. The best single benefit of the cloud is accessibility. All your applications and all your stuff are accessible from any computer, smartphone, or tablet anywhere. All you need is an Internet connection.
Cloud Computing is a Powerful Game Changer
The Cloud is a vast array of servers spread all over the world. Adobe and Salesforce are both examples of companies offering ways to utilize cloud technology. Think of the cloud as a giant utility in the sky. Because these apps are stored on the cloud, providers can update them whenever they like, and the end-user can use the updated version instantly.
Most people know something about the computer “cloud.” This article will explore the basic elements of the Public, Private, and Hybrid Clouds. What makes them different from each other? And which cloud type best meets particular business needs and objectives?
Every cloud has a collection of some basic technologies. They include an operating system, virtualization software, management and automation tools, and application programming interfaces.
These technologies are the ingredients. By themselves, they do not form a cloud. They have to be brought together in a certain way. Then they become the most sought after IT infrastructure since virtualization.
In the standard IT model, the enterprise manages and controls the network connections to the private or public Internet. A full system includes the Internet, the storage layer, hardware, the Virtualization layer, the operating system, middleware, runtime, data, and the application itself.
Infrastructure as a Service (IaaS) means the company manages only the layers from the operating system up to the application, while the provider handles everything underneath.
The cloud is a data center, or a collection of data centers, made up of compute and storage resources connected by a network. The data centers become a cloud when all those resources virtualize onto one big pool of assets. These resources can be intelligently and automatically orchestrated. That means it can adapt to meet the ever-changing needs of your apps. The cloud can also adapt to the ever-changing use and availability of each resource.
Applications can be provisioned more quickly without custom-provisioning boxes. Once deployed over the cloud, those apps, no matter how power-hungry they may be, can be dynamically scaled on demand. Resource issues, like congestion or failure, can be resolved automatically. Clouds can be more efficient and cost-effective than traditional data centers. Data is more secure on the cloud as well.
In Platform as a Service (PaaS), the enterprise only manages the data and the application layer; the Platform as a Service provider takes care of everything else. With Software as a Service (SaaS), the provider handles the whole bundle; it’s a one-stop shop. Here we’re focusing on Infrastructure as a Service (IaaS), provided as either a Public or Private Cloud. What’s the difference between these two cloud types?
Types of Cloud
Let’s take a look at each type of cloud. As we go through each one, ask yourself, which cloud type would best serve your needs.
First up is the Public Cloud. This is the type of cloud that most people think of when they talk about “the cloud.” They are public because they’re hosted by a cloud service provider. These cloud service providers rent space on the cloud to many customers. Like tenants renting apartments. Those tenants usually only pay for services they actually use.
Public clouds let customers off-load management to someone else. This is great for people who don’t mind giving up some control. Public Clouds are popular for hosting everyday apps like email, CRMs, and other business support apps.
Private Clouds are next. They are “private” because they only have one tenant. That one tenant gets all the benefits of being on the cloud. The tenant also has the advantage of controlling and customizing the Private Cloud to fit their needs. Many companies migrate IT systems to Private Clouds for this very reason. The organization can run core business apps that provide unique competitive advantages like research, manufacturing, supply chain management, proprietary designs, and more.
Private Clouds offer privacy settings and management responsibilities. Resources are dedicated to a single customer with isolated access. On-site and vendor-owned infrastructures can power Private Clouds.
There are 2 subtypes of Private Clouds: Managed Private Clouds and Dedicated Clouds
Managed Private Clouds enable customers to create and use a Private Cloud that a third-party vendor deploys, configures, and manages. This cloud delivery option helps understaffed enterprises provide better Private Cloud services and keep a tighter grip on the infrastructure.
Dedicated Clouds are a special kind of Private Cloud. They exist within another cloud. For example, a corporation might have an accounting department with its own dedicated cloud within the organization’s Private Cloud.
Finally, there are Hybrid Cloud environments. They are called “hybrid” because they combine both the Public and Private Clouds. A Hybrid Cloud combines 2 or more interconnected cloud environments, public or private.
The Hybrid Cloud’s pooled virtual resources are developed from hardware by a third-party company and from hardware owned by the user. On-premise IT infrastructure, traditional virtualization, bare-metal servers, and containers can be incorporated. But it must be alongside the Public and Private Clouds. Without at least 1 of each cloud type, it’s considered a hybrid environment and not a Hybrid Cloud.
The Hybrid Cloud offers the best of the Public and Private Clouds and then some. The Hybrid environment is great for developing new innovative apps with uncertain demand. These apps deploy to your Private Cloud. The apps extend to the Public Cloud when demand spikes.
Remember: there are several types of clouds to consider. Look at the benefits of each before moving an IT system to the cloud. Cloud migration can be complex. Get advice. A trusted professional can assess the situation against your objectives. They will develop a strategy best suited for your specific service delivery needs. | <urn:uuid:e5238b94-fa12-4081-bffc-96ec0e759511> | CC-MAIN-2022-40 | https://www.ironorbit.com/blog/the-difference-between-a-public-private-and-hybrid-cloud/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00416.warc.gz | en | 0.930492 | 1,383 | 3.078125 | 3 |
If you watch a lot of CSI Cyber or hacking movies you might be led to believe that the IP address is the missing link between an activity on the Internet and identifying who acted. In reality this is rarely the case.
There are at least 4 common technologies that obscure who is tied to an IP.
There are many other less transient signatures of a system than an IP address.
Once a computer is identified it does not always identify who is using it.
What is an IP address?
IP stands for Internet Protocol. An IP address is an address given to a system for a period of time that makes data routable to and from the system on networks. The IP address creates a mapping that the rest of the network can use to identify and communicate with the system hardware.
Only a few network devices need to keep the system’s address (known as a MAC address) because everything else uses the IP to communicate. There are 2 major versions of IP in use today:
- IPv4, which has around 4 billion addresses
- IPv6, which has so many addresses that it’s compared to the number of grains of sand on Earth
IPv4 is exhausted in many ways, which has led to a slow migration to IPv6. Most major networks and devices today support IPv6. These two versions are significant because they both have their own ways of being an obstacle to identifying a person by an IP.
Why aren't IP addresses easily tied to people?
There are a number of things that may stand in the way of an IP address being useful for identifying people. Some of them were created specifically for privacy. Others were needed to work around the limited number of network addresses available before IPv6.
Virtual Private Networks (VPNs) are used to encrypt traffic between a machine and the VPN so that any untrusted networks in between can’t easily snoop on the data. Most corporations use VPNs, although individuals can also purchase a VPN service or create their own. VPNs are useful for privacy for a few reasons, most importantly that any destination reached through the VPN sees only the VPN’s IP address, not the system’s real address.
Proxies are just like the name implies. They usually route traffic for a specific protocol like website traffic. These are typically used for purposes like filtering unwanted websites from schools, public places, and companies. Proxies present the same issue as an IP address that’s recorded by a destination - only the proxy IP can be seen, not the IP of the system.
Network Address Translation (NAT) is a technology that creates an internal network that can’t be seen by an external network. This is used when there are a lot of internal devices and only a few public IP addresses available. The effect on a destination is the same: it will only see the IP address of the NAT device. Unlike with other technologies, the NAT device is usually in the vicinity of the systems it connects to.
DHCP is a technology that leases IP addresses to systems for limited periods of time. This ensures that a shared pool of IP addresses serves the devices that still need them. Devices that are no longer active don’t renew their lease, which means the address becomes available for others. If you’re keeping logs of visits by IP, you must also record the time of each visit, and then match that time to who held the IP at that moment. The system assigned that IP now may not be the same one that had it then.
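To see why the timestamp matters, here is a small Python sketch that matches a logged visit against DHCP lease records; the lease-log format and values are invented for the example.

from datetime import datetime

# hypothetical lease history: (ip, device, lease_start, lease_end)
leases = [
    ("10.0.0.23", "laptop-a", datetime(2022, 3, 1, 9, 0),  datetime(2022, 3, 1, 17, 0)),
    ("10.0.0.23", "phone-b",  datetime(2022, 3, 1, 17, 5), datetime(2022, 3, 2, 8, 0)),
]

def device_holding(ip, when):
    # return the device that held `ip` at time `when`, if the lease log covers that moment
    for lease_ip, device, start, end in leases:
        if lease_ip == ip and start <= when <= end:
            return device
    return None  # no lease covered that moment; the IP alone proves nothing

print(device_holding("10.0.0.23", datetime(2022, 3, 1, 18, 30)))  # -> phone-b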
The above technologies are often used in conjunction with one another. Together they make an IP address much less reliable as a personal identifier. Advertisers, for example, will only use an IP address to determine an approximate region, while for everything else they use other means. In the security industry, IP addresses are used to identify systems and are kept within that context.
How can systems and people be identified?
The list of practical systems and people’s signatures changes constantly. There are privacy features created to remove them and new research and technologies that create new ones all the time. For a comprehensive list of web browser signatures you can go to https://panopticlick.eff.org/ and run their test. It shows your list of browser plugins, cookies, settings, and technologies used to track you. That’s not the end of it though. All of our interactions can create signatures that can identify the people behind a system.
What can identify a person on a system?
This is another area of ongoing research. Conceptually, anything we do on a system can be used to create a signature.
For instance, the unique ways we type and use a mouse are both very easily recorded from a remote system. None of the technologies mentioned will mask this. Storing information at this level simply isn’t practical though.
A more common method used is the correlation of your personal accounts. Anything that requires authentication is generally assumed to be you. This includes things like work accounts, email, and social media. A reasonable connection can be made by correlating the information between the logs of someone’s personal systems and the system someone wishes to identify them on.
Uploaded information can also be used to identify someone. Files contain a good amount of embedded information that can link someone to a system. Many cameras automatically embed geographic coordinates, making them particularly useful for identification purposes.
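As a quick illustration, the Python snippet below dumps the metadata embedded in an uploaded photo, assuming the Pillow imaging library is installed; when geotagging was enabled on the camera, a GPSInfo entry with coordinates typically shows up among the tags. The file name is a placeholder.

from PIL import Image, ExifTags   # assumes the Pillow package is available

img = Image.open("uploaded_photo.jpg")        # placeholder path
exif = img._getexif() or {}                   # raw tag-id -> value mapping for JPEG/TIFF files

for tag_id, value in exif.items():
    name = ExifTags.TAGS.get(tag_id, tag_id)  # translate numeric tag ids to readable names
    print(name, ":", value)                   # look for GPSInfo, DateTime, Make, Model, etc.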
What can I do if I don't want to be tracked online?
There are a lot of reasons that people want to have some level of privacy online. Some may fear for their personal safety in response to expressing themselves, while others simply don’t like advertising anything too personal. Whatever your reasons, there are a few steps you could consider, such as:
Using a privacy VPN that doesn't keep logs
Using an Operating system with a browser built with privacy in mind. Consider the TAILS OS for online activity as a start.
Not using the same Browser/OS/System for things that identify you personally and things you do not want to be identified with easily.
I sincerely hope you found this information useful. If you are interested in what useful intelligence can be derived from IP addresses, research “Threat Intelligence.” There are a number of companies that track information related to IP addresses within a useful context. Anomali has a Threat Intelligence Platform designed to work with this information and make it useful for computer operations.
Topics:Cyber Threat Intelligence | <urn:uuid:325515bb-643e-426e-8dbc-dedeb698ae3d> | CC-MAIN-2022-40 | https://www.anomali.com/blog/ips-arent-people | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00416.warc.gz | en | 0.953447 | 1,292 | 3.109375 | 3 |
Researchers at the University of Texas at Dallas have managed to come up with an idea to enable future smartphone owners to share the same trait as Superman - namely, the super power to be able to see through walls.
The researchers made two scientific breakthroughs, allowing them to develop devices capable of seeing through objects.
The discovery made by the Professor of Electrical Engineering, Dr. Kenneth O, and his team, relies on the terahertz band of the electromagnetic spectrum. This wavelength of energy can go through objects, much like that of X-rays.
Dr. Kenneth O also used a new type of consumer grade microchip, based on CMOS technology. A sensor in the chip captures the terahertz signals and generates images.
"The combination of CMOS and terahertz means you could put this chip and receiver on the back of a cellphone, turning it into a device carried in your pocket that can see through objects," explained Dr. O.
The technology has applications across a wide range of industries, from medicine to detecting counterfeit money, and it would also prove to be a useful tool for any wannabe spy. However, in order to prevent intrusive people from breaching the privacy of others, the technology will not work if used more than 4 inches (10 cm) away from its target.
Source: UT Dallas | <urn:uuid:faa2fece-42ba-4f43-983d-92cda31cac4d> | CC-MAIN-2022-40 | https://www.itproportal.com/2012/04/21/future-smartphones-may-be-able-to-see-through-walls/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00416.warc.gz | en | 0.953885 | 280 | 3.234375 | 3 |
Protected Health Information (PHI) is any data that is handled by a health care service provider, whether a Covered Entity (CE) or Business Associate (BA), that relates to the physical or mental health of an individual in some way.
Any US organization that handles PHI is required to comply with HIPAA (Health Insurance Portability and Accountability Act of 1996). Below are some tips to help organizations achieve compliance with HIPAA and ensure that their PHI is secure.
1. Carry out a HIPAA Assessment
Doing so will help organizations understand their current security posture and provide insight into what they can do to improve it. They will also need to carry out regular security audits to monitor the effectiveness of their security strategy.
2. Appoint Privacy and Security Officers
Organizations will need to appoint one or more security personnel to ensure that the organization is following the HIPAA guidelines, and to ensure that staff members are trained to a sufficient level.
3. Sign a BAA (Business Associate Agreement)
Healthcare organizations and any third-parties that have access to PHI will need to sign a BAA (Business Associate Agreement), which states how PHI can be stored, processed and transported.
4. Password Protect All Devices
Any devices that have access to PHI must be password protected. Each user should have their own set of credentials, and they will need to be reset at least twice a year.
5. Use Two-Factor Authentication
Any cloud-based solutions, such as EMR (Electronic Medical Record) solutions and communication/chat applications, should use 2FA for added security. In addition to a simple username and password, the user will be required to enter a code, provide a fingerprint scan, or anything else which can strengthen the authentication process.
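As a sketch of the one-time-code factor, time-based one-time passwords (TOTP) can be generated and checked with the pyotp library, assuming it is installed; the account and issuer names are placeholders.

import pyotp

# done once at enrolment: generate and store a secret with the user's account
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print(totp.provisioning_uri(name="user@example.org", issuer_name="ExampleEMR"))  # URI for an authenticator app

# at login: verify the 6-digit code the user reads from their app
submitted = totp.now()                 # stand-in for the user's input
print("Second factor OK:", totp.verify(submitted))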
6. Secure Your Physical Assets
While this may seem obvious, some smaller service providers have been known to overlook physical security procedures. Only authorized personnel should be allowed to enter the server room. Use security cameras, alarm systems and electronic door access to protect all physical assets which may contain PHI.
7. Implement a Breach Notification Plan
Should a service provider fall victim to a data breach, HIPAA requires that they notify those who were affected within 60 days. Service providers will need a documented set of procedures to follow in the event of a breach.
8. Restrict access to PHI
Organizations need to ensure that access to the PHI is limited to those who need it. A common method for restricting access to sensitive data is Role-Based Access Control (RBAC), whereby access is restricted based on roles as opposed to individuals.
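A minimal Python sketch of the idea: permissions hang off roles, and users are only mapped to roles. The role names and permissions here are invented for illustration.

ROLE_PERMISSIONS = {
    "physician":    {"read_phi", "write_phi"},
    "billing":      {"read_phi"},
    "receptionist": set(),                      # no PHI access
}
USER_ROLES = {"alice": "physician", "bob": "billing", "carol": "receptionist"}

def can(user, permission):
    role = USER_ROLES.get(user)
    return permission in ROLE_PERMISSIONS.get(role, set())

print(can("alice", "write_phi"))   # True
print(can("carol", "read_phi"))    # False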
9. Audit Changes to Access Controls
Should an attacker or malicious insider gain unauthorized access to a privileged account, they may seek to elevate their privileges in order to gain further access. Organizations will need to monitor these privileges and receive real-time alerts when they change. There are a number of data security solutions which can detect, alert, report and respond to changes made to privileged accounts, as well as any files, folders and mailbox accounts, that contain PHI.
10. Encrypt PHI Both at Rest and in Transit
Any data stored on portable drives, mobile phones and laptops will need to be encrypted in order to protect the data, should the device fall into the wrong hands. Likewise, PHI sent in emails will need to be encrypted.
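For data at rest, here is a minimal sketch using the Fernet recipe from the Python cryptography package (symmetric, authenticated encryption); key storage and rotation are out of scope, and the record is a placeholder.

from cryptography.fernet import Fernet   # assumes the cryptography package is installed

key = Fernet.generate_key()              # in practice, protect this key in a vault or HSM
f = Fernet(key)

record = b"patient: J. Smith, dob: 1970-01-01"   # placeholder PHI
token = f.encrypt(record)                # ciphertext, safe to store on disk or a portable drive
print(f.decrypt(token))                  # recovery is only possible with the key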
11. Securely Dispose of Old Equipment
Any equipment that contains sensitive data will need to be destroyed, and the process will need to be documented.
12. Tighten up Perimeter Security
Ensure that firewalls are well configured. If you don’t have a firewall, you can use a Firewall-as-a-Service or go a step further and implement an intrusion detection and prevention system (IDPS). | <urn:uuid:bc8f9fc6-073c-44aa-b1be-6f17d03cd961> | CC-MAIN-2022-40 | https://www.lepide.com/blog/12-tips-for-protecting-phi/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00416.warc.gz | en | 0.937315 | 771 | 2.5625 | 3 |
What does AIOps mean? AIOps is short for Artificial Intelligence for IT Operations. Other names you might recognize include Cognitive Operations, Algorithmic IT Operations and IT Operations Analytics (ITOA).
AIOps is the multi-layered application of big data analytics and machine learning to IT operations data. The goal is to automate IT operations, intelligently identify patterns, augment common processes and tasks and resolve IT issues. AIOps brings together service management, performance management and automation to realize continuous insights and improvement.
Industry analysts have defined a set of capabilities that an AIOps platform should provide. These include:
- Collecting and aggregating data from many sources such as: networks, applications, databases, tools and cloud as well as in a variety of forms including metrics, events, incidents, changes, topology, log files, configuration data, KPIs, streaming and unstructured data like social media post and documents (natural language processing).
- Managing the data, storing the data in a single place accessible for analysis and reporting, also including functions like indexing and expiration.
- Analyzing the data through machine learning, including pattern detection, anomaly detection and predictive analytics, to separate significant alerts from ‘noise’ (a simple sketch follows this list).
- Conducting root cause analysis (RCA) which involves reducing the volumes of data to the few (or one) most likely causes. Correlate and contextualize data together with real-time processing for problem identification.
- Acting as a strategic overlay that aggregates multiple monitoring tools and other investments. Codify knowledge into automation and orchestration of response and remediation.
- Continuous learning to improve handling and resolution of problems in the future.
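As a toy illustration of the anomaly-detection capability referenced in the list above, the sketch below flags metric samples that sit far outside their baseline using a simple z-score; real AIOps platforms use far richer models, and the latency values are invented.

from statistics import mean, stdev

def anomalies(samples, threshold=2.5):
    # return (index, value) pairs more than `threshold` standard deviations from the mean
    mu, sigma = mean(samples), stdev(samples)
    if sigma == 0:
        return []
    return [(i, x) for i, x in enumerate(samples) if abs(x - mu) / sigma > threshold]

latency_ms = [42, 45, 41, 44, 43, 40, 46, 44, 310, 43]   # one obvious spike
print(anomalies(latency_ms))                              # -> [(8, 310)]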
Why is AIOps needed?
Many organizations have transitioned from the static, disparate on-site systems to a more dynamic mix of on-premises, public cloud, private cloud and managed cloud environments where resources are scaled and reconfigured constantly.
More devices (most notably Internet of Things, or IoT), systems and applications are providing a tsunami of data that IT needs to monitor. For example, a locomotive can produce terabytes of data during a trip. In IT terms this explosion is called Big Data.
No human can process the explosion of data IT Operations is expected to handle. IT teams cannot prioritize different issues for resolution in a timely fashion. They are inundated with a large volume of alerts, many of which are redundant. This negatively impacts user and customer experience.
Traditional IT management solutions cannot keep up with this volume. They cannot intelligently sift through events from the sea of information. They cannot correlate data across interdependent but separate environments. They cannot deliver the predictive analysis and real-time insight IT operations needs to respond to issues quickly enough.
To identify, resolve and prevent high-impact outages and other IT operations problems faster, organizations are turning to AIOps. AIOps enables IT operations teams to respond quickly and proactively to outages and slowdowns while expending much less effort. It bridges the gap between a dynamic, diverse and difficult IT landscape on the one hand and user expectations for minimal or no interruption in system availability and performance.
Benefits of AIOps
The benefits users have found using AIOps include:
- Improved employee and customer experience
- More efficient use of infrastructure and capacity
- Better alignment with IT services and business service outcomes
- Faster time to deliver new IT services
- Reduced firefighting and avoid costly disruptions
- Better correlation between change and performance
- Improved efficiencies in managing change
- Reduced workload on IT Operations staff because AI is helping with the analysis
- Reduction in false alarms. Faster root cause analysis (RCA) because AI pinpoints the problem or reduces the number of items operators must look at to a small set
- Prevent problems before customers are impacted via anomaly detection
- Achieving faster Mean Time to Resolve (MTTR)
- Reducing the skills gap
- Reduction of human error
- Unified view of the IT environment
- Insights into what workloads drive costs
- Support for traditional infrastructure, public cloud, private cloud and hybrid cloud
- Moving from reactive to proactive to predictive problem management
- Modernizing IT operations and the IT operations team
- Higher levels of security-to-operations collaboration | <urn:uuid:eecb6ab9-1aad-46b8-9727-a44ffcd0a30c> | CC-MAIN-2022-40 | https://www.microfocus.com/en-us/what-is/aiops | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00416.warc.gz | en | 0.924547 | 884 | 3.140625 | 3 |
Published on Wednesday, Oct 21, 2020, by Yehia El Amine
The contributions of recent technological developments to humanity have led many to characterize this period in history as the best time to be alive. From advancements in the field of medicine, to construction, even reaching the service industry, we have made strides in bettering humanity’s quality of life.
But there remains a lot to be done, and the most basic necessity we need to address is world hunger; many foresee that the biggest player in shifting this landscape will be Artificial Intelligence (AI).
According to the World Economic Forum, the agriculture sector employs roughly 25 percent of the world’s population, while being responsible for feeding and sustaining 7.5 billion people.
Despite countless efforts by governments, NGOs, non-profits and the like, a staggering 1.9 billion people remain moderately or severely food insecure, and roughly 820 million do not get enough to eat on a daily basis, as per the UN Food and Agriculture Organization (FAO).
In parallel, the global population is projected to expand to almost 10 billion by 2050, and experts estimate that feeding the planet will require farmers to grow 69 percent more calories than they did in 2006.
In the wake of increasing produce to meet global demand, humanity has wreaked havoc on the environment by cutting down forests and ploughing more farmland and grassland, which has contributed to almost 10 percent of global carbon emissions.
Massive areas within South America and continental Africa have taken the worst hits from this, while adding the effects of climate change and urbanization that pose an even greater risk to crop production.
The sooner changes can be enacted, the better.
While AI has suffered from a bad rep in the media due to people’s fearfulness of intelligent machines, new AI-powered businesses and startups are mushrooming everywhere to fight back against humanity’s biggest threats, from climate change to Covid-19.
AI adoption also has hurdles that need to be dealt with, and fast, since poverty-stricken people in remote areas require investment in basic infrastructure, social services and laws that support the proper distribution of food.
With all that in mind, AI can prove to be the biggest protagonist in this struggle by lowering operational costs and simplifying access to local and international markets. Unlocking this valuable information and knowledge can be the main catalyst in supporting people’s decision-making in the most productive and sustainable way.
In addition, an investment in AI can bring with it a pool of financial and technical resources from a myriad of diverse partnerships that would lay the foundations for more sustainable farming, making it easier for decision-makers and policymakers to apply AI solutions within their programs.
However, an effective outcome requires alignment between the discourse of AI integration and human values. People, governments, and enterprises across the board need to understand that the role of AI will be placed for the greater good of serving humanity, shying away from the “stealing our jobs” narrative to look at the bigger picture.
The world needs to change its direction when it comes to feeding the world, by transitioning from small farming to smart farming.
Smart farming unlocks the power of precision, using real time data coupled with algorithms to analyze a huge volume of data to ascertain common patterns and in turn transform those patterns into predictions.
One example is cost cutting through reduced waste, where algorithms and machine learning apply only the necessary amounts of pesticides, water, and the like according to the needs of each patch of land.
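A highly simplified sketch of that per-plot logic in Python might look like the following; the plot names, readings and thresholds are invented for illustration.

# latest soil-moisture readings per plot, as a percentage of field capacity
readings = {"plot-A": 61, "plot-B": 34, "plot-C": 48, "plot-D": 22}

WILTING_RISK = 35     # below this, irrigate immediately
WATCH_LEVEL  = 50     # below this, schedule irrigation soon

for plot, moisture in sorted(readings.items()):
    if moisture < WILTING_RISK:
        print(plot, "-> irrigate immediately")
    elif moisture < WATCH_LEVEL:
        print(plot, "-> schedule irrigation")
    else:
        print(plot, "-> no action, save the water")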
In parallel, real time insights with the help of sensors, in-field cameras, and micro weather data can provide farmers with the most accurate information to make the best possible decisions. Early signs of damage to the crop can be detected and addressed with the help of deep learning and computer vision algorithms.
“There might be a number of crop related issues that can skip the eyes of humans but can be detected with the help of proven and well-trained algorithms. These smart sensors are able to detect rainfall, humidity, crop water demands, water stress, micro-climate data, canopy biomass, chlorophyll and so much more,” a report by the World Economic Forum said.
There are a number of companies and initiatives already at work as we speak. The greatest example of this is an AI company called Prospera that is training powerful algorithms on vast new datasets to improve the efficiency and performance of traditional farms.
Prospera collects 50 million data points every day across 4700 fields, which are then analyzed with the help of AI to identify pests and disease outbreaks, while digging up new opportunities to increase yields and reduce the carbon footprint by eliminating waste.
Another approach being taken by companies such as Plenty and Aerofarms is vertical indoor farming which utilizes AI algorithms to optimize nutrient inputs and increase yields in real time.
Others such as Root AI, are using computer vision coupled with robotics to identify when fruit is at its ripest.
It is important to note that the most advanced forms of vertical indoor farming are estimated to produce over 20 times more food per acre than traditional fields, using roughly 90 percent less water.
The industrial meat production sector also gets a cut of the action.
Companies such as Latin American NotCo and Fazenda Futuro are using AI tools to analyze plant data to identify the best circumstances to replicate the taste and texture of meat.
The market has obviously taken note of this, with sales of refrigerated plant-based meat growing by a whopping 125 percent. This is necessary since meat production accounts for almost 50 percent of global agricultural emissions, according to the FAO.
The private sector isn’t the only one riding this bandwagon. Governments across the globe are also looking to ride the winds of change in hopes of providing better feeding conditions for their citizens.
An example of this was brought about by a partnership between the World Economic Forum and the government of India to identify high-value use cases for AI in agriculture, develop innovative AI solutions, and drive their widespread adoption.
And the world is swiftly following suit; the government of Zimbabwe, for example, is investing to intensify climate-smart agriculture, a farming technique that helps farmers be more productive on a warming planet, rather than adopting "harmful" genetically modified crops.
In parallel, the government of Japan formulated “the Basic Plan for Food, Agriculture, and Rural Areas,” in March 2020 which shows the direction of measures with a vision for approximately the next 10 years.
Australia has already allocated millions of its budget to improving landcare, waste management, and farming sustainability.
While no single technology can solve the world’s biggest problems, the integration of AI could be a stepping stone to a more accurate and efficient future that the world is desperately in need of.
| <urn:uuid:fc613b54-1429-4051-9fcd-2907634afe8e> | CC-MAIN-2022-40 | https://insidetelecom.com/could-ai-be-the-answer-to-world-hunger/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00416.warc.gz | en | 0.94143 | 1,519 | 3.15625 | 3 |
CHARLOTTESVILLE, Va., Jan. 16, 2018 — The University of Virginia School of Engineering & Applied Science has been selected to establish a $27.5 million national center to remove a bottleneck built into computer systems 70 years ago that is increasingly hindering technological advances today.
UVA Engineering’s new Center for Research in Intelligent Storage and Processing in Memory, or CRISP, will bring together researchers from eight universities to remove the separation between memories that store data and processors that operate on the data.
That separation has been part of all mainstream computing architectures since 1945, when John von Neumann, one of the pioneering computer scientists, first outlined how programmable computers should be structured. Over the years, processor speeds have improved much faster than memory and storage speeds, and also much faster than the speed at which wires can carry data back and forth.
These trends lead to what computer scientists call the “memory wall,” in which data access becomes a major performance bottleneck. The need for a solution is urgent, because of today’s rapidly growing data sets and the potential to use big data more effectively to find answers to complex societal challenges.
“Certain computations are just not feasible right now due to the huge amounts of data and the memory wall,” said Kevin Skadron, who chairs UVA Engineering’s Department of Computer Science and leads the new center. “One example is in medicine, where we can imagine mining massive data sets to look for new indicators of cancer. The scale of computation needed to make advances for health care and many other human endeavors, such as smart cities, autonomous transportation, and new astronomical discoveries, is not possible today. Our center will try to solve this problem by breaking down the memory-wall bottleneck and finally moving beyond the 70-year-old paradigm. This will enable entirely new computational capabilities, while also improving energy efficiency in everything from mobile devices to datacenters.”
CRISP is part of a $200 million, five-year national program that will fund centers led by six top research universities: UVA, University of California at Santa Barbara, Carnegie Mellon University, Purdue University, the University of Michigan and the University of Notre Dame. The Joint University Microelectronics Program is managed by North Carolina-based Semiconductor Research Corporation, a consortium that includes engineers and scientists from technology companies, universities and government agencies.
Each research center will examine a different challenge in advancing microelectronics, a field that is crucial to the U.S. economy and its national defense capabilities. The centers will collaborate to develop solutions that work together effectively. Each center will have liaisons from the program’s member companies, collaborating on the research and supporting technology transfer.
“The trifecta of academia, industry and government is a great model that benefits the country as a whole,” Skadron said. “Close collaboration with industry and government agencies can help identify interesting and relevant problems that university researchers can help solve, and this close collaboration also helps accelerate the impact of the research.”
The program includes positions for about a dozen new Ph.D. students at UVA Engineering, and altogether, about 100 Ph.D. students across the entire center. The center will also create numerous opportunities for undergraduate students to get involved in research. The program provides all these students with professional development opportunities and internships with companies that are program sponsors.
Engineering Dean Craig Benson said the new center expresses UVA Engineering’s commitment to research and education that add value to society.
“Most of the grand challenges the National Academy of Engineering has identified for humanity in the 21st century will require effective use of big data,” Benson said. “This investment affirms the national research community’s confidence that UVA has the vision and expertise to lead a new era for technology.”
Pamela Norris, UVA Engineering’s executive associate dean for research, said the center is also an example of the bold ideas that propelled the School to a near 36 percent increase in research funding in fiscal year 2017, compared to the prior year.
“UVA Engineering has a culture of collaborative, interdisciplinary research programs,” Norris said. “Our researchers are determined to use this experience to address some of society’s most complex challenges.”
UVA’s center will include researchers from seven other universities, working together in a holistic approach to solve the data bottleneck in current computer architecture.
“Solving these challenges and enabling the next generation of data-intensive applications requires computing to be embedded in and around the data, creating ‘intelligent’ memory and storage architectures that do as much of the computing as possible as close to the bits as possible,” Skadron said.
This starts at the chip level, where computer processing capabilities will be built inside the memory storage. Processors will also be paired with memory chips in 3-D stacks. UVA Electrical and Computer Engineering Professor Mircea Stan, an expert on the design of high-performance, low-power chips and circuits, will help lead the center’s research on 3-D chip architecture, thermal and power optimization, and circuit design.
CRISP researchers also will examine how other aspects of computer systems will have to change when computer architecture is reinvented, from operating systems to software applications to data centers that house entire computer system stacks. UVA Computer Science Assistant Professor Samira Khan, an expert in computer architecture and its implications for software systems, will help guide the center’s efforts to rethink how the many layers of hardware and software in current computer systems work together.
CRISP also will develop new system software and programming frameworks so computer users can accomplish their tasks without having to manage complex hardware details, and so that software is portable across diverse computer architectures. All this work will be developed in the context of several case studies to help guide the hardware and software research to practical solutions and real-world impact. These include searching for new cancer markers; mining the human gut microbiome for new insights on interactions among genetics, environment, lifestyle and wellness; and data mining for improving home health care.
“Achieving a vision like this requires a large team with diverse expertise across the entire spectrum of computer science and engineering, and such a large-scale initiative is very hard to put together without this kind of investment,” Skadron said. “These large, center-scale programs profoundly enhance the nation’s ability to maintain technological leadership, while simultaneously training a large cohort of students who will help address the nation’s rapidly growing need for technology leadership. This is an incredibly exciting opportunity for us.”
Source: University of Virginia | <urn:uuid:95562f00-8752-493f-8390-977400bb9e5d> | CC-MAIN-2022-40 | https://www.hpcwire.com/off-the-wire/uva-engineering-tapped-lead-27-5-million-center-reinvent-computing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00416.warc.gz | en | 0.931153 | 1,389 | 3.03125 | 3 |
AI, machine learning, and predictive analytics are used synonymously by even the most data-intensive organizations, but there are subtle, yet important, differences between them. Machine learning is a type of AI that enables machines to process data and learn on their own, without constant human supervision. Predictive analytics uses collected data to predict future outcomes based on historical data.
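As a minimal illustration of the distinction, the sketch below "learns" from historical data and then scores a future scenario, assuming scikit-learn is available; the figures are invented.

from sklearn.linear_model import LinearRegression   # assumes scikit-learn is installed

# historical observations: monthly ad spend (k$) -> monthly revenue (k$)
X = [[10], [15], [20], [25], [30]]
y = [112, 149, 193, 227, 268]

model = LinearRegression().fit(X, y)   # machine learning: the model is fit from data, not hand-coded rules
forecast = model.predict([[40]])       # predictive analytics: estimate the outcome of a future scenario
print(round(float(forecast[0]), 1))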
Regardless of the application, these forms of advanced analytics have common threads, and can ultimately determine the success of an organization’s digital transformation initiatives. Here are some real-world examples:
Boost and sustain revenue
Perhaps the most talked about use case, big data analytics has become a technology imperative to augmenting the top line. Emerging streaming technologies dramatically increase overall data volume, particularly sensor data from the Internet of Things (IoT). New ways to bridge formerly distinct data silos now enable organizations to finally bring analytics to the data. As a result, organizations are more successful in deriving accurate and actionable insights to outpace competitors by acting on unmet customer needs, under-funded parts of the business, emerging business models, and more. In fact, 61% of organizations are already realizing higher revenue growth than competition with an effective digital strategy (1), and this is largely attributable to personalized customer behavior analytics.
Drive customer engagement
Organizations are constantly looking for new and better ways to engage customers at a reasonable cost, and analytics play a critical role in this endeavor. Organizations often eliminate intermediaries and employ digital platforms to reach and serve customers directly, closing the loop between data and action to truly understand their customers and better satisfy their needs. Around 70% of customer engagements will be driven by intelligent systems by 2022 (2), largely through cognitive search, knowledge discovery and ChatBot technology.
Streamline and enhance processes
Today, IoT is creating massive volumes of sensor data with untapped value. By applying IoT analytics at scale, organizations can reduce service costs, improve customer satisfaction, and create entirely new business models. For instance, IoT analytics delivers on the promise of predictive maintenance, smart metering, intelligent manufacturing, and more. Operations analytics ensures automated IT monitoring and remediation to reduce MTTR and operations costs. Legal departments use predictive coding, or technology-assisted review, to improve and streamline the process of reviewing billions of data objects for legal matters instead of sending each data object to an attorney to review individually. Analytics even drive better collaboration and productivity across geographies and departments. This might be why more than half of organizations are planning to leverage AI and machine learning in the next year (3).
Protect customer privacy
One of the hot-button topics in boardrooms around the world right now is protecting customer privacy. While there were earlier, smaller-scale privacy regulations in place, this topic went mainstream once the General Data Protection Regulation (GDPR) became effective in May 2018. This action to protect EU citizen data has since mushroomed globally, as other jurisdictions move to protect their citizens’ data as well. Analytics has yet again come into play, as the sheer volume of information to protect requires a new level of intelligent classification. Organizations do not have to protect all their data—nor do they have the budget or infrastructure to do so—but instead just the right information. File analytics and structured data management technologies play a critical role in protecting organizations from fines, sanctions, lawsuits, and erosion of market credibility.
Detect and prevent risk
Enterprise risk comes in many forms, and analytics are critical to address virtually all of them. Security Operations (SecOps) and Intelligent GSOC (Global Security Operations Center) can benefit by automating the analysis across vast amounts of data—a task that would take SOC analysts months to complete on their own. With proven and targeted analytics, security teams can investigate real threats instead of testing hypotheses or chasing false alerts. When looking for insider threats, for example, user and entity behavioral analytics (UEBA) centers on user information — abnormal logins, time of work, processes, etc.—to identify these difficult-to-find threats. Analytics even delivers real-time threat intelligence, or physical security, by scrutinizing video, text, and audio from CCTV, social media and sensors.
In summary, digital transformation is upon us—worldwide spending is expected to reach almost $2 trillion in 2022 (4). What may be less obvious is the critical importance of analytics—across many departments such as marketing, operations, legal, compliance, privacy, and security—to support this transition. Organizations that expand their vision of digital transformation, pursue holistic technology solutions spanning the above use cases, and ensure these solutions are underpinned by advanced analytics, will more accurately predict and influence outcomes and attain long-term fiscal health and commercial success.
(1) Harvey Nash / KPMG CIO Survey 2018, 5 June 2018
(2) Forrester Research, Feb 26, 2018, Digital Rewrites The Rules Of Business
(3) IDG, State of Digital Business Transformation, 2018
(4) IDC, Worldwide Semiannual Digital Transformation Spending Guide, Nov 2018 | <urn:uuid:e9955de3-f093-48cc-9cab-35531e10fd56> | CC-MAIN-2022-40 | https://www.dbta.com/Editorial/Trends-and-Applications/The-Multi-Faceted-Role-of-Advanced-Analytics-in-Digital-Transformation-132710.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00416.warc.gz | en | 0.924011 | 1,038 | 2.71875 | 3 |
If you are a Data Scientist wondering what companies could have the most career opportunities or an employer looking to hire the best data science talent but aren’t sure what titles to use in your job listings — a recent report using Diffbot’s Knowledge Graph could hold some answers for you.
According to Glassdoor, a Data Scientist is a person who “utilizes their analytical, statistical, and programming skills to collect, analyze, and interpret large data sets. They then use this information to develop data-driven solutions to difficult business challenges. Data Scientists commonly have a bachelor’s degree in statistics, math, computer science, or economics. Data Scientists have a wide range of technical competencies including: statistics and machine learning, coding languages, databases, machine learning, and reporting technologies.”
DATA SCIENCE COMPANIES: IBM tops the list of employers
Of all the top tech companies, it is no surprise that IBM has the largest Data Science workforce. Amazon and Microsoft have similar numbers of Data Science employees. Despite their popularity, Google and Apple are in the bottom two. Why is this the case? It could have something to do with their approach to attracting and retaining data scientists. The report does not clearly state the reasons for these rankings.
However, Data Scientists want to work for companies that provide them with the right challenges, the right tools, the right level of empowerment, and the right training and development. When these four come together harmoniously, it provides the right space for Data Scientists to thrive and excel at their jobs in their companies.
TOP FIVE COUNTRIES WITH DATA SCIENCE PROFESSIONALS: USA, India, UK, France, Canada
The United States contains more people with data science job titles than any other country. Glassdoor actually named Data Scientist "the best job in the United States for 2019." After the United States come the following countries, in this order:
- India
- United Kingdom
- France
- Canada
China has the lowest count of data science job titles at 1,829, compared with the United States' 152,608. But what is the scenario for Data Scientists in Europe? What is the demand and supply?
Key findings indicate that demand for Data Scientists far outweighs supply in Europe. The existence of a combination of established corporations and up-and-coming startups have given Data Scientists many great options to choose where they want to work.
MOST SOUGHT AFTER DATA SCIENCE JOB ROLES: Data Scientist, Data Engineer and Database Administrator.
Among all companies, the most common job roles are Data Scientist, Data Engineer and Database Administrator. Data Scientist is the most common job role among all companies, with Database Administrator coming in at second place. If you remove Database Administrator, you find that Microsoft leads the way in terms of data science employees. This means that IBM's lead in its data science workforce could largely be due to its sheer number of Database Administrators. Unsurprisingly, across every job title in data science, males outnumber females 3:1 or more. It is also interesting to note that this 3:1 ratio only holds within the Database Administrator category; in the Data Scientist category, the ratio reads 6:1.
It also comes as no surprise that Data Scientist ranks number 1 in LinkedIn's Top 10. It has a job score of 4.7 and a job satisfaction rating of 4.3, with 6,510 open positions paying a median base salary of $108,000 in the U.S. However, it is important to note that these positions do not work in isolation. A move towards Data Science collaboration is increasing the need for Data Scientists who can work alone and in a team as well. By utilizing the strengths of all the different job roles mentioned above, data science projects in companies remain manageable and their goals become more attainable. The main takeaway is that despite the vast number of job titles, each role brings its own unique expertise to the table.
DATA COLLECTION AND ANALYSIS
Diffbot is an AI startup whose Knowledge Graph automatically and instantly extracts structured data from any website. After rendering every web page and browser, it interprets them based on formatting, content, and web page type. With its record linking technology, Diffbot found the people currently employed in the data science industry at a point in time to provide an accurate representation of the statistics mentioned in this article. | <urn:uuid:b554061c-e34c-4e95-8a94-35902c5c2c11> | CC-MAIN-2022-40 | https://dataconomy.com/2020/09/three-trends-in-data-science-you-should-know/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00616.warc.gz | en | 0.927142 | 931 | 2.5625 | 3 |
Continuous economic activities, rapid urbanization, population growth and the rise of living standards have greatly accelerated the generation of municipal solid waste (MSW). This poses considerable challenges for governments, civil society and private sectors to protect and promote the environment and sustainable development.
Waste is increasingly considered a resource rather than garbage of no value (Zaman, 2010). Waste as a resource refers to resource recovery, in which waste is converted into other materials and energy. MSW has great potential to be transformed into other forms of resources and energy through proper treatment. Additionally, the increasing challenges of climate change resulting from MSW have raised people's attention to waste management.
Understanding the scenario
The growing quantities of MSW will have a significant impact on the development of social, environmental and economic aspects. However, there are several critical issues, such as deficiency of information and knowledge, less developed policy and strategic planning, lack of technology support, and lack of financial investment.
Waste management hierarchy is a generally accepted guiding principle for prioritizing waste management practices to achieve minimum adverse environmental and health impacts from wastes. The waste management hierarchy in Figure 1 shows the preferred order of waste management practice, from most to least preferred.
Most importantly for municipalities and authorities, tracking and monitoring that the work is getting done is essential. Inovar was brought in to provide a smart IoT-based solution to ensure that all their financial investments are at work and that waste management is being carried out as expected.
Inovar quickly realized that waste containers are everywhere: behind restaurants, retail stores, and hotels, at office buildings, and on construction sites. Even the collection process and the transportation routes taken were erratic and not optimized to reduce cost and maximize benefit.
We proposed a two-part system. The first part handles the waste level and the collection team; the second part manages the transportation of waste to the designated area with efficiency.
For the waste collection procedure, we implemented sensor-based technology in the collection units to assess the level of the waste. For ground crews cleaning public areas, we created a mobile-based attendance and reporting element which tracks their locations in real time and ensures that they are working as per the plan laid out.
For logistics, we again used IoT devices to track trip mileage and fuel consumption, real-time location, engine idle time, and deviations from the planned route or tasks.
Finally, in the cloud, real-time analysis is carried out to generate various reports, alerts and notifications, such as area waste collection activity, daily attendance, vehicle movements, seasonal or functional reports on waste, and segregation reports, which help the municipal corporation form better waste management strategies.
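To make the workflow concrete, here is a minimal sketch of how a fill-level reading might trigger a pickup alert. The field names, container IDs, and the 80% threshold are assumptions for illustration only, not Inovar's actual implementation.

```python
# Illustrative only: payload fields and the threshold are assumed values.
FILL_ALERT_THRESHOLD = 80  # percent full

def handle_sensor_reading(reading: dict) -> None:
    """Flag a container for pickup when its fill level crosses the threshold."""
    if reading["fill_level_pct"] >= FILL_ALERT_THRESHOLD:
        print(f"Dispatch pickup for container {reading['container_id']} "
              f"in {reading['ward']}")

# Example payload from a container-mounted sensor
handle_sensor_reading({"container_id": "C-102", "ward": "Ward 7", "fill_level_pct": 86})
```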
We’ve been hiding messages for as long as we’ve been sending messages. The original ploy was to use stealth; fast and stealthy messengers carried messages back and forth. The primary method of keeping those messages from prying eyes was simply not getting caught. Once caught, the message contents would end up in the arms of the bad guys. From there, the bad guy could simply read the message and then know what you planned, or pretend to be the intended recipient and send a false reply, thus executing the original Man In The Middle (MITM) attack.
The next advance in securing communications was to hide the message’s true contents in some way. If a message of this type were intercepted, the bad guy would be unable to read it and therefore the information would be useless to them. The art of concealing the content of a message became known as cryptography which is a portmanteau of the Greek words for hidden and writing.
The methods of encrypting text are as limitless as our imaginations. However, the practical applications of any given encryption method are very limited. The methods to encrypt and decrypt must be known to both parties and they must be rigorous enough that the methods cannot be guessed by the bad guys. Those two seemingly simple issues have plagued encryption systems forever. The game of keeping encryption ciphers working against the never ending onslaught of the bad guys to break those same systems has led to a rich and interesting history of ciphers.
Introduction to Cipher Terminology
Cryptography is a rich topic with a very interesting history and future. To get the most out of this article, it’s best to have a basic grip on a few terms and concepts. The next section will help with that, and you can feel free to skip it and come back to it if the need arises.
A block cipher encrypts a message of a set number of bits (a block) at a time.
Codes are more complex substitutions than a cipher in that codes transfer meaning rather than straight text substitution, e.g. "The eagle has landed." Code operations require a reference of some kind, usually referred to as a Code Book. Due to the cumbersome nature of transporting and maintaining code books, codes have fallen out of general use in modern cryptography in favour of ciphers.
Ciphers are substitutions of plaintext for ciphertext. No meaning is ascribed to the process; it is a mathematical or mechanical operation designed to simply obfuscate the plaintext. E.g.: the "rotation 13" algorithm (ROT13), where each letter is assigned the letter 13 spots after it in the alphabet. This results in A=N, B=O, etc. To encrypt or decrypt a message, a person need only know the algorithm.
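As a quick illustration of how trivial ROT13 is to apply, the sketch below uses Python's built-in rot_13 codec; the sample phrase is arbitrary.

```python
import codecs

ciphertext = codecs.encode("ATTACK AT DAWN", "rot_13")
print(ciphertext)                            # NGGNPX NG QNJA
# Because the alphabet has 26 letters, applying ROT13 twice restores the original
print(codecs.encode(ciphertext, "rot_13"))   # ATTACK AT DAWN
```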
Ciphertext is the unreadable, encrypted form of plaintext. Anyone attempting to read ciphertext will need to decrypt it first. Decrypting ciphertext reveals the readable plaintext.
Keyspace is the number of possible keys that could have been used to create the ciphertext. Theoretically, brute forcing ciphertext becomes more difficult as the keyspace increases.
A hash is a cipher that is used to provide a fingerprint of some data rather than a cipher text of that data. Hash ciphers take some message as input and output a predictable fingerprint based on that message. If the message is changed in any way, no matter how trivial, the fingerprint should differ dramatically. The most common use of hashes is to verify that a local copy of some file is a true reproduction of the original file.
The hallmarks of a good hashing cipher are:
- It is deterministic, meaning that the same message run through the same hash cipher will always produce the same fingerprint, and
- It has a low level of collision, meaning that different messages run through the same hash cipher should produce different fingerprints.
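Both hallmarks are easy to see in practice; this short sketch uses Python's hashlib with arbitrary input strings.

```python
import hashlib

msg = b"The eagle has landed."
print(hashlib.sha256(msg).hexdigest())                       # fingerprint of the message
print(hashlib.sha256(msg).hexdigest())                       # deterministic: identical output
print(hashlib.sha256(b"The eagle has landed!").hexdigest())  # one changed character, a completely different fingerprint
```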
A monoalphabetic cipher uses a single alphabet and is usually a simple substitution. For example, the letter A will be represented by the letter F.
These are so easily broken that we now have Cryptogram books in drug stores alongside the Crosswords for fun.
Some examples of Monoalphabetic ciphers are:
- Caesar cipher
- Pigpen cipher
- Playfair cipher
- Morse code (despite its name)
Plaintext refers to the readable text of a message. Plaintext is encrypted into ciphertext and can be decrypted by the recipient back into plaintext.
A polyalphabetic cipher is also a substitution cipher, but unlike the monoalphabetic ciphers, more than one alphabet is used. There are signals embedded in the ciphertext which tell the recipient when the alphabet has changed.
Some examples of Polyalphabetic ciphers are:
- Alberti cipher
- Vigenère cipher
A stream cipher encrypts a message one character at a time. The Enigma machine is an example of a stream cipher.
In all but the most trivial encryption systems, a key is needed to encrypt and decrypt messages. If the same key is used for both purposes, then that key is referred to as symmetric. If different keys are used to encrypt and decrypt, as is the case with Public Key Cryptography, then the keys are said to be asymmetrical.
Symmetrical keys are generally considered slightly stronger than asymmetrical keys. But, they have the burden of needing a secure method in which to transfer the keys to all message participants in advance of use.
There are two ways to discover the plaintext from the ciphertext. The first way is to decrypt the ciphertext using the expected decryption techniques. The second way is to use analysis to discover the plaintext without having possession of the encryption key. The latter process is colloquially referred to as breaking crypto which is more properly referred to as cryptanalysis.
Cryptanalysis inspects the ciphertext and tries to find patterns or other indicators to reveal the plaintext beneath. The most commonly used cryptanalysis technique is frequency analysis. In the English language, there are 26 letters and the frequency of letters in common language is known. Vowels such as A and E turn up more frequently than letters such as Z and Q. Taking one step further back, entire words like THE and AN show up more frequently than words like ANT or BLUE.
To combat against word frequency, ciphertext can be broken up into standard blocks rather than left in their natural form. For example:
Given the plaintext:
HOW MUCH WOOD WOULD A WOOD CHUCK CHUCK IF A WOOD CHUCK COULD CHUCK WOOD
and applying a Caesar Cipher with a rotation of 16, we end up with the following ciphertext:
XEM CKSX MEET MEKBT Q MEET SXKSA SXKSA YV Q MEET SXKSA SEKBT SXKSA MEET
Frequency analysis gives us some clues as to the plaintext:
- The words MEET and SXKSA show up repeatedly
- The letter Q shows up alone twice, which is a strong indicator that Q is either an A or an I
- The word MEET is almost certain to have two vowels in the middle because there would be very few words with two of the same consonants in that position.
- A flaw in rotational ciphers is that no letter can equal itself, therefore we can eliminate the actual word MEET as plaintext.
- If we assume that Q is either an A or an I, then we can also assume that E is not an A or an I, and it can't be an E. Since we're pretty sure E is a vowel, that leaves us with E being either O or U. From there it takes little effort to test those options and eventually end up with a likely word: WOOD.
- If WOOD is correct, then we can change the same letters in other words: E=O, M=W, T=D, Q=A, and continue working our way through the ciphertext.
- Another way to proceed would be to test if this is a simple rotation cipher. To do that, we would calculate the offset from a ciphertext letter and a plaintext letter such as M = W. That gives us 16, and if we then reverse every letter back 16 slots in the alphabet, the rest of the plaintext will either make sense, or it will still be unintelligible gibberish. A quick sketch of this brute-force test follows below.
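Here is a minimal Python sketch of that brute-force test, run against the opening of the ciphertext above; trying all 25 rotations makes the correct offset of 16 stand out immediately.

```python
def rotate_back(text, shift):
    """Shift every letter back by `shift` positions, leaving spaces untouched."""
    return "".join(
        chr((ord(c) - ord("A") - shift) % 26 + ord("A")) if c.isalpha() else c
        for c in text
    )

ciphertext = "XEM CKSX MEET"
for shift in range(1, 26):
    print(shift, rotate_back(ciphertext, shift))   # shift 16 prints "HOW MUCH WOOD"
```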
Now consider the same example if standard blocks are used. The ciphertext would look like this:
XEMCK SXMEE TMEKB TQMEE TSXKS ASXKS AYVQM EETSX KSASE KBTSX KSAME ET
While this does not make frequency analysis impossible, it makes it much harder. The first step in tackling this type of cipher would be to attempt to break it back into its natural wording. It's still possible to see repetitions like SXKSA, but it's much more difficult to pick out standalone words such as what the lone Q represents.
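Producing those standard five-letter blocks is a one-liner; the sketch below regroups the ciphertext from the example above.

```python
spaced = "XEM CKSX MEET MEKBT Q MEET SXKSA SXKSA YV Q MEET SXKSA SEKBT SXKSA MEET"
stream = spaced.replace(" ", "")   # drop the natural word boundaries
blocks = " ".join(stream[i:i + 5] for i in range(0, len(stream), 5))
print(blocks)   # XEMCK SXMEE TMEKB TQMEE TSXKS ASXKS AYVQM EETSX KSASE KBTSX KSAME ET
```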
If you like this type of thing, check out your local drug store or book store’s magazine section. There are usually crypto game books in the same section as the crossword books.
Use of Superseded Cryptographic Keys
In modern use, cryptography keys can be expired and replaced. In large systems such as those used by the military, cryptographic keys are replaced at set times hourly, daily, weekly, monthly, or yearly. When a key is replaced, the previous key is said to be superseded. Superseded keys must be destroyed because they present an extremely valuable cryptanalysis tool. If an adversary has collected and stockpiled encrypted communications and can later decrypt those communications by gaining the superseded key used to encrypt them, that provides fertile ground for cryptanalysis of current day messages.
On the commercial internet in a post-Snowden era, it’s easy to imagine the NSA obtaining superseded SSL keys and going back to decrypt the vast trove of data obtained through programs like PRISM.
Quantum computing and cryptanalysis
Today’s computers have not changed significantly since inception. At the fundamental level, computers operate on bits which are single slots which can contain either the value 1 or the value 0. Every process that takes place on a computer, including the encryption and decryption of messages, needs to be boiled down to that simple foundation.
By contrast, Quantum computers operate using the physics concepts of superposition and entanglement instead of bits to compute. If proven feasible, quantum computing would likely be able to break any modern crypto system in a fraction of the time it takes today. Conversely, Quantum computing should also be able to support new types of encryption which would usher in an entirely new era of cryptography.
Initial monoalphabetic and polyalphabetic ciphers had the same problem: they used a static, never changing key. This is a problem because once an adversary understood how to lay out a pigpen diagram, for example, she could decrypt every single message ever encrypted with that algorithm.
In order to obfuscate the text more, the concept of changing keys was developed. Using the Caesar Cipher, one could change the ciphertext by simply incrementing the value of the rotation. For example:
Using the Caesar Cipher to encrypt the phrase
FLEE TO THE HILLS FOR ALL IS LOST
Rotation of 10 ciphertext: PVOO DY DRO RSVVC PYB KVV SC VYCD
Rotation of 4 ciphertext: JPII XS XLI LMPPW JSV EPP MW PSWX
The advantage of applying an arbitrary key to the plaintext is that someone who knows how the Caesar Cipher works would still not be able to decrypt the text without knowing what rotational value was used to encrypt it.
While the example above is a simple one due to the trivial nature of the Caesar Cipher to begin with, applying more complex keys can significantly increase the security of ciphertext.
Throughout history there have been many types of ciphers. They primarily began as a military tool and militaries are still the heaviest users of cryptography today. From those military roots, we see that in order to be successful a cipher had to have these attributes.
- resistance to cryptanalysis
- flexible enough to transport by messenger across rough conditions
- easy to use on a muddy, bloody battlefield
Any cipher that was prone to error in encrypting or decrypting on the battlefield or fell too easily to interception and inspection did not last long. Keep in mind that one error in encryption can render an entire message completely unreadable by the recipient.
Some of the more notable ciphers follow in the next section.
Scytale – 120 AD
This is a monoalphabetic, symmetrical cipher system. The sender and receiver must both be in possession of a cylinder of wood of exactly the same diameter. In effect, the cylinder is the key.
The sender takes a long narrow piece of fabric and coils it around the scytale. He then writes the message in standard right-to-left format on the fabric. The fabric is then removed from the scytale and looks to be just a long strip of cloth which can be scrunched up and hidden in the smallest of places for transport.
The recipient simply need to wrap the fabric around their matching scytale and the message becomes clear. While this simple cipher would fall very quickly to cryptanalysis, the premise is that only a scytale of exactly the same diameter could decrypt the message.
Vigenère – 1553
Originally described by Giovan Bellaso in 1553, the cipher was recreated a few times over the following centuries and was eventually misattributed to Blaise de Vigenère in the 19th century, whose name it still carries. This is one of the first polyalphabetic ciphers. It is still symmetrical in nature, but it was tough enough to crack that it remained in use for over three centuries.
Polyalphabetic ciphers allow the use of many alphabets during encryption, which greatly increases the key space of the ciphertext. Earlier versions of polyalphabetic ciphers required rigid adherence to the spots at which the alphabet would change. Bellaso’s implementation of this cipher allowed the sender to change alphabets at arbitrary spots in the encryption process. The signal of an alphabet change had to be agreed upon in advance between the sender and receiver, therefore this is still a symmetrical method of encryption.
The Vigenère cipher was used in practice as recently as the American Civil War. However, it's well understood that the Union repeatedly broke those messages because the Confederate leadership relied heavily on too few key phrases to signal alphabet changes.
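A minimal sketch of the Vigenère scheme for uppercase A–Z text, where the key simply selects a different Caesar shift for each letter; the plaintext and key below are the usual textbook example rather than anything from the article.

```python
def vigenere(text, key, decrypt=False):
    """Vigenère cipher for uppercase A-Z text with no spaces or punctuation."""
    out = []
    for i, ch in enumerate(text):
        shift = ord(key[i % len(key)]) - ord("A")
        if decrypt:
            shift = -shift
        out.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
    return "".join(out)

ct = vigenere("ATTACKATDAWN", "LEMON")
print(ct)                            # LXFOPVEFRNHR
print(vigenere(ct, "LEMON", True))   # ATTACKATDAWN
```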
Pigpen Cipher – 1700’s
Also known as the Freemason's Cipher, the Pigpen Cipher is another symmetrical monoalphabetic substitution cipher. Encryption and decryption are done by laying out 4 grids. Two grids contain 9 spaces like a tic-tac-toe board, and two grids resemble a large letter X and contain 4 spaces each. Together, there are 26 spaces to coincide with the 26 letters in the Latin alphabet. The sections are all uniquely identifiable by a combination of the shape of the section and the presence, or absence, of a dot in it. Messages are encrypted by using the section identifier instead of the actual letter.
I’ve created a Pigpen cipher key here:
Decryption is done by laying out the same grid and transposing the section identifier back to the letter. Therefore, a plaintext phrase of READ COMPARITECH encrypts into this series of images:
Playfair cipher – 1854
The Playfair cipher uses 26 bi-grams (two letters) instead of 26 monograms as the encoding key. That vastly increases the key space of the ciphertext and makes frequency analysis very difficult. Playfair-encoded messages are created by constructing a 5 by 5 grid of letters which is generated by a random short phrase, and then filling in the rest of the grid with non-repeating letters from the alphabet. That grid forms the key and anyone wishing to decrypt the message must reconstruct this same grid. You can infer from that the recipient must also know the same short phrase used to encrypt the message which is much harder to determine than a simple rotational number.
Astute readers will realize that 5 x 5 = 25, but there are 26 letters in the Latin alphabet. To accommodate this, the letters I and J are usually used interchangeably. Any two other letters could be used as well, but that information would have to be communicated to the recipient to ensure they decoded the message properly.
Once the grid was constructed, users only had to know 4 simple rules to encrypt or decrypt the message. It's difficult to make sense of the key in a written article, so I created a Playfair grid to illustrate. I've used the phrase READ COMPARITECH as the key phrase. After writing that out, I start writing the alphabet to fill in the rest of the grid. Remember that each letter can only be in the grid once, and I and J are interchangeable. That gives me a Playfair key like the image below. The letters in red were omitted because they already appear in the grid.
Keep in mind that the phrase READ COMPARITECH is just the random phrase used to build the grid; it is not the encrypted text. This resulting grid would be used to encrypt your plaintext.
One time pads (OTP) – 1882
A One Time Pad (OTP) refers to a symmetric encryption system using keys that are changed with every single message. If the keys truly are one-time, then the ciphertext would be extremely resistant to cryptanalysis. These keys were literally written on pads of paper originally, and since each key is only used once, the name One Time Pad stuck.
In practice, OTP is hard to deploy properly. As a symmetrical system, it requires the sender and all the recipients to have the same OTP book. It also has a significant disadvantage in that a message cannot be longer than the pad in use. If it were, then parts of the pad would have to be re-used, which significantly weakens the ciphertext to cryptanalysis.
OTPs are still in use today in some militaries for quick, tactical field messages.
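A minimal sketch of the idea using XOR and a random pad as long as the message; this illustrates the principle only and is not a field-ready implementation.

```python
import secrets

message = b"FLEE TO THE HILLS"
pad = secrets.token_bytes(len(message))                    # the one-time key, as long as the message
ciphertext = bytes(m ^ k for m, k in zip(message, pad))    # encrypt
recovered = bytes(c ^ k for c, k in zip(ciphertext, pad))  # decrypt with the same pad
print(recovered)                                           # b'FLEE TO THE HILLS'
```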
Enigma – 1914
Created by German citizen Arthur Scherbius after WW1 for commercial purposes, the Enigma machine is a polyalphabetic stream cipher machine. The machine consisted of a keyboard, a light panel and some adjustable rotors. Operators would set the position of the rotors and then type a message on the keypad. As each letter was typed, a corresponding letter would illuminate on the light pad; this was the encrypted letter that formed the ciphertext. Receivers would have to know the correct rotor settings to use, and then they performed the same process. However, as the receiver typed in each letter of ciphertext, the corresponding letter that illuminated would be the plaintext letter.
The German military enhanced the machine by adding a plugboard, and therefore considered it unbreakable and used the Enigma for everything. The Polish General Staff's Cipher Bureau broke the German military Enigma in 1932. They were able to reverse engineer the machine from information derived from the poor operational security (OpSec) of German Enigma users. However, they were unable to actually decrypt messages until the French shared Enigma information gleaned from one of their German spies.
The Polish Cipher Bureau was able to read German Enigma traffic for years until the Germans' continued advances in the system made it too difficult. At that point, just before the outbreak of WWII, the UK and France were brought into the fold, and the monitoring and decryption of Enigma traffic became part of Project Ultra.
It is generally accepted that the Allies' ability to decrypt Enigma traffic shortened WWII by several years.
SHA Family Hash Ciphers 1993 – 2012
SHA is a family of algorithms which are used for hashing rather than encryption and is published by the National Institute of Standards and Technology (NIST). The original SHA cipher published in 1993 is now designated SHA-0 in order to fit in with the naming conventions of subsequent versions.
Both SHA-0 and SHA-1 (retired in 2010) have been shown to be unable to meet the standard hash hallmarks (listed in the terminology section) and are no longer in use. HMAC-SHA1 is still considered unbroken but SHA-1 in all flavours should be discarded in favour of higher versions where practical.
Current SHA ciphers SHA-2 and SHA-3 (2012) are both still in use today.
MD5 Hash – 1991
MD5 is a hashing algorithm developed in 1991 to address security issues in MD4. By 2004, MD5 had essentially been broken by a crowd-sourcing effort which showed that MD5 was very vulnerable to a birthday attack.
MD5 fingerprints are still provided today for file or message validation. But since it is cryptographically broken, MD5 hashes can only be relied upon to detect unintentional file or message changes. Intentional changes can be masked due to the weakness of the algorithm.
Cryptography is in wide use on the internet today. A great deal of our internet activities are encrypted using TLS (Transport Layer Security) and keys are exchanged using an asymmetrical process.
Computers are exceptionally good at processing data using algorithms. Once computers arrived on the scene, cipher development exploded. Computers are not only an excellent tool for creating cryptographic ciphers, they’re also exceptionally useful for breaking cryptographic ciphers via cryptanalysis. This means that increases in computer power are always heralded by new ciphers being developed and old ciphers being retired because they are now too easy to break.
Due to this never-ending battle of computing power, computers using the internet usually support a large list of ciphers at any given time. This list of ciphers is called a cipher suite and when two computers connect, they share the list of ciphers they both support and a common cipher is agreed upon in order to carry out encryption between them. This process exists to ensure the greatest interoperability between users and servers at any given time.
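You can inspect such a list yourself; the sketch below asks the local OpenSSL build, via Python's ssl module, which suites it is willing to negotiate. The output varies from system to system.

```python
import ssl

ctx = ssl.create_default_context()
for suite in ctx.get_ciphers()[:5]:          # show just the first few entries
    print(suite["name"], "-", suite["protocol"])
```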
Ciphers such as the Enigma and DES (Data Encryption Standard) have been broken and are no longer considered safe for cryptographic use. To date, RSA (Rivest, Shamir, Adleman) and AES (Advanced Encryption Standard) are considered safe, but as computing power increases, those will also fall one day and new ciphers will have to be developed to continue the use of cryptography on the web.
Public Key Cryptography
Public Key Cryptography is an asymmetrical system in wide use today by people and computers alike. The key used to encrypt data but not decrypt it is called the public key. Every recipient has their own public key which is made widely available. Senders must use the public key of the intended recipient to encode the message. Then the recipient can use their companion secret key called the private key to decrypt the message.
RSA is the underlying cipher used in Public Key cryptography. The RSA cipher multiplies two very large prime numbers together as part of the key generation process. Its strength relies on the fact that an adversary would have to correctly factor that product back into the two prime numbers originally used. Even with today's computing power, that is not feasible in most cases. You may recall that factorization is the process of breaking a number down into smaller numbers that can be multiplied together to produce the original number. Prime numbers have only two factors, 1 and themselves. I describe Public Key Cryptography in more detail here.
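A toy sketch of the RSA mechanics with deliberately tiny primes; real keys use primes hundreds of digits long and real implementations add padding. The modular-inverse form of pow requires Python 3.8 or later.

```python
p, q = 61, 53
n = p * q                    # 3233, the public modulus
phi = (p - 1) * (q - 1)      # 3120
e = 17                       # public exponent, coprime with phi
d = pow(e, -1, phi)          # private exponent: the modular inverse of e (2753)

message = 65
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
plaintext = pow(ciphertext, d, n)  # decrypt with the private key (d, n)
print(ciphertext, plaintext)       # 2790 65
```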
Asymmetrical ciphers are slower than symmetrical ciphers, but the Public Key implementation of asymmetrical crypto has one distinct advantage: since the public key cannot be used to decrypt messages, it can be communicated to the sender without any safeguards. Thus, there is no need for the two parties to exchange keys prior to exchanging their first encrypted message.
For small things like emails, asymmetrical cryptography is fine, but for large scale encryption such as entire disks or file backups, it is too slow. Most large-scale crypto systems today use a hybrid approach; asymmetrical crypto is used to exchange symmetrical keys, and then the symmetrical keys are used for the actual encryption and decryption processes.
Given our computing power today, it may seem incredible to find out that there are some very old ciphertexts that have not yet been decrypted.
The final Zodiac Killer's Letter
The Zodiac Killer was a serial killer who terrorized California for several years in the late 60's. The killer sent 4 cipher messages to the police during this time, of which the fourth remains unbroken today.
There are some claims that people have broken that last cipher, but nothing that has stood up to scrutiny.
Three final Enigma messages
Not all Enigma messages have been decrypted yet. While there's little military value in doing so, there is an Enigma @ Home project that seeks to decrypt the few remaining messages from 1942. Much like other @ home projects such as SETI @ Home, the project uses spare CPU cycles on members' computers to attempt to decrypt the final messages.
Computing is still a young science. We're still operating off of "version 1", meaning that our computers are still limited to binary ones-and-zeros functions. Quantum computing is likely the next big thing in computing, and it will fundamentally change how computing works instead of just increasing processing power to handle more ones and zeroes. Quantum mechanics has a strange property called "superposition", which means that something can be in more than one state until it is observed. The most famous thought experiment that illustrates superposition is that of Schrodinger's Cat, where the cat in a box is both alive and dead until it collapses into one of those states upon being observed.
In computing, this means that qubits (quantum bits) can hold two states at once instead of the single state a binary bit holds. While a bit can only be 1 or 0, a qubit can be both via the concept of superposition. Not only does this make hard math, such as that used to factor large numbers, almost trivial to perform, it also may herald the end of Man-In-The-Middle attacks.
Another property of quantum transmission is the concept of “interference”. Interference is the behavior of subatomic electrons to pass through a barrier and then reconvene on the other side. Interference can only take place if nobody observes it (tree, forest, anyone?). It would therefore be theoretically impossible for someone to intercept a message passed through a quantum system without being discovered. The path of the electrons would be changed by being observed and interference would no longer occur, thus indicating the message has been observed. The best Quantum computer at this time has a few qubits, but the technology is progressing rapidly.
“Scytale” by Lurigen. CC Share-A-Like 3.0 | <urn:uuid:fb35819a-ea47-4b00-82f8-8ee7bbb9c001> | CC-MAIN-2022-40 | https://www.comparitech.com/blog/information-security/famous-codes-and-ciphers-through-history-and-their-role-in-modern-encryption/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00616.warc.gz | en | 0.939179 | 5,741 | 3.640625 | 4 |
- Money laundering can be combated via a collection of rules, processes, and technologies known as Anti-Money Laundering (AML).
- Money laundering is split into three phases (placement, layering, and integration), with various controls in place to find strange behavior related to laundering.
- Knowing your consumers, software filtering, and imposing holding periods are all anti-money laundering procedures.
The security world is evolving every day to keep businesses fraud and stress-free. The inadvertent use of the banking system for money laundering activities is a major challenge faced by the financial services industry.
As Min Zhu, Deputy Managing Director of the IMF, put it: “Effective anti-money laundering and combating the financing of terrorism regimes are essential to protect the integrity of markets and the global financial framework as they help mitigate the factors that facilitate financial abuse.”
To many people, money laundering appears to be a crime that occurs only in crooked businesses or TV shows. Unfortunately, this type of financial fraud is all too widespread and can wreak havoc on small firms, particularly fintech and financial services that allow consumers to transfer money.
Money laundering refers to the process of converting money from illegal sources into apparently legal ones. Bad actors launder the revenue they earn from crime so that it appears legally acceptable before it can be used. Laundering helps terrorists and criminals fund illicit activities, threatens global security, and dampens global economies. It can also land organizations in major trouble: hefty fines, criminal charges, damage to reputation, and negative publicity about compliance lapses, among other things. This is the reason why industries remain concerned about money laundering.
According to estimates, money laundering accounts for about 2% to 5% of global Gross Domestic Product (GDP) – i.e., about USD 2 trillion. Money laundering has significant repercussions on the global economy.
The above figures make it clear as to why money laundering has become a very important financial issue that authorities are trying to stop. Anti Money Laundering (AML) is a worldwide term to prevent money laundering and includes policies, laws, and regulations to prevent financial crime.
Plenty of new measures have been introduced to counter money laundering, and indeed many governments have established comprehensive Anti-Money Laundering (AML) regimes, but these do not come as smoothly as they should. Regulatory authorities introduced Anti-Money Laundering (AML) regulations and Counter-Terrorist Financing (CTF) policies to identify and prevent such activities.
To identify potential money laundering instances and to address compliance requirements organizations must have in-depth knowledge about how the crime works.
What is AML?
Anti-Money Laundering or AML is a set of measures performed by institutions to comply with legal requirements that help combat the laundering of money and other financial crimes.
In short, it is the set of controls that prevents money from illegal sources from being passed off as money from legal sources.
Rising trends in the AML space are as follows –
- Adoption of analytics for fraud detection, linkage detection, and detection of rogue activities.
- Concentration on digital payment-related issues.
- Acceptance of enterprise-level approaches.
- Use of third-party services such as KYC compliance and transaction monitoring.
What is an AML compliance program?
An AML compliance program integrates everything a firm does to meet the compliance norms –
- Built-in internal operations regulations.
- User-processing and vetting policies.
- Accounts monitoring and detection.
- Reporting of money laundering incidents.
The major goal of an AML compliance program is to eliminate, detect, and respond to intrinsic and residual money laundering, fraud-related risks, and terrorist financing.
To construct a robust AML compliance program, one needs to avoid non-compliance fines and follow quite a few requirements.
Let’s discuss this in detail.
How can businesses remain AML compliant?
All Anti-Money Laundering compliance programs strive to expose internal fraud, money laundering, tax evasion, and terrorist financing within the company. We have listed the three most critical dos that can help you attain these objectives.
· A compliance officer in the team
The procedure we are discussing is not easy to manage and needs trained personnel with knowledge and experience to keep the business in close compliance with the fluctuating regulations and laws.
Compliance must be the moral responsibility of every team member across all organizational structures. The workforce must comprise high skills and be qualified to report and formulate their suspicions.
· Efficient reporting
A robust reporting system helps to provide data about a money-laundering activity to the relevant authorities.
· Being alert of high-risk consumers
Organizations must assess their consumer’s risk profiles and process them accordingly, applying consumer due diligence and enhanced due diligence.
Factors that impact AML compliance
Before developing a compliance program, an enterprise must first analyze and characterize the risks involved and legal obligations.
- The dangers of money laundering that the company faces.
- Applicable local and international laws, as well as penalties for non-compliance.
- Internal company operations that are questionable.
Organizations should develop strong proposals to enhance the concept of AML compliance practices. It will make the process simpler and prevent negotiation.
How to design an AML program
- Detect suspicious activities.
- Risk assessment.
- Internal practices.
- Make due diligence your focus point.
- Assign roles and responsibilities wisely.
- Report suspicious activities.
- Guide employees to spot and correctly react to Money Laundering (ML) and Terrorist Financing (TF) activities.
- Prevention of criminal attempts.
- Independent audits.
Role of Machine Learning in AML
Machine Learning (ML) is reinventing how financial networks work, thanks to significant advances in data science. ML has lots of potential in the banking sector, particularly when it comes to spotting hidden trends and suspicious money-laundering operations.
ML helps recognize money laundering typologies, behavioral transitions in consumers, suspicious and strange transactions, and transactions of consumers belonging to similar groups, geographies, and ages, and it helps reduce false positives.
It also aids in analyzing comparable transactions for focal entities and the correlation of suspicious signals in regulatory reports.
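As a hedged illustration of the idea, the sketch below flags outlier transactions with an off-the-shelf unsupervised model; the feature names, values, and contamination rate are invented for the example and bear no relation to any real AML product.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# columns: amount, transactions_per_day, share_sent_cross_border (hypothetical features)
transactions = np.array([
    [120.0,   3, 0.0],
    [ 80.0,   2, 0.1],
    [9500.0, 40, 0.9],   # bursty, high-value, mostly cross-border
    [ 60.0,   1, 0.0],
])

model = IsolationForest(contamination=0.25, random_state=0).fit(transactions)
print(model.predict(transactions))   # -1 marks transactions worth a manual review
```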
Money launderers will continue to develop new ways to use banks for unlawful purposes. The most challenging component to implementing an effective AML policy is detecting laundering activity in a timely manner. Several innovative technology-based methods and applications (artificial intelligence solutions, machine learning solutions, etc.) are already available to detect, trace, and prevent money laundering.
Though these technological techniques will not fully eliminate money laundering, they will greatly reduce it, and financial institutions should look into using them sooner rather than later.
Stay tuned with us for more information on Anti-Money Laundering (AML) and technology! | <urn:uuid:51e785d7-327c-473d-8246-57677bc8e008> | CC-MAIN-2022-40 | https://www.fintechdemand.com/insights/finance/part-1-all-about-anti-money-laundering/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00616.warc.gz | en | 0.918556 | 1,400 | 2.84375 | 3 |
Traditional Backup System Vs Modern Data Protection Techniques
Data protection is an important part of organizational infrastructure as it ensures data safety and helps in data restoration in case of any catastrophe. Organizations use various methods such as backup and restore, disaster recovery, and the cloud to keep their information safe. Companies take safety measures according to their size and needs; a small firm selects backup software, while a large-scale enterprise implements a separate IT disaster recovery plan.
Modern enterprise data protection techniques enable companies to restore data quickly, while traditional backup systems need some time for data retrieval. Traditional backup systems lack indexing functionality, which allows users to manage their archive and backup together. Snapshots are used in both traditional and modern backup systems, but previously a separate management layer was required for them, whereas now managing array-based snapshots is integrated into the data protection process.
The first stage of data protection started with taking backups on magnetic tape drives, then moved to disk-based systems, and has now reached the cloud. The latest technologies are leveraged in data protection strategies to simplify the scheduling of routine management processes, but traditional backup systems have their own positives. Disk-based systems allow faster data recovery: users can write data and retrieve it at high speeds, and the recovery time is very short. The functionality of traditional backup systems is versatile, as it optimizes various Backup and Data Recovery (BDR) solutions with features like deduplication that help IT managers save the space otherwise consumed by unnecessary data.
However, the drawback of traditional data backup systems is that the need for hardware and space increases in proportion to data, which leads to a rise in both CAPEX and OPEX. The backups are stored at an onsite data center, so in case of a disaster, all data can be lost. The other concern with traditional systems is the limited life of storage devices.
Modern data protection techniques came into existence after the evolution of the cloud, which revolutionized the data recovery strategy as a whole. The cloud enables companies to keep their data safe at any remote location. The data generated during daily operations is handled through the cloud instead of relying on offline systems alone. This keeps data safe in case of an on-site disaster, and the company can retrieve all of its data within minutes.
Integration of Disaster Recovery as a Service (DRaaS) ensures data safety from any natural or man-made catastrophe by hosting all clients' data in the cloud. The other factor behind companies integrating the cloud into their infrastructure is its cost efficiency. The ownership cost of cloud computing is lower in comparison to disk or tape backup systems. The level of security is high, as a company's data remains unaffected by any on-site disaster. Installing cloud services is not a very complex process, as company systems can start running within minutes of installation. Other advantages of cloud systems include compatibility with all IT devices, systems and applications.
No technology is perfect, and there are drawbacks at a certain stage. Cloud data protection systems have some performance issues. Cloud-based backup and data recovery depend entirely on the internet connection of the cloud service provider; any disturbance in the connection can disrupt routine backup schedules and harm recovery objectives. Cloud computing is also vulnerable to security concerns. The cloud service provider uses data encryption to move information from one end to another, yet information at the data center remains vulnerable to cyber threats.
Traditional backup and modern data protection address data security woes in their own ways. Companies also have the option to integrate hybrid data protection systems into their infrastructure, in which they save their data in the cloud while keeping a backup of important data on-site. So, in case the vendor's network is down, the company can retrieve data from its on-site systems, and during any on-site disaster it can retrieve data conveniently from the cloud.
Like Python lists and Python tuples, we can access a specific item in a Python dictionary. To do this, we use both the index operator and the get() method. Besides, we can also access the complete keys and values in a dictionary. Let's show these methods with different Python coding examples.
You can also check the complete Python Dictionary Lessons
Firstly, let’s see how to access a value of a key:value pair.
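The original code sample has not survived in this copy, so here is a reconstructed example; the character dictionary and its contents are hypothetical.

```python
character = {"name": "Gandalf", "class": "Wizard", "level": 20}

print(character["name"])    # index operator: the key goes inside square brackets
```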
The output of this python code will be:
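Assuming the reconstructed example above, the printed output is:

```
Gandalf
```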
You can learn all Python Dictionary Methods
Here, we have used the index operator. We add the key inside square brackets and access the value of this key. We can get the same result with the get() method.
We can use these methods in networking like below:
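A reconstructed networking example; the device dictionary and its fields are hypothetical.

```python
device = {"hostname": "R1", "vendor": "Cisco", "ip": "10.0.0.1"}

print(device["ip"])           # index operator -> 10.0.0.1
print(device.get("vendor"))   # get() method -> Cisco
```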
These are the methods used to access any value of a key:value pair. Now, let’s see how to access both all the keys and all the values in a python dictionary.
You can learn all How to add a Python Dictionary
Below, we will access all the keys of a character dictionary.
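Reusing the hypothetical character dictionary from the reconstruction above:

```python
character = {"name": "Gandalf", "class": "Wizard", "level": 20}

print(character.keys())   # keys() returns a view of all keys
```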
The output of this code will be:
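With the reconstructed dictionary, the output is:

```
dict_keys(['name', 'class', 'level'])
```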
As you can see above, all the keys are listed. Below, we will access the keys of a device dictionary.
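The same call on the hypothetical device dictionary:

```python
device = {"hostname": "R1", "vendor": "Cisco", "ip": "10.0.0.1"}

print(device.keys())   # dict_keys(['hostname', 'vendor', 'ip'])
```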
Now, let's access the values of the character and device dictionaries.
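Again using the hypothetical character dictionary:

```python
character = {"name": "Gandalf", "class": "Wizard", "level": 20}

print(character.values())   # dict_values(['Gandalf', 'Wizard', 20])
```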
The second networking example will be like below:
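And the values of the hypothetical device dictionary:

```python
device = {"hostname": "R1", "vendor": "Cisco", "ip": "10.0.0.1"}

print(device.values())   # dict_values(['R1', 'Cisco', '10.0.0.1'])
```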
We can access the complete items in a Python dictionary with the help of the items() function. This function returns the key:value pairs as tuples in a list-like view.
Below, we will access the items of the dictionaries, and as an output, we will receive tuples in a list.
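Using both hypothetical dictionaries from the reconstructions above:

```python
character = {"name": "Gandalf", "class": "Wizard", "level": 20}
device = {"hostname": "R1", "vendor": "Cisco", "ip": "10.0.0.1"}

print(character.items())
# dict_items([('name', 'Gandalf'), ('class', 'Wizard'), ('level', 20)])
print(device.items())
# dict_items([('hostname', 'R1'), ('vendor', 'Cisco'), ('ip', '10.0.0.1')])
```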
In this lesson, we have learned how to access the keys and values of a Python dictionary. With these different examples, you can increase your experience with Python dictionary access methods.
While a variety of highly visible newsworthy events were occurring during 2020, a critical advancement in the world of cybersecurity quietly passed through the House and Senate to be signed into law. The Internet of Things Cybersecurity Improvement Act of 2020 was signed by the president on December 4, 2020. It requires all internet of things (IoT) devices owned or controlled by the government to meet specific minimum security standards. This includes devices purchased with government money.
The IoT bill was passed by the House of Representatives in September and unanimously approved by the Senate in mid-November to be signed by the president in early December, making the route to existence look deceptively easy. In reality, over three years of bipartisan efforts went into creating the IoT bill. Early drafts were littered with waivers and loopholes that eliminated the bill’s potential effectiveness. These versions never made it to the Senate floor.
During a year when a bipartisan agreement seems like a feat in itself, the unanimous passing of the bill could have likely gone either way, but growing threats pointed to the dangers of leaving the risks unaddressed. In the nearly four years since the bill was introduced, the IoT has more than quadrupled in growth and has been introduced to homes and a variety of industry settings. With no sign of IoT growth slowing down, ignoring security concerns is like leaving the door to many types of sensitive information wide open.
While minimal security for one group of devices might seem like a small advancement, it provides a powerful step toward a secure future. There’s no doubt the internet of things is rapidly growing and will continue to do so with the implementation and spread of 5G connectivity. The IoT makes daily life more efficient and productive and provides crucial advancements in certain industries, but explosive technological growth isn’t without considerable risks. When manufacturers are pushed to market at such a rapid rate, security becomes an afterthought. It’s considered an expensive add-on that delays progress. Still, IoT devices have the same security vulnerabilities as all other connected devices. Furthermore, these devices are designed to interact with wireless networks and a variety of connected devices that house sensitive data. Leaving these devices susceptible to security weaknesses can potentially provide a point of vulnerability in entire networks.
It’s true that the IoT bill isn’t a comprehensive solution for all the potential vulnerabilities of IoT devices, but it’s a step in the right direction. Setting security standards for government-issued devices provides a framework that manufacturers for commercial products will likely follow. Certain provisions in the IoT bill have the potential to speed up this natural progression as well.
To understand the importance of the IoT Cybersecurity Improvement Act, it’s important to get a thorough understanding of the scope of the internet of things and the potential risks that exist.
What is the Internet of Things?
The internet of things (IoT) is a term used to describe objects (things) that are embedded with the technology to connect and exchange data with other devices and systems through the internet. While many people are familiar with the term and have a vague definition of devices that fall into the category, it’s rare to understand the full scope of objects that use the technology and how it actually works.
Typically, when you think of things that are connected to the internet, your devices with a screen and keyboard come to mind. These devices don’t fall under the IoT umbrella. Instead, the IoT describes the growing number of electronics that aren’t computing devices but are connected to the internet to send and receive data. This covers an enormous group of things used in homes, vehicles, and a variety of vastly different industries. Most often, IoT devices have no screen or keyboard and communicate information with little or no human interaction.
For many people, a smart home is a quick fix explanation for IoT devices. While it’s accurate, it barely scratches the surface of the sheer volume of items that use the technology. For instance, many people use smart light fixtures and energy-saving appliances that use sensors to detect and relay temperature or share other information, though these individuals wouldn’t consider the home a “smart home.” Devices in the IoT category range so widely in use that it’s practically impossible to recognize the full scope. For instance, fitness trackers are IoT devices, but so are some pacemakers and automated vehicles that move products in warehouses.
In the same way that computers, the internet, and smartphones have changed the way people connect with the world, the IoT shows promise to provide even more advanced connections and a more streamlined, convenient lifestyle. However, like all computing devices, IoT devices need security to prevent them from becoming more of a danger than a productive, useful tool.
4 Ways IoT Devices Can Pose Cybersecurity Risks
Although devices like thermostats and fitness trackers likely have little need for security standards, many IoT devices are connected to an organization’s entire network. This spells danger for any facility required to store sensitive information. If a device has inadequate security, it can provide hackers with an entry point and the potential to move laterally within a network to introduce malware or DDoS attacks. These are some of the most common reasons IoT devices present security issues.
- Hardcoded passwords that aren’t changed after purchase: These passwords are used on a large scale and once they’re disclosed, can provide widespread access to many networks.
- Devices with the inability to update: Running outdated versions of technology eliminates the ability to patch vulnerabilities and also leave these vulnerabilities exposed.
- Communication between devices: IoT devices can communicate with each other across secure network connections without human intervention, potentially allowing insecure endpoints to expose sensitive data.
- Lack of privacy protection: Many IoT devices collect and store user’s personal information to complete a process. This personal information can be compromised if weak security measures are bypassed.
The weaknesses found in IoT devices have the potential to be exploited on a large level. The Mirai DDoS botnet attack is the clearest illustration of these capabilities. In fact, it may have been the proof that prompted the acceptance of the IoT bill.
A Summary of the IoT Cybersecurity Improvement Act
A clear definition of the scope of IoT devices makes it easier to see why security is such a big deal. When you consider the implications of medical devices giving hackers an entrance into the sensitive data of an entire hospital, you may wonder why these measures weren’t introduced several years ago. Unfortunately, in the world of cybersecurity, a threat often has to be realized before security measures are taken. Now, that we are becoming more aware of the potential security threats of IoT devices, security standards are beginning to take place. While the IoT Cybersecurity Improvement Act isn’t a complete solution, it provides the following security standards.
- Requirements for the National Institute of Standards and Technology (NIST) to publish standards and guidelines for the use and management of IoT devices owned or controlled by the federal government, including minimum security requirements for managing cybersecurity risks
- Requirements for the Office of Management and Budget (OMB) and the Department of Homeland Security (DHS) to review the federal information security policies based on NIST security guidelines and make changes to comply with NIST recommendations
- The security standards will be reviewed and revised as necessary every five years by NIST, and OMB policies will be updated to reflect new NIST guidelines
- Requirements for NIST to publish guidelines for IoT vendors to report security vulnerabilities upon discovery and the resolutions of these vulnerabilities when they’re developed
- Requirements for OMB and DHS to develop and implement policies for reporting security vulnerabilities based on NIST guidelines
- Agencies are prohibited from procuring or using IoT devices that don’t comply with NIST guidelines
What It Really Means
The Internet of Things Cybersecurity Improvement Act of 2020 begins by defining IoT devices covered by the bill. The official definition describes IoT devices as physical objects equipped with at least one sensor or actuator for physical interaction and at least one network interface that can function on their own without acting as a component of another device. Not surprisingly, IT devices like smartphones, computers, and laptops are excluded. Also excluded are devices needed for national security and those required for research.
Instead of creating specific security for government IoT devices, the bill appointed NIST to create the framework and standards for IoT vendors and users. This isn’t surprising since it provides a fluid system that can keep up with the changes that constantly occur in technology growth. Still, NIST compliance requirements send an important message that suggests the requirements will be adopted across the spectrum of IoT devices.
NIST standards have long provided the security framework for federal agencies and businesses in a variety of industries. Maintaining NIST compliance provides industries with a common language that allows them to keep up with federal, state, and local compliance laws. It also provides vendors with an essential standard of manufacturing and consumers with a safety net when making purchases that could compromise sensitive information. Although the current bill only requires NIST compliance within federal agencies, vendors producing IoT devices are likely to adopt these standards for all devices instead of creating multiple versions of the same product.
Receiving and Disclosing Security Vulnerabilities
Besides creating minimum standards for IoT cybersecurity, NIST is tasked with outlining guidelines for a system to report potential security vulnerabilities and the resolutions to these risks. This development addresses some of the biggest security issues that plague IoT devices. Since vendors will be required to provide solutions for potential cybersecurity risks, IoT devices will likely be developed with the ability to update for better security measures and with provisions to apply patches as needed.
While the details haven’t been disclosed yet, it’s likely vendors will have to establish programs to receive information about potential security risks and publicize the solutions for these vulnerabilities. The ability to share this information provides widespread protection for all agencies, companies, and individuals using a product.
Since the bill only applies to government devices, these disclosures could present new challenges for the private sector. As security vulnerabilities are made public, hackers and other cybercriminals could have an opportunity to exploit this information. While vulnerabilities will be addressed immediately within federal agencies, hackers may use this new information to target private and business sectors. There’s little doubt that vendors are aware of this potential, and it could lead to stronger security measures to be applied automatically to new IoT devices designed for the private sector as well.
Alternate and Effective Methods
While the IoT bill doesn’t provide a complete solution, it’s the only legislature to provide security regulations for IoT devices. While each part provides early steps for infrastructure to complete IoT security, a provision in section 7 creates an interesting burden for providers of IoT devices. After providing waivers related to national security and research, Section 7 (c) waives devices “secured using alternate and effective methods appropriate to the function of the device.”
While the bill doesn’t provide specific language that defines alternate methods, it could suggest that the burden of security testing and identifying security vulnerabilities will ultimately fall to the vendors of IoT devices. This would likely require the introduction of third party testing that includes assessing the risks for connected software. These additional requirements would likely provide added incentive for all IoT devices to meet NIST compliance standards throughout the development process.
Security measures are most effective when applied quickly, and the IoT bill has created a series of deadlines for the requirements to take effect. Here’s how quickly you can expect action to take place on the new requirements.
- March 5, 2021 marks the 90 days provided for NIST to develop and publish security standards for IoT devices.
- June 3, 2021 marks the 180 days provided for NIST to develop and publish guidelines for receiving and reporting potential security vulnerabilities of IoT devices used by federal agencies.
- September 5, 2021 marks the six-month window provided for OMB and DHS to review, revise, and implement the minimum security standards outlined by NIST.
- December 5, 2022 marks the two-year deadline provided by the bill which prohibits federal agencies to enter or renew a contract involving IoT devices that aren’t compliant with the NIST security standards and guidelines. It also marks the deadline provided for OMB and DHS to implement the policies defined by NIST to address security vulnerabilities.
One of the most notable things about the IoT cybersecurity improvement bill is the fact that it only covers IoT devices purchased and used by the federal government. While this isn’t ideal, it’s an effective way to get the right security measures in the door. The U.S. government’s purchasing power creates a powerful incentive for vendors of IoT devices to develop all devices in compliance with the guidelines outlined by NIST. Although business and personal IoT devices aren’t included in the bill, it’s likely these items will organically follow the same path.
Applying Security in High-Risk Industries
While it’s clear the IoT bill doesn’t immediately affect personal IoT devices or those used in many business industries, the implications are murky for some industries that receive federal funding. Healthcare facilities and higher education institutions are heavily affected by cybersecurity risks, malware, and potential DDoS attacks. Many of these organizations also fall under a variety of federal agencies and could be subject to the new compliance regulations immediately.
Federally Funded Hospitals
Federal hospitals are those that are run and funded by the federal government. Veteran’s Administration (VA), Department of Defense (DOD), and the Department of Health and Human Services (DHHS) run federally funded hospitals. These hospitals follow compliance requirements for both federal agencies and hospitals. However, many other hospitals and healthcare facilities are funded by federal government agencies and likely will be impacted by government regulations.
While IoT devices have provided a variety of personal conveniences for consumers, the implications for these devices in the healthcare field have exploded. During the midst of a global pandemic when remote access to healthcare has become a necessity in practically every area, these devices have provided essential care that might have otherwise been impossible. Unfortunately, healthcare facilities aren’t immune to cyberattacks and IoT devices have the potential to provide new vulnerabilities. If the new regulations are immediately observed in healthcare facilities, many of these vulnerabilities will be addressed.
Higher Education Institutions
Colleges and universities handle tremendous amounts of academic data and sensitive personal and financial information of thousands of students and faculty members. They also have massive networks that are easily accessible to students and staff. This makes these institutions a prime target for cyberattacks.
Classrooms in K-12 schools and higher education institutions have been taking advantage of the learning opportunities and convenience provided by mobile devices for decades. Personal and school-provided devices are connecting to education networks both in the classroom and at home. The virtual learning landscape introduced by restrictions related to the COVID-19 pandemic amplified this use exponentially. Educational environments are also accustomed to the use of IoT devices. However, certain malware attacks target IoT devices like printers, routers, IP cameras, and personal devices.
Colleges and universities are prime spots for the use of IoT devices. Innovations in technology provide students with convenient living upgrades, assisted public travel, and new learning opportunities. One program even implemented a system that allows students to link to printers or projectors simply by snapping a smartphone picture of the device. This also provides a wealth of devices with potential security vulnerabilities for hackers to exploit.
Higher education institutions fall under a variety of government regulations designed to protect both schools and financial institutions. They have also been subject to other NIST federal compliance regulations. Historically, higher education institutions have been required to follow federal compliance regulations based on federal funding and interaction with the Department of Education (ED). For instance, FERPA and PPRA apply to schools that receive funds or are under an applicable program of ED. However, GLBA compliance is mandated for schools that receive federal funding. Based on past compliance regulations for colleges and universities, it seems likely these institutions will fall under the IoT compliance bill.
Preparing for IoT Compliance
If you’re responsible for cybersecurity in any organization that receives federal funding, you’ll likely be directly affected by the IoT bill. While this means a variety of industries are subject to additional compliance regulations and potentially cumbersome procedures to maintain them, it speeds up the complete implementation of overall IoT cybersecurity improvements. Although it’s impossible to predict the exact safety measures NIST will provide for IoT vendors, it is possible to start enacting certain safety measures to help propel a smooth transition. During the months leading to the reveal of new regulations, prepare your organization with these steps.
- Establish a policy to define who is allowed to introduce new devices to the system and the types of devices that can be used.
- Educate users. Everyone, from students in colleges and universities to employees in a variety of industries, can appreciate the convenience supplied by new technology. All users should be educated about the potential risks of these devices.
- Strengthen Security. Cybersecurity is a massive undertaking that requires immense amounts of data to be categorized and translated into digestible information. Investing in analytics and monitoring tools can provide security solutions that are impossible to implement manually.
Bitlyft Cybersecurity is an experienced cybersecurity organization with extensive experience working with the intense demands of cybersecurity compliance for a variety of industries. Higher education systems, manufacturing industries, and financial services are often subject to stringent compliance regulations that can lead to serious consequences when unobserved. It’s our goal to provide customized cybersecurity systems to organizations to make NIST compliance and all security regulations a simple process that grows with your company. Get in touch to learn how we can help you prepare for the regulations of the 2020 IoT bill. | <urn:uuid:dfdc3e7a-bef2-4d42-9ef1-12c24172cb6c> | CC-MAIN-2022-40 | https://www.bitlyft.com/resources/how-the-internet-of-things-cybersecurity-improvement-act-is-the-first-step-toward-complete-iot-security | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00616.warc.gz | en | 0.94051 | 3,637 | 2.84375 | 3 |
We all know what it means to “read between the lines” in a figurative sense, but before we used modern technology to communicate with one another, people sometimes took it literally, such as by writing secret messages in invisible ink between the lines of a seemingly normal letter.
The technique, whereby the author of a message hides secret information inside something that looks innocent on the surface, is known as steganography, and it is almost as old as writing itself. Unlike cryptography, which scrambles the message to make it unreadable without the decryption key, the purpose of steganography is to conceal from prying eyes the very existence of the message. As with many other information-handling methods, steganography is now used in digital technologies, too.
How does digital steganography work?
A secret message can be hidden in almost any digital object, be it a text document, license key, or even file extension. For example, the editors of Genius.com, a website dedicated to analyzing tracks by rap artists, used two types of apostrophes in their online lyrics that, when combined, made the words “red handed” in Morse code, thereby protecting their unique content from being copied.
One of the most convenient “containers” for steganographers happens to be media files (images, audio, video, etc.). They are usually quite large to begin with, which allows the added extra to be meatier than in the case of, say, a text document.
Secret information can be written in the file metadata or directly in the main content. Let’s take an image as an example. From the computer’s point of view, it is a collection of hundreds of thousands of pixels. Each pixel has a “description” — information about its color.
For the RGB format, which is used in most color pictures, this description takes up 24 bits of memory. If just 1 to 3 bits in the description of some or even all pixels are taken up by secret information, the changes in the picture as a whole are not perceptible. And given the huge number of pixels in images, quite a lot of data can be written into them.
The left-hand image has no hidden message; the right-hand image contains the first 10 chapters of Nabokov’s Lolita
In most cases, information is hidden in the pixels and extracted from them using special tools. To do so, modern steganographers sometimes write custom scripts, or add the required functionality to programs intended for other purposes. And occasionally they use ready-made code, of which there is plenty online.
How is digital steganography used?
Steganography can be applied in computer technologies in numerous ways. It’s possible to hide text in an image, video, or music track — either for fun or, as in the case above, to protect a file from illegal copying.
Hidden watermarks are another good example of steganography. However, the first thing that comes to mind on the topic of secret messages, in both physical and digital form, is all manner of secret correspondence and espionage.
A godsend for cyberspies
Our experts registered a surge in cybercriminal interest in steganography 18 months ago. Back then, no fewer than three spyware campaigns swam into view, in which victims’ data was sent to C&C servers under the guise of photos and videos.
From the viewpoint of security systems and employees whose job it is to monitor outgoing traffic, there was nothing suspicious about media files being uploaded online. Which is precisely what the criminals were counting on.
Subtle memes by subtle means
Another curious piece of spyware, meanwhile, received commands through images. The malware communicated with its cybercriminal handlers through the most unlikely source: memes posted on Twitter.
Having gotten onto the victim’s computer, the malware opened the relevant tweet and pulled its instructions from the funny image. Among the commands were:
- Take a screenshot of the desktop,
- Collect information about running processes,
- Copy data from the clipboard,
- Write file names from the specified folder.
Media files can hide not just text, but chunks of malicious code, so other cybercriminals began to follow in the spies’ wake. Using steganography does not turn an image, video, or music track into full-fledged malware, but it can be used to hide a payload from antivirus scans.
In January, for example, attackers distributed an amusing banner through online ad networks. It contained no actual advertising, and looked like a small white rectangle. But inside was a script for execution in a browser. That’s right, scripts can be loaded into an advertising slot to allow, for example, companies to collect ad-viewing statistics.
The cybercriminals’ script recognized the color of the image pixels, and logged it as a set of letters and numbers. This would seem a rather pointless exercise, given that there was nothing to see but a white rectangle. However, seen through the eyes of the program, the pixels were not white, but almost white, and this “almost” was converted into malicious code, which was duly executed.
The code pulled from the picture redirected the user to the cybercriminals’ website. There, the victim was greeted by a Trojan disguised as an Adobe Flash Player update, which then downloaded other nastiness: in particular, adware.
Detecting steganography ain’t easy
As expert Simon Wiseman noted at RSA Conference 2018, quality steganography is extremely difficult to spot. And getting rid of it is also no picnic. Methods exist for embedding messages in images so deep that they remain even after printing and rescanning, resizing, or other editing.
However, as we already mentioned, information (including code) is extracted from images and videos using a special tool. In other words, media files by themselves do not steal or download anything from or to your computer. Therefore, you can secure your device by protecting it against malware components that hide text or malicious code in media files and extract it from them:
- Be in no hurry to open links and attachments in e-mails. Read the message carefully. If the sender’s address or the content looks dubious, better to ignore it.
- If you need to download something, always use trusted sources — for example, download apps from official stores or developer websites. The same goes for movies and music — do not download anything from unknown resources.
- Use a robust security solution. Even if it fails to recognize image-based code, it can catch suspicious actions by other malware modules. | <urn:uuid:6318efb0-d262-4173-bdeb-b8548222142d> | CC-MAIN-2022-40 | https://www.kaspersky.com/blog/digital-steganography/27474/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00616.warc.gz | en | 0.929694 | 1,387 | 3.484375 | 3 |
Platform.sh announced it has partnered with MongoDB.
The domain name system was designed in the early days of the Internet – well before the web browser was invented and well before the Internet entered the commercial realm. At that time, every host on the Internet maintained a file containing the name and IP address of every other host. As the number of hosts grew and the rate of new hosts joining the Internet accelerated, it became apparent that this would soon be unworkable. The IETF went to work and came up with Domain Name System (DNS) – a distributed, hierarchical directory service containing the names and IP addresses of Internet hosts. This was circa 1980.
Implicit in the design of DNS were some assumptions:
■ Simple one to one (host to host) communication for a transaction.
■ One location (host) for a given piece of content or service.
Since its original release, DNS has had several updates, primarily focused on addressing issues of scale and security. These changes made maintenance and update of the records in the global DNS somewhat more efficient and helped preserve the integrity of the records. However, those improvements do not address the changing nature of how applications actually work in the modern Internet.
The Widening Gap Between the Modern Internet and Traditional DNS
Changes in infrastructure, applications, and increased demands for speed and scale have exposed areas where DNS is lagging behind:
1. Content and services are hosted in multiple locations. With the globalization of services and huge increases in demand, enterprises are hosting content on multiple CDNs, in multiple data centers and on multiple servers. Traditional DNS does not natively provide a mechanism for selecting the best performing destination for each end user.
2. A single transaction or rendering of a web page can involve assembling disparate content from multiple locations. The cumulative effect of multiple DNS lookups on performance can seriously impact user experience.
3. Ever lower tolerance for delay and unprecedented scale. Consider a globally available service with 30 million users. If the infrastructure delivers good quality of experience 98% of the time, then 600,000 users are not having a good experience. DNS was designed to be good enough at a time when good enough had an entirely different meaning.
4. Cloud infrastructure (network, compute and storage) is dynamic and automated. DNS was designed for a relatively static world where manually editing DNS text files could keep up with moves, adds and changes.
Most DNS implementations on the Internet and within private networks are based on traditional platforms such as BIND, djbdns, Power DNS, gdnsd and NSD. These include deployments by enterprises as well as managed DNS services. Many deployments are customized with non-standard additions that address some deficiencies in the basic platform.
As an example, base DNS does not offer a mechanism for detecting whether a site is up or down before directing the user to that site. Many providers have customized their implementations to support this functionality. However, applying after-market functionality to legacy platforms is complex and time-consuming. In addition, even with customizations, many capabilities that would improve performance and efficiency are simply out of reach using traditional platforms.
The Next Generation DNS
DNS is the first decision and most important point in the process of deciding where to direct an end user request, but most DNS implementations are not instrumented to optimize the answer. They typically are only able to direct the user to the geographically closest server that is not down. However, the geographically closest server may not be the best option for responding to the user request. The server may be overloaded, the network connection to that server may be heavily congested, or primary links may be down. There may be business considerations, such as the need to fulfill bandwidth commits or to avoid overages. A modern DNS supports the advanced routing capabilities to deliver optimized responses based on real-time network and server conditions, real user monitoring (RUM) data, as well as the capability to provide responses based on business logic.
Security vulnerabilities and patch management comprise a tax on IT organizations. A self-managed DNS is subject to that tax. Managing and patching vulnerabilities in a timely manner that is transparent and does not affect system availability is an operational challenge. This challenge is compounded where there is custom code built on top of Open Source.
Today, there are managed DNS solutions available, both for Internet and private, intranet-only services. Because they are fully managed, these solutions mitigate security exposure and reduce operational overhead. The DNS provider takes responsibility for security patches, updates, health monitoring and general support. They typically include full-time monitoring and promptly apply security fixes and patches to the underlying operating system and libraries without impacting system availability.
DNS and DevOps
In the last few years, there have been rapid changes in application development and deployment processes. As DevOps teams roll out applications into dynamic, software-defined environments, underlying services such as DNS need to be well integrated. Open Source DNS solutions were developed long before these changes came about. As a result, they lack the native API support needed to support modern DevOps environments and infrastructure automation. This adds overhead and can be a drag on new service velocity.
Modern DNS solutions are designed with an "API-first" approach that supports automated record management and service discovery, combined with single-pane-of-glass management across the infrastructure. This takes DNS off the critical path and allows organizations to focus scarce IT resources on activities that are core to their business.
DNS was designed when the Internet was less complex, infrastructure was relatively static and demands for speed and scale were orders of magnitude less than today. It is remarkable how well the original design has held up, but increasingly its limitations are emerging as the "long pole in the tent" in multiple areas. Leading edge online companies that depend on delivering their services with speed, scale and agility were the early adopters of advanced DNS solutions and now it is moving to the mainstream.
Jonathan Lewis is VP of Product for NS1. | <urn:uuid:ffd6dad4-f6a6-45a6-a5ff-1e98d33e86da> | CC-MAIN-2022-40 | https://www.devopsdigest.com/devops-dns-domain-name-system | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00616.warc.gz | en | 0.958468 | 1,225 | 3.015625 | 3 |
Stepan Popov - stock.adobe.com
As populations grow and migration increases, the future efficiency of urban transport systems across the globe will depend on collaboration as much as technological innovation.
According to Swarna Ramanathan, an associate partner at McKinsey, cities are projected to contain 60% of the global population and account for 65% of all economic growth by 2030.
This is expected to place a massive strain on already-congested urban transport systems, which are currently causing billions to be lost in GDP each year.
Between Britain, Germany and the United States, for example, the economic impact of congestion totalled $416bn in 2017 alone, or $975 per person, according to data collected by software and data company INRIX.
“We see more and more the need to take a ‘system-level’ approach, not just a technology or market-focused approach,” said Ramanathan, who outlined four major mobility trends: a shift to electric vehicles, connectivity between different modes of transportation, autonomous driving and shared mobility.
Presenting on the evolution of urban mobility in Asia at the recent UK Asia Tech Powerhouse conference, which was hosted at Royal Albert Dock, Ramanathan said an integrated approach is needed to properly understand the impact of these trends, adding that the piecemeal introduction of any new technologies or systems will lead to even more problems:
“The technology is out there but a coordinated plan to introduce it is needed. We found that if you just introduce, for example, shared mobility and autonomous vehicles, we actually see an increase in things like congestion, and that has the same impact everywhere.”
Read more about transport
- Around the world, smart city programmes combine IT with internet-connected devices – from waste management to smart grids – which enhances municipality management.
- The Department for Transport (DfT) has published its strategy for the future of urban mobility, including priorities for 2019, the launch of a regulatory review and a £90m transport innovation fund.
- The Department for Transport (DfT) has set out an action plan to increase its spend with small and medium-sized enterprises (SMEs) to 33% by 2022, and to support small and innovative companies through grants.
Therefore, collaboration between the public and private sector will be key to understanding the issues around changing urban mobility and how it should be implemented.
“Understanding what the key political issues driving these urban areas are, and thinking about how these can be solved, and then building on strategies to help fix those problems is vital. It’s not about just pumping solutions into markets,” said Reuben Dass, assistant manager of KPMG’s mobility team, who was speaking along with others quoted on a panel about future-proofing cities.
However, he added that public and private sector partnerships often lead to what he calls action paralysis. “No one wants to make the first move and set a stake in the ground about what direction to travel in,” he said.
Isabel Dedring, global transport leader at Arup and former deputy mayor of London, said it’s important for senior leaders from across the public and private sector to forge personal connections with one another to deal with action paralysis.
“That’s how electric buses and taxis came in at scale, because these personal relationships were formed and these people came together and decided to work together to deliver something, underpinned by some nice proactive policy making,” she said.
“You need a ‘burning platform’ of some type. You need an issue that is relevant and then those individuals need to come together and work jointly, one team around a table, as opposed to sending stuff back and forth or shouting at each other in the newspapers.”
According to Ramanathan and the panelists, a lack of infrastructure is another major barrier preventing these changes. “If you look at infrastructure and autonomous driving, they go together hand in hand,” she said. “You cannot have autonomous driving without a certain level of infrastructure, and it’s the same for electric vehicles.”
This was corroborated by Robert Hamilton, director of utilities and infrastructure at Power Sonic Corporation, which is currently developing electric vehicle infrastructure using renewable energy.
“There is simply not enough power going to all these EV charging points,” he said. “We’re offering a different solution using green tech, solar and wind to charge cars, which is happening in UK now – there is no point having a car with a 50 mile radius.”
Ramanathan points to the case of the China, which is leading the world in electric vehicle penetration:
“Even in a base case, we expect about 35% of the new car sales to be electric cars, and in a breakthrough case, which is looking more and more likely as we track these projections in real time, that at least 50% of the new cars sold in China could be electric vehicles – that’s millions and millions of cars,” she said.
Now, Ramanathan said the Chinese government is investing heavily in electric vehicle infrastructure to meet this growing demand.
Having divided the nation into three zones, the Chinese government is expecting 2.5 million charging points to be installed in zone one alone by 2020, which will serve a projected 2.7 million extra electric cars on the road.
Wealth and spending
However, not all urban areas will have the same social conditions, affecting which technologies can actually be adopted in that locality, said Ramanathan.
For example, high density areas with high GDP means that, generally, people will have the economic ability to adopt new technologies like autonomous cars when the infrastructure is ready, whereas developing areas with high density but low GDP will be more likely to adopt electric vehicles or ride sharing as cheaper, less infrastructure-intensive alternatives.
“Another area is the developed suburban area; cities which are much more in a sprawl like Melbourne or Huston but have good GDP. Here, we think car ownership will still prevail because people have the luxury of space to own a car and the distances prohibit them from not having one.”
Dass added that while innovation, technology and mobility have the ability to increase social inclusion, the investments being made at the moment are not necessarily focused on that goal.
Looking at the British government’s spending on transport over the last ten years bears these imbalances out. In August 2018, for example, the Institute for Public Policy Research North found that Londoners enjoyed an annual average of £708 of transport spending per person, while just £289 was spent for each person in the north of England.
According to Dedring, however, social equity and inclusion has taken centre stage since Trump and Brexit. “Suddenly everyone has woken up to the idea that maybe we need to care about people who are less privileged, who have less access to the corridors of power, whereas before it was more like ‘I need to be seen to be doing something’.
“It’s a fantastic opportunity for the tech industry, for the new mobility industry, to not just come up with a product, but articulate how it could be implemented and rolled out in a way that specifically tackles these social issues, which isn’t just saying ‘oh, we can share the car, so it must be good for people with less money’ – well, not if it’s not going to the places where those people live, and not if it’s still too expensive,” she said. | <urn:uuid:11d43f98-f018-4ddf-be55-f45e1e85b30e> | CC-MAIN-2022-40 | https://www.computerweekly.com/feature/Collaboration-is-key-to-maintaining-urban-transport-efficiency-as-cities-grow | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00016.warc.gz | en | 0.959541 | 1,580 | 2.78125 | 3 |
There is a new trend for developing tiny robots, involving technologies from drones to pills.
Potential uses include surgery, biomedicine, surveillance and rescue work.
Here are five of the most notable examples of tiny robots.
Copyright: weforum.org – “Five of the world’s tiniest robots”
Allow me to take you on a trip down my memory lane. As a young lad, a film I saw captured my imagination: Fantastic Voyage, a 1966 release about people shrunk to microscopic size and sent into the body of an injured scientist to repair his brain. The idea struck a chord with me. I envisioned one day science would be able to create some sort of miniature machine that performs medical procedures from the inside.
Fast forward several decades into the 21st century, when I started my career as a robotics researcher taking inspiration from neuroscience to implement artificial perception systems. I thought of robots as machines that range from the size of a pet animal to big devices designed to carry out heavy-duty chores. However, I soon started to hear the first hints about research into miniature robots playing exactly the type of role the miniature scientists in Fantastic Voyage acted out. Did this mean that what I imagined as a child was about to come true?
Recently, a team of researchers from Stanford University, California, achieved the first milestone towards the development of 7.8mm wide origami robots: a proof-of- concept prototype. They dubbed it a millirobot. The robot uses the folding/unfolding of Kresling origami to roll, flip and spin. These robots are operated wirelessly using magnetic fields to move in narrow spaces and morph their shape for medical tasks, such as disease diagnosis, drug delivery, and even surgery. They are a part of a new trend in what is called “tiny robot” research.
The range of technologies and uses for tiny robots is broad, from drones to pills, and from surveillance and rescue to biomedicine.
Here are five outstanding examples of tiny robots:
1. Black hornet spy drones
Designed and commercialised by American tech conglomerate Teledyne to give foot soldiers covert awareness of potential threats. It’s small enough to fit into an adult’s palm and is almost silent. It has a battery life up to 25 minutes and a range of up to 2km. These drones transmit live video and high definition images back to the operator. They cost $200,000 (£165,000).[…]
Read more: www.weforum.org | <urn:uuid:c8888b73-335c-4172-995b-859704452639> | CC-MAIN-2022-40 | https://swisscognitive.ch/2022/06/22/five-of-the-worlds-tiniest-robots/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00016.warc.gz | en | 0.941214 | 531 | 3.15625 | 3 |
Many economists, including Paul Krugman and Nouriel Roubini, have argued that the European Monetary Union is in trouble because of the fiscal difficulties of a few of its member countries. Some have predicted that the euro will fail.
Because the prospects for a future single global currency depend upon the continued success of the euro and the currencies of other monetary unions, I discussed the euro with Morrison Bonpasse, president of the Single Global Currency Association, to get his insight into this situation.
Theodore F. di Stefano (TdS): What’s wrong with the euro, and why are some economists looking askance at it?
Morrison Bonpasse (MB):
Very little. The problem is that it’s a currency of 17 countries issued by a single central bank, and is therefore not subject to the fiscal difficulties of the government of any one country. However, because currency traders and others are so used to thinking that governmental fiscal problems guarantee monetary problems, the euro has been criticized.
TdS: Why don’t the problems of Greece, Ireland, and Portugal translate into problems for the euro?
It’s because the value of the currency depends upon the soundness of the issuing bank and the people’s confidence in that bank’s stewardship. It does not depend upon the fiscal soundness of any one country. When New York almost went bankrupt in 1975, the value of the U.S. dollar was not in jeopardy. The State of California now has a large deficit to control, and, again, there is imperceptible risk to the dollar.
TdS: What if Greece, Ireland or Portugal goes bankrupt? Wouldn’t that affect the euro?
Not necessarily. If a euro member state goes bankrupt, then its creditors, or bondholders, will obtain what they can obtain in euros, just as if the bankrupt state were a bankrupt corporation or a person. Such partial payments would be in euros and not some inflated national currency. Of course, fiscal difficulties drive up the required interest rates for loans to such governments, but the value of the currency should not be affected.
TdS: Why are you optimistic about the future of the euro?
European countries are seeking to join, and not run away from, the euro. Estonia just became the 17th euro zone country. Other countries are waiting to fulfill the admission criteria. It’s a difficult decision for countries, but the historic direction is clear. Over the past year the value of the euro, in dollar terms, has fluctuated between (US)$1.20 and $1.45, and is now at $1.38. There are presently 27 European Union Countries, soon to be 31, and all but three of the 10 non-euro countries are required to join the euro at some reasonable point. The three exceptions are Denmark, Sweden and United Kingdom, and even for them, it’s a question of when, not whether.
TdS: What about Paul Krugman’s point in his January 16, 2011, New York Times Magazine article, “Can Europe Be Saved?” — that having its own currency has helped Iceland climb out of its recent financial crisis?
If Iceland had been using the euro in 2008, its financial crisis would not have become a devastating currency crisis. Paul Krugman compared Iceland to Brooklyn and noted that it makes no sense for Brooklyn to have its own currency because its economy is enmeshed with that of its neighbors. In a global, digitized world, Iceland’s economy is enmeshed with that of its neighbors, too — even if separated by an ocean. Iceland is one of the four current “Candidate Countries” to join the European Union, with membership expected in 2012. Then the process for joining the euro zone would begin, as required for new EU members.
TdS: Why did you say that moving to a future Single Global Currency depends upon the success of the euro?
Presently, the world has regional monetary unions, such as the European Monetary Union, the Eastern Caribbean Monetary Union, and the West and Central African Monetary Unions. The EMU is the largest and most successful and is growing, and is a model for future monetary unions, including the future Global Monetary Union. Several Persian Gulf countries are forming a monetary union, as are several East African countries. As the euro zone grows, people around the world are increasingly asking the question: if it works for 17 countries, soon to be 31, why not for 192?
TdS: How would the world implement a single global currency?
The world would implement a single global currency by expanding existing monetary unions, by folding them into a single global currency. The Single Global Currency can be said to exist when these currency consolidation trends create one currency for countries representing approximately 40-50 percent of the world’s GDP. After that, we will have passed a “tipping point,” and the remaining countries will clamor to join.
TdS: How are you spreading the word around the world about your belief in the need for a single global currency?
We have a website, and we have published a book, The Single Global Currency – Common Cents for the World, and the subsequent 2009 edition of that book. When the people of the world learn how good a single global currency will be for them and the world, the movement toward that goal will accelerate.
Thanks so much for your time, Morrison. You’ve certainly given us a lot to think about.Good luck! | <urn:uuid:19ebbf91-5fc6-4002-9ef6-c1791fb566f2> | CC-MAIN-2022-40 | https://www.ecommercetimes.com/story/is-the-world-headed-toward-a-single-global-currency-72226.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00016.warc.gz | en | 0.950991 | 1,171 | 2.515625 | 3 |
Artificial Intelligence has been highlighted in the most negative light since it was introduced as part of the workforce. Many people thought it would take their jobs and leave them without any reliable source of income.
In most cases, people forget about the benefits of AI in the workplace. AI-powered tools help in powering remote work but there are also other benefits and here are some of them:
Empowering human beings
AI-powered software and robots may seem like they are here to get everyone fired from their job but their core purpose is empowering human beings. There are many people who are fired because of lack of strength or stamina to take on certain tasks. AI tools can help with this.
For example, there is a lot of tedious and repetitive tasks at a car production line. If you are supposed to tighten all bolts manually, you will get fired quickly because of being tired and lacking strength for the task. Whereas, with the help of AI-powered robots, you can accomplish the task and still have the energy to play with your kids at home.
Also, sifting through mountains of data manually to find actionable business intelligence can be very strenuous. That is why using AI-powered data management tools makes your life very much easier.
It was a similar situation at Bestessay.com, a student assignment service. It dealt with plenty of data being created every minute and switching over to AI data management made it the best paper writing service in the business.
AI tools rarely ever make mistakes. And if they do, the mistake is with the developers of that tool. Using software and other tools powered by AI in the workplace can prove very efficient, especially when you work with numbers and sensitive data. One error in financial or sensitive documents can prove to be very costly for the company.
It might even tarnish the reputation of its workforce and lead to losing business. Tools using artificial intelligence can come in handy because they can process the information with great precision and accuracy.
These aspects bring great benefits to the business and the employees because they will be protected from having their reputation dragged through the mud. When an AI tool or robot has been programmed to do a task, it does it thoroughly while leaving no room for any doubt whatsoever.
Reduced human error
A human error exists in all walks of life and unfortunately, it is also present in the workplace. Sometimes, an employee’s single mistake can cost a company thousands of dollars, depending on the magnitude of the error. That one error can cause other employees to lose their jobs and the business might be under major financial strain.
For example, an employee might damage an expensive custom ordered product being manufactured. An AI-powered robot would not face such problems because it has been designed to avoid any damage to the item being manufactured.
To prevent incurring all of these losses, the company can use AI because it will help retain the jobs of other personnel if the error caused huge revenue loss. That goes contrary to popular belief regarding AI and robots in the workplace, which says this technology is here to eliminate the human workforce.
AI tools are versatile
AI tools are very flexible and versatile because they can be designed to undertake any kind of task under extreme environments. You can design a software program that can take care of a wide variety of tasks such as doing the financials and handle marketing-related work.
That serves to prove how versatile this technology is and can help the human workforce accomplish any set of tasks. The operational ability of AI-powered solutions is exemplary because it can operate in extremely cold or hot environments when equipped appropriately.
Another example of it is when things get uncomfortable for human beings to work, AI tools take over and accomplish the task in the best manner possible. For example, a robot designed for the task can twist and turn to a suitable angle to accomplish the task. Whereas a human being might suffer an injury when trying the same thing.
Eliminates risk for human beings
There are certain working environments that a very dangerous for a human being to work in. For example, working in a radioactive nuclear zone can prove to be fatal for human being but a robot can work freely without facing health hazards.
There are other workplaces where it gets very dangerous to do daily tasks. The presence of an AI robot can greatly help in such situations. One of the most dangerous careers, being a soldier, can benefit greatly from having AI assistance on the battlefield. AI-powered robots can be responsible for conducting medevac to wounded soldiers to a safe area where medics can land.
Using AI solutions in dangerous zones can help eliminate the company’s risk exposure to medical or legal expenses in the event of injury or loss of life. The eliminated risk can keep the company profitable because there will be fewer costs going out to compensate the victims.
According to paper writing service reviews, undoubtedly, AI-powered software and robots are very productive and surpass human beings in this aspect. A human being gets tired and needs to rest, whereas a robot or software can even operate 24 hours a day. That all-around the clock operation makes the workforce more productive and generate increased revenue.
Robots can support the human workforce by undertaking tasks that hinder productivity. Robots and AI-powered software can handle repetitive and tedious tasks and the human employees can focus on more complex work.
Readjusting the priorities of the human workforce in this capacity is very efficient and ensures that resources are used to their full potential. After implementing this AI model, Nerdy Writers, a UK-based assignment help agency that provides paper writing service to college students, reduced their error percentage to almost zero. Such a great transformation was possible only after integrating human efforts with AI tools.
The work of AI tools can be consistent and very reliable more than human intelligence and strength. Coordinating an AI workforce with human intelligence can prove to be very efficient and enhance productivity to an unimaginable degree.
One of the most obvious benefits of AI in the workplace is being financially savvy because of enhanced productivity. Companies can accomplish more by using fewer resources, making AI a great choice for running a workforce. Some business owners have a negative outlook when it comes to AI and its finance aspect.
They think of cutting jobs and making the workforce dominated by technological tools. Whereas, these executives should be looking for ways to still include human beings in the workflow.
For example, since the robots run for 24 hours, they can hire employees to monitor the production lines. That creates jobs without causing a loss to the business, making it the prime solution for sustainable businesses.
As a business owner, you will also save costs that are due when an employee gets injured or dies on duty. Also, AI tools are versatile and can work in any environment.
The results are limited costs going out to specialized employees specifically trained for that task. There are other reasons that make AI financially savvy but these are the most common ones.
The bottom line
AI tools can be used by businesses to save and generate costs in a sustainable way without firing employees. That makes AI beneficial for both businesses and employees while contributing to a sustainable working environment satisfying the needs of all parties involved. | <urn:uuid:20e02cd8-0726-462a-8218-47c8f7b574b0> | CC-MAIN-2022-40 | https://resources.experfy.com/ai-ml/7-unexpected-pros-of-artificial-intelligence-in-the-workplace/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00016.warc.gz | en | 0.959018 | 1,464 | 2.8125 | 3 |
Cycbot (sometimes called Cycbot.b or Win32/Cycbot.B) is a Trojan/Backdoor infecting PCs and giving remote access to hackers or planting fake antiviruses into infected PCs. This sort of trojans is one of the possible reasons for Search engine redirection, when your search results are filtered, replaced or you are redirected to harmful websites. Thus Cycbot infections are something you should be concerned about: while the parasite itself will not destroy your PC or steal information directly, it can provide enough access to other applications or people to do so. There couple versions of Cycbot : Cycbot.B, Cycbot.AC are noticed quite often.
The main symptoms of Cycbot include Google redirection. Although not always caused by this particular form of malware, there are signs that can help determining if this is Cycbot or not:
1. Proxy (usually on 50370 port).
2. Existence of Cycbot files in appropriate locations.
3. Redirects and popups.
Cycbot uses typical and legitimate program names : dwm.exe, svchost.exe and others. It is important to decide if these programs are started from C:\Windows… or C:\Users / C:\Documents and Settings\
In second case the programs are malicious. Process Explorer can help detecting locations of the particular process.
If you are sure that it is cycbot.B, then proceed with removal instructions for this parasite. If you are not sure if this is Cycbot, scan with spyhunter, Spyhunter, Malwarebytes Anti-Malware and decent internet security suite. Additional tools might provide better information about type of infection and remove it.
Additionally, it is advisable to disable system restore when scanning and removing Cycbot – it might infect restore points, and antivirus programs will not be able to get rid of it from there.
Automatic Malware removal tools | <urn:uuid:41a29326-ec7c-4e50-8224-c062d39d3909> | CC-MAIN-2022-40 | https://www.2-viruses.com/remove-cycbot | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00016.warc.gz | en | 0.887116 | 411 | 2.515625 | 3 |
S/MIME is the name given to Secure MIME or Secure encryption of attachments when they are added to email messages. S/MIME requires a both a private and public key. The public key is stored and made available to those who wish to send users an encrypted message. So to send a message via S/MIME the sender must look up the public key in a global directory or already have it available. Once the key has been found, the sender must encrypt the message/attachment and forward it to the destination server.
In order for the message to be read, the encrypted message must be decoded by the mail client or by the mail server. There are issues with either of these solutions:
- Decryption by the mail client. At the current time, not many mail clients support S/MIME decryption. Further there is the issue of configuring the mail client with the correct private key so that decryption works OK. Since messages are stored encrypted, if the key becomes
compromised at any point in the future and must be changed, there is the risk that the messages will become unavilable in the future.
- Decryption by the mail server. This requires the server to hold both the encryption and decryption key for each user. Clearly there will be additional load on the server as it manages each message and messages are likley to be stored unencrypted on the server itself (there is no point in them being encrypted since the key is available on the server).
There are still several issues to be resolved:
- Global directories To date, there is no global directories where public keys can be obtained. This means that finding the appropriate key can be challenging and may lead to users taking the least resistance and >not encrypting information at all.
- Compatibility There are some issues between different implementations of the S/MIME protocol.
- Compromised keys Managing and revoking compromised keys without loosing access to information is challenging.
While S/MIME has uses in very specialised areas where control of security options is straight forward, for most people, the additional cost of complexity is prohibitive.
Keywords:S/MIME MIME SSL security encryption decryption | <urn:uuid:e3590c35-ee7d-49a0-8cae-ffc004fb0d93> | CC-MAIN-2022-40 | https://www.gordano.com/knowledge-base/what-is-smime-3/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00016.warc.gz | en | 0.922153 | 460 | 2.75 | 3 |
Merry Christmas - Information about Christmas
Christmas is an annual holiday that celebrates
The date of the Christmas celebration is traditional and is not considered to be his actual date of birth. Christmas festivities often combine the commemoration of Jesus’ birth with various cultural customs, many of which have been influenced by earlier winter festivals.
Christmas Information & History
In most places around the world, Christmas Day is celebrated on December 25. Christmas Eve is the preceding day, December 24. In the United Kingdom and many countries of the Commonwealth, Boxing Day is the following day, December 26. In Catholic countries, Saint Stephen’s Day or the Feast of St. Stephen is December 26. The Armenian Apostolic Church observes Christmas on January 6. Eastern Orthodox Churches that still use the Julian Calendar celebrate Christmas on the Julian version of 25 December, which is January 7 on the more widely used Gregorian calendar, because the two calendars are now 13 days apart.
Christmas originated as a contraction of "Christ’s mass". It is derived from the Middle English Christemasse and Old English
After the conversion of Anglo-Saxon Britain in the very early 7th century, Christmas was referred to
The prominence of Christmas Day increased gradually after Charlemagne was crowned on Christmas Day in 800. Around the 12th century, the remnants of the former Saturnalian traditions of the Romans were transferred to the Twelve Days of Christmas (26 December - 6 January). Christmas during the Middle Ages was a public festival, incorporating ivy, holly, and other evergreens, as well as gift-giving.
Modern traditions have come to include the display of Nativity scenes, Holy and Christmas trees, the exchange of gifts and cards, and the arrival of Father Christmas or Santa Claus on Christmas Eve or Christmas morning. Popular Christmas themes include the promotion of goodwill and peace. | <urn:uuid:994b0ddc-7cf0-4bdc-afda-6a240515d039> | CC-MAIN-2022-40 | https://www.knowledgepublisher.com/article/556/merry-christmas-information-about-christmas.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00216.warc.gz | en | 0.950247 | 377 | 3.421875 | 3 |
A data center is a facility of one or more buildings that house a centralized computing infrastructure, typically servers, storage, and networking equipment.
In this world of apps, big data, and digital everything, you can’t stay on top of your industry without cutting-edge computing infrastructure.
If you want to keep things in-house, the answer is the data center.
Its primary role is to support all the crucial business applications and workloads that all organizations use to run their business.
In this article, we’ll break down exactly what’s in a data center, different types and tier ratings, crucial systems to maximize uptime, and how to find the right location if you’re planning to build one of your own.
The Role of a Data Center:
What Does a Data Center Do?
A data center is designed to handle high volumes of data and traffic with minimum latency, which makes it particularly useful for the following use cases:
- Private cloud: hosting in-house business productivity applications such as CRM, ERP, etc.
- Processing big data, powering machine learning and artificial intelligence.
- High-volume eCommerce transactions.
- Powering online gaming platforms and communities.
- Data storage, backup, recovery, and management.
There are other examples as well, but the above are some of the most common use cases for businesses.
Of course, in 2021, you could just outsource all of the data processing to a third party, like AWS or Google Cloud.
But it’s not always easy for an enterprise to give another party access to the data, not to mention it’s often more expensive at scale.
According to a 2020 study, companies choose to use a data center over public cloud environments to reduce costs, solve performance issues, or uphold regulatory requirements.
What Is In a Data Center?
A data center houses everything required to safely store and process data for your organization (or your clients), including physical servers, hard drives, and cutting-edge networking equipment.
The infrastructure also includes external and backup power systems, external networking and communication systems, cabling systems, environmental controls, and security systems.
If you’ve ever visited a data center, it can often look and feel like you’re in a sci-fi movie. With the rows of servers, cooling towers, and the absurd amount of network cables, you could swear you were looking at The Matrix mainframe.
Today, when uptime as close to 100% as possible is expected, a data center often includes a smart control system that automatically adjusts cooling, climate control, and more to optimize performance.
This is a Data Center Infrastructure Management (DCIM) system. It basically takes the same concepts as a smart house (automatic temperature control, etc.) to the next level.
If you never want your private cloud of applications and big data to be unavailable, it’s a necessity.
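To make that concrete, here is a minimal sketch of the kind of control loop a DCIM platform automates. Everything in it is illustrative: the sensor functions, thresholds, and actuator call are hypothetical stand-ins, and a real deployment would pull readings from BMS, SNMP, or IPMI interfaces instead of generating them.

```python
# Minimal DCIM-style control loop (illustrative sketch, not a real product).
import random
import time

TEMP_HIGH_C = 27.0               # assumed upper bound for rack inlet temperature
HUMIDITY_RANGE = (20.0, 80.0)    # assumed acceptable relative humidity, percent

def read_rack_inlet_temp(rack_id):
    """Stand-in for a real sensor query (e.g. via SNMP or a BMS API)."""
    return 22.0 + random.uniform(-2.0, 8.0)

def read_room_humidity(room):
    """Stand-in for a room-level humidity sensor."""
    return 45.0 + random.uniform(-30.0, 40.0)

def adjust_cooling(rack_id, more_cooling):
    """Stand-in for an actuator command to the cooling units."""
    action = "increase" if more_cooling else "relax"
    print(f"[action] {action} cooling for {rack_id}")

def control_loop(racks, room="room-a", interval_s=60, cycles=3):
    # A real loop would run indefinitely; 'cycles' keeps the example finite.
    for _ in range(cycles):
        for rack in racks:
            temp = read_rack_inlet_temp(rack)
            adjust_cooling(rack, more_cooling=temp > TEMP_HIGH_C)
        humidity = read_room_humidity(room)
        if not (HUMIDITY_RANGE[0] <= humidity <= HUMIDITY_RANGE[1]):
            print(f"[alert] humidity {humidity:.0f}% out of range in {room}")
        time.sleep(interval_s)

control_loop(["rack-01", "rack-02"], interval_s=1)
```

The pattern is what matters: continuous measurement, comparison against thresholds, and automated adjustment, so people only step in when something drifts out of range.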
Types of Data Centers
There are many types of data centers that may or may not be suitable for your company’s needs. Let’s take a closer look:
Colocation Data Center
A colocation center — also known as a “carrier hotel” — is a type of data center where you can rent equipment, space, and bandwidth from the data center’s owner.
For example, instead of renting a virtual machine from a public cloud provider, you can just straight-up rent a certain amount of their hardware from specified data centers.
Enterprise Data Center
An enterprise data center is a fully company-owned data center used to process internal data and host mission-critical applications.
Cloud Data Center
By using third-party cloud services, you can set up a virtual data center in the cloud. This is a similar concept to colocation, but you may take advantage of specific services rather than just renting the hardware and configuring it yourself.
Edge Data Center
An edge data center is a smaller data center that is as close to the end user as possible. Instead of having one massive data center, you have multiple smaller ones to minimize latency and lag.
Where IoT device counts and low-latency data demands are high, organizations are deploying edge computing facilities.
Micro Data Center
A micro data center is essentially an edge data center pushed to the extreme. It can be as small as an office room, handling only the data processed in a specific region.
Large enterprise data centers are still the most popular, but experts foresee continued growth in colocation and micro data centers.
Data centers are still viable assets for organizations, but as computing demands and the industry evolve, the enterprise data center is morphing into a hybrid computing infrastructure.
This modern approach encompasses the traditional data center, which typically houses mission-critical applications (sometimes called "the crown jewels") where maximum uptime and privacy are a must.
To meet the demands of tier 2 applications (non-mission-critical apps), organizations often leverage public cloud data centers. For example, many companies rely on third-party cloud services for their DevOps activities.
We also categorize data centers by tiers, based on their expected uptime and the robustness of their infrastructure.
Data Center Tier Rating Breakdown: Tier 1, 2, 3, 4
Companies also rate data centers by tier to highlight their expected uptime and reliability.
Let’s break it down:
- Tier 1: A Tier 1 data center has a single path for power and cooling and few, if any, redundant and backup components. It has an expected uptime of 99.671% (28.8 hours of downtime annually).
- Tier 2: A Tier 2 data center has a single path for power and cooling and some redundant and backup components. It has an expected uptime of 99.741% (22 hours of downtime annually).
- Tier 3: A Tier 3 data center has multiple paths for power and cooling and systems in place to update and maintain it without taking it offline. It has an expected uptime of 99.982% (1.6 hours of downtime annually).
- Tier 4: A Tier 4 data center is built to be completely fault-tolerant and has redundancy for every component. It has an expected uptime of 99.995% (26.3 minutes of downtime annually).
Which tier of data center you need depends on your service SLAs and other factors.
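If you are mapping an SLA target to a tier, the arithmetic behind those downtime figures is straightforward. The short sketch below reproduces them (within rounding) from the uptime percentages:

```python
# Convert an availability percentage into allowed downtime per year.
HOURS_PER_YEAR = 24 * 365  # 8,760 hours, ignoring leap years

def annual_downtime_hours(uptime_percent):
    return (1 - uptime_percent / 100) * HOURS_PER_YEAR

tiers = [("Tier 1", 99.671), ("Tier 2", 99.741), ("Tier 3", 99.982), ("Tier 4", 99.995)]
for name, uptime in tiers:
    hours = annual_downtime_hours(uptime)
    print(f"{name}: {uptime}% uptime -> {hours:.1f} h (~{hours * 60:.0f} min) of downtime per year")
```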
In addition to hardware, where you decide to build your data center can have a big impact on your results.
Choosing Your Data Center Location Is Crucial
Choosing the location of your data center is one of the most important decisions you’ll make.
Here are just some of the things you must consider:
- Proximity to major markets and customers — Latency and reliable connections play a major factor in running an efficient facility that meets customer demand.
- Labor costs and availability — While labor costs may be good in a particular region, is there enough talent (across disciplines) needed to run and maintain your data center?
- Environmental conditions — Temperature and humidity variances wreak havoc on environmental systems and forecasting. Earthquakes, hurricanes, blizzards, and tornadoes are unpredictable and can shut down a facility indefinitely. Keep this in mind.
- Airport and highway accessibility and quality — You need large equipment and service equipment to build and maintain the data center. It must also be readily accessible for delivery, services, and employees.
- Availability and cost of real estate options — Build versus buy requires considering building costs and quality of construction, versus incentives from landlords and local governments
- Amount of local and state economic development incentives — Beyond construction considerations, local jurisdiction may provide development incentives in rural or redevelopment areas, and less inviting in densely populated or over-resourced areas. On the counter side of this are the taxes and regulatory requirements that can be costly and restrictive.
- Availability of telecommunications infrastructure — Make sure local providers can meet your future bandwidth demands and that there are not only redundant systems from your provider, but that you have multiple providers available
- Cost of utilities — Costs vary globally and in some geographies you may not have an option of where you place your data center and considering alternative power sources is prudent and in some countries, required.
Data Center Physical Security:
How to Keep Your Data Safe
There are three important concepts to keep in mind when designing a policy to keep your data safe and available at all times — data security, service continuation, and personnel and asset safety.
Data security systems include physical and telemetric systems, rigid security policy adherence, and highly available redundancy make up the data protection foundation. These protect against physical intrusion, cyber breaches, human, and environmental events.
Set up the proper architecture of power and networking systems, including redundancy, disruption simulations, and automated workflows. That way, you can deliver on SLAs and protect yourself against unforeseen incidents.
Personnel and asset safety and preservation
Use proven data center design practices to monitor weight and power distribution, cable management, and alarm systems to alert before reaching safety thresholds.
Asset Integrity Monitoring:
Improve Your Data Center Security
Asset integrity monitoring is a cornerstone practice for any major computing infrastructure. It continuously monitors your system for anomalies, and alerts you immediately for power and environmental incidents.
Data center teams can use them to:
- Reduce, predict, and plan for power and thermal anomalies.
- Identify at-risk firmware and software.
- Identify human errors outside security and CMP policies.
- Detect unauthorized hardware or software on the network.
Operations and security teams benefit from increased visibility and a simplified audit process with an accurate asset data set.
- Automated discovery of assets and attributes.
- Traceable lifecycle management and workflows.
- Logged user access, date, and time.
- Identification of unknown and non-compliance hardware and software.
- Critical incident and custom report queries.
What is a Hybrid Computing Infrastructure?
A hybrid computing infrastructure means using a mix of traditional enterprise data centers and public cloud infrastructure.
A hybrid computing infrastructure augments the traditional data center. It allows for optimizing application workload balancing, optimizing user experience and costs.
It also enables the adoption of new technologies from virtualization, high-density racks, and hyper-converged infrastructure equipment. (If you have no idea what any of that means, all the more reason to outsource some of your computing.)
A hybrid approach allows for any organization and management style to tailor their infrastructure that is right for their business. Conservative and security-focused organizations will keep critical applications under their watch in a physical data center owned and managed by their personnel.
For organizations that aren’t ready to invest the tens of millions to build or expand data centers, using a colocation provider is a great option for balancing risk and cost.
Where speed of deployment and short-term computing power is needed, the public cloud and SaaS deployments are ideal.
In use cases where latency must be as low as possible — for example, IoT or high-speed transactions — Edge computing is crucial.
How does Data Center Infrastructure Management (DCIM) Software Improve the Data Center?
DCIM bridges the gap between facilities and IT, coordinating planning and managing through automation and transparent communication, leveraging a “single source of truth”.
What does that actually mean? All the data and controls you need to manage your data center are available in one place. (And most of the time, it controls itself perfectly without any of your input.)
From the receiving dock to decommissioning, Nlyte DCIM maximizes the production value of your assets over time. Capturing change at its source, Nlyte DCIM facilitates timely onboarding of equipment at the time of receiving, and streamlines the decommissioning of older equipment.
Optimize your resources and personnel with measurable, repeatable, intelligent processes making individuals more efficient. Support cross-team assignment for multi-team tasks. Extend the adoption of ITIL and COBIT into the data center without any additional development or services.
Bi-lateral Systems Communication
Nlyte becomes your single source of truth for all assets, sharing information between Facilities, IT, and business systems.
Infrastructure and Workload Optimization
Designed to support your operation efficiency goals and reduce the number of ad-hoc processes at play in your data center. Unlock unused and under-utilized workload, space, and energy capacity to maximize your ROI.
Space and Efficiency Planning
Forecast and predict the future state of your data center’s physical capacity based on consumption management. “What if” models forecast the exact capacity impact of data center projects on space, power, cooling and networks.
Risk, Audit, Compliance, and Reporting
Power failure simulations and automated workflow reduce the risk of the unknown and human error. Audit and reporting tools improve visibility and help achieve compliance requirements.
Even as the world of cloud computing continues to grow with stricter regulations and higher customer expectations, we’re seeing a return to the data center, often in a network of smaller “Edge” or “micro” data centers.
If you’re looking to start your own data center, and you want to maximize uptime and efficiency, Nlyte can act as the brain of your data center, managing your cooling towers, climate systems, and more to optimize performance and equipment longevity.
Book a demo today to see what the brains of the data center of the future looks like. | <urn:uuid:0c0780a8-25ec-4ecd-9fd8-335a64b33d60> | CC-MAIN-2022-40 | https://www.nlyte.com/faqs/what-is-a-data-center/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00216.warc.gz | en | 0.911735 | 2,817 | 2.8125 | 3 |
Software-defined WAN (SD-WAN) technology applies software-defined networking (SDN) concepts for the purpose of distributing network traffic throughout a wide area network (WAN). SD-WANs work automatically, using predefined policies to identify the most effective route for application traffic passing from branch offices to headquarters, the cloud, and the Internet. There is rarely any need to configure your routers manually in branch locations. A centralized controller manages the SD-WAN, sending policy information to all connected devices. Information technology (IT) teams can program network edge devices remotely, using low-touch or zero-touch provisioning.
SD-WAN technology typically creates a transport-agnostic virtual overlay. This is achieved by abstracting underlying public or private WAN connections, such as Internet broadband, fiber, long-term evolution (LTE), wireless, or multiprotocol label switching (MPLS). An SD-WAN overlay helps organizations to continue using their own existing WAN links. SD-WAN technology centralizes control of the network, reducing costs and providing real-time application traffic management over existing links.
The most common SD-WAN use cases fall into the following categories:
SD-WAN uses an abstracted network architecture composed of two separate parts:
An SD-WAN architecture consists of the following components:
SD-WAN implementations leverage a wide range of technologies, including:
A centralized controller that manages SD-WAN deployments. The controller enforces security and routing policies, as well as monitors the virtual overlay, any software updates, and provides reports and alerts.
Software-defined networking (SDN)
Enables key components in the architecture, including the virtual overlay, the centralized controller, and link abstraction.
Wide area network (WAN)
Responsible for connecting geographically separated facilities or multiple LANs, using either wireless or wired connections.
Virtual network functions (VNFs)
First-party or third-party network functions, such as caching tasks and firewalls. VNFs are typically used for the purpose of reducing the amount of physical appliances or to increase flexibility and interoperability.
SD-WAN technology can leverage multiple bandwidth connections and assign traffic to any specific link. This provides users with more control and enables cost savings, by moving traffic from traditional costly MPLS lines to low cost commodity bandwidth connections.
SD-WAN technology can improve existing last-mile connections through the use of more than one transport link or by simultaneously using multiple links.
Let’s look at the key differences between traditional WAN and SD-WAN solutions.
|Load balancing and disaster recovery available, but can be complex to deploy||Load balancing and disaster recovery built in with fast or zero-touch deployment|
|Configuration changes take time and require manual configuration work, which is error prone||Real-time configuration changes, automated to prevent human error|
|Requires edge devices to be configured one by one, does not allow blanket application of policies||Uses virtual overlays—can replicate policies instantly across large numbers of edge devices|
|Limited to one connectivity option—legacy MPLS lines||Can make optimal use of multiple connectivity options—MPLS and SDN-managed broadband lines|
|Relies on VPNs, which work well with a single IP backbone, but cannot coexist with high throughput workloads like voice and video||Able to steer traffic for different types of applications, conserving bandwidth for the applications that need it most|
|Requires manual tuning||Detects network conditions automatically and can dynamically optimize the WAN|
SD-WAN can use public Internet connections for all middle mile transmissions, and while this can be extremely cost effective, it is not advised. There is no way to know which links traffic will go through, raising security and performance concerns.
Whenever possible, especially for sensitive or mission critical communication, prefer to transmit SD-WAN traffic over private networks. Some SD-WAN providers let you use their own secure global network. Reserve public Internet capacity for non-critical and non-sensitive workloads, or failover scenarios when the private network is down.
When embarking on an SD-WAN project, educate stakeholders about the deployment process and explain that SD-WAN is an addition to existing network infrastructure. Executives should not view SD-WAN as a simple drop-in replacement for traditional network technology.
Make it clear that you need to keep the existing technology and integrate it with new SD-WAN investments. A better understanding of the technical background and deployment methods will give you better leadership support.
SD-WAN solutions may offer automation and zero touch deployment, but you need to verify that it works as expected. Testing is often overlooked, but it is a critical part of an SD-WAN project. Ensure you test extensively before, during, and after implementation. A typical SD-WAN project involves testing over 3-6 months, focusing on quality of service (QoS), scalability, availability and failover, and reliability of management tools.
The SD-WAN model operates using a distributed network fabric, which typically does not include the security and access controls needed to protect enterprise networks in the cloud.
To address this problem, Gartner proposed a new network security model called secure access service edge (SASE). SASE combines WAN functionality with security features such as:
The combination of these security capabilities, built for a cloud environment, makes it possible to ensure SD-WAN networks are secure.
SASE solutions provide mobile users and branch offices with secure connectivity and consistent security. They provide a centralized view of the entire network, allowing administrators and security teams to identify users, devices and endpoints across a globally-distributed SD-WAN, enforce access and security policies, and provide consistent security capabilities across multiple geographical locations and multiple cloud providers.
Prior to SD-WAN remote office connections were backhauled to the corporate data center where they were protected using the corporate network security stack. With the advent of SD-WAN, cloud and Internet connections connected directly to the Internet expose WAN users to sophisticated attacks.
Firewall as a Service and Secure Access Service Edge (SASE) solutions protect SD-WAN connections to cloud applications and the Internet. To learn more about Check Point’s SASE solutions and how they can improve your organization’s WAN security, contact us. You’re also welcome to request a demonstration to see Check Point’s SASE solution in action. | <urn:uuid:130a9f47-929b-47f0-bbb7-ca78dbd54331> | CC-MAIN-2022-40 | https://www.checkpoint.com/cyber-hub/network-security/what-is-sd-wan/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00216.warc.gz | en | 0.892671 | 1,372 | 3.296875 | 3 |
Thank you for Subscribing to CIO Applications Weekly Brief
How Researchers Fight Against Covid-19
Engineers and Researchers across the world are tweaking drones, robots, and smart tools to assist in preventing the spread of the coronavirus
Fremont, CA: As the COVID-19 pandemic causes chaos globally, claiming an incredible number of lives each day, the researchers are onto experimentation to find ways in which they can contribute to the global response against the crisis. Many institutions are building their DIY ventilators, face masks, and face shields for the frontline workers. In contrast, many others have focused on creating sophisticated tracking mechanisms to map out epidemic hotspots.
On the other hand, some ingenious innovations are being tweaked to address various challenges posed by the coronavirus. May it be tech-recycling or disaster preventive measures, these devices have the potential to help prevent further attacks and pandemics by revolutionizing healthcare, if scaled and implemented efficiently.
As handwashing frequently, via proper steps, is the only preventive measure during this crisis, A social innovation engineer developed a smart mirror that can detect the presence of a person. This mirror, after identifying the person, will walk them through the various steps of handwashing as recommended by the WHO.
Drone to Assist Pandemic Response
This university in Australia partnered with a Canada-based drone technology firm to design a drone that could assist the local authorities in identifying and predict COVID-19 hotspots. In 2019, it was recorded that the computer system attached to the drone had the vision capability enough to detect the human’s vitals from almost four to eight meters away. The system also has the option to find human bodies buried under the debris. These drones can fly, detect any anomalies in the people’s vitals, and transfer the information to the app.
COVID-19 has brought insurmountable obstacles to humankind, but it has also propelled many researchers to push creative boundaries and innovate devices for the betterment of the world. One such idea is the disinfection Robot that uses a pulsed Xenon lamp to shoot intense UV light in milliseconds. This deactivated microbes such as bacteria, spores, fungi, and viruses. The UV light’s 200 to 300-nanometer wavelength targets ranges of different cellular processes in germs bringing to a stop their replication and causes the breakdown of the cell wall. With the increasing demand for infrastructure in the hospitals, the disinfection robot is an added gem that can help with several roles among frontline workers.
See also: Top Robotics Solution Companies | <urn:uuid:a604fabd-327c-47e4-bcc5-774076e13c21> | CC-MAIN-2022-40 | https://www.cioapplications.com/news/how-researchers-fight-against-covid19-nid-5849.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00416.warc.gz | en | 0.936296 | 529 | 3.234375 | 3 |
Thank you for Subscribing to CIO Applications Weekly Brief
The Benefits of Active Learning Method for Learners and Educators
The fundamental basis of educational technology is active learning, which is becoming more commonly recognized now that edtech is a required element of regular academia.
Fremont, CA: On how edtech equips teachers, one must consider the current influence technology has on active learning and what it stands to achieve in the long run. Edtech provides real-time help to teachers in the classroom by making teaching a very stimulating experience with various audio, visual, and guiding aids. This creatively aligns a teacher's lectures with their pupils' aptitude to digest and learn more actively. These are all immediate benefits that edtech has made possible for instructors in the classroom.
Beyond that, we must comprehend the long-term advantages that edtech provides instructors. It helps you save time. Education technology has now advanced to the point where a single product may boast of having multi-dimensional tools that include features for teaching and learning and those for managing the administrative work that consumes a significant amount of time.
Edtech has provided teachers with a space where learning management systems can be digitized, and redundant chores like attendance, score monitoring, and report generation can be automated. This naturally helps teachers to focus their efforts where they are most needed. When teachers can use the time saved to engage in classroom debate and interaction, students become more involved because the experience becomes a conversation rather than a monologue.
This is useful in the long run because kids learn to trust this experience without mistrust or bias. In addition, students trust teachers to make the best academic decisions for them because technology in the classroom allows for immersive experiences that entail two-way communication.
Edtech caters to a variety of student learning styles. No two students learn in the same way, and for kids who are alienated from standard learning methods, studying can be a very unpleasant experience. This is where teachers turn to edtech to help them solve the growing education challenge. It is a fair and transparent set-up that fosters learning with zero judgment and full support, and it caters to numerous learners across all learning spectrums. After all, active learning aims to develop a tribe of students capable of managing their academics. | <urn:uuid:cff54909-3944-4c9f-9ed0-70217eedfcb9> | CC-MAIN-2022-40 | https://www.cioapplications.com/news/the-benefits-of-active-learning-method-for-learners-and-educators-nid-8254.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00416.warc.gz | en | 0.953134 | 462 | 2.796875 | 3 |
After costs risk is the next thing that a business works on reducing. This is why is it vital to reduce and prevent risks. It helps ease the negative impact the risks can have on the project and on the business itself.
Managers work hard on somehow preventing risks as it would help the project reach completion without additional resources. Risks can also increase the burden and would need a deep solution and that would take time. Before the initial stages when the risks are identified, risk prevention can help eliminate the risks to some extent.
What is Risk Prevention?
Risk prevention is the process of avoiding risks. It can also help reduce the probability and the impact of risk to the project.
Risk prevention is an important tactic to prevent some of the risks which can have a serious negative impact on the project. As risk cannot be eliminated, it can be prevented. There are different ways how it can be prevented at an early stage of the project.
Difference Between Risk Management and Risk Prevention
Risk prevention and management are the most common words used during any project. These processes work for the same basic goal. However, risk prevention is used when the risk only has a negative impact.
Risk management is then used when the risk has both a negative and positive impact on the project. Both have some distinctive features regarding their uses.
Risk Prevention Elements
Mentioned below are the common elements of risk prevention process. Each piece is there to serve its purpose and help the business achieve its goal for the project.
- Risk Identification and Analysis
Risk identification is the process of identifying risks associated with every stage, and decision of the project. It can be any factor that can bring risks with it to the project or the business.
However, risk analysis determines the triggers and impact of the identified risks.
- Risk Avoidance
Risk can be avoided only to an extent. Some methods to avoid risk can be altering strategies, processes, decisions, or actions. All these can be changed in some way to have minimal risk to the project.
- Risk Reduction
Risk reduction is the tool used to reduce the probability and the impact of the identified risk. This helps reduce the risk to a level and can have minimum impact on the project.
- Risk Contingency
Everything strategy needs a backup plan. The risk contingency plan is what the managers will do if the risk occurs and what they have planned to do.
- Risk Minimization
Risk minimization is the tool used to reduce the impact of the risk to the project. The main goal is to reduce the impact of risks as low as possible. Every manager works to minimize the risk and make everything easier.
Risk prevention is an essential tool to prevent and minimize risks. All this is an essential part of risk management. Risk prevention is the best way to treat risk. | <urn:uuid:3f19fb57-e90b-4547-9e4f-7e753a6a8eab> | CC-MAIN-2022-40 | https://fluentpro.com/glossary/risks-prevention/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00416.warc.gz | en | 0.945496 | 593 | 3.4375 | 3 |
The Covid-19 pandemic has fundamentally changed how we live. As we get used to our new lifestyles, it has become clear that we’re vulnerable, and it’s very likely this won’t be our last health crisis — future global outbreaks are inevitable. So, what have we learned from the current crisis to better prepare us for the next outbreak?
We have learned that stopping future outbreaks before they spiral out of control requires rigorous contact tracing. Contact tracing is the key to implementing intelligent social distancing, getting employees back to work safely, accelerating the recovery of our economy and, ultimately, stopping the virus in its tracks by preventing viral transmission. Thankfully, with recent advances in technology, we can build sophisticated contact tracing apps to inform people if they have been in close contact with someone who has been exposed to a virus.
Sounds Simple, Right?
Unfortunately, it is not that simple. First, we have to convince the vast majority of citizens to download an app that, by definition, will require access to their complete location history. As noted by the United States Center for Disease Control (CDC) in its field epidemiology manual, “Many times, persons most affected by a disease outbreak or health threat perceive the risk differently from the experts who mitigate or prevent the risk.” While some countries resorted to force to ensure compliance, China leveraged Alibaba’s Alipay platform to launch its health code app, which instructs users on whether they can leave home or use public transport.
Of course, this sort of government overreach is unthinkable in most democracies. Building sufficient trust in contact tracing apps requires addressing underlying fears that citizens have regarding a third-party entity accessing, harvesting, sharing or selling the data and possibly using it for their own nefarious purposes. We must be able to reassure citizens that contact tracing apps will keep their data private and safe, and that no entity will be able to access their location history except for citizens themselves.
The Three A’s Of Effective Contact Tracing Apps
An app — no matter how brilliant — is useless if people refuse to use it. Most importantly, to be effective, a contact tracing app ideally needs to be adopted by more than 60% to 70% of the population.
The first challenge is to make sure that the vast majority of people have access to and can download and properly use the app. This requires effective informational and educational campaigns. We also know that adoption will suffer if people’s fundamental concerns about the abuse of privacy are not addressed. To overcome this resistance, a successful app must eliminate any third-party access (including government entities’ access) to personal data. Users must be confident that their data will remain protected while the app performs its key functions of tracking and tracing contacts.
Some contact tracing apps preserve anonymity only until a positive case is identified. Once the user tests positive, the central authority might access the user’s entire contact log, thus violating the privacy rights of everyone else who was in close contact with that user.
To avoid this, the contact tracing app should be capable of verifying and validating the positive test ID and anonymously sending alerts from the infected user’s device to other users who were in close contact, ensuring that no third parties can access that log, hence preserving their anonymity. Users must be confident that the app does not report the location of users to third parties and does not retain their personal information.
The third aspect is to have a sustainable solution. Contact tracing is just one feature, and the solution needs to adapt to meet ongoing requirements and offer new services as we learn about the disease and its impact on the population.
Current contact tracing apps miss the mark in a number of ways, including adding Covid-19-specific features into device operating systems and requiring users to register with a central health authority even if they are registered with different health authorities. These are all red flags for users, raising concerns that once functions get implanted in our smartphones, they’ll stay long after the pandemic ends, violating their privacy rights.
Users must be confident that any exposure to privacy risks is limited to the duration of the crisis and that the system is adaptable and can automatically restore its original state and eliminate any privacy risks. Therefore, the app should adapt to and easily integrate with other systems to drive adoption and measure patient outcome, side effects of drugs patients are taking and the recurrence of the disease, all while patients’ data remains in control of the user.
Only an app that uses this triple-A approach has the potential for the widespread adoption needed for contact tracing to effectively and pragmatically manage current and future outbreaks.
Doing Good By Being Good
To respond effectively to this global crisis, we must place ethics at the forefront of everything we do and prioritize the best interests of the population. Now more than ever, we must be vigilant about the efficacy of the technology solutions we develop and use them to fight this crisis without compromising our democratic values and principles. | <urn:uuid:96df1390-4877-4743-9cef-ab21f5ebe8f9> | CC-MAIN-2022-40 | https://stg-2x.mimik.com/a-pragmatic-approach-to-sustainable-healthcare-overcoming-privacy-fears-to-beat-the-pandemic/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00416.warc.gz | en | 0.942791 | 1,036 | 2.859375 | 3 |
Apple Macs are secure because they don’t get computer viruses, and because OS X, the operating system they run, is based on the rock-solid and highly secure BSD UNIX.
These are two popular misconceptions which make many Mac users underestimate the security risk of allowing their computers onto a corporate network. In a presentation at the EICAR conference in Paris this month David Harley, Research Fellow & Director of Malware Intelligence at anti-virus company ESET, his colleague Pierre-Marc Bureau and Andrew Lee of security outfit K7 Computing pointed out that underestimating the risks presented by Macs can make them less secure than Windows machines. “While Mac users – with the exception of those making significant use of Windows on Macs – operate in an environment prowled by infinitely fewer predators, Microsoft and its more savvy customers are to some extent shielded by a more accurate assessment of the risks to which Windows users are exposed.”
Even if Apple’s computers really were completely secure from viruses and all other threats, they would still represent a risk, Harley points out. “Any computer user who believes a system is so safe that they don’t have to care about security is prime material for exploitation by social engineering,” he says. But in fact there is no “Mac magic” which makes machines running Apple’s OS X immune to viruses. “It is not impossible to write an OS X virus. I wouldn’t say it was even conceptually more difficult than writing one for Windows,” says Harley. Right now there are “quite a few hundred” malicious Mac binaries in circulation, he adds.
What about the perception that Macs are secure because parts of OS X are based on BSD? The key word here is “based.” The reason the perception is false is because the two operating systems are not the same. “Apple has gone its own way as to how to interpret the BSD approach – in other words, you’re not in Kansas anymore,” says Harley.” You simply can’t assume that things considered safe in BSD are safe in OS X, because OS X simply isn’t the same as BSD.
For example, OS X uses a single program, launchd, that combines the functionality of a number of standard UNIX utilities including System V init, cron, xinetd and mach init. But Harley points out that there have been several vulnerabilities reported in launchd, and because it runs as root, the implications can be serious. Since it deals with setting up and managing networked services, it’s also likely that vulnerabilities in launchd will be remotely exploitable. By combining standard UNIX utilities in the way that OS X does, Apple has magnified complexity and increased the attack surface of its operating system.
Apple aficionados point out that the vast majority of Mac users don’t use anti-virus software and have never been infected by a virus, and while this is certainly the case it rather misses the point. That’s because while traditional viruses are in decline across all platforms, they are far from being the only threat that Macs face. Other OS-specific threats include:
- rootkits such as WeaponX
- fake codec Trojans
- malicious code with Mac-specific DNS changing functionality
- fake or rogue anti-malware
- disruptive adware
Some of the blame for the inaccurate perception that Macs are “secure” must be laid at the feet of Apple. The company’s current security line is that “Mac OS X doesn’t get PC viruses,” which is disingenuous at the very least: PCs get PC viruses, and Macs get Mac viruses. Besides, as noted earlier in this article, viruses are only a part of the threatscape.
Harley says that while Apple has implemented some good security measures – such as the way it offers firewalling, updates and patches – others offer less security than they appear to. For example, Apple says that OS X “prevents hackers from harming your programs through a technique called ‘sandboxing’ — restricting what actions programs can perform on your Mac, what files they can access, and what other programs they can launch.” But Apple doesn’t using sandboxing with all its applications, notably Safari, so hackers are still able to exploit other applications that Safari can open.
The company also touts its “Library Randomization, which prevents malicious commands from finding their targets.” But library randomization is only a subset of the far more comprehensive Address Space Layout Randomization found in Windows Vista and 7 which includes code, stack and heap location randomization as well as library location randomization. “Apple’s security is not nearly as good or comprehensive as they’d like you to think” Harley says.
So what implications does all this have for network administrators tasked with protecting their infrastructure in enterprises with a growing proportion of Macs? Certainly they should be aware that Apple computers present a real security risk, and that this risk is likely to increase if Macs become more popular in enterprise. “If your security infrastructure is geared towards Windows desktops, then it’s probable that your perimeter defenses are geared to Windows, says Harley.” That means Mac threats are unlikely to be detected as they enter your network.” Running security software on each Mac as an extra layer of defense would therefore be a sensible precaution, he believes. Ensuring Mac users are aware of the possibility of social engineering attacks, such as being asked for their password by someone posing as a member of the IT department, is also a good idea.
While Macs may pose a less obvious security risk than PCs, the risks that they pose should not be ignored, Harley concludes. “I would be treating Macs with caution. Not panic, but caution.” | <urn:uuid:1a233aa6-25db-4e9b-8bd0-eafbe7861f72> | CC-MAIN-2022-40 | https://www.enterprisenetworkingplanet.com/security/apple-security-isnt-a-sure-bet/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00416.warc.gz | en | 0.959926 | 1,363 | 3.0625 | 3 |
What is Robotic Process Automation?
An integral tool for digital transformation
Robotic process automation: Definition
Robotic process automation (RPA) is a revolutionary business process automation technology that eliminates tedious tasks, freeing employees to focus on higher-value work while offering better and more flexible business process support. In short, RPA processes drive business efficiency and accuracy by automating and standardising repeatable business processes using virtual business assistants or bots.
Is RPA right for your business processes?
RPA can enhance your intelligent automation capabilities and extend the business value of your applications, especially as your organisation embarks on digital transformation. An RPA solution can fill the gaps in your processes and communicate with existing business systems seamlessly, and it’s frequently used across industries and departments, namely, accounts payable, human resources, finance, insurance and customer service.
It can easily be programmed to do basic, repetitive tasks across applications to speed up processes. The essential criteria, when determining if RPA processes fit, are processes that must:
- Be rule-based
- Be repeated at regular intervals or have a predefined prompt
- Have defined inputs and outputs
- Have sufficient volume
Some of the situations where you can use RPA processes to replace manual processes include:
- Performing high-volume, repetitive transactions
- Extracting and reformatting data into reports and dashboards
- Merging data from multiple locations and systems
- Collecting social media statistics
- Generating emails and filling out forms
Benefits of RPA
Organisations that leverage RPA technology achieve significant gains, which can include:
- Increased accuracy and compliance: RPA enables departments to automate repetitive and rule-based tasks resulting in accelerated business cycles. RPA technology eliminates human error as tasks are done consistently every time. Plus, all steps are recorded and automatically checked against regulations to ensure compliance using audit trails for easy tracking.
- Increased visibility and speed: RPA can accelerate tasks by minimising the need for human intervention. Using dashboards and reporting analytics, RPA provides visibility into process bottlenecks and makes it easier to analyse and optimise processes.
- Improved productivity and efficiency: With RPA processes, operations can run autonomously, completing manual tasks using bots while employees only need to intervene to make decisions. This means employees can focus on higher-value activities resulting in increased engagement and optimised utilisation.
- Reduced costs: Being able to complete tasks much faster than manual approaches and in a shorter, more efficient cycle translates to saved costs in terms of efficiency and productivity, as well as utilisation.
- Easily configurable and scalable across operations: Using RPA processes to integrate your legacy systems with core systems will not be an issue. Plus, with low-code configuration, RPA software is easy to build and manage, and it requires minimal effort from your IT teams. This makes it easily scalable across business operations and allows you to change processes in the software with zero or minimal downtime.
Enhance your content services with RPA
Now more than ever, it is imperative for organisations to leverage automation tools to address business challenges as part of digital transformation initiatives. The addition of RPA technology is an integral component of Hyland’s intelligent automation strategy. Beyond merely automating processes, Hyland RPA offers intelligent components for improved efficiency, smart tracking of cognitive decisions and much more. Hyland RPA extends the process automation capabilities of Hyland’s industry-leading content services platform, allowing organisations to easily enhance their solutions with robust RPA processes.
For more information on how RPA can help your organisation on its digital transformation initiatives, contact us. | <urn:uuid:f2154458-2340-4ad4-ac9b-fe935f609edb> | CC-MAIN-2022-40 | https://www.hyland.com/en-SG/resources/terminology/automation/robotic-process-automation | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00416.warc.gz | en | 0.894645 | 750 | 2.796875 | 3 |
Oracle makes it a point to include improvements in PL/SQL every time it releases a new version of Oracle Database.
The release of Oracle Database 10g was no exception. Let us take a look back at how Oracle improved PL/SQL back then.
Oracle Database 10g included a compiler and a better-tuned execution environment for PL/SQL, leading to execution times that are 2x faster than before.
Before Oracle Database 10g, PL/SQL was very slow when it was used as a conventional programming language to write procedural code for the database and SQL.
Oracle brought in a new PL/SQL execution environment and a new compiler in Oracle Database 10g. And the changes were worth it. Oracle found that PL/SQL statements on 10g ran at least 1.2 times faster and sometimes even 1.6x faster than in earlier Oracle Database versions.
In order to do this, the compiler would need to reorganize your source code and optimize it to run faster. The compiler has some minor changes to how your PL/SQL program would behave but these are too insignificant to notice. What you do notice is that your PL/SQL programs run faster and performs better.
New Language Tools
PL/SQL for Oracle Database 10G had new language tools that had efficient implementations and that made life easier for the programmer. These features were first introduced into SQL, and so Oracle had to add them to PL/SQL to make sure that PL/SQL would still be a relevant language for executing SQL statements. These features include:
- the introduction of IEEE datatypes such as binary_double and binary_float
- introducing values and indices of all syntax for forall
- pls_integer and binary_integer are now treated the same
More PL/SQL Packages
These packages allow you to lengthen the use of Oracle DB when you have a functionality that SQL cannot perform. In short, PL/SQL packages enable you to get more out of your Oracle Database.
In 10g, there are three new supplied packages:
- Dbms_Warning. Dbms_Warning gives programmers better control over the warnings you get when you install scripts. You get to specify what categories of warnings you would see. You are also able to specify which warnings to enable, disable or flag as errors.
- Utl_Mail. Utl_Mail allows you to send composed e-mails without having to learn a single thing of SMTP. Utl_Mail is also much simpler because it is focused on a subclass of the Utl_Smtp, which was the only option available in earlier releases.
- Utl_Compress. Utl_Compress allows you zip and unzip a blob or raw bytestream.
Over the years, Oracle has released more and more versions of their Oracle Database. Each release features improvements when it comes to PL/SQL. If you need to wrap your head on these improvements and new features, as well as everything else that is related to Oracle Database, then call Four Cornerstone. Let us talk to you about our Oracle Database consulting and Oracle DBA training so that you could get the most out of your Oracle investments!
You can download PL/SQL here.
Photo courtesy of OraFaq. | <urn:uuid:89dd5edf-ae91-45e5-88e8-1d86876a35d0> | CC-MAIN-2022-40 | https://fourcornerstone.com/changed-plsql-oracle-database-10g/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00416.warc.gz | en | 0.938196 | 676 | 2.640625 | 3 |
With more people entrusting their personal data in cloud services, the Privacy Protection Regulation (GDPR) clarifies the harsh stance the EU has decided to take on the issue.
The GDPR is considered to be the strictest privacy protection law in the world, regulations that impose obligations on organizations offering goods or services to EU citizens or residents, and/or collect data related to people in the EU, regardless of the organization’s physical location.
The regulation was put into effect on May 25, 2018, imposing heavy fines on those who violate GDPR regulations, with penalties reaching a maximum of € 20 million or 4% of global revenues, whichever is higher, leaving the organization open to claims by private citizens for damages.
The regulation itself is not well defined and quite amorphous, making GDPR compliance discouraging, especially for small and medium-sized enterprises (SMEs) who are unsure how to approach it.
We will try to break down the regulation in a clear, detailed way that will allow you as a business owner to understand how to meet the strict and important standards in the cyber world.
Purpose of GDPR
The new privacy protection regulations were formulated following far-reaching changes in the business world and the use of the Internet – the latest data protection laws were enacted in the 1990s, and since then technology has advanced and the way data is used and stored has become a wild west.
The regulation gives private individuals, called data subjects, control over the processing of their personal data.
What is personal data according to the GDPR?
Personal data is any information that can be used to identify a person such as:
Physical and psychological condition
Cultural / social identity
Processing – any action or set of actions on personal data that is performed by automatic/manual means.
Principles For Data Processing According to GDPR
As stated above, the Privacy Protection Regulations are general and not detailed. The GDPR defines basic principles for the processing of personal data of data subjects:
When can personal data be processed?
Organizations should only process personal information when justified. The GDPR defines 6 reasons why companies are allowed to process personal data.
Most organizations are aiming for the consent of the data subjects, however, this is the loosest criterion, as consent can be withdrawn at any time.
Furthermore, withdrawal of consent must be as accessible and easy as it was to give it, and the law provides that withdrawal can be made by any means of media. When the person withdraws his consent the organization must delete all personal data.
Data Subjects Rights
When starting the compliance process, organizations must aspire to keep data subject rights that the GDPR states as guidelines
Benefits of the GDPR
Bureaucracy is a headache, but there are many benefits to the GDPR that concern not only private individuals but also organizations and businesses. The new law promotes greater transparency and accountability and aims to increase public trust by giving individuals more control over the data.
In addition, organizations and businesses that implement the GDPR standard are better protected from current cyber threats, thus keeping their work environment private and properly maintained.
How To Comply With GDPR?
Access and Mapping – The first step towards GDPR compliance is to research and map which personal data is stored and used on the organization’s platform. Direct access to all data sources is a prerequisite for building an inventory of personal data, so that exposure to cyber risks related to privacy can be assessed. The regulation requires organizations to prove that they know where personal data is – and where they are not.
Identification – Examining access to information sources and identifying personal data. It is important to note that sometimes personal data is buried in semistructured fields, and organizations need to be able to analyze these fields to extract, classify, and catalog personal data components such as names, email addresses, and ID numbers.
This process must be done by automated tools due to the massive amount of data. Beyond analysis and classification, the organization is committed to adjusting data quality according to levels – pattern identification, data quality, and standardization. Using the right tools will make a big difference in your ability to maintain GDPR compliance.
Organizational conduct – DPO appointment – After comprehensive mapping and analysis, senior management must implement the recommendations that arose following the information. The conduct of the Company’s employees and the appointment of officials to maintain the conduct of protecting information privacy is an essential component in complying with the GDPR.
If your organization owns, or owns, these databases, and if it belongs to one of the following sectors:
- Public Sector
- Credit ratings and evaluation
- Insurance Company
The organization must then appoint a Data Security Officer, or DPO – Data Protection Officer, who will be responsible for documenting the information processing. This includes the legal basis that allows the usage of personal data, verifying the accessibility of withdrawing consent, and exercising the right to be forgotten of data subjects.
Due to the complexity of the process of complying with privacy regulations, it is important to work closely with a cyber security company that specializes in international cyber compliance. | <urn:uuid:9e1a4f15-c153-454a-aee0-17c8d638dedf> | CC-MAIN-2022-40 | https://redentry.co/en/blog/gdpr-compliance-organizations/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00416.warc.gz | en | 0.937821 | 1,073 | 3.015625 | 3 |
The standard, textbook division of processes is to divide them into main (primary) processes, supporting processes, and management processes. This division is a good illustration of the significance that the particular process groups have in business. It enables us to create a clear, transparent, and well-organized process map. It also allows us to define the relations between the processes themselves, as well as the relations between processes and the teams performing them. However, this division does not enable us to chose the manners of describing processes, or the methodologies of implementing or automating processes within an organization. Should we wish to do so, we would need to resort to a taxonomy dealing with the method of process implementation.
1. Static (structured) processes
Static (structured) processes are those processes which are unchangeable in form, as well as those processes which change over a long period of time. In effect, it is possible to improve such processes with the use of standard BPCI mechanisms based on the Deming’s wheel (PDAC) method, as well as with the help of the management and their knowledge. Because the structure of such processes is known beforehand, such processes can be described in the form of a complete algorithm. In principle, the performance of static processes which do not require decision-making can be delegated to industrial robots or computers.
- Scope of implementation:
◦ identification, rationalization, improvement of processes,
◦ identification, standardization, improvement of decision-making processes (business decision management ‒ BDM),
◦ communication (publication) of process models throughout the organization,
◦ automation of process workflow in workflow, DM, or BPMS systems,
◦ automation of data gathering and analysis via executive systems and BAM/BI applications,
◦ exposure and unification of organizational knowledge, usually on a one-time basis in the process identification phase,
◦ transparency and accessibility of published and up-to-date process maps and models,
◦ initiation and execution of process improvement (BPCI)
◦ full control over real-time and ex-post process execution (instant identification of deviations and errors arising in the course of process execution),
- Risks and hazards:
◦ misalignment with the changing market conditions, being incapable of personalizing processes,
◦ performing a process in the standard manner, which does not conform to the conditions of executions (succeeding to perform the standard process, but generating losses in its course),
◦ creating a culture of unaccountability
Unfortunately, only about 20% of processes can be described as above in real-life organizations. Most often these are the organizations’ normal internal processes, which are not client-facing, as well as processes which must be standardized due to legal constraints (i.e. accounting processes, tax processes, some HR processes, etc.).
2. Dynamic (unstructured, ad-hoc, …) processes
In the remaining 80% cases, processes contain actions or entire subprocesses which are hard to conceptualize within an algorithm. Processes, the course of which is dependent on individual conditions of execution, or which contain such a large amount of variables that it’s impossible to model them. Such processes require us to factor in at the modeling phase the possibility of process performers making individual decisions that we are not able to foresee beforehand. In effect, they require us to take into account the knowledge of process performers in the course of modeling and improving processes.
- The scope of implementation is the same as for static processes, but also includes:
◦ quick identification of processes factoring in dynamic actions (ad-hoc in BPMN 2.0),
◦ automation of process workflow in dynamic BPMS and ACMS systems,
◦ implementation of an automated business process discovery (ABPD) mechanism or a process mining in support of knowledge acquisition,
◦ implementation of quick-learning mechanisms in the organization with the use of social BPM, CoP, etc.,
- Benefits are the same as for static processes, but also include:
◦ ongoing verification and creation of new knowledge in actual business conditions (and not some detached research & development facilities),
◦ initiation and execution of the constant improvement of processes (BPCI) with the use of the entire intellectual capital of the organization,
◦ rapid, broad use of new knowledge with the aim of raising the effectiveness of performed processes,
◦ actual empowerment of process performers, creation of a culture of accountability,
- Risks and hazards:
◦ the risk of the failure of the process executors’ limited experiments (although some knowledge is also gained in return),
◦ the risk of chaos as the result of too many uncontrolled experiments (which can be mitigated by the strict control and oversight of privilege levels).
3. Risks associated with the static modeling of dynamic business processes
When the identification and implementation of processes that are dynamic in nature is attempted as if they were static, projects usually take longer (larger costs, larger risks, …) and the effectiveness of the organization does not rise, but falls instead. Whenever processes are connected with the Client or the market environment, it is even more crucial to establish whether dividing the process into individual “indivisible” actions, which the process executor then performs, will not lead to higher losses due to the over-complication and over-specification of processes. Perhaps it would be better to leave the description general enough as it is? The main risks connected with modeling dynamic processes as complete algorithms are:
a. Losing the transparent and flexible character of processes as the result of their over-specification
This results in the “creeping” over-complication of processes due to adding different special exceptions, “contingency plans,” and conditions which should be taken into account despite the fact that they only arise in special circumstances! I once worked on a settlement system for foreign credits. After having taken into account all of the possible conditions, I was faced with a “monster” that was not practical to work with. It even factored in the option of performing a contract in 2 countries at the same time, in 10 different places, and using 3 different currencies, only because one such contract was performed in the span of the last 25 years! However, in practice it turned out that even such a system did not take into account all the possible circumstances, as one situation arose which was not factored in the system at all.
b. Strengthening a culture of unaccountability
The strict imposition of an unchanging method of performing a process, which does not factor in changing circumstances, rids employees of their initiatives and takes away their accountability for the results of the processes. Not only that, it even encourages them to accept a situation which causes loss, but which follows the statute/procedures/processes! After all, if the process/procedure owner is responsible for the outcome, the employees think it wise to stay close to the procedure/process, even when they think that such action is senseless.
c. Introducing automated processes in an organization as if it were a computer
There is a risk associated with particular organizational units and their managers to see changes NOT AS A CHANCE, but as a threat to their privileges and competence ranges. The lack of information, understanding, and acceptance for change, may lead to individuals viewing change as a threat and meeting such a threat with a strong backlash. | <urn:uuid:66fa1e37-463c-4fc6-b5b6-7087f7fd73cb> | CC-MAIN-2022-40 | https://www.bpmleader.com/2014/08/28/static-and-dynamic-processes/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00416.warc.gz | en | 0.95351 | 1,558 | 2.84375 | 3 |
1 - Getting Started with Excel 2019
Topic A: Navigate the Excel User InterfaceTopic B: Use Excel CommandsTopic C: Create and Save a Basic WorkbookTopic D: Enter Cell DataTopic E: Use Excel Help
2 - Performing Calculations
Topic A: Create Worksheet Formulas
Topic B: Insert Functions
Topic C: Reuse Formulas and Functions
3 - Modifying a Worksheet
Topic A: Insert, Delete, and Adjust Cells, Columns, and Rows
Topic B: Search for and Replace Data
Topic C: Use Proofing and Research Tools
4 - Formatting a Worksheet
Topic A: Apply Text Formats
Topic B: Apply Number Formats
Topic C: Align Cell Contents
Topic D: Apply Styles and Themes
Topic E: Apply Basic Conditional Formatting
Topic F: Create and Use Templates
5 - Printing Workbooks
Topic A: Preview and Print a Workbook
Topic B: Set Up the Page Layout
Topic C: Configure Headers and Footers
6 - Managing Workbooks
Topic A: Manage Worksheets
Topic B: Manage Workbook and Worksheet Views
Topic C: Manage Workbook Properties
Actual course outline may vary depending on offering center. Contact your sales representative for more information.
Who is it For?
This course is intended for students who wish to gain the foundational understanding of Microsoft Office Excel 2019 that is necessary to create and work with electronic spreadsheets.
To ensure success, students will need to be familiar with using personal computers and should have experience using a keyboard and mouse. Students should also be comfortable working in the Windows® 10 environment and be able to use Windows 10 to manage information on their computers. Specific tasks the students should be able to perform include: opening and closing applications, navigating basic file structures, and managing files and folders. To obtain this level of skill and knowledge, you can take either one of the following courses:
Using Microsoft® Windows® 10
Microsoft® Windows® 10: Transition from Windows® 7 | <urn:uuid:d5329edf-9a06-4da1-935a-0742ccecc087> | CC-MAIN-2022-40 | https://nhlearninggroup.com/find-training/course-outline/id/1035992364/c/excel-2019-part-1 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00616.warc.gz | en | 0.790241 | 429 | 3.046875 | 3 |
The current pandemic has exposed yawning gaps in the ability of even the best-prepared developed countries to respond to virulent pathogens. The world has seen SARS and Ebola in fairly recent times, and with the COVID 19 pandemic it is becoming clear that technology can help us combat and overcome future epidemics if we plan and strategize with these technologies. They bring efficiency to our response times, and we are currently learning the importance of using these technologies for prevention as well. A small example: Canadian AI health monitoring platform BlueDot's outbreak risk software is said to have predicted the outbreak of the pandemic a whole week before America (which announced on Jan 8) and the WHO (on Jan 9) did. BlueDot predicted the spread of COVID 19 from Wuhan to other cities like Bangkok and Seoul by parsing through huge volumes of international news (in local languages). It further was able to predict where the infection would spread by accessing global airline data to trace and track where the infected people were headed.
Contrary to earlier times, today it only takes a few hours to sequence a virus, thanks, of course, to technology. Scientists no longer have to cultivate a sufficient batch of viruses in order to examine them; today, the genetic material can be obtained from an infected person's blood sample or saliva. India's National Institute of Animal Biotechnology (NIAB), Hyderabad, has developed a biosensor that can detect the novel coronavirus in saliva samples. The new portable device, called 'eCovSens', can detect coronavirus antigens in human saliva within 30 seconds using just 20 microlitres of sample. Startups like Canadian GenMarkDx, US-based Aperiomics and XCR Diagnostics, Singapore-based MiRXES, and Poland's SensDx have introduced top-notch diagnostic solutions. Identifying infected people so they can be given prompt medical care will be a lot faster with these diagnostic kits.
Genome sequencing is also vital to fighting the pandemic. The genome of this virus was completely sequenced by Chinese scientists within a month of the detection of the first case, and from then on biotech companies created synthetic copies of the virus for research. Today, creating a synthetic copy of a single nucleotide costs under 10 cents (compared to roughly $10 earlier), so the work is far quicker and cheaper, which means appropriate and adequate medication can be found much faster, helping save more lives.
Healthcare workers are paying a huge price: they run the risk of getting infected, there is often a paucity of PPE, and in some countries they even face assault from crowds that are angry and confused at the situation. Medical workers are targeted by mobs, there are instances where communities don't allow them to come back to their homes after duty, shops don't sell them necessities, and so on. Medical robots can be real game-changers in such situations. Deploying robots to the rescue in such scenarios is becoming a much sought-after option wherever possible. Robots are the answer to these difficult situations as they are impervious to infection. They allow physicians to treat and communicate through a screen. The patient's vitals are also recorded by the robot. Patients can be very efficiently monitored this way.
Drones can also be used for deliveries, especially medical deliveries, to reach isolation or quarantine zones. Italy made a big success out of this. Italy's coronavirus epicenter, Bergamo, in the Lombardy region, resorted to having people's temperatures read by drones. 'The Star' reported that "once a person's temperature is read by the drone, you must still stop that person and measure their temperature with a normal thermometer," said Matteo Copia, a police commander in Treviolo, near Bergamo. Drones are being used for surveillance – in areas where people were not complying with social distancing and lockdown restrictions, authorities used drones to monitor people's movement and break up social gatherings that could be a potential risk to society. Drones are also being used for disinfectant spraying, broadcasting messages, medicine and grocery deliveries, and so on.
Interactive maps give us data on the pandemic in real time, and monitoring a pandemic this widespread and dangerous is crucial to stopping or controlling its spread. These maps are made available to everybody, and truth and transparency about a situation of such epic proportions are necessary in order to avoid panic within communities. We now have apps for tracking the virus's spread, fatalities, and recovery rates, and apps will be developed in the future that warn us about impending outbreaks and the geographies and flight routes that we must avoid.
Implementing these technologies will enable us to manage and conquer situations like the current pandemic we are going through. As Bernardo Mariano Junior, Director of WHO’s Department of Digital Health and Innovation, rightly said “The world needs to be well prepared and united in the spirit of shared responsibility, to digitally detect, protect, respond, and prepare the recovery for COVID 19. No single entity or single country initiative will be sufficient. We need everyone.” | <urn:uuid:274c7396-76fc-4d46-a12b-a183d31aa3d7> | CC-MAIN-2022-40 | https://www.gavstech.com/combatting-a-health-crisis-with-digital-health-technologies/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00616.warc.gz | en | 0.954485 | 1,097 | 3.25 | 3 |
Password encryption allows you to add an extra security layer to passwords or other sensitive information in your project, for example to protect the database's password. This feature encrypts any text string using the SHA-256 hash function. The result is a string of characters that hides the information of the password. This feature is available through the Work Portal and may be used regardless of the environment.
An example of its use is in the Advanced Deployment. It is very important to protect the configuration files against unauthorized access; however, we strongly suggest adding an extra security layer by using the encryption feature available in the Work Portal to encrypt the passwords inside those files.
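To illustrate what the feature does conceptually, here is a minimal Python sketch of SHA-256 hashing with the same confirm-then-cipher flow. The function name, the hex output encoding, and the absence of any extra salting are assumptions made for illustration only, not a description of Bizagi's internal implementation.

import hashlib

def cipher_text(text, confirmation):
    # Mirror the Work Portal flow: the value must be typed twice and must match.
    if text != confirmation:
        raise ValueError("The two values do not match, so nothing is ciphered.")
    # SHA-256 produces a fixed-length digest that hides the original string.
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

print(cipher_text("MyDbP@ssw0rd", "MyDbP@ssw0rd"))

Because SHA-256 is a one-way function, the resulting string can be pasted into configuration files without exposing the original password.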
How to cipher text
In the Work Portal, click on Admin and then in the Security menu select Password Encryption.
This will display a window in which you can cipher any text you want.
Type the string to cipher in the Encryption Text field and confirm it in the field below. This makes sure that the text entered the first time is correct. If the two fields do not match, the password will not be ciphered.
Finally, click Encrypt.
The Ciphetext field will display the password encrypted. Copy this string and paste it wherever it is required. | <urn:uuid:40a5da55-933e-4950-ad6e-592ec83a82d1> | CC-MAIN-2022-40 | https://help.bizagi.com/bpm-suite/en/password_encryption.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00616.warc.gz | en | 0.849869 | 254 | 2.671875 | 3 |
Table of Contents
- What is HIPAA?
- HIPAA Compliance Terminology
- What Are the Three Rules of HIPAA Compliance?
- What Is the HIPAA Privacy Rule?
- What Is the HIPAA Security Rule?
- What Is the HIPAA Breach Notification Rule?
- What Is the HITECH Act?
- What Is the Omnibus Rule?
- What Does HIPAA Compliance Entail?
- What Are the Penalties for Not Meeting HIPAA Compliance?
- What Can I Do to Ensure That My Organization is HIPAA Compliant?
What is HIPAA?
HIPAA is the Health Insurance Portability and Accountability Act signed into law by President Bill Clinton in 1996. HIPAA was put into place to protect patient data from theft or loss.
Why is this important? Protected Health Information (PHI) is considered some of the most sensitive data that a person can have. It was determined that it was critical to protect PHI for patients and that this responsibility fell on healthcare providers who used that information for treatment, research, or billing purposes.
With the emergence of electronic PHI (ePHI) and digital technologies like networked communication and electronic recordkeeping, HIPAA became that much more important. HIPAA was therefore conceptualized to protect ePHI no matter where it is.
HIPAA Compliance Terminology
When discussing HIPAA compliance, there are a few specialized terms that the framework defines to help involved parties better understand their responsibilities:
- Electronic Protected Health Information (ePHI): This is patient data related to their treatment, medical history, or payment history related to that treatment. This is the primary information that must be protected under HIPAA guidelines.
- Covered Entities (CEs): These are the primary responsible parties under HIPAA, and include organizations like hospitals, clinics, insurance companies and broader healthcare networks.
- Business Associates (BAs): Business Associates are typically contractors or third-party companies that handle specific functions for CEs and, in doing so, manage ePHI. A BA can manage such functions as payment processing and management, cloud software and storage or cybersecurity measures. Under HIPAA, a BA is equally responsible for ePHI as a Covered Entity.
- Business Associate Agreements: Required documents detailing the working relationship between a CE and a BA, including the BAs responsibilities under HIPAA. HIPAA requires that every CE have a standing BAA with their Business Associates.
A “business associate” is a person or an organization that performs tasks that involve the use or disclosure of PHI, such as:
- Laboratory facilities
- CPAs, attorneys, and other professionals with clients in the healthcare industry
- Medical billing and coding services
- IT providers, such as cloud hosting services and data centers, that are doing business in the healthcare industry
- Subcontractors and the business associates of business associates must also comply with HIPAA rules.
A “covered entity” is one of the following:
- A healthcare provider, such as a doctor’s office, pharmacy, nursing home, hospital or clinic that transmits “information in an electronic form in connection with a transaction for which HHS has adopted a standard.”
- A health plan, such as a private-sector health insurer, a government health program such as Medicaid, Medicare, or Tricare, a company health plan, or an HMO.
- A “healthcare clearinghouse” is an entity that processes health information received from another entity, such as a billing service or a community health information system.
What are the Three Rules of HIPAA Compliance?
At the heart of HIPAA regulations are three rules:
- The Privacy Rule that defines PHI, responsible organizations, and how the latter must protect the former.
- The Security Rule defines the necessary controls a healthcare organization must implement to protect PHI properly.
- The Breach Notification Rule outlines the steps an organization must take to notify the public about any security breaches resulting in the theft or loss of PHI.
There is an additional rule, the Omnibus Rule, that revises and updates several aspects of the HIPAA rules and how they impact healthcare organizations.
What is the HIPAA Privacy Rule?
The HIPAA Privacy Rule establishes the basic standards for patient data, privacy and compliance requirements. This rule establishes many of the foundations of HIPAA compliance, including:
- The definition of PHI within compliance. The Privacy Rule defines PHI as information that relates to a patient's past, present, or future health conditions; the provision of healthcare for that individual; and any payments made for past, present, or future care.
- The definition of responsible parties under HIPAA. Under this rule, two major players are highlighted: Covered Entities (CEs) and Business Associates (BAs) that handle PHI.
- The way organizations work together to handle PHI. This essentially defines the necessity of a Business Associate Agreement (BAA) between CEs and BAs.
- The types of data not protected under HIPAA. This includes how to de-identify health information, and under what circumstances a doctor or healthcare provider can disclose healthcare information.
The Privacy Rule, therefore, is the bedrock by which all the other rules make sense, including the controls and safeguards defined in the Security Rule.
What is the HIPAA Security Rule?
The HIPAA Security Rule takes the responsibilities in the Privacy Rule and dictates the appropriate security measures that an organization must implement to be compliant. It does not state specific technologies, however. Instead, it outlines general practices and approaches with the understanding that those practices meet reasonable expectations for protection. So, for example, the Security Rule expects that encryption be in place to protect ePHI, but the organization must implement an encryption algorithm that protects against current threats.
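As a purely illustrative sketch of that idea (not a HIPAA mandate or a complete safeguard on its own), the snippet below encrypts a record at rest with AES-256-GCM using Python's third-party cryptography package. The record contents and key handling shown are assumptions; a real deployment would pair this with a key management system, access controls, and the administrative and physical safeguards described below.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_record(record, key):
    # AES-GCM provides confidentiality plus integrity (authenticated encryption).
    nonce = os.urandom(12)  # must be unique for every message encrypted with this key
    return nonce, AESGCM(key).encrypt(nonce, record, None)

key = AESGCM.generate_key(bit_length=256)  # in practice, protect this key with a KMS/HSM
nonce, ciphertext = encrypt_record(b"patient=Jane Doe; diagnosis=...", key)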
The Security Rule covers several areas. Primarily, this rule defines necessary security controls over three broad categories:
- Technical: The implementation of technological safeguards like encryption, firewalls, anti-malware, and any other relevant protection.
- Administrative: The placement of risk management and assessment, training, and other administrative processes to monitor and manage security.
- Physical: The protection of physical systems (data centers, workstations, mobile devices) from unauthorized access.
These security rules tell Covered Entities where and how they must protect data, and leave them to implement appropriate standards to do so.
What is the HIPAA Breach Notification Rule?
Security breaches happen, and under HIPAA Covered Entities and Business Associates have a legal obligation to report breaches to the victims and the public more broadly. A “breach” is when an unauthorized party accesses ePHI and can include both accidental disclosures and malicious hacks.
The Breach Notification Rule dictates that responsible organizations notify relevant parties within a period. Once an organization determines that a breach has occurred, they have 60 calendar days from the day of discovery to notify patients whose records have been compromised. This must occur in writing or via email, and if there are a significant number of affected individuals without contact information on hand, the organization must post the notification prominently on their website.
If the breach involves more than 500 people, then the affected organization must also notify prominent media outlets within the affected jurisdiction of the breach. They must also notify the office of the Secretary of Health and Human Services.
These regulations only apply to the theft or unauthorized access to unencrypted data.
What is the HITECH Act?
The Health Information Technology for Economic and Clinical Health Act (HITECH Act), signed by President Obama in 2009, updated HIPAA by outlining rules and penalties regarding breaches of protected health information (PHI).
Among other provisions, HIPAA mandates that security measures be taken to protect PHI. HIPAA is split into five sections or titles. HIPAA Title II, which is known as the Administrative Simplification provisions, is what most information technology (IT) professionals are referring to when they speak of “HIPAA compliance.”
HITECH also contains several provisions and requirements to encourage (and eventually force) healthcare organizations to migrate health data and communication systems to digital infrastructure.
What Is the Omnibus Rule?
The Omnibus Rule is a revision and update of many HIPAA requirements. Made effective in 2013, the Omnibus Rule most significantly reshaped responsibilities for Business Associates. Before 2013, Business Associates had more limited liability under HIPAA. HITECH made it so liability requirements were spelled out in a BAA, but the Omnibus Rule codified responsibilities and penalties for BAs, essentially making them responsible for their compliance.
The Omnibus Rule also updated several other aspects of HIPAA, including:
- How organizations could or could not sell ePHI
- Student immunization records
- Sharing of ePHI across individuals or organizations
- Individual access of ePHI by patients
What Does HIPAA Compliance Entail?
The Administrative Simplification provisions in HIPAA Title II are split into five rules, including the HIPAA Privacy Rule and the HIPAA Security Rule.
The HIPAA Privacy Rule establishes national standards to protect PHI. It applies to all forms of records – electronic, oral, and written – and requires employers to implement PHI security procedures and ensure that all employees are trained on them. The HIPAA Security Rule applies to ePHI. It establishes national standards to protect ePHI and requires entities to implement administrative, physical, and technical safeguards of ePHI.
What Are the Penalties for Not Meeting HIPAA Compliance?
If your organization is not HIPAA compliant, and a breach of PHI occurs, the penalties can be severe, as can be the public relations fallout for your organization. You will be required to notify all affected patients of the breach, and this publicity could do irreparable damage to your organization’s reputation. Your organization could also face fines of more than $1 million – and, in some cases, even criminal penalties.
HIPAA breaks penalties down into four tiers based on the type of violation:
- Tier 1: These are violations that are unknown (accidental) and that are not realistically avoided even with a HIPAA compliance review.
- Tier 2: These violations are those that the organization should have been aware of but were not, but that probably could not have been avoided (that is, the org should have known but the violation isn’t due to negligence).
- Tier 3: These violations are the result of willful neglect of compliance, but attempts have been made to rectify the problem.
- Tier 4: These are violations of willful neglect in which no attempt has been made to rectify the situation.
As tiers increase, so does the severity of the non-compliance and, accordingly, the penalties:
- Tier 1: A minimum fine of $100 per violation, up to $50,000.
- Tier 2: A minimum fine of $1,000 per violation, up to $50,000.
- Tier 3: A minimum fine of $10,000 per violation and again up to $50,000.
- Tier 4: A minimum fine of $50,000 per violation.
As you can see, multiple violations of HIPAA due to willful neglect can easily bankrupt small organizations and business associates.
That is not all! The most common breaches are usually accidental, but almost all breaches involve someone internal to the organization. In either case, the penalties for stealing ePHI are steep under HIPAA, and fall under three separate tiers:
- Tier 1: When the party initiates a breach through lack of knowledge or by accident, they can get up to 1 year in jail.
- Tier 2: When a party obtains ePHI through fraud or other means, they can receive up to 5 years in jail.
- Tier 3: When a party commits fraud to obtain ePHI with the intent to sell it or harm individuals, they can get up to 10 years in jail.
What Can I Do to Ensure That My Organization is HIPAA Compliant?
Lazarus Alliance believes that the best defense against a PHI breach is a good offense – and HIPAA requires that covered entities and business associates take a proactive approach to protecting patient data. Considering the financial penalties and potential PR nightmare associated with breaches of sensitive personal medical information, HIPAA compliance is serious business.
HIPAA is a complex law, and many organizations are baffled about where to begin with their HIPAA compliance. Thankfully, the HIPAA compliance experts at Lazarus Alliance are here to help. We offer comprehensive HIPAA Audit, HITECH, NIST 800-66, and Meaningful Use Audit services to help you evaluate your existing HIPAA protocols and establish new ones. Lazarus Alliance's proprietary IT Audit Machine (ITAM), which is fully HIPAA compliant, helps eliminate 96% of cybercrime and nearly 100% of the headaches associated with compliance audits.
Lazarus Alliance offers full-service risk assessment and risk management services helping companies all around the world sustain a proactive cybersecurity program. Lazarus Alliance is a proactive cyber security®. Call 1-888-896-7580 to discuss your organization’s cybersecurity needs and find out how we can help you with HIPAA Compliance. | <urn:uuid:bf8efe68-bcb7-43bf-aab3-2ca470092ce9> | CC-MAIN-2022-40 | https://lazarusalliance.com/the-2021-guide-to-hipaa-compliance/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00616.warc.gz | en | 0.932588 | 2,724 | 2.59375 | 3 |
In 2006, Montana Gov. Brian Schweitzer told state officials and private companies that he no longer wanted the Rocky Mountain region in the Upper Midwest to be the great American supercomputing desert.
Schweitzer saw high tech as a way of revitalizing the region’s economy, attracting businesses and creating jobs, and doing so while minimizing the harm to the area’s environment.
Three years later, the Rocky Mountain Supercomputing Centers was opened in Butte, Mont. The nonprofit entity was created through the work of both the state government and private corporations like IBM, with the aim of giving anyone of any size-from private businesses to public researchers to government agencies-that needs it access to supercomputing capabilities that they otherwise may not have gotten.
And as they enter 2010, officials with the RMSC are expecting the number of organizations looking to take advantage of this to grow.
“This is the complete democratization of this kind of capability,” Earl Dodd, strategist with IBM’s Deep Computing business and executive director of the RMSC, said in an interview. “It is available to any kind of business. We don’t expect to replace what everyone else has, just supplement it.”
Peter Ffoulkes, vice president of marketing for Adaptive Computing, a partner in the creation of the “Big Sky” supercomputer, agreed.
“What this is really about is using HPC [high-performance computing] for businesses to make [the region] competitive,” Ffoulkes said in an interview. “It’s using supercomputing for economic development.”
The Rocky Mountain region until now had been behind the rest of the country is jumping on the high-tech bandwagon. A study conducted by the Ewing Marion Kauffman Foundation and the ITIF (Information Technology and Innovation Foundation) in 2007 found that of the five states in the region, only Colorado was among the top 10 in adapting to the new IT-driven economy.
Colorado was ninth, followed by Utah in 12th. After that, Idaho was 24th, and Montana and Wyoming were 42nd and 43rd, respectively.
During a presentation in June 2009, when the RMSC opened, Alex Philp, president and chairman of the center, showed a map of the United States. The region was circled, and labeled as “The Great American Supercomputing Desert.”
The Big Sky supercomputer will change all that, Philp said in an interview. The affordable, distributed nature of the system will bring supercomputing to businesses, research facilities and educational institutions throughout the region, and even beyond, he said. It will attract businesses and bring jobs to the Rocky Mountain states.
“You don’t have to be part of a larger corporation [to take advantage of supercomputing capabilities],” Philp said. “We don’t have to limit our lives by black lines, by borders on a map.”
The state of Montana teamed up with IBM to create Big Sky, an IBM 1350 array that comprises a host of System x and System p servers. IBM has invested more than $3 million into the initiative, and is partnering with a number of other tech vendors, including Adaptive Computing, Microsoft, NextIO-which sells virtual networking technology-and Nice Systems, which offers solutions and services that help analyze data from telephony, e-mail, the Internet, radio and video.
The system, which primarily is subsidized by Montana taxpayers, runs on Microsoft’s Windows HPC Server 2008, giving customers the familiar Windows experience in their supercomputing environment.
It also offers a cooling exchange system that incorporates IBM’s Cool Blue technology and uses the heat generated by the cluster to help heat the offices on the building’s third, fourth and fifth floors, saving about $40,000 in the cost of heating the building.
It currently offers 3.8 teraflops (trillions of floating point operations per second), and will grow to 25 to 50 teraflops, according to IBM. If demand exceeds Big Sky’s capacity, the RMSC can shift workloads to IBM’s Computing on Demand cloud computing center.
The computing resources within Big Sky can be dynamically allocated based on workload demands, so customers only pay for the resources they use, according to Philp. The goal was to create a powerful computing environment that is easy to manage, flexible and dynamic, and is accessible to anyone who needs it, said IBM’s Dodd.
“It is the complete democratization of this kind of capability,” Dodd said. “It is available to any kind of business.”
Over the past few months, Big Sky has attracted a wide range of customers, according to IBM officials. For example, researchers at the U.S. Department of Agriculture are using the system to help manage the global food supply, while professors and researchers from the University of Montana and Montana State University are conducting research into such areas as astrophysics, climate and erosion modeling, metallurgy and intelligent transportation.
In addition, one company is mapping the placement of wind farms, another is using the computer to help mitigate risks associated with bioreactor yields, and another-Scalable Analytics-is analyzing real-time stock feeds.
An Indian reservation is using Big Sky to study carbon management on tribal lands.
Such examples are giving life to Gov. Schweitzer’s vision of making the Butte supercomputing center the engine that is going to revitalize the Rocky Mountain region.
“He’s not the kind of guy who wants to be second-best in anything,” Philp said. | <urn:uuid:ae44368e-0080-48de-a26c-64f5c8d0df01> | CC-MAIN-2022-40 | https://www.eweek.com/networking/ibm-microsoft-help-create-montana-supercomputer/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00616.warc.gz | en | 0.938876 | 1,194 | 2.625 | 3 |
The Northrop Grumman-built Cygnus spacecraft has left the International Space Station after delivering research, supplies and equipment to astronauts onboard the orbiting laboratory, and is now scheduled to spend four weeks in orbit to conduct the second phase of its ISS resupply mission.
Among the studies S.S. Ellison Onizuka will perform as part of the NG-16 mission is the Kentucky Re-Entry Probe Experiment, which seeks to showcase the capabilities of a thermal protective system to shield a vessel and its cargo as it passes through the atmosphere, Northrop said Saturday.
“The Cygnus system has evolved from being just a cargo delivery service to a high performing science platform. We continue to develop these capabilities to include the installation of environmental control systems and other upgrades to support the lunar orbiting Habitation and Logistics Outpost or HALO,” commented Steve Krein, vice president of civil and commercial satellites at Northrop.
Cygnus recorded a three-month stay at the space laboratory and will re-enter Earth’s atmosphere with over 7,500 pounds of disposable cargo.
The spacecraft, which was launched in August as part of NASA’s Commercial Resupply Services program, and the waste it carries are on track for a destructive re-entry following the mission’s second phase. | <urn:uuid:7cb2d077-f939-4cea-8285-b6c0f8d5fe7e> | CC-MAIN-2022-40 | https://blog.executivebiz.com/2021/11/northrop-built-cygnus-departs-iss-to-conduct-secondary-mission-objectives/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00616.warc.gz | en | 0.943078 | 269 | 2.953125 | 3 |
How Apple, Google, and Microsoft are Raising Young Coders
By James Gonzales
(Aragon Research) – The introduction of easily understood coding languages by major players has given children the opportunity to quickly learn the basics of coding. Major Tech Titans such as Apple, Google, and Microsoft are in a race to make coding easier for everyone and young people are some of the fastest adopters. This blog analyzes the market implications of young children learning to code.
Minecraft: Education Edition, Scratch, and Swift Make Their Mark
Google launched Scratch in 2013 and ScratchJr in 2014; Microsoft launched Minecraft Education Edition in 2016; and Apple launched Swift Language in 2014 and Swift Playgrounds in June of 2017. All offer an introduction to code that allows inspired kids to create code to do everything from playing computer games (Minecraft Education Edition); to creating interactive stories and games (ScratchJr); to building and controlling robotics, flying drones, or even playing musical instruments (Swift Playgrounds). Even very young children can learn to code. ScratchJr is aimed at children ages 5-7, and Apple’s new Swift Playgrounds takes Swift and turns it into a fun, interactive app for the iPad. Over the course of the last four years, the user base of Swift and Scratch combined has reached a little over 24 million, and Minecraft Education Edition has seen 100 million copies sold (note: Minecraft used to be free and now it is $5 per user per year).
The launch of these languages presents today’s children with a unique opportunity that previous generations have not had; specifically, children can learn to code quickly and thus begin to understand the conceptual and practical applications of coding. The learning potential is huge, and this trend of children learning to code will have an indelible impact on enterprises in the future.
Coding Begins in the Classroom
As we have seen in recent history, literacy rates have increased exponentially, allowing for a well-educated public, a larger supply of skilled labor, and what seems to be endless innovation. Easily understood and applied coding languages are the next step. The language that dictates the digital world will soon be a common language among people. Swift, Scratch, and Minecraft Education Edition are being implemented in classrooms around the globe and can be accessed by anyone with a computer. So how will this affect the marketplace? For one, it will lower the barriers to entry.
A common problem faced by entrepreneurs who want to enter tech is their lack of knowledge with code. They either need to develop a trustworthy team who knows code, outsource labor (which can be expensive), or begin to learn on their own. But understanding code at a level high enough to compete in the marketplace can be extremely time consuming and for many, it requires formal instruction. So, these new introductions to code are an ideal tool for those looking to grasp the fundamentals of code.
But I think the real focus is on the younger generation, and the amount of talent and creativity that will be tapped into with early exposure. With the continual education of code starting at a young age, made possible by intelligible coding languages, the three barriers to entry will substantially lessen.
What A Lower Barrier to Entry Means for Businesses
The first implication is a renaissance of new app development. During WWDC 2017, Tim Cook gave a shout out to Yuma Soerianto, a 10-year-old from Australia, who already has 5 apps in the app store; I believe this will soon be commonplace among those who have the opportunity in elementary school to begin learning code.
In addition to people easily being able to enter the market of app development, due to their early introduction to code, the supply of programmers should increase. Typically, we would expect lower wages as the supply of labor increases while demand is constant. The result could be businesses innovating at a faster rate because their capital can be used to hire a larger quantity of programmers and start-ups won’t have to raise as much money to have a competent team who can code.
We’re entering into a new era where coding will soon become a common skill among the average knowledge worker, especially the young knowledge worker. The economic impacts will only be beneficial for consumers and enterprises, and the potential for innovation will accelerate. | <urn:uuid:1b9c06ce-e318-42da-8def-4adee2eb07b9> | CC-MAIN-2022-40 | https://aragonresearch.com/how-apple-google-and-microsoft-are-building-a-new-generation-of-young-coders/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00016.warc.gz | en | 0.940335 | 880 | 3.390625 | 3 |
Bed bug heat treatment is a chemical-free and safer method of exterminating bed bugs. Pesticides are known to have a massive negative impact on the environment and on human health. They were used for many generations, and now, much as resistance to antibiotics can develop, bed bugs have become resistant to these chemicals.
Notwithstanding resistance issues, it was clear that heat treatment of bed bugs in bedrooms, furniture, and mattresses had to be better than just using toxic insecticides.
How to Identify Bed Bugs?
Bed bug infestation is hard to detect in its early stages. Identifying these crawlers can take quite some time. They lurk in small holes and corners in beds, sofas, walls, or any upholstery. They're very small and can move quite quickly. These insects are only active at night, making them more difficult to identify. They are reddish-brown, long and flat in shape, and about a quarter of an inch long.
Bed bugs also love to leave traces behind so watch out for these signs:
- Stains that are red or rusty in color on your sheets and mattress
- Dark spots on the bed
- Noticing bites on your skin from bugs
What Brought Bed Bugs in the Building or Home?
It’s a common belief that only messy and dirty homes are prone to bed bugs and any other infestations. But the truth is, even the cleanest homes and spaces are still prone to bed bugs. Bed bugs are usually brought by the occupants from hotels or second-hand furniture.
Hotels are the most common source of bed bugs. Bed bugs from a budget-friendly hotel or inn attach themselves to your clothes or luggage and travel home with you. If you love to buy antique or second-hand furniture, bed bugs are commonly found there too. These insects feed on human blood, so they stick to places where they could easily have access to food.
Killing Bed Bugs with Heat
Everyone can agree that having bed bugs in our homes is not ideal. These pests have developed defense mechanisms against extermination methods. They have even developed a way to produce offspring that are resistant to insect-killing chemicals.
The good thing is that bed bug heat treatment is not chemical-based and can get past this defense mechanism. To prove this, here are some reasons why you should get heat treatment instead of pesticides:
Can Run, Can’t Hide
Bed bugs are fast-moving pests that easily sense threats. Once they detect a threat, they immediately run to their nests or hiding places. Usually, their hiding places are inside wall voids or deep inside furniture foam or upholstery. The good thing is that heat treatment, at a raging temperature of 120°F, can seep into the deepest parts of these hiding places. Heat radiates and spreads. In heat treatment, bed bugs have no safe place to hide.
Sense of Relief
Back when bed bugs hadn't yet developed resistance, traditional methods using pesticides could still bring a sense of relief for homeowners. Today, conventional methods can't exterminate bed bugs; they only cause them to hide and hibernate for months and attack again. Heat treatment is now the only method that brings real relief when fighting bed bugs. It can kill even the eggs in their nests. Enjoying a whole night's sleep without having to worry about insect bites is bliss.
Chemical-Resistant But Not Heat-Resistant
It is easy to figure out how bed bugs develop immunity from chemicals. In the life cycle of a bed bug, it will shed skin five times before it reaches the adult stage. Being exposed to chemicals during the growth process allows them to develop a cuticle that protects them from chemicals. This is the reason why pest control services can’t eradicate them with pesticides anymore.
While they might be chemical resistant, there is no such thing as heat-resistant bugs. All bugs are toasted when exposed to high heat.
How Is a Bed Bug Heat Treatment Done?
To conduct a bed bug heat treatment, pest control service providers use specially designed equipment to blow hot air into the target area. The temperature must reach at least 118°F (48°C), and up to 145°F (62.7°C), and be held for up to 90 minutes to kill bed bugs and their eggs. The treatment usually takes six to nine hours, depending on the severity of the situation, to ensure that the whole area is treated well. The heat must reach the deepest parts of beds and holes within walls.
It is important to take note that pets and heat-sensitive items must be removed. This matter must be discussed well with the service provider to determine other ways of treating them.
Bed bug heat treatment is proven effective if done right. However, heat treatments do not offer permanent protection. Buildings and homes are still prone to being infested again. Bed bugs will not nest within a structure or house that is not habitable for them, but if prevention steps are not taken, another treatment could well be on the way.
Advantages of Using Heat Treatment Instead of Pesticides
The process is highly effective and all-natural. No negative impact to the ecosystem, no release of toxic gasses, no leftover residuals.
It only takes an hour for the whole process to take effect: the treatment kills the bed bugs as soon as it reaches the required temperature (120°F to 140°F). Bed bug heat treatment will not force homeowners out for even one day, or make them wait a week to be sure the bugs are dead and that no chemical residue remains. Thus, there is no need to waste money on week-long hotel accommodation. Aside from that, bed bug heat treatment does not require multiple follow-up visits.
Heat treatments kill bed bugs in all stages of development, including eggs. While chemicals only keep the bed bugs asleep, bed bug’s eggs can survive chemical exposure.
Bed bug heat treatment eradicates everything in a building or home. It radiates in all corners, all exposed and hidden places. There’s no place heat can’t reach. And there is no need to worry about the household stuff getting contaminated and damaged. Just make sure to remove plastic items or anything flammable.
Areas surrounding the target space don’t have to be evacuated during the treatment process—continuous operations and hassle-free for the neighboring establishments.
Are There Things To Consider?
Price – Things as good as heat treatment come with a good price; heat treatment might cost around $500. It is understandable to look for something effective and affordable. However, if a service provider offers the treatment for $99, that is not a good sign.
Guarantee – There is always a 30-day guarantee after a treatment. Unfortunately, bed bugs can reproduce even with just one left behind. Make sure to get a provider with the capacity and equipment to detect and monitor all the bed bugs in your area. Undetected bugs might bring back the previous population in a few months.
Expertise – Going through a company background check and review must be something to consider. It’s easy to procure equipment and learn how to use them. But expertise goes beyond the ability to operate the equipment. Credibility should be thoroughly checked. The more knowledgeable the company is, the better trained and better equipped their technicians are.
AKCP Wireless Temperature Sensors
Bed bug heat treatment is a process of bringing the room temperature up to 120°F and holding it there for 90 minutes. It is important to maintain the required temperature level for that amount of time. This can be achieved with the help of sensors that monitor the temperature within the target area during the process (a simple monitoring sketch follows the specifications below). AKCP Wireless Temperature Sensors are battery-powered sensors placed in the treatment area. These sensors can be buried within the mattress to monitor internal temperatures. The sensors communicate with a wireless tunnel gateway and can be accessed remotely through a user interface.
- 4x AA Battery powered, with 10-year life*
- USB 5VDC external power.
- 12VDC external power.
- Custom sensor cable length up to 15ft to position sensor in an optimal location.
- NIST2 dual-sensor calibration integrity check.
- NIST3 triple sensor calibration integrity checks with failover.
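As a rough illustration of the monitoring logic only (this is not AKCP's actual API; the sensor-reading callable below is a placeholder), a script polling such a gateway might verify that the lethal temperature is held long enough:

import time

TARGET_F = 120.0      # minimum lethal temperature cited above
HOLD_MINUTES = 90     # how long the target must be sustained

def monitor(read_temp_f, poll_seconds=60):
    # read_temp_f is a stand-in for whatever call returns the current sensor reading.
    held_minutes = 0.0
    while held_minutes < HOLD_MINUTES:
        temperature = read_temp_f()
        if temperature >= TARGET_F:
            held_minutes += poll_seconds / 60.0
        else:
            held_minutes = 0.0  # a dip below target restarts the hold timer
            print(f"Warning: {temperature:.1f} F is below target")
        time.sleep(poll_seconds)
    print("Target temperature held for the required time.")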
https://www.monnit.com/applications/pest-remediation/ https://www.greentechheat.com/about-wired-temperature-probing-during-bed-bug-treatments.php https://www.griffinpest.com/blog/bed-bug-heat-treatment-faq/ | <urn:uuid:f478f5b4-6131-4832-a52a-ff6ebd0ea02a> | CC-MAIN-2022-40 | https://www.akcp.com/blog/bed-bug-heat-treatment-temperature-monitoring/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00016.warc.gz | en | 0.930332 | 1,910 | 2.8125 | 3 |
This past week I had to script several TI Processes to make allocations as flexible as possible. This was done so that the users could pick and choose what they wanted to allocate like it was a buffet at Golden Corral. While doing the usual creating of subsets on the fly in the Prolog tab, so that my source data is dynamically created by their choices, I realized they needed to pick a consolidated element (i.e., all Departments) but also wanted to exclude a Department or two from that initial selection.
I came across the TI function called SubsetGetElementName and thought I found the answer to my problem. The syntax in the User Guide shows it as SubsetGetElementName(DimName, SubsetName, ElementIndex);. Well that looked simple enough and I then used DIMIX( DimName, ElementName) in place of the ElementIndex and thought all was good. I was wrong.
Apparently, the ElementIndex is the position of the element within the subset and not the element’s index in the dimension. So my solution was to add extra coding to the Prolog tab.
Here's the basic gist of what I did (a rough TI sketch follows the list):
- Use the function AttrInsert to create a numeric attribute to house the subset position number for each element added to the subset.
- Create a subset of only the N level elements from the C level element input in a parameter cube for the department selection (ie Total Dept). This requires looping through its children and inserting them into a subset.
- Next, While through the subset and use the SubsetGetElementName and AttrPutN to populate your numeric attribute with the position number of each element in the subset.
- You can now loop through your elements that house your exclusion elements and use the SubsetElementDelete to remove those elements from the subset, since you are able to retrieve the element’s position number now that it is a numeric attribute of the element itself.
- Finally, on the Epilog tab, I got rid of the numeric attribute that was housing the elements’ positions in the subset using the AttrDelete function, thus allowing for it to be dynamic and update the element position the next time the process is run.
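Here is a rough TurboIntegrator sketch of those steps. The variable names, the attribute name, and the single-exclusion handling are illustrative only and would need adapting to your own dimensions, subsets, and parameter cube.

# Prolog (sketch): store each element's subset position in a numeric attribute
AttrInsert(sDim, '', 'SubsetPosition', 'N');
iSize = SubsetGetSize(sDim, sSub);
i = 1;
While(i <= iSize);
  sEl = SubsetGetElementName(sDim, sSub, i);
  AttrPutN(i, sDim, sEl, 'SubsetPosition');
  i = i + 1;
End;

# Remove an excluded department by looking up its stored subset position
iPos = AttrN(sDim, sExcludeEl, 'SubsetPosition');
If(iPos > 0);
  SubsetElementDelete(sDim, sSub, iPos);
EndIf;
# Positions shift after a delete, so re-run the numbering loop (or delete from the
# highest position downward) before removing any further exclusions.

# Epilog (sketch): drop the attribute so positions are rebuilt fresh on the next run
AttrDelete(sDim, 'SubsetPosition');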
Hope that helps you! If you have any questions about this article or TI Processes in general, do not hesitate to contact Lodestar Solutions at [email protected] where our Analytics Coaches can assist you. | <urn:uuid:9a224af9-556e-4e45-8507-d2d960908a82> | CC-MAIN-2022-40 | https://lodestarsolutions.com/tag/element/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00016.warc.gz | en | 0.920925 | 523 | 2.59375 | 3 |
An API (application programming interface) is a set of protocols through which two systems communicate with one another. These systems can be computers, platforms, programs, or apps.
An API contains information about parameters, return values, and more that provide developers with code libraries and standards, so they don’t have to come up with their own when extending the functionality of two or more systems.
An API makes it much easier to transfer, change, and sift through large amounts of data, making it essential for streamlining the management and exchange of information in today’s enterprises.
How to Use an API: Best Practices
Below are some ways you can experiment with asking for and posting information with APIs.
Get an API key
In order to use an API, you need an API key. Most APIs prompt you to log in to verify your identity — for example, signing in with your Google account. After verifying your identity, the API gives you a string of unique letters and numbers, which is your API key.
Find an HTTP client online
Find an HTTP client online, like Apigee or Postman. These are ready-made, often free tools that help you structure your requests to APIs. They contain dozens of APIs whose endpoints are there to play with, such as Instagram and its media search endpoint.
Build a URL
You can explore data from an API by building a URL from existing API documentation. This will require some familiarity with the syntax of the API’s documentation in order to structure your request to it.
Read more on TechnologyAdvice.com: 5 Capabilities an API Management Tool Should Have
How Does an API Work?
To understand how APIs work, there are three main players:
- User: Person who initiates a request for information
- Client: Computer that sends the request to the server
- Server: Computers that house data and respond to the client request
The client routes the request to the server and, if the user is authorized for access, the client returns the desired information to the user.
Programmers publish documentation about the data stored within a server, such as which endpoints hold which pieces of information.
An outside user who wants to access a piece of information on that server sends a call through a client, with the data typically formatted in a language like XML or JSON. The call uses actions like the following:
- GET to receive information from the server
- PUT to add or change existing information in the server
- POST to enter new information into the server
- DELETE to remove information from the server
Not all APIs use all of these verbs. SOAP APIs, for instance, are POST only. Fortunately, for those with limited coding knowledge, programs can run these searches.
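As a concrete illustration, here is a minimal Python sketch using the popular requests library; the base URL, endpoint, parameters, and token are placeholders invented for this example, not a real API.

import requests

BASE_URL = "https://api.example.com/v1"                 # hypothetical endpoint
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}      # the key obtained earlier

# GET: ask the server for information
response = requests.get(f"{BASE_URL}/customers", params={"city": "Austin"}, headers=HEADERS)
response.raise_for_status()
customers = response.json()     # most modern APIs answer in JSON

# POST: enter new information into the server
created = requests.post(f"{BASE_URL}/customers",
                        json={"name": "Jane Doe", "city": "Austin"},
                        headers=HEADERS)
print(created.status_code, created.json())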
Read more: How Do APIs Work?
Types of APIs
There are several types of APIs, REST being the most common. They each have their own architecture and use cases, and don’t necessarily share the same language.
Representational State Transfer APIs are the most commonly used today because they focus on end-user readability and ease of consumption.
A user looking for specific information makes a call or request through a hypertext transfer protocol (HTTP) client, such as https://www.google.com/maps, to search for information. The client then sends the call or request to the Google Maps server.
If the end user is authorized to receive the information, the client receives the information through HTTP, usually in XML or JSON. However, the information usually gets to the end user in a readable format (not XML or JSON) because a developer makes it more readable to the end user.
REST APIs simplify and standardize communication between computers. Developers can change up the display on the client-facing side, while working on data storage and manipulation on the server-side of an app. REST APIs are also scalable, so they can accommodate growing data sets and their complexity.
Common examples of REST APIs include Google Maps, Facebook’s Graph API, and Twitter.
A GraphQL API has a descriptor for any type of data and can be used with a variety of database schemas. Unlike REST APIs, GraphQL APIs are organized according to types and fields instead of endpoints. This allows users to ask only for what's possible and quickly access full data capabilities through a single endpoint, even if the mobile network connection is slow.
A GraphQL API is flexible and can evolve by allowing new fields and types to be added without affecting existing queries, providing apps with continuous access to new features.
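In practice, a GraphQL request is usually an HTTP POST of a query document to that single endpoint. A minimal sketch follows; the endpoint and schema are invented for illustration.

import requests

query = """
query ($id: ID!) {
  user(id: $id) {
    name
    posts { title }   # ask only for the fields you actually need
  }
}
"""

response = requests.post(
    "https://api.example.com/graphql",                  # one endpoint for every query
    json={"query": query, "variables": {"id": "42"}},
)
print(response.json()["data"])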
gRPC stands for Google Remote Procedure Call and is an open-source data exchange technology. It allows client applications to communicate with service endpoints over Google Cloud Platform products.
Simple Object Access Protocol (SOAP) is a standard communication protocol system that allows processes using different operating systems, such as Linux and Windows, to communicate via HTTP and XML. SOAP-based APIs are designed to create, recover, update, and delete data, so they only work with POST HTTP request methods. A legacy protocol, SOAP is used in enterprise applications today where security is prioritized over performance.
A webhook API, also referred to as a web callback or HTTP push API, is a type of API that remotely and automatically delivers real-time data to users who subscribe to receive real-time updates. Apps that typically use a webhook API include instant messaging apps or web forms, such as Google Forms.
A WebSocket API establishes a continuous bi-directional flow of communication between the client and the server without needing to set up a new connection each time a message is sent. A user can send messages to a server through a WebSocket API and get event-driven responses. Apps that contain social feeds or financial tickers would typically use a WebSocket API.
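A minimal client sketch using Python's third-party websockets library; the feed URL and subscription message are hypothetical.

import asyncio
import websockets

async def follow_feed(url):
    # One connection is opened and then reused for the whole conversation.
    async with websockets.connect(url) as ws:
        await ws.send('{"action": "subscribe", "channel": "prices"}')
        async for message in ws:   # event-driven: messages arrive as the server pushes them
            print(message)

asyncio.run(follow_feed("wss://feed.example.com/stream"))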
Read more on TechnologyAdvice.com about managing APIs: The Best API Management Software and Tools
Now That You Know How to Use an API
APIs open up and streamline gateways between new and existing platforms. They provide standardized protocols that make it easier for companies to integrate their applications and services across various platforms. So, for instance, when you want to get notified about changes to a Google Doc in Slack, an API sets up a connection between Slack and your Google suite to allow Slack to receive and answer requests from Google.
You don’t have to be a developer to make use of APIs in your business, but establishing more complicated connections between enterprise systems will likely require specific development expertise. | <urn:uuid:cba12b43-ae1d-4e02-a305-e161f2dd1c8b> | CC-MAIN-2022-40 | https://www.cioinsight.com/it-strategy/how-to-use-api/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00016.warc.gz | en | 0.906915 | 1,359 | 3.78125 | 4 |
For many businesses, keeping computers out of harm's way is a full-time job. IT departments spend increasing amounts of resources keeping out the bad stuff or finding and removing it when malware does slip in from careless users or sloppy adherence to best practices. Viruses, spyware, Trojans and many more unwanted programs can cause serious damage to a computer, or an entire network.
The most common prevention method for dealing with malware is the process known as “blacklisting.” Antivirus and antispyware applications, armed with signature-matching databases and resource-hungry scanning engines, look for unwanted programs and remove them from memory and the hard drive when — and if — they're detected.
However, as intrusive software deployment becomes more sophisticated and more widespread, some security vendors are promoting a change in tactics. Why wait for a bad program to run at all, they argue. Instead, a technique known as “whitelisting” only permits approved software to install and run. Attempts to run products that are not on the control list lock down the computer.
“Blocking the bad just doesn't work anymore. That's the old model under blacklisting. Whitelisting flips upside down the problem and only lets run what is listed as approved,” Brian Hazzard, director of product management for security firm Bit9, told TechNewsWorld.
No Universal Color
The earliest form of whitelisting was used in firewalls. The firewall on an enterprise network served as a gatekeeper, loaded with a list of approved programs. Even some consumer-grade Internet security suites include a firewall component with a whitelist feature for programs seeking outgoing Internet access.
The white-over-black methodology, in theory, means that if only approved products can run, computer users can send their system-slowing antivirus and antispyware products to the trash bin. However, most proponents of whitelisting do not recommend actually doing that. Naturally, traditional security software vendors also question the wisdom of trashing other security products, suggesting that not using antivirus and antispyware apps is much like surfing the Web without a firewall for safety.
Different whitelisting products use a variety of strategies to block executable files from running. Some whitelisting products provide alternatives to total system lockdown if the whitelist is violated. So vendors are developing their own shades of white.
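The mechanics differ from product to product, but a common building block is comparing a cryptographic hash of each executable against an approved list before it may run. A simplified, hypothetical sketch of that check (the digests and file names are placeholders):

import hashlib

# Hashes of approved executables; in a real product this list would come from a
# managed repository such as a corporate gold image or a vendor software registry.
APPROVED_HASHES = {
    "6a8f0c0e...d41",   # accounting application build 4.2 (placeholder digest)
    "9c01bb2a...77b",   # approved browser build (placeholder digest)
}

def may_execute(path):
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in APPROVED_HASHES   # anything unknown is blocked by default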
“Whitelisting is not the Holy Grail of computer security that vendors preach. It is not bulletproof. The malware issue doesn't go away. Whitelisting limits the access curve, though, so it does help,” Dirk Morris, CTO at network security software maker Untangle, told TechNewsWorld.
The approach Bit9 takes with Parity offers enterprise users the ability to automatically whitelist applications and devices. All other applications, including malware and unauthorized software, will not execute on endpoints.
Most businesses have a good idea about what software its workers need.So Bit 9 developed an adaptive whitelist strategy.
“We provide a two-part process. One is the Global Software Registry.The other is the Automatic Software Acceptance done through ourrepository,” said Hazzard.
The proprietary Global Software Registry is an online index of over6 billion files. This list contains over 10 million uniqueapplications. The registry acts as a reference library for ITadministrators building their whitelists.
Security appliance vendor CoreTrace puts a twiston the whitelist approach. CoreTrace’s Bouncer acts much like asecurity heavy at the door of a nightclub. Those not on the list don’t getin at all. Enterprise customers buy the appliance from CoreTrace and installit on their end. An embedded code on each computer talks to theappliance.
Bouncer enables IT departments to predefine multiple sources. Users can safely install applications and have them automatically added to the whitelist without any further IT involvement required.
Called “Trusted Change,” Bouncer simultaneously stops bad applications and allows users to do their own installation of known safe programs. This approach can significantly reduce a company’s total cost of ownership for every desktop, laptop or server covered, according to the company.
“We designed an infrastructure under the hard drive that makes it unspoofable,” Toney Jennings, CEO of CoreTrace, told TechNewsWorld. “Traditionally, whitelisting’s strength — system lockdown — is its chief weakness. Our solution is to avoid the lockdown response by letting IT specify where users can get new applications. This trusted source is a very different paradigm. It requires a one-time setup. The change is then transparent.”
The Bouncer software sits in the kernel space of the endpoint computers, much like a software driver. This is a very small piece of code that does not impact resources, explained Jennings.
‘KIS’ Malware Goodbye
Software security vendor Kaspersky offers both blacklisting and whitelisting for consumers in one package. Kaspersky Internet Security 2009, released last August, uses Bit9’s Global Software Registry ratings and adds its own customer information to enhance the whitelist.
“We still use blacklisting used in current-generation antivirus and antimalware products and add the next-generation whitelisting technology. We are the only ones doing both approaches in one product,” Jeff Aliber, senior director of product marketing and management at Kaspersky Lab Americas, told TechNewsWorld.
Kaspersky sends user submissions of suspicious software to its virus analysts. Confirmed rogue code is added to Kaspersky’s urgent detection system and sent to users via ongoing hourly updates.
“The user has protections sitting on the computer plus real-time cloud updating. It’s sort of a Web 2.0 mash-up,” said Aliber.
Not all enterprises and small businesses have been positively rushing to adopt whitelisting, according to Untangle’s Morris. Some view it as too restrictive.
About three years ago, as spyware became more prominent, Untangle thought the concept of locking down machines — which is what whitelisting does — would be the ideal business solution. But the company hasn’t seen widespread adoption.
“We found that IT sees whitelisting as too much of a pain to lock down a machine and give the approval authority to one person. That’s the same response that SMBs have to it. For many businesses, it presents too much of a productivity loss in maintaining it,” he said.
Bit9’s Parity product costs US$40 per endpoint, scaled for volume.
CoreTrace’s Bouncer is priced per seat for a perpetual license. The company did not provide the dollar amount. CoreTrace may add a Software as a Service offering in the future.
Kaspersky’s Internet Security 2009 costs consumers $79.95 for three user licenses. The company plans to offer an enterprise product in 2009.
Encryption is used to protect confidentiality. But what role should it play within your operating systems for protecting file systems?
The answer often is, "it depends."
A laptop or detachable media such as USB-connected external disks and thumbdrives could easily be stolen or lost. Especially with smaller objects, you may not know which happened. Did the USB memory stick fall out of my pocket to probably be crushed underfoot on the subway staircase?
If a laptop was stolen, maybe it was part of a complex plot precisely targeting the data stored there. Or maybe it was simply because a laptop can have a good resale value for its size and weight.
Regardless of why and how the object disappears, whole-disk encryption makes sense. Microsoft's BitLocker is one solution. You can't even boot the Windows operating system until you enter a passphrase.
That's overkill. I don't care if someone gets a copy of the operating system. There are millions of copies of it out there already. All I need to protect are my user files.
Protecting User Data
On a UNIX-family operating system, Jane User's data is stored in /home/jane. We can boot the operating system with /home as an empty directory. Then when Jane logs in, PAM (Pluggable Authentication Modules) rules use pam_mount to ask her for a passphrase that is used to derive an encryption key. The kernel loads that key into RAM as part of its LUKS, or Linux Unified Key Setup, mechanism.
Jane's home directory and all her files and subdirectories are stored in a separate file system. That file system is stored inside one very large file, and that file has been encrypted with AES-256-CBC.
If it's really Jane and she typed her encryption passphrase correctly, the kernel can access the encrypted file system image. The file system image on the disk stays encrypted. The kernel uses the dm-crypt module to decrypt and encrypt all I/O on that file system as it happens.

The kernel mounts the file system image as /home/jane and passes all I/O through the dm-crypt module using the key specific to this one file system.
If you're doing this on a multi-user system, maybe on the server, then Joe logs in. PAM asks him for his passphrase, and the kernel loads a new key into LUKS. The kernel uses Joe's key for all I/O on the encrypted file system image it mounts as /home/joe.
When one of these users logs out, the kernel flushes all pending I/O. This makes their personal file system image an updated and entirely encrypted large file. Then it unmounts their home directory and overwrites the key stored in kernel memory. If they log back in immediately, they will have to provide that passphrase again.
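The essential idea is that a key derived from the user's passphrase lives only in kernel memory while the image stored on disk stays encrypted. Here is a conceptual sketch of that idea in Python, assuming the third-party cryptography package; it is only an illustration, since LUKS and dm-crypt use their own on-disk header format and block-level cipher modes rather than the simplified construction shown.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_key(passphrase: str, salt: bytes) -> bytes:
    """Stretch a passphrase into a 256-bit key; only the salt is stored on disk."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
    return kdf.derive(passphrase.encode())

def encrypt_image(passphrase: str, plaintext: bytes) -> bytes:
    """Encrypt a blob (standing in for the file system image) with a passphrase-derived key."""
    salt, nonce = os.urandom(16), os.urandom(12)
    key = derive_key(passphrase, salt)
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return salt + nonce + ciphertext   # everything written to disk is opaque without the passphrase

def decrypt_image(passphrase: str, blob: bytes) -> bytes:
    """Recover the plaintext only if the same passphrase is supplied."""
    salt, nonce, ciphertext = blob[:16], blob[16:28], blob[28:]
    key = derive_key(passphrase, salt)
    return AESGCM(key).decrypt(nonce, ciphertext, None)
```

In the real system the kernel keeps the derived key in RAM and transparently encrypts or decrypts each block as it is read or written, rather than processing the whole image at once, and it wipes the key when the user logs out.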
Putting That Together
You learn how to do most of the needed pieces in Learning Tree's Linux server administration course.
PAM: You will need to use the pam_mount.so PAM module to get the user's passphrase to the kernel; we use PAM in some exercises in the course.
Automounting: You will also need to use automounting for the home directories, and we have an exercise on that.
Logical Volume Management: LVM will probably be the best way to manage the encrypted file system images on multi-user systems. The course has several LVM exercises.
This makes sense on a portable single-user system like a laptop. In that case, put /home/username on a dedicated partition or logical volume using all the available storage.

Simply back up your user data to external media and do a fresh installation. Tell the installer that you want to customize the storage, make /home an independent file system, and ask for it to be encrypted.

Create your user account, log in so that your home directory is mounted through LUKS and dm-crypt, and restore your data into the encrypted file system.
Is This Really Useful On Servers?
Not really. That is, unless you worry that someone is going to steal your server out of the rack in the data center, in which case you have a physical security problem to solve!
But it's cryptography! Why isn't it helpful?
This is a good illustration of how cryptography can be helpful, but it isn't a magical security salve that makes everything better.
Let's say we have users Jane and Joe set up as in the story above. Both are logged in, so the directories jane and joe are both mounted under /home.

Yes, the data is encrypted in the two large file system images. But access to the tree of directories and files must go through the kernel, which is using dm-crypt with the corresponding keys. File system I/O is completely transparent, even to users with bad intentions.
Whole-disk or file system level encryption prevents physical theft from being data theft, but it does not do access control. File ownership and permissions are the only things keeping Joe and Jane from inappropriately accessing each other's data.
What Alternatives Exist?
A much simpler approach involves encryption through an application. Check back next time to learn how to do that! | <urn:uuid:81a7f20e-6a5f-4824-a8e7-33dd0a46b49e> | CC-MAIN-2022-40 | https://www.learningtree.ca/blog/file-system-encryption-when-is-it-worthwhile/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00016.warc.gz | en | 0.940027 | 1,117 | 3.140625 | 3 |
A pair of researchers with the Oak Ridge National Laboratory, one of the U.S. Department of Energy’s premier, big-picture research facilities, write in a new paper that new nanomaterials will make edge computing as momentous as cloud computing originally was. These materials, along with neuromorphic computing methods, could enable “nano-edge computing devices.”
Edge computing is a strategic play for companies because it represents a new way of turning data into competitive information. But aside from the increasing use of artificial intelligence, it is a new combination of existing technology — computer networking and data storage.
Indeed, edge computing typically is viewed as a sort of halfway house of corporate computing, churning away close enough to the devices generating data to cut latency, and gradually developing the raw power and analytics of the cloud, although on smaller scales.
Nanomaterials now in development, along with advancing artificial intelligence, are the only way that edge can be a complete, energy-efficient, ultrahigh performance and secure network of billions of devices, according to Ali Passian and Neena Imam, the report’s authors.
Edge systems are already being overwhelmed by input from devices including cheap low-power sensors. Latency, a prime motivator behind the development of edge systems, is already creeping up, and 5G will not by itself solve that problem.
A good next step for edge computing would be ditching the inefficient materials used in computing today. The pair say that almost one-third of naturally occurring materials can carry electricity and light without resistance and backscattering. These so-called topological materials can reduce energy needs and, in the case of electricity, reduce waste heat.
“Silicon-based transistors, developed to be increasingly smaller, experience conductivity losses, leading to energy loss by generating heat. Replacing silicon-based elements with carbon nanotubes, owing to their more efficient electron transport properties, can lead to less energy requirements.”
Then there are carbon nanotubes, graphene and molybdenum disulfide, all nanoscale materials. They could be used to replace conventional transistors, leading to more efficient and speedier microchips and sensors.
More advanced artificial intelligence on the way will give edge systems the capacity to manage myriad devices while also controlling data flow.
Quantum networks and quantum computing
Quantum physics, too, can play a role in a more vital edge computing environment, according to the researchers.
Logic-defying quantum effects, although exceedingly difficult to produce and control, could let networks tackle computations that overwhelm classical machines, distribute encryption keys and entangled states between distant nodes, store data in new ways and perhaps make any covert interception of network traffic detectable.
In quantum computers, quantum bits (qubits) can be both 0 and 1 simultaneously, allowing certain tasks to be calculated faster than ever thought possible. In fact, these devices could be used to create edge-computing networks that operate more efficiently under extreme traffic loads.
Scientists have shown that quantum techniques used as part of a network could indeed move information rapidly. Quantum states have been teleported between stations roughly 870 miles apart (a classical link is still required to complete the transfer), a capability that could have an immense impact on the ability to process data on devices at ever more remote edge nodes. Indeed, nodes would be able to operate at a peer-to-peer level for quantum applications.
Nanosystems meld with edge computing
If quantum computing isn’t mind-bending enough, consider the combination of neuromorphic computing (brain-inspired computing) and nanomaterials. The report suggests that nanosystems and edge computing “may amalgamize to become an inseparable entity, where device and function interact dynamically.”
The report suggests that as sensing at the atomic and molecular levels becomes possible, a new era of nano-IoT could be upon us. “Molecular networks of billions of sensors already occur in biological systems, and this may be mimicked by nano-EC devices,” according to the authors.
The Centers for Disease Control and Prevention has determined who will get the vaccine for the H1N1 flu virus — aka “swine flu” — in the event of a shortage, but the priority groups don’t line up well with the groups most likely to die from the disease.
The reasons reflect a complex calculus of ethics that might be changing.
The H1N1 vaccine priority groups, in order, are pregnant women; people who live with or care for children younger than six months of age; healthcare and emergency medical services personnel with direct patient contact; children 6 months through 4 years of age; and children 5 through 18 years of age who have chronic medical conditions.
High-Mortality Groups Aren’t Prioritized
However, those who are most likely to die if they contract the disease are people between the ages of 25 and 49, a group that accounts for 41 percent of deaths from H1N1 so far. This age group comprises about 37 percent of the U.S. population.
Another 25 percent of known deaths from H1N1 occurred among people between the ages of 50 and 64, a group that represents about 15 percent of the U.S. population.
Two percent of H1N1 deaths occurred among children under 4 years of age, and 6 percent of H1N1 deaths so far have occurred among pregnant women. Children aged 0-4 account for about 7 percent of the U.S. population, while pregnant women make up just 1 percent.
Guard the Front Lines
Few disagree that people who work in healthcare and emergency services need to be given priority.
“The feeling has always been that you have to protect those people, or otherwise they might decide not to risk themselves for the greater good,” commented Alan Wertheimer, a senior research scholar in the clinical section of the Department of Bioethics.
“If you have a plane full of sick people, you have to save the pilot first, or nobody is going to make it,” he told TechNewsWorld.
Aside from the need to protect healthcare workers, though, there is little agreement as to where the vaccine should go first.
“I don’t think we’ve reached any kind of consensus,” said Douglas Opel, M.D., MPH, acting assistant professor at the University of Washington School of Medicine.
There was a shortage of seasonal flu vaccine in 2007 due to a factory closure, and “even though we had a huge shortage, it was remarkable that the American people didn’t have much of a problem [with the priority groups established at the time],” Opel told TechNewsWorld.
“But that was a mild flu season, and the perception was that the shot didn’t cover some of the strains most common at the time,” he continued. “Would everyone be OK with the same decisions in a case where the flu was stronger and the vaccine perceived to be more effective? I’m not so sure they would.”
New Ways to Think About Scarce Cures
The H1N1 mismatch might prompt new ways of thinking about how to prioritize scarce life-saving treatments of any sort.
“Decisions like this used to be made mostly on utilitarian grounds: the greatest good for the greatest number,” said Opel.
“Recently, some people have started to also argue that one should take into account remaining years of life left, and the quality of that life,” he said.
Those in at least one priority group — pregnant women — have not always been advised to get flu vaccines.
“My sense is that they looked at the epidemiology,” remarked Opel. “It appears that pregnant women are at a higher risk of H1N1, and they spend time around babies too young to get the vaccination, so they gave priority to vaccinating pregnant women.”
Lots of Cases, Many Mild
The ethical issues underlying distribution of H1N1 vaccine might not cause any public clashes in the near future, since authorities expect to have enough to go around. Of the many cases contracted so far, the majority have not been life-threatening or even serious.
“I have a friend who runs a summer camp, and she told me she had 40 kids sick with H1N1, but all of them were fairly mild cases,” said Wertheimer.
Under these circumstances, any second-guessing among citizens over who should receive priority for vaccination will probably be limited. | <urn:uuid:827661cf-0f11-4475-bedd-e9bd5dfad235> | CC-MAIN-2022-40 | https://www.linuxinsider.com/story/highest-mortality-groups-last-in-line-for-h1n1-vaccine-68151.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00016.warc.gz | en | 0.967846 | 948 | 2.8125 | 3 |
The United States government began issuing new electronic passports this week that include radio frequency identification technology (RFID) to store citizens’ personal information.
The U.S. State Department referred in its announcement to the use of biometric technology and “a contactless chip,” the latter a controversial device that will be embedded in each of the new passports.
At the Black Hat hacker conference in Las Vegas last month, a security consultant demonstrated a hack of such a passport and also described a relatively simple and inexpensive process for cloning one. The demonstration troubled many who have questioned the necessity for RFID technology, which transmits data wirelessly, in such personal documents.
The State Department, however, highlighted its “multi-layered” approach to protecting the new e-passports and mitigating the chances of the electronic data being “skimmed” — i.e., intercepted or stolen.
First, the government said a metallic material in the passport cover and spine will prevent skimming when the passport is not open.
Second, the e-passport relies on Basic Access Control (BAC) technology, which requires that data printed inside the passport be read to derive an access key before the chip grants access to its contents.
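Conceptually, BAC works because the access keys are derived from information printed in the passport’s machine-readable zone, so a reader that has never seen the open document cannot talk to the chip. The Python sketch below illustrates the key-seed step as commonly described in ICAO Doc 9303; the document number and dates are invented, and the full protocol goes on to derive 3DES encryption and MAC keys (with parity adjustment) from this seed, which is omitted here.

```python
import hashlib

def icao_check_digit(field: str) -> str:
    """ICAO 9303 check digit: weights 7, 3, 1; digits keep their value, A-Z map to 10-35, '<' is 0."""
    def value(c: str) -> int:
        if c.isdigit():
            return int(c)
        if c == "<":
            return 0
        return ord(c) - 55
    return str(sum(value(c) * (7, 3, 1)[i % 3] for i, c in enumerate(field)) % 10)

def bac_key_seed(doc_number: str, birth_date: str, expiry_date: str) -> bytes:
    """Derive the 16-byte BAC key seed from machine-readable-zone fields."""
    mrz_info = (doc_number + icao_check_digit(doc_number)
                + birth_date + icao_check_digit(birth_date)
                + expiry_date + icao_check_digit(expiry_date))
    return hashlib.sha1(mrz_info.encode()).digest()[:16]

# Invented example values: document number, birth date (YYMMDD), expiry date (YYMMDD).
seed = bac_key_seed("AB1234567", "740812", "120415")
```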
The U.S. also said a randomized unique identification (RUID) feature of the new e-passports will diminish the risk that its holder could be tracked.
Finally, a digital signature based on public key infrastructure (PKI) will prevent alteration or modification of the information on the chip and will allow authorities to validate and authenticate it.
“The Department of State is confident that the new e-passport, including biometrics and other improvements, will take security and travel facilitation to a new level,” said a Department statement.
Defeating the Purpose
In response to longstanding criticism over the privacy and security risks of passports using RFID technology, the government has said the new e-passports are consistent with global specifications from the International Civil Aviation Organization (ICAO). More importantly, officials have indicated there will be some exchange of information required prior to RFID transmission of data, according to Electronic Frontier Foundation (EFF) Senior Staff Attorney Lee Tien.
The added measures may help alleviate some security concerns. However, Tien told TechNewsWorld, if an exchange of information or other personal contact is required, it would defeat the purpose of the RFID technology.
“It’s a solution in search of a problem,” he said.
Tien and other RFID researchers and security experts have questioned the need for RFID in passports.
The over-the-air signals that will be transmitted from the passports may provide all the incentive that attackers need to attempt hacking the technology.
“For people who know what they’re doing, [such a hack] is not really hard,” Tien said.
Tien also expressed concern that the e-passport rollout may breed more trust in unattended transactions, which may actually serve to increase privacy and security dangers. | <urn:uuid:c88c8591-6992-40f4-89db-08ba0f2960a9> | CC-MAIN-2022-40 | https://www.linuxinsider.com/story/us-begins-rollout-of-rfid-passports-52458.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00016.warc.gz | en | 0.920375 | 632 | 2.578125 | 3 |
In a recent blog entry, Any Advance on Soundex?, I promised to describe our phonetic algorithm, soundIT. To recap, here’s what we think a phonetic algorithm for contact data matching should do:
- Produce phonetic codes that represent typical pronunciations
- Focus on “proper names” and not consider other words
- Be loose enough to allow for regional differences in pronunciation but not so loose as to equate names that sound completely different.
We don’t think it should also try and address errors that arise from keying or reading errors and inconsistencies, as that is best done by other algorithms focused on those types of issues.
To design our algorithm, I decided to keep it in the family: my father Geoff Tootill is a linguist, classics scholar and computer pioneer, who played a leading role in development of the Manchester Small-Scale Experimental Machine in 1947-48, popularly known now as “the Baby” – the first computer that stored programs in electronic memory.
Geoff was an obvious choice to grapple with the problem of how to design a program that understands pronunciation… We called the resultant algorithm “soundIT”.
So, how does it work?
soundIT derives phonetic codes that represent typical pronunciation of names. It takes account of vowel sounds and determines the stressed syllable in the name. This means that “Batten” and “Batton” sound the same according to soundIT, as the different letters fall in the unstressed syllable, whilst “Batton” and “Button” sound different, as it is the stressed syllable which differs. Clearly, “Batton” and “Button” are a fuzzy match, just not a phonetic match. My name is often misspelled as “Tootle”, “Toothill”, “Tutil”, “Tootil” and “Tootal”, all of which soundIT equates to the correct spelling of “Tootill” – probably why I’m so interested in fuzzy matching of names! Although “Toothill” could be pronounced as “tooth-ill” rather than “toot-hill”, most people treat the “h” as part of “hill” but don’t stress it, hence it sounds like “Tootill”. Another advantage of soundIT is that it can recognize silent consonants – thus it can equate “Shaw” and “Shore”, “Wight” and “White”, “Naughton” and “Norton”, “Porter” and “Porta”, “Moir” and “Moya” (which are all reasonably common last names in the UK and USA).
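To make that concrete, here is a toy sketch in Python. It is emphatically not the soundIT algorithm, which is proprietary and models stressed syllables and vowel sounds far more carefully; it only illustrates the general style of rewrite rules a phonetic coder applies. With these few rules, "Batten" and "Batton" collapse to the same code while "Button" stays distinct, "Wight" matches "White", and "Tootill" matches "Tootle"; pairs like "Shaw" and "Shore" need the fuller vowel-sound handling that this toy omits.

```python
import re

# A handful of rewrite rules, applied in order. Real phonetic coders like
# soundIT use far richer rules plus stress analysis; this is only a toy.
RULES = [
    (r"[^a-z]", ""),             # keep letters only
    (r"wh", "w"),                # 'wh' is pronounced like 'w'
    (r"gh(?=t)|gh$", ""),        # 'gh' is silent before 't' and at the end of a name
    (r"(.)\1", r"\1"),           # doubled letters sound like single ones
    (r"e$", ""),                 # a final 'e' is usually silent
    (r"[aeiou]([nl])$", r"\1"),  # the vowel in a final unstressed '-on/-en/-al/-il' barely sounds
]

def toy_phonetic_code(name: str) -> str:
    """Return a crude phonetic code for a proper name (illustration only)."""
    code = name.lower()
    for pattern, replacement in RULES:
        code = re.sub(pattern, replacement, code)
    return code

# "Batten" and "Batton" -> 'batn', "Button" -> 'butn',
# "Wight" and "White" -> 'wit', "Tootill" and "Tootle" -> 'totl'.
```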
There are always going to be challenges with representing pronunciation of English names e.g. the city of “Reading” rhymes with “bedding” not “weeding”, to say nothing of the different pronunciations of “ough” represented in “A rough-coated dough-faced ploughboy strode coughing and hiccoughing thoughtfully through the streets of the borough”. Although there are no proper names in this sentence, the challenges of “ough” are represented in place names like “Broughton”, “Poughkeepsie” and “Loughborough”. Fortunately, these challenges only occur in limited numbers and we have found in practice that non-phonetic fuzzy matching techniques, together with matching on other data for a contact or company, allow for the occasional ambiguity in pronunciation of names and places. These exceptions don’t negate the need for a genuine phonetic algorithm in your data matching arsenal.
We implemented soundIT within our dedupe package (matchIT) fairly easily and then proceeded to feed through vast quantities of data to identify any weaknesses and improvements required. soundIT proved very successful in its initial market in the UK and then in the USA. There are algorithms that focus on other languages such as Beider-Morse Phonetic Matching for Germanic and Slavic languages, but as helpIT systems market focus is on English and Pan-European data, we developed a generic form of soundIT for European languages. We also use a looser version of the algorithm for identifying candidate matches than we do for actually allocating similarity scores.
Of course, American English pronunciation of names can be subtly different – a point that was brought home to us when an American customer passed on the comment from one of his team “Does Shaw really sound like Shore?” As I was reading this in an email, and as I am a Brit, I was confused! I rang a friend in Texas who laughed and explained that I was reading it wrong – he read it back to me in a Texan accent and I must admit, they did sound different! But then he explained to me that if you are from Boston, Shaw and Shore do sound very similar, so he felt that we were quite right to flag them as a potential match.
No program is ever perfect, so we continue to develop and tweak soundIT to this day, but it has stood the test of time remarkably well – apart from Beider-Morse, I still don't know of another algorithm that takes this truly phonetic approach, let alone as successfully as soundIT has done.
– Steve Tootill (stEv tWtyl) | <urn:uuid:22083ce6-f32d-4510-a17c-fec0bacd0fc2> | CC-MAIN-2022-40 | https://think.360science.com/phonetic-matching-matters/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00217.warc.gz | en | 0.95615 | 1,188 | 2.625 | 3 |
IPv6 Stateless Autoconfiguration lets a device generate its own link-local address as soon as it joins a network, without any intermediary such as a DHCP server handing out addresses. Every device connected to an IPv6 network derives its own unique link-local address, which is automatically verified, allowing that node to communicate with the other nodes on the same link. Autoconfiguration simply means that addresses, links and other such information are configured automatically.
With earlier versions of IP, only stateful configuration was possible, which required a DHCP (Dynamic Host Configuration Protocol) server to be present. With the advent of IPv6, that support is no longer needed to connect network devices to the internet: devices can automatically generate a local IP address and carry on with their tasks.
This feature became a necessity because of the sheer number of devices now on the internet. With IPv6, the need for a DHCP server to allocate IP addresses is removed, which simplifies the process for network devices.
Heading back to the name, "stateless" means that no server has to keep track of which device holds which address: a device can assign itself an IP address without any server recognizing its presence.
The steps a device follows to auto-generate its IP address are listed below:
- Generation of link-local address: When a device joins the network, it forms a link-local address. The address begins with the 10-bit prefix 1111111010 (fe80::/10), followed by 54 zero bits and a 64-bit interface identifier (see the sketch after this list).
- The uniqueness test: The device runs Duplicate Address Detection to confirm that no other node on the link is already using the address.
- Address assignment: Once the uniqueness test is passed, the link-local address is assigned to the IP interface. This address is not usable on the internet, only on the local network.
- Contact with router: The device contacts a local router to move ahead with the autoconfiguration process.
- Directions from router: The device receives directions from the local router (a Router Advertisement) for the remaining configuration steps.
- Global internet address: The device generates a globally unique address by combining the network prefix advertised by the router with its own interface identifier.
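One common way to build the 64-bit interface identifier in step 1 is the modified EUI-64 method, which stretches the interface's 48-bit MAC address to 64 bits by inserting ff:fe in the middle and flipping one bit. The Python sketch below shows that construction for a link-local address; the MAC address is made up, and note that many modern systems use randomized or privacy-preserving identifiers instead.

```python
def mac_to_link_local(mac: str) -> str:
    """Build an fe80::/64 link-local address from a MAC address via modified EUI-64."""
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02                               # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]  # insert ff:fe in the middle
    groups = [f"{(eui64[i] << 8) | eui64[i + 1]:x}" for i in range(0, 8, 2)]
    return "fe80::" + ":".join(groups)

# Example (made-up MAC address):
print(mac_to_link_local("00:1a:2b:3c:4d:5e"))       # fe80::21a:2bff:fe3c:4d5e
```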
Merits of IPv6 Stateless Auto Configuration:
The advantages of stateless auto configuration are as follows:
- A Dynamic Host Configuration Protocol (DHCP) server is not required for IP address assignment.
- No manual configuration of network devices is required. Devices can connect immediately and auto-configure IP addresses on the network.
- Stateless autoconfiguration is economical, as the need for a proxy or DHCP server is eliminated.
- It facilitates high-speed communication and data transport over the internet.
- It is compatible with wireless networks.
Demerits of Stateless Auto Configuration:
- The uniqueness check consumes extra bandwidth while the host verifies that its address is not already in use.
- An attacker can mount a denial-of-service (DoS) attack to prevent autoconfiguration from completing.
- Auto-configured addresses cannot be served by name unless dynamic DNS is used.
Given the influx of devices onto the internet, the advent of stateless autoconfiguration was inevitable. It not only eases the process of connecting network devices to the internet, but also supports wireless networks and lets devices get online from hotspots anywhere in the world.
This feature of IPv6 has a variety of applications in connecting digital devices such as refrigerators, televisions, microwaves and washing machines to the internet. Plugging a device into the internet now takes barely the blink of an eye, and this capability has ushered in the era of the Internet of Things, in which almost every electronic device can connect through the internet.