When it comes to DNS, there's nothing we love more - except DNS management. And maybe Secondary DNS. Or Failover. Even anomaly detection. Oh who are we kidding, if it's even remotely close to the topic of DNS, we got you covered!
What is DNS? The Domain Name System maps domain names to IP addresses, much like a phonebook maps names to phone numbers. DNS Records provide information about your hostname, domain, and/or current IP address. Sounds simple, right?
There are several different DNS Record types. Here, I will take you on a journey to understanding the top 7 most commonly used DNS Record types, and when to use them. Using these DNS record types will eliminate downtime and drive ROI (who doesn't want that?). DNS records are the settings of your domain configuration, and they keep internet traffic moving.
A Records are most commonly used to map a fully qualified domain name (FQDN) to an IPv4 address.
Example: A records can be configured in a domain for a specific host such as www.example.com or for the root record such as example.com.
AAAA Records are similar to A records but point to an IPv6 address instead of an IPv4 address.
Example: IPv4 addresses are a finite resource, while IPv6 was created to provide a vastly larger pool of unique addresses.
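To see the difference between A and AAAA records in practice, you can ask your resolver for both address families with a few lines of Python. This is only an illustrative sketch using the standard library; the hostname is a placeholder, and the answers you get depend on the zone you query.

```python
import socket

# Look up both IPv4 (A) and IPv6 (AAAA) answers for a single hostname.
for family, _, _, _, sockaddr in socket.getaddrinfo("www.example.com", None):
    record_type = {socket.AF_INET: "A", socket.AF_INET6: "AAAA"}.get(family, "?")
    print(record_type, sockaddr[0])
```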
CNAME Records are alias records that map one FQDN to another FQDN, letting multiple hosts point to a single location without having to assign an A record to each hostname.
Example: CNAME records are commonly used to point multiple hostnames to a single location. This is useful when you have multiple records pointing to the same location (usually a web server at the root of a domain). If that location changes, all you have to do is change the endpoint in the record you're pointing all those CNAMEs to. CNAME records can also be used to point a hostname to a location that is external to the domain.
ANAME Records allow you to map the root of your domain to a hostname or FQDN. ANAME Records were developed by DNS Made Easy as a combination of CNAME and A Records, both RFC compliant and saving the extra DNS lookup. This is important because this allows you to have the functionality of a CNAME record at the root/apex of your domain.
Backed by years of practice, our legacy product DNS Made Easy is the industry leader in providing IP Anycast enterprise DNS services. Embracing that legacy, Constellix ANAME records prevent downtime and increase performance. For every hostname or FQDN, there is a corresponding IP address that will be resolved in the end.
Constellix caches the IP address(es) that the hostname resolves to and creates A record(s) with it. This functionality has allowed ANAME records to work consistently with CDNs (Content Delivery Networks), enabling multiple dynamically updated IP addresses to be authoritative for a domain in numerous locations.
Example: Unlike other providers, Constellix ANAME has no limits and acts just as a normal CNAME would, but at the root of your domain. ANAME resolution happens in real-time at the moment your client queries our nameservers for your domain. Learn more about ANAME on our knowledge base.
SOA Records (Start of Authority) direct how a DNS zone propagates to secondary name servers. An SOA record is created by default for each domain added into the Constellix DNS system.
Example: The authority for constellix.com is ns11.constellix.com.
NS Records (Name Server) specify which name servers are authoritative for a domain or subdomain. NS Records are used in the event that another external DNS provider will be used in conjunction with Constellix DNS. They can also be used if a subdomain delegation takes place to external name servers.
The zone's apex is where the SOA and NS (and commonly MX) records for a DNS zone are placed. To achieve redundancy with NS records, host them on different network segments; otherwise, if a single network segment goes down, your DNS can go down with it.
Example: all “.com” subdomains such as “www.example.com” are delegated from the “.com” zone.
MX records map where to deliver email for a domain by pointing to its mail servers. If no email is sent or received from a domain, then there is no reason to have MX records configured.
MX records are ordered based on MX priority. The lowest priority MX record is the first destination for the email. MX records should only map to A records in a domain (not CNAME records), or other external mail servers to the domain.
Example: If email is sent to a domain with no MX records, mail delivery will be attempted to the matching A record. MX records map to domains and not IP addresses.
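As a rough illustration of how MX priority ordering works, the sketch below sorts a domain's MX answers by preference value. It assumes the third-party dnspython package is installed, and the domain name is only a placeholder.

```python
import dns.resolver  # third-party "dnspython" package

answers = dns.resolver.resolve("example.com", "MX")

# The record with the lowest preference value is the first delivery target.
for record in sorted(answers, key=lambda r: r.preference):
    print(record.preference, record.exchange)
```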
With the 7 most commonly used DNS Record types and how to utilize them covered, you now have the tools to keep your internet traffic moving by eliminating downtime and driving more revenue. Read about the other record types on our knowledge base and learn how to configure DNS records here.
Woo wee, that was a lot of information! Are you craving more? Share this with your friends and let us know what you think @Constellix or on our Facebook. Constellix has changed the way we monitor DNS by immediately eliminating downtime, driving more revenue, and helping you understand your website analytics. Learn more about how Constellix can benefit you and your organization by scheduling a demo here.
Within the last decade, we've seen incredible progress in the fields of robotics and artificial intelligence. Innovators have been seeking out ways to meld man and machine and, in some areas, remove man altogether. In robotics, we're seeing delivery drones, security robots, and more. In AI, chatbots, self-driving cars, and voice recognition have all made significant strides.
Perhaps most importantly, we’re seeing incredible advances in AI and robotic technologies within healthcare that are improving patient treatment and care. One specific practice area capitalizing on both technologies is physical therapy — with a particular focus on people who suffer from mobility issues due to neurological injury.
Bots are helping humans provide improved care
While traditional therapy methods are tried and true and still yield results, recent research around motor learning interference and motor memory consolidation has shown that the optimal way to treat patients with neurological disorders is through a collaborative effort of robots and human therapists. The robots focus on reducing physical impairments and the therapists assist in translating the gains in impairment into function.
According to the World Health Organization (WHO), 15 million people suffer a stroke worldwide each year, with the United States accounting for almost 800,000 of those instances. A third of those people – approximately five million – are left permanently disabled and in need of some sort of physical therapy or patient care to help them attempt to regain even a fraction of their original physical mobility.
Regaining Motor Skills with a Bot
The loss of motor skills is a common occurrence for stroke survivors, with impairment making it difficult to stand, walk, or complete simple tasks like tying shoelaces or squeezing a hand. Current care methods rely on physical therapists manually helping patients learn how to balance and strengthen muscles through a series of exercises and stretches. While this has certainly been an effective treatment over the past decade, traditional physical therapy for stroke survivors and others suffering from neurological injuries/disorders falls far behind what is possible when technology is integrated into the course of care.
The advancement of machine learning and artificial intelligence technologies, along with the evolution of robotics, has produced commercialized robotic therapy solutions that have an exceptional capacity for measurement and immediate interactive response. Consider this – a human therapist can likely only guide a patient through a handful of movements during a session, with little ability to measure movements that aren't outwardly significant. A therapy robot can guide a patient through hundreds of movements during a session. It can sense even the slightest response while adjusting to the patient's continually-changing physical ability. […]
Here at HOPZERO, considering we have HOP in our name, we believe HOPs are a vital part of the Internet’s architecture, not to mention a powerful way to leverage existing protocols to protect your most-precious data.
But what does this mean for you as a network security professional? How can HOP counts do more for you than just help you pass some certification test? And just what benefit does knowing your data’s HOP counts provide you?
What Does HOP Count Mean?
HOP counts refer to the number of devices, usually routers, that a piece of data travels through. Each time that a packet of data moves from one router (or device) to another — say from the router of your home network to the one just outside your county line — that is considered one HOP.
The HOP count is the total number of HOPs that a packet of data travels. Let’s say you’re on your home laptop and you want to look at the website of the Louvre in Paris. To get all the way from your home office to www.Louvre.fr, you might travel through eighteen routers (or eighteen HOPS) to get to that location. Thereby your packet of data — your request to view this page — will have traveled eighteen HOPs.
Why HOP Count is Important
The reason HOP counts are important is that it only takes approximately 40 HOPs for any piece of data to reach the entire world. Yet the default setting for most devices is far higher than 40 HOPs. Linux has a default HOP count of 64. And Microsoft, since NT4 Service Pack 6, has boosted its default HOP count from 32 to 128.
The reason for these high default HOP counts is user convenience; you don’t want to have to send an email to your colleague in London only to find your email has hit its HOP count limit and isn’t delivered.
The trouble is not all data is created equal. The email you sent to your colleague in London is far different from the customer credit card numbers stored in your data center.
This becomes an even-bigger problem when a hacker or phish gets beyond your firewall — such as when one of your employees mistakenly clicked on a bad link in an email — and that intruder has exfiltrated your data to Bulgaria.
Since it only takes 40 HOPs (or routers) to reach Sofia, the capital of Eastern Europe’s poorest country, and your organization happens to be using Microsoft (with a default hop count of 128), there’s little you can do, once a hacker has breached your sphere of trust, to keep that data from being exfiltrated. No matter how powerful a firewall you may have.
What Does HOP Count Exceeded Mean?
The beauty of HOP counts as a security tool is the fact that there's a simple protocol that every single router in the world follows. And that is: every time data hops from one router to another, the HOP count limit of that data packet is reduced by one. And when that count hits zero, the packet automatically destroys itself.
This is a precaution originally set up for BGP routers, which always carry a HOP count of just one, and it has protected the Internet for 30 years. For example, let's say you're doing some work on your Linux machine, with a default HOP count of 64, and you want to reach a destination 40 HOPs away.
Well, each router-to-router exchange would decrement the HOP count limit of that data packet by one, from 64 to 63 to 62…and so on. But let's say you wanted, for some reason, to access a destination that was 65 HOPs away, one more than the default set by Linux. That data packet would reach just the edge of your intended destination, one router before the device you wanted to communicate with, before destroying itself. The packet would thereby have exceeded its HOP count.
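Here is a small, purely illustrative Python simulation of that decrement-and-discard rule. It is not a network tool; it just shows why a packet with a HOP count of 64 reaches a destination 40 HOPs away but dies one router short of a destination 65 HOPs away.

```python
def forward(hop_limit: int, hops_to_destination: int) -> str:
    """Simulate each router decrementing the HOP count (TTL) by one."""
    for hop in range(1, hops_to_destination + 1):
        hop_limit -= 1                      # one decrement per router
        if hop_limit == 0 and hop < hops_to_destination:
            return f"discarded at hop {hop}: HOP count exceeded"
    return f"delivered after {hops_to_destination} hops ({hop_limit} left)"

print(forward(64, 40))   # reaches the destination with room to spare
print(forward(64, 65))   # destroyed one router before the destination
```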
How Can HOPs Be Used in Network Security?
Okay, but what does this have to do with network security? Well, quite simply, it flips the script on would-be hackers. It allows organizations to be less reactive — waiting for the bad guys to arrive; hoping the firewall holds — and more proactive by setting up HOP limits that serve the best interests of an organization.
Ask yourself this: should your most-precious data — the crown jewels of your company located in your data center — have the same HOP limits that some innocuous emails sent to Europe have? Absolutely not.
Because while that local machine with the innocuous emails might need a heartier HOP limit to conduct business, your crown-jewel data likely doesn’t need a default HOP limit of more than three to five to ensure it stays within the data center. By setting a strict, but appropriate, HOP limit for your most-prized data you ensure it won’t get into the wrong hands, no matter how breached your firewall may be.
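On most operating systems you can see this idea in action by lowering the HOP limit (TTL) on a single socket. The short Python sketch below is a minimal illustration, not HOPZERO's product: it caps outgoing packets from one UDP socket at five HOPs, so they can never travel beyond that radius. The address and port are placeholders.

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Cap the IP TTL (HOP limit) for everything sent from this socket.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, 5)

# Any datagram sent here now dies after five router hops at most.
sock.sendto(b"ping", ("192.0.2.10", 9999))
```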
How to Measure HOP Count
The trick, of course, is how do you determine what’s an appropriate HOP limit for your varying types of data? Well, first this requires getting a clear look at where your data packets are actually traveling, not just where you think they’re traveling. And then setting appropriate data limits that work within those safe boundaries.
A great tool for discovering where your data is traveling is Wireshark. There is increasing awareness and usage of "packet-capture" devices and tools like Wireshark. IT security engineers have concluded that the protection afforded by firewalls simply isn't enough. "Knowing your network" means knowing where all your endpoints are communicating to/from. That's where capture comes into play.
Once you've got a clear idea of where your data is traveling, it then becomes a question of analysis. Does my data center with my customer's private medical data really need a default HOP count of 255? A default HOP count which might allow it to travel to Romania and back? Three times! Probably not.
The HOP count register (also known as TTL, or Time to Live) exists in the IP packet header of every packet your devices send.
HOPZERO provides a graphical user interface that monitors, tracks, and even “grades” this packet header info for any or all of your network’s devices. “Knowing your network” is the key to being able to set meaningful limits on your data, while simultaneously exposing those devices that are shown to be communicating to places they shouldn’t! We’ll display the time and location of those packet transfers.
Here’s a snapshot of our interface:
It's precisely this extraction of critical traffic patterns within your own network that allows HOPZERO to set up "Secure Radius Zones" protecting your most valuable assets (data).
A Paradigm Shift (or HOP) in Network Security
We may be biased; HOPs, after all, are in the name of our company. But we firmly believe the next real breakthrough in protecting the wholesale raiding of the world’s data will come from enforcing data limits through HOPs.
Instead of trying to build an ever-stronger firewall — that is dependent on factors such as upgrades, updates and the human element — wouldn't it be better to teach your data to be smarter? To learn where it needs to go and no further? We think so. And we believe if you have a chance to take five minutes to see where your data is traveling — and see HOP security in action — you'll think so too.
Maximizing Network Availability
Many of you may have read about the Loma fire that started in California’s Santa Cruz mountains on September 26, 2016, destroying many buildings and threatening several tower locations. Two of the tower sites, heavily used as a primary route for telecommunications and Internet traffic, were surrounded by flames that damaged generators, melted AC lines, and engulfed radios mounted less than 6m (20 feet) AGL. Mimosa uses these sites as a primary route for providing Internet service to both our headquarters building, and to several test sites with live users on both sides of the mountains. In light of this fire, we thought it would be a good time to discuss how to plan and maximize network availability during disasters.
In the field of computer networking, there are a number of standard techniques for achieving five nines (99.999%) availability, which translates to approximately 5 ½ minutes per year of downtime maximum – what many users have come to expect.
Some of these techniques include power redundancy, hardware redundancy, failover architecture to support them, smart routing, geographic and path diversity, remote access, and active monitoring. These techniques can and should be extended to RF links such that they approach the availability of their wired counterparts. This article will outline each of these techniques and describe how to apply them for robust RF network design.
Power outages are one of the primary failure modes in RF networks. Antennas tend to be installed in remote locations with a single power source. Even when a generator is available to export power after a grid-outage, there is often transfer switching delay in generator startup that causes radios to reboot.
One solution for this problem is the introduction of an uninterruptable power supply (UPS). Aside from protecting against power surges and dips, a UPS can provide power during the time between the grid outage and generator startup. In locations without a generator, the UPS should be sized such that the holdup time exceeds the longest expected grid power outage.
Photovoltaic (PV) power systems are another option. In addition to PV panels, a complete PV system should also contain a charge controller, a bank of batteries, and an inverter. Inverters are available in grid-tie (solar as backup), or off-grid (solar only) configurations.
In both solutions above, matching the expected loads with the output from the backup power solution is critical. This entails summing the power required for all loads and ensuring that the source of backup power is equal or greater, and that it can sustain the loads over a set period of time.
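As a back-of-the-envelope illustration of that sizing exercise, the short Python sketch below estimates how long a battery bank can carry a set of loads. The wattages, battery capacity, and efficiency figure are made-up example numbers; substitute the real ratings for your own site.

```python
def holdup_hours(load_watts, battery_watt_hours, efficiency=0.9):
    """Rough estimate: usable stored energy divided by total draw."""
    return (battery_watt_hours * efficiency) / sum(load_watts)

loads = [25, 12, 8, 60]   # e.g. radio, switch, PoE injector, router (watts)
print(f"Estimated holdup: {holdup_hours(loads, 360):.1f} hours")  # 360 Wh battery
```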
It is important to identify single points of failure that could cause a network outage and then identify a workaround for each. In most cases, the workaround involves having more than one device (e.g. radio, switch, router, etc.) in parallel that serves the same function, either used at the same time (aggregated) to increase capacity, or separately as a failover option.
Installing a second parallel RF link operating on another frequency provides even better downtime insurance. While this could be achieved at the same location (e.g. two Mimosa B5 radios using four independent 20 MHz channels), geographic diversity would also protect against site-specific problems such as a power outage at a single point.
For maximum network-level availability, Mimosa recommends using both redundancy and geographic diversity to avoid single points of failure.
Failover Architecture (or Don’t Forget to Route)
The entire network must be configured to failover, or self-heal, in a way that doesn’t cause a service outage for downstream users.
While it is beyond the scope of this article to describe every method for achieving the goal of fault tolerance, there are two network routing protocols that provide an excellent starting point: BGP and OSPF. These protocols were designed to enable external and internal network redundancy, respectively.
In a scenario where you have two upstream Internet providers and an edge router installed at each facility in a colocation cabinet, each router can be configured to use the Border Gateway Protocol (BGP), which advertises reachability information about your network’s IP space to the outside world. The two routers are called neighbors (or peers), meaning that they share the responsibility for advertising your network to the Internet. Another term for this relationship is “multi-homing”. If one path to a router becomes impaired, then the other router fully takes over advertising the IP space through another path. Once multiple routes to your network are available from the public Internet to your IP space, it is time to focus on internal redundancy.
The Open Shortest Path First (OSPF) protocol allows routers within your network to communicate and dynamically adjust topology in the case of link failures. The implication is that static routing is no longer necessary since OSPF learns the shortest path from one IP to another at each router. In OSPF-routed networks, one router is nominated as the designated router (DR) that publishes topology information to other routers in order to minimize traffic related to discovering routes. If the designated router (DR) becomes impaired, a backup DR (BDR) takes over. Path costs can be applied to specific interfaces (individual Ethernet ports) to control how OSPF routes traffic over multiple links to the same destination.
Back Door (Alternate Access Path)
Occasionally, there are times when testing or troubleshooting are most easily performed while connected to the same subnet as the devices which need attention. If your transit links are IP addressed within a small subnet, consider interconnecting an inexpensive Linux server and your radios to the same switch so that you can SSH into the Linux server and access the entire subnet from a particular network node. This is especially useful in a network containing parallel links where the server is configured with two network cards, one on each subnet.
The advantages are that updates and tests can be performed locally without configuring every device for remote access over the Internet (a potential security risk), or without consuming extra bandwidth to administer each device in the case of firmware upgrades.
As a full-featured operating system, Linux comes standard with a robust security model, built-in tools for troubleshooting network issues (e.g. ping, traceroute, netstat, arp, dig), tools for accessing other devices (telnet, SSH), and it can even function as a firewall or router (iptables).
For advanced troubleshooting, third-party open-source tools like Iperf can be installed to perform network throughput tests by traffic type (TCP, UDP), and with varying packet and window sizes.
To prevent downtime and costly truck rolls, consider installing IP-controlled remote power switches (such as the ones available from Digital Loggers, Inc.) to cycle power if one of your devices hangs or requires a hard reset. These devices are similar to regular power strips, but provide the ability to cycle power to specific devices through their built-in Ethernet interface. They are typically placed inline between the power supply and other devices being served, such as routers, switches and PoEs. A request to cycle power simply disconnects, and then reconnects the power output to the device requiring a reboot.
If you happen to have parallel links that terminate at the same location, install two IP power strips and cross-connect their Ethernet interface. That is, connect the power to one link and the Ethernet port to the other link. This way if one of the two parallel links goes down, you will still be able to remotely power cycle devices through the Ethernet port connected to the active link.
Some commercially available switches provide PoE power to your radios provided that they have compatible voltage, the same power standard (802.3at/af or passive), and an adequate power budget. Cycling PoE power to a particular port through the switch GUI control accomplishes the same thing as cycling AC power to a standalone PoE.
Monitor and Manage
There's nothing more satisfying (or anxiety-reducing) than seeing a sea of green devices on a network map, but when one of your devices needs attention, you'd rather know about it as soon as possible to avoid downtime.
Mimosa Networks’ free cloud-based network management tool (“Manage”) provides a detailed view of device performance over time that can help find ways to strengthen your network. Using the topology diagram, you can learn to identify single points of failure and determine which parent device may be affecting the accessibility of downstream child devices.
Several commercial monitoring systems are available (e.g. Solarwinds, Zenoss) that arm the operator with the ability to define devices and their placement within a network topology. Their open-source counterparts sometimes require a more detailed understanding about how the monitoring system operates, and knowledge of what data and the methods for collecting it are available from the monitored device. Other free or open-source options include Nagios, OpenNMS and Zabbix.
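If you want a quick sanity check alongside a full monitoring platform, even a few lines of Python can poll critical devices. This is only a minimal sketch: the device names and addresses are hypothetical, and the ping flag shown is the Linux/macOS form.

```python
import subprocess

DEVICES = {"core-router": "10.0.0.1", "backhaul-radio": "10.0.0.2"}  # hypothetical

def is_reachable(ip: str) -> bool:
    # "-c 1" sends a single echo request (Linux/macOS syntax)
    result = subprocess.run(["ping", "-c", "1", ip], capture_output=True)
    return result.returncode == 0

for name, ip in DEVICES.items():
    print(f"{name:15} {ip:12} {'up' if is_reachable(ip) else 'DOWN'}")
```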
What was the outcome in the Loma fire? Though the tower sites had both UPS and generators, the generators were located outdoors and did not start because their controls were damaged before the AC power was lost. In this situation, dynamic routing protocols and geographic diversity were the only possibility for recovery. With good planning, traffic was automatically diverted to secondary routes with the next lowest cost, and connectivity continued for the majority of end points.
As you can see, achieving five nines network reliability requires a combination of different techniques that should also be applied across RF microwave links as well as other interconnected network equipment. A diligent effort, starting with design and extending through deployment and monitoring, is necessary to avoid single points of failure and to ensure that your customers experience high availability for their critical applications.
A common thread weaving through today’s most valuable companies is the basis of their business model. The companies we all know and use often on a daily basis are by no means traditional in how they are run or in the ways they make their profits. Instead of mass producing goods to sell to consumers, these multifaceted technology driven companies offer frameworks that allow users to interact with each other and even use these frameworks to build their own companies. Seven of the 10 most valuable companies globally are now based on a platform business model. Alphabet, Facebook, Amazon, Alibaba, and just about any other service we access through an app on our phone all use similar business models to grow exponentially and grab significant market share while composing the modern day platform economy.
A Tale of Two Business Models
Not that long ago, the world’s most valuable companies offered goods and products that were spread across multiple industries, including automakers, manufacturers, oil and gas companies, and brick-and-mortar retailers. They all followed a pipeline business model where the flow of value was linear from producer to consumer, like water flowing through a pipe. Pipeline businesses have been around us for as long as we’ve had industry and were always the dominant business model until the digital revolution of the last few decades. Essentially everything we buy comes to us via a pipe. Manufacturing, television and radio, even educational and service industries are run on this pipeline model.
Years of rapid digital innovation has seen many of those valuable companies from all industries shift from a products-based pipeline model to technology-driven platform models with multiple offerings. Consumer platform services are more commonly known for use in streaming, rideshares, and ecommerce business types. Business-to-business platform services may include software-as-a-service, or SaaS. Platform businesses do not exist only to make things but also to push them out to users and consumers. They encourage their users to create, share, and consume while providing for interaction and commerce with each other. For example, think of cable TV (pipe) and YouTube (platform) or Encyclopedia Britannica (pipe) and Wikipedia (platform).
The simplest way to describe an online platform is to think of it as sort of an online matchmaker. If you need a ride, Uber finds you someone willing to give you a ride home for a fee. If you need a cabin in the mountains on short notice, Airbnb sets you up with someone willing to rent theirs to you and your family. Sure, taxis and hotels have been providing rides home or places to spend the night just as Uber and Airbnb, but unlike NYC’s yellow cab, there is no fleet of leased taxis or expensive medallions–just an agreement between two users using the platform Uber built. The same goes for Airbnb, which may not own one single hotel, but amassed an inventory of one million rooms a staggering 50 years faster than Marriott did. All of this digital matchmaking is made possible thanks to algorithms and cloud computing and, of course, through some really smart and inventive people who took risks to create services a lot of us can’t imagine living without.
Can Traditional Businesses Embrace Platforms?
A research report by McKinsey suggests more than 30% of global economic activity–some $60 trillion–could be mediated by digital platforms in six years’ time. Their experts also estimate that only 3% of established companies have adopted an effective platform strategy.
Although the usual titans of tech are ubiquitous with examples of modern business models and innovative and disruptive strategy, all is not lost for long established and more traditional businesses. These companies also have the chance to create their own platforms. If creating their own platform seems outside their wheelhouse, then they can at the very least become part of another platform’s ecosystem by leveraging existing platforms to their advantage.
An easy example of a traditional (pipe) company growing into a platform business by acquisitions is Walmart. When the retail giant purchased Jet.com in 2016 and invested in Flipkart in 2018, it quadrupled the number of SKUs (stock keeping units or unique items for sale) from 15 million to 60 million. Now Walmart has a system much like Amazon’s that allows sellers to plug into its platform (walmart.com) and sell their goods to Walmart’s masses of customers. Since the retailer itself doesn’t have to inventory these items, they do not have to carry the risks and costs that come with the extra product while sellers get to participate in global trade on one of the most visited shopping sites on the internet. Walmart also lets sellers fulfill their own orders or rents out its own infrastructure to enable such fulfillment. This shows the power behind creating a platform.
But how can someone without the financial backing of Walmart get started on their silicon road to becoming a participant in this platform economy? The Harvard Business Review suggested three ways a traditional business can go from pipe to platform at its own pace.
Use what’s already available
Many companies have already embraced and incorporated the easiest way to integrate platform business practices into their own companies by using preexisting digital tools to engage their networks. Facebook, Twitter, and Instagram are prime examples of these tools, and all offer opportunities for two-way, collaborative communication from buyer to seller. Companies like GoPro are engaging their customers through Instagram and encouraging them to post videos of their own adventures taken with their cameras. Instagram is a free service and when your customers are happy enough to tag your company in their posts, it can lead to a lot of free advertising when you have the platform working for you.
Partner up or invest
Car manufacturers from Ford and GM to even Jaguar have made large investments in ridesharing companies like Uber and Lyft over the past five years. Many car makers are hedging bets on the future of autonomous vehicles while making large investments into partnerships to develop their own autonomous vehicles, which promises riding to be the future of driving.
Build it yourself
The toughest option is taking an existing company and devoting a lot of time, manpower and money to building a digital platform yourself. There will be technology and skill gaps to overcome, and it will take years of effort to create the right team to build the right platform. But for some companies with deep enough pockets and a clear enough vision of what they want their platform to be, then this may be the best way to bring value to their market. When the CEO of General Electric began its own digital transformation to create Predix, he said it required new talent throughout GE, not just in IT. The company developed new leadership and culture designed around understanding their customer base. The results showed when Predix, a platform for the industrial internet, generated $5 billion in revenue in 2015.
Room to grow
A lot has been written about the platform economy covering its benefits, its faults, and its future. Experts agree that even though there are challenges with data sharing, privacy and information selling, the platform economy isn't going away anytime soon. On a macro level, seven out of the top ten largest companies in the world are a platform model, but only a small fraction of the top thousand can be considered a true platform. There is still a lot of room for growth and innovation from both sides as the big players reach further into new industries while traditional businesses learn to adapt and develop into hybrid business models, hopefully combining the best of both models.
By Mike Cobb, Director of Engineering
Meet Ron Cen
Ron begins each workday by stepping across flooring topped with sticky material that collects contaminants from the bottom of his shoes on his way into a small changing room. After closing the door behind him, Ron suits up in a special hooded suit, face mask, gloves and booties before stepping across another sticky floor and through a second door into his workspace.
Inside the long, 2,000 square foot room, 33 fans spin loudly at a constant speed and bright fluorescent lights exaggerate the white color of the walls, floors, desktops and suits worn by Ron and his coworkers. Walls of computer monitors flash numbers and symbols as he walks past the open bodies of his subjects.
Ron is the Supervisor of Cleanroom Engineering at DriveSavers. He oversees a team of specially trained engineers who work alongside him to rescue lost data from masses of open, damaged hard drives in the largest, most advanced Certified ISO Class 5 Cleanroom in North America used for data recovery.
Any failed computer or disabled device that uses a sealed hard disk drive (HDD) should only be opened and worked on inside a proper cleanroom—a laboratory free of dust, static and particulates—or the entire exercise may not only fail, but could render the data unrecoverable.
Cleanrooms are classified on several different levels, based on how clean the air is inside the space. The idea is to filter out microscopic matter like dust and smoke particles, airborne microbes and aerosol particles. These tiny materials could damage the delicate surface of the hard drive’s platters, which house important data.
In an HDD, the read/write heads are attached to the end of an armature that swings out over the platters as they spin at speeds up to 15K RPM. Clearance between the surface of the platters and the heads is minuscule—about ten millionths of a meter, smaller than a fingerprint.
Particulate matter that comes to rest on the data-bearing surface of a spinning disk will be struck by the read/write heads as they attempt to do their job of reading and writing information to the device. When the heads, which float above the surface of the disks, strike any obstruction, then there is going to be damage that may cause permanent data loss.
It doesn’t take much time for this to happen, since the disks can move at speeds of more than 60 mph. Hitting a microscopic dust particle would be equivalent to a car hitting a speed bump while traveling at excessive speed.
The cleanroom at DriveSavers is designed to protect drives and give our engineers the highest probability of a successful recovery.
For more details, explore these articles:
- Take a virtual tour of DriveSavers, including our Certified ISO Class 5 Cleanroom.
- Inspect our cleanroom’s ISO Class 5 certification.
- Discover the dangers of microscopic particles in data recovery.
Meet Angel Ortiz and Jim Alcott
Angel and Jim are also both engineers at DriveSavers. You can find them with their faces buried in microscopes and other specialized laboratory inspection tools that are used to perform microsurgery on solid state storage devices (SSDs). But their work environment is entirely different than Ron’s.
Music fills the air rather than the whir of fans. While Angel and Jim do practice contamination controls at their workstations and wear standard lab safety gear like glasses, gloves and antistatic grounding wristbands, they are able to wear their normal street clothes. There are no special suits, hoods or booties. There are no sticky floors. Unlimited coffee and a staff kitchen are just around the corner.
Why don’t Angel and Jim need to work in the same type of pristine environment as Ron?
Angel and Jim are both data recovery engineers like Ron; however, unlike DriveSavers cleanroom engineers, they work on devices that use NAND flash and other solid state memory to store data. While flash media devices—like SSDs, flash drives, smartphones and memory cards—are all manufactured in controlled environments, they do not necessarily require a cleanroom setting for data recovery. Because solid state components have no moving parts or exposed media substrates, they are generally unaffected by the microscopic airborne contaminants that can destroy data on traditional spinning disk HDDs.
HDDs are magnetic, mechanical devices that rotate at insanely high speeds with nanoscopic tolerances that must be maintained between components, all while fighting the laws of physics on heat, vibration and friction. The smallest interference may result in the death of your drive and the loss of your data. SSDs operate on the same nanoscale, but are not prone to the same contamination-related issues. Their challenges are of an entirely different nature. For example, there are 760 capacitors in an iPhone 6 and every one of them is subject to failure, affecting the operation of the device and the ability to recover data.
Jim recovers data from smartphones that are not accessible due to logical or physical issues. Angel recovers data from SSDs that are physically damaged. They and their fellow flash memory data recovery engineers may not need to be inside the most advanced cleanroom in North America for data recovery (which the DriveSavers cleanroom is), but their jobs are equally difficult and specialized.
Malware comes in many different types. Some examples include viruses, worms, Trojans, and spyware. Find out more about malware in the sections below.
There are many different malware variations. In fact, every minute, a new malware variation is being created and discovered. In the next section, we are going to list some of the most popular types of malware you may encounter:
- Ransomware. Ransomware is an entity that uses encryption methods to prevent a user from accessing their data until a ransom is paid. Even when the ransom is settled, there is no guarantee that the perpetrator will provide a decryption key.
- Spyware. Spyware is a software program that collects information about a user’s activities without their knowledge. This information could be their credit card number and passwords, for example.
- Adware. Adware is a less invasive malware entity. It is designed to monitor a user's surfing activity to determine what types of ads best suit them. While it works similarly to spyware, the noticeable difference is that it does not install any program on a user's computer.
- Worms. Worms are among the most harmful malware entities out there. They find loopholes in an operating system and take advantage of them to gain access to sensitive data. These entities often find their way to devices by means of freeware and unintentional software downloads.
- Trojans. Trojans often disguise themselves as legitimate software programs or code. Once downloaded, they will take control of the device for malicious intent. These Trojans may also hide in apps, software updates, and games, or may be attached to phishing emails.
- Rootkits. A rootkit is a software program that allows hackers to control a victim’s computer remotely. This program is usually embedded or attached to an application or firmware, and it is spread via malicious downloads and phishing.
- Virus. A virus is a malicious entity that embeds itself into an app and performs its actions when the app is active. Once inside a victim’s computer, it can steal sensitive information and initiate DDoS attacks.
Malware entities can infect devices and networks in many ways. Depending on the type of malware that attacked a device, it can present itself differently to the user.
Sometimes, malware’s presence is almost unnoticeable. But for others, it can be downright disastrous. Regardless of its infection method, all malware entities have one goal: to exploit a device in such a way that benefits the creators.
The world of cybersecurity is ever-changing, which means that hackers are constantly finding new ways to target users. While it’s impossible to be completely safe from malware, you can minimize your risk by staying informed on the latest threats and protecting yourself accordingly. Also, understanding how cyberattacks work gives you a better idea about what steps to take if something happens to you.
So, how do you detect malware infection? Check for the following signs:
- Random ads are popping up everywhere. It is common to see ads on websites. However, if you are bombarded with lots of ads, it’s another story. Take note that adware programs often display many ads to their victim’s devices. Most of these ads are for legit products, which, when clicked on, will give an affiliate fee to the adware creator. In some cases, these ads will take you to malicious websites that drop more malware entities on your device.
- Your search sessions keep getting redirected. Not all websites are malicious. But if you notice that Google is taking you to an unrelated site, there might be a problem. Sometimes, these redirections are not obvious. For instance, a banking Trojan may take you to a fraudulent website that looks similar to the bank’s real site. So, in that case, your best action is to inspect the URL first.
- You receive fake warnings. Creating fake antivirus programs is a popular business today. And to get these annoying programs onto systems, perpetrators use sneaky techniques. One is through displaying fake warnings about threats. When you click on them, you will be asked to pay a certain amount to download a tool that fixes the problem. But of course, the fake antivirus program isn’t actually doing anything.
So, you’ve got malware. Now what?
Removing a malware entity is quite an easy task. But to ensure you stay on the right track, we have put together this guide on how to remove malware from your computer.
- Step 1: Update your anti-malware software. First, check if your anti-malware software is updated with the most recent virus definitions. Anti-malware vendors constantly update their database as they encounter new strains of malware. If your software is outdated, you are putting your device at risk of an infection.
- Step 2: Scan your system. Using your anti-malware software, perform a malware scan. Once done, check the results and apply the recommendations.
- Step 3: Reset your system. If you feel the problem has caused severe damage to your device, your last resort is to reset your system. For this step, you need to use System Restore points and restore your system to a point before the infection.
Malware can infect your device or even crash it. You may have a malware infection but not notice until you’ve lost important files, had a few viruses attack your system, or, even worse, experienced identity theft.
We’re going to give you some tips that will protect your device from future malware attacks. These are the most basic preventative measures anyone with a computer should take to keep malicious software at bay.
- Install anti-malware software to remove detected malware right away.
- Use strong passwords and enable multi-factor authentication.
- Keep software programs up to date.
- Think before you click.
Malware is a dangerous and growing problem that every computer user should be aware of. There are many steps you can take to protect yourself from malware attacks, and we recommend taking them all to ensure the safety and security of your valuable data.
The internet has changed the way we live and communicate. It has completely transformed the way brands and companies work. These digital technologies have some incredible benefits, which have made our daily lives more comfortable. We're living in a technologically driven world, and all of us depend on technology for one reason or another. No doubt, it is useful. But it is essential to know that technology is a boon as well as a bane. Technology has improved the quality of human lives; however, it can also have a tremendous impact on the environment and our mental health. One of the most significant risks of technology is cyber-attacks, which can damage your reputation and your whole business.
In today's society, cybersecurity is becoming more critical than ever. Despite the various measures available, cybercrimes are increasing at an alarming rate. Security is one of the most vital aspects of an organization, and in today's world of digital media, it needs to be a top priority for every business.
Cyber threats are one of the most significant risks to your organization. Therefore, it is crucial to prepare your business for the dangers of tomorrow. Cyber-attacks can lead to financial loss and loss of critical data. It is vital to prepare for the aftermath of a cyber-attack and have a strategy to incur the cost of recovering from such an incident. Small and medium-sized businesses suffer the most from such attacks because they are not prepared for the attack and don’t have any plan or strategy to deal with such type of attacks.
And the reality is most small business owners think they have nothing worth stealing. Therefore, they don’t take any such measures. It is essential to understand the significance of cybersecurity. No matter, if you are a smaller firm or a more prominent organization owner, you should take protective measures. Following are some of the most effective ways that can help you protect your business from cyber threats:
1. Know your data and identify areas of risks
It is essential for a company to know the type of data they have. After having complete knowledge of data, it is necessary to understand and identify sensitive data and the areas of risk. Identifying the areas of risks will help you in strategizing your plan for your company.
2. Always have a back-up of your data
Backing up data is one of the essential steps that every company should follow. It will not only help you protect the data but will also help you to restore the data after the threat. Protecting your business data should be your utmost priority, and you should always have a back-up of all your business data and details. As a company owner, you should keep a check and ensure that there are regular backups of the business data. This will help you and your business during a natural disaster or a hack attack.
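As a simple illustration of what a scheduled backup can look like, the Python sketch below archives a folder of business data into a dated, compressed file. The paths are placeholders; point them at your real data and an off-site destination and run the script on a schedule.

```python
import shutil
from datetime import date
from pathlib import Path

source = Path("/srv/business-data")        # placeholder: folder to protect
backup_dir = Path("/mnt/offsite-backup")   # placeholder: backup destination

archive_base = backup_dir / f"business-data-{date.today():%Y%m%d}"
shutil.make_archive(str(archive_base), "gztar", root_dir=source)  # creates a .tar.gz
```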
3. Install anti-malware software
Malware is a program that can attack your computer and cause serious harm. However, you can prevent these attacks with a few measures. You can avoid malware attacks by installing anti-malware software and using a firewall. The benefit of using a firewall is that it detects and blocks some malicious programs.
Set up firewall security and ensure that anti-malware security software is there on each business computer. Install this software on all your business devices and prevent these kinds of threats.
4. Encrypt the data
Encryption of business data is one of the most effective ways to protect your data and achieve security over your data. Encryption is the process of encoding the data. In simpler terms, encryption is the translation of data in such a way that only authorized parties have access to it. It is all about translating the data into another form, a type of secret code which can only be accessed by people who have the right encryption key.
Those who are not authorized do not have access to the data. The best part about encryption is that it works even during data transportation. It is one of the best possible ways to ensure and maintain data security. Not only this, but encryption will also protect data across devices (mobile phones).
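To make the idea concrete, here is a minimal sketch of symmetric encryption using the third-party Python cryptography package (Fernet). The sample plaintext is invented, and in practice the key would live in a secrets manager or hardware module rather than alongside the data.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # store securely, never next to the encrypted data
cipher = Fernet(key)

token = cipher.encrypt(b"customer record: 4111-1111-1111-1111")
print(token)                     # ciphertext is safe to store or transmit
print(cipher.decrypt(token))     # only holders of the key can recover the data
```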
5. Choose strong passwords
According to research, 73% of users have the same password for multiple sites. Using the same passwords increases the risks for cyber threats as cybercriminals can easily use that information to log in and access your different accounts. Therefore, it is essential to choose strong passwords to protect yourself from all kind of cyber threats.
Choosing a strong password is essential for your healthy digital life. It will protect you from identity theft, financial fraud, and will help you achieve data security. Protect all the devices with a strong and complicated password. Share the password only with the device user and ask him/her to not share it even with the other employees. You may also use two-factor authentication.
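Rather than inventing passwords by hand, you can generate strong random ones. The sketch below uses Python's standard secrets module; the 16-character length is just an example minimum.

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```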
6. Educate and train your employees
In the end, your employees are your biggest asset. Therefore, it is essential that you educate and train your employees about cybersecurity. Give security awareness education to all your employees and teach them about the different online threats. Your staff plays a huge role, so you should make sure that every new member completes his/her training program.
Ask them to be aware of fraudulent emails. Train them and tell them the importance of having strong passwords and why it is crucial to maintain and choose strong passwords. Inform them about their computer rights and responsibilities and tell them how their role can make a huge difference in protecting business data.
Take cybersecurity very seriously. You can be a target of it at any moment. Follow the above steps and get cybersecurity insurance for your company. Cybersecurity insurance will protect your business from security threats.
Many businesses are migrating from a monolithic architecture to microservices to keep up to date with the technological revolution currently taking place. This is because a single error will not bring the entire application down.
Does this, however, imply that your Microservices Architecture is fault-tolerant? Failures and errors are common throughout the development of apps. Because a microservice ecosystem is bound to fail at some time, it’s best to embrace it now. It’s also a good idea to design Microservices with failure in mind.
In other words, you need a resilient microservices infrastructure. The ability of an application to recover from failures is referred to as resilience. When developing microservices, it’s vital to think about how to make microservices resilient and the number of dispersed services.
What Failures are Common with Microservices Infrastructure?
Microservices-based systems frequently have multiple dependencies, such as databases, back-end pieces, and APIs, which might result in service call failures, which can be roughly classified as follows:
1. Transient Faults
These are relatively infrequent and only take the application down for a brief period (typically a few seconds). Temporary network disruptions and missed requests are some examples.
2. Permanent Faults
These can cause the application to be unavailable for extended periods. These are typically the result of highly degraded services and long-term outages.
A microservice design has extra points of failure due to a large number of moving parts. Failures can occur due to a variety of factors, including code faults and exceptions, new code releases, bad releases, hardware problems, datacenter breakdowns, inappropriate architecture, a lack of unit tests, communication across an unstable network, and dependent services, among others.
Microservices can and will falter, despite all of the planning and careful work that goes into building them. It is the responsibility of development teams to provide apps that can gracefully accept failure. While the application determines the best approach to resiliency, leveraging microservices resiliency patterns is a recurring theme among these strategies.
What Are the Microservice Resiliency Patterns?
Three well-known microservices resiliency techniques improve fault tolerance and allow applications to smoothly handle failures. Fault tolerance is a feature that allows the system to continue to function even if some of its components fail.
1. Retry Pattern
Databases, modules, back-end services, and APIs are all common dependencies in microservices. Any of these can fail at any time, resulting in a slew of service call failures. Transient errors like these can be handled using the retry pattern.
The retry pattern repeats a failed operation a predetermined number of times to ride out brief, intermittent failures. IT administrators set the number of retries as well as the time intervals between them. Rather than giving up on the initial failure, this allows the caller to invoke the dependency one or more times until the anticipated response is obtained.
Only use this approach for transient failures and resist chaining retry attempts across services. Keep meticulous logs so the root cause of these failures can be pinpointed later. Lastly, space the retries out enough for the failing service to recover, which avoids cascading problems and conserves network resources while it does.
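A minimal sketch of the retry pattern with exponential backoff is shown below. It is written in Python purely for illustration; the function names, retry counts, and delay values are assumptions for the example rather than part of any specific framework.

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("retry-demo")


def call_with_retry(operation, max_retries=3, base_delay_seconds=1.0):
    """Call `operation`, retrying transient failures with exponential backoff."""
    attempt = 0
    while True:
        try:
            return operation()
        except (ConnectionError, TimeoutError) as exc:  # treated as transient here
            attempt += 1
            if attempt > max_retries:
                logger.error("Giving up after %d retries: %s", max_retries, exc)
                raise
            delay = base_delay_seconds * (2 ** (attempt - 1))
            logger.warning("Attempt %d failed (%s); retrying in %.1fs", attempt, exc, delay)
            time.sleep(delay)


if __name__ == "__main__":
    calls = {"count": 0}

    def flaky_service_call():
        # Hypothetical dependency that fails twice before succeeding.
        calls["count"] += 1
        if calls["count"] < 3:
            raise ConnectionError("temporary network disruption")
        return "expected response"

    print(call_with_retry(flaky_service_call))
```

Note that only transient exception types are retried; a permanent fault is allowed to surface immediately so the caller can fall back or fail fast.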
2. Circuit Breaker Pattern
While the retry pattern is effective for temporary failures, teams still require a dependable microservices resiliency design to deal with larger, long-term, permanent defects.
If a retry mechanism inadvertently invokes a severely broken service numerous times until it achieves the desired outcome, it may cause cascade service failures that are more difficult to discover and resolve.
The circuit breaker pattern introduces a component that behaves like an electrical circuit breaker. It sits between the calling service and the service it depends on. As long as the two services communicate normally, the circuit breaker stays closed and simply relays requests and responses between them.
When requests travelling across the closed circuit fail a specified number of times, the breaker opens the circuit and stops forwarding calls. During this open state, the breaker short-circuits service calls and immediately returns errors to the client service for every attempted transaction instead of invoking the failing dependency.
After a set amount of time (known as the circuit reset timeout), the circuit breaker moves to a half-open state. In this state it lets a limited number of trial calls through to check whether communication between the two services has been re-established. If the breaker detects another fault, it flips back to the open state; once the error is addressed and calls succeed again, it closes the circuit as expected.
Design the circuit breaker so that it can assess the kind of failure it sees and adjust its calling strategy as needed. Because many callers may share one breaker, its state transitions must be thread-safe and applied in a consistent order.
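The sketch below illustrates the closed, open, and half-open states described above. It is a deliberately simplified, single-threaded Python example; the thresholds, timeout, and names are assumptions, and a production breaker would add locking to make state transitions thread-safe.

```python
import time


class CircuitBreaker:
    """Toy circuit breaker with closed, open, and half-open states."""

    def __init__(self, failure_threshold=3, reset_timeout_seconds=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout_seconds = reset_timeout_seconds
        self.failure_count = 0
        self.state = "closed"
        self.opened_at = 0.0

    def call(self, operation):
        if self.state == "open":
            if time.monotonic() - self.opened_at >= self.reset_timeout_seconds:
                self.state = "half-open"  # allow a trial call through
            else:
                raise RuntimeError("circuit open: failing fast without calling the service")
        try:
            result = operation()
        except Exception:
            self._record_failure()
            raise
        # A successful call (including a half-open trial call) closes the circuit.
        self.failure_count = 0
        self.state = "closed"
        return result

    def _record_failure(self):
        self.failure_count += 1
        if self.state == "half-open" or self.failure_count >= self.failure_threshold:
            self.state = "open"
            self.opened_at = time.monotonic()
```

A caller wraps each outbound request in `breaker.call(...)`; once the dependency recovers, the first successful trial call in the half-open state closes the circuit again.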
3. Timeout Design Pattern
You have probably heard of a timeout in football, but in microservices it refers to the maximum amount of time the caller is willing to wait for an operation to complete. Ideally a timeout covers the entire transaction, from connection establishment through the last byte of the response. A low-level socket option such as SO_TIMEOUT only bounds individual reads rather than the whole request, so HTTP clients that support full-request timeouts, such as OkHttp or the JDK 11 HttpClient, are a better fit.
Instead of waiting indefinitely for a service response, raise an exception when the time budget is exceeded. This prevents the caller from being stranded in limbo and holding on to application resources; the waiting thread is freed as soon as the timeout period expires.
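The article refers to Java clients (OkHttp and the JDK 11 HttpClient); purely as a language-neutral illustration, the Python sketch below bounds how long a caller waits and raises instead of blocking forever. The URL and timeout budget are placeholders. Note that this standard-library timeout, much like SO_TIMEOUT, bounds individual socket operations rather than the entire transaction, which is exactly why clients with true full-request timeouts are preferred.

```python
import socket
import urllib.error
import urllib.request

REQUEST_TIMEOUT_SECONDS = 2.0  # placeholder time budget


def fetch_with_timeout(url):
    """Fetch a URL, raising an exception instead of waiting indefinitely."""
    try:
        with urllib.request.urlopen(url, timeout=REQUEST_TIMEOUT_SECONDS) as response:
            return response.read()
    except socket.timeout as exc:
        # The waiting thread is released as soon as the timeout elapses.
        raise TimeoutError(f"no response from {url} within {REQUEST_TIMEOUT_SECONDS}s") from exc
    except urllib.error.URLError as exc:
        # urllib wraps connection errors (and some timeouts) in URLError.
        raise TimeoutError(f"request to {url} failed: {exc.reason}") from exc
```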
Conclusion about How to Make Microservices Resilient
We covered three of the most common microservices resiliency patterns in this article. It matters less exactly how you apply these patterns than that your systems can deal with failures gracefully, and implementing these resiliency patterns will allow you to do just that.
Further blogs within this How to Make Microservices Resilient category.
In case you require more information about our offerings, or if you'd just like to review your own needs in detail, you can get in touch with Cloud Computing Technologies instantly.
Oak Ridge National Laboratory (ORNL) researchers have created a technology that more realistically emulates user activities to improve cyber testbeds and ultimately prevent cyberattacks.
The Data Driven User Emulation, or D2U, uniquely uses machine learning to simulate actual users’ actions in a network and then enhances cyber analysts’ ability to thwart, expose and mitigate network vulnerabilities.
“Understanding and modeling individual user behaviors is critical for cybersecurity,” said Sean Oesch from ORNL. “D2U can create unlimited, realistic test users of a particular network for developers of cyber tools to improve their products.”
Where other user models need large numbers of testers or make assumptions about their behavior, D2U needs a small number of users and emulates actual user behavior.
The software is currently deployed to help evaluate defensive cyber technologies but could have benefits to the broader cyber community. | <urn:uuid:53f47148-a6a4-4fa0-8622-feec50267e2a> | CC-MAIN-2022-40 | https://www.industrialcybersecuritypulse.com/education/ornl-cybersecurity-put-to-the-test/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00618.warc.gz | en | 0.898385 | 185 | 2.90625 | 3 |
If a municipality could detect leaks within minutes, rather than the extended period of time it takes now, think of the impact! Water conservation would go up, money could be saved, and the municipality could repair smaller leaks, saving it time and money and sparing consumers the hassle of having the water shut off for extended periods. If only there were a leak detective that could do this work.
Fortunately, there is, and it’s not what you might think! Rather than some Sherlock Holmes of water systems, there is now a water leak detection system available that offers municipalities a cost-effective way to manage water resources. The best part is the system is managed online with sensors and alarms, so no need for cloak and dagger antics.
NEC’s Water Leak Detection Service utilizes underground sensors to monitor the water pipes. These sensors record the vibration that occurs with a water leak, sending a signal via wireless transmission. The sensors around the leak will continue to send these signals that ultimately result in a report that can be viewed regularly. This report will accurately pinpoint the leak location, at which time a municipality can dispatch an employee to begin repairs.
This system is significantly faster and easier, and allows for municipalities to detect leaks much more quickly than ever before.
Recently NEC Corporation of America completed a water leak detection trial with the City of Arlington in Texas, and the results were quite successful. During the trial, the first of its kind to be completed in the United States, the NEC team installed a series of 33 sensors at two main sites in the Arlington water system. Three water leaks were identified during the four-month trial and have been repaired. This is the first step for the city to develop a long-term leak detection strategy by evaluating current leak detection technology. Additional trials in other cities are currently under consideration.
The NEC Water Leak Detection System is just one of the many solutions developed as part of an overall strategy to aim for a society where values such as safety, security, efficiency, and equality can be realized to create a brighter, more affluent society for everyone in the world—in other words, Orchestrating a Brighter World. Think of the possibilities!
With current water leak detection technology, municipalities can conserve water, repair leaks more quickly, and save money for consumers as well as the city itself. What about the future? The city of tomorrow will have the ability to communicate immediately, ensuring the safety of citizens through enhanced identification for police, improved 911 capabilities for mobile so first responders can get there fast, and even the ability to improve the quality of life with cleaner air, better infrastructure and sustainable solutions to improve the earth.
Water leak detection is just the beginning. Moving forward, lives can be improved through innovation and collaboration. NEC is excited to be a part of #SolutionsforSociety. | <urn:uuid:45e8864f-4c7a-4b83-aeac-ad0c0e466d86> | CC-MAIN-2022-40 | https://nectoday.com/tag/solutions-for-society/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00618.warc.gz | en | 0.954554 | 585 | 2.734375 | 3 |
Good Manufacturing Practice
Good manufacturing practice (GMP) is a set of guidelines and rules that help manufacturers of products, such as cosmetics, food, diagnostics, medical devices, or pharmaceuticals, better protect consumers from exposure to potentially harmful ingredients or processes. Complying with good manufacturing practice is mandatory in all pharmaceutical manufacturing and most food processing.
What Is GMP?
GMP is a system that consists of guidelines for processes, procedures, and documentation that ensures manufacturing products are consistently produced and controlled according to set quality standards. It helps ensure that a product is made according to industry standards.
Good manufacturing practice influences the safety and quality of products by requiring manufacturers to address specific areas of production, including:
- Building and facilities
- Documentation and recordkeeping
- Inspections and quality audits
- Quality management
- Raw materials
- Sanitation and hygiene
- Validation and qualification
GMP regulations are mandated and enforced in the countries where manufacturers operate. Countries worldwide have GMP regulations that regulate the production, verification, and validation of manufactured products.
Enforcement of Good Manufacturing Practice in the United States—Food and Drug Administration (FDA)
In the United States, GMP is enforced by the FDA through Current Good Manufacturing Practices (cGMP). These cover a variety of industries, including cosmetics, food, medical devices, and prescription drugs.
The FDA’s portion of the Code of Federal Regulations (CFR) is Title 21. This includes the Federal Food, Drug, and Cosmetic Act and related statutes, including the Public Health Service Act. Directives related to pharmaceutical or drug quality-related regulations appear in several Parts of Title 21, including Sections in parts 1-99, 200-299, 300-499, 600-799, and 800-1299.
Within these Sections, a few Parts are of particular importance for good manufacturing practice:
- 21 CFR Part 210—current good manufacturing practice in manufacturing processing, packing, or holding of drugs
- 21 CFR Part 211—current good manufacturing practice for finished pharmaceuticals
- 21 CFR Part 212—current good manufacturing practice for positron emission tomography drugs
- 21 CFR Part 314—for FDA approval to market a new drug
- 21 CFR Part 600—biological products
Global Enforcement of Good Manufacturing Practice
- Australia—Therapeutic Goods Administration
- Canada—Health Canada
- China—China Food and Drug Administration
- Europe—European Medicines Agency
- India—Central Drug Standard Control Organization
- Japan—Pharmaceuticals and Medical Devices Agency
- World Health Organization (WHO)—WHO Good Manufacturing Practices
GMP guidelines and regulations require manufacturers, processors, and packagers of drugs, medical devices, and food to ensure that their products are safe, pure, and effective. There are ten steps that should be taken in order to follow good manufacturing practice.
These focus attention on five key elements, or the five Ps, of GMP—people, premises, processes, products, and procedures (or paperwork).
1. Create and distribute standard operating procedures (SOPs)
These should be step-by-step instructions that document all procedures and methods that should be followed when performing specific operations. The SOPs should be written clearly, concisely, and logically so that they are easy to understand. These SOPs should be provided to all employees along with a review to ensure that they are understood.
2. Implement and enforce the SOPs
All SOPs and work instructions should be embedded in employees’ workflows to ensure a controlled and consistent performance. Oversight should be in place to prevent shortcuts or deviations from the SOPs.
3. Document work
Work performed under SOPs should be documented promptly; accurately created documentation provides an official record that is helpful for compliance. In the event of an issue, these records can support an investigation.
4. Validate work
Validate the effectiveness of SOPs by establishing documentary evidence that procedures, processes or activities, and production adhere to compliance requirements at every stage. This provides proof of consistent performance that follows written procedures.
5. Integrate productivity, quality, and safety into facilities design and equipment placement
Design with function, per the SOPs, in mind. Facilities should be designed and constructed in a way that utilizes space and equipment for optimal productivity, quality, and safety. This includes physically separating tools and materials to minimize confusion, cross-contamination, and potential quality issues.
6. Maintain equipment, facilities, and systems
Maintenance should be performed according to an approved schedule. All maintenance of equipment, facilities, and systems should be backed up with documentation that details what is done along with any issues or concerns related to safety or quality.
7. Foster job competence
Requirements for job competency should be clearly defined with associated procedures documented or called out in SOPs. Training should be provided to assure that requirements are understood and executed correctly.
Job competency should be based on consistently and efficiently producing a quality product. Achievement of and continued adherence to requirements for job competency should be documented.
8. Embed cleanliness in all activities
One of the easiest, most effective ways to prevent contamination is cleanliness. Cleaning and sanitization, as well as personal hygiene procedures, should be part of SOPs and enforced.
9. Build quality into workflows
Quality should be embedded in every step in the product development lifecycle, including controlling components, manufacturing, packaging and labeling, and distribution. A master record should be kept for every product with documentation of quality controls that were in place and details about how they were followed.
10. Conduct internal audits
Regular internal audits should evaluate good manufacturing practice compliance and performance. This not only identifies potential issues, but also helps the organization prepare for external GMP audits.
What Is cGMP?
cGMP is an acronym that came from the FDA. It means current good manufacturing practice. The goal was to make clear to manufacturers that continuous improvement in their approach to product quality was important. Manufacturers should stay vigilant with regards to changes or improvements that could impact product quality—that is, stay current with good manufacturing practice.
Most other countries assume that manufacturers will keep abreast of changes to good manufacturing practice guidelines. Therefore, they do not use cGMP. This is why cGMP and GMP are often used interchangeably.
Audits and Inspections
The agencies that regulate good manufacturing practice around the world are authorized to perform audits and inspections to confirm compliance with GMP. Whether performed internally or by external groups, audits and inspections help not only to ensure quality, but also to improve the performance of different systems, including the following:
- Building and facilities
- Customer service
- Materials management
- Packaging and identification labeling
- Personnel and GMP training
- Quality control systems
- Quality management systems
The stakes are high with agency-led inspections and audits. If serious violations are found, the FDA has the authority to recall products.
A few measures that can be taken to uphold good manufacturing practice standards and be prepared for audits and inspections are as follows.
- Appoint a quality team that focuses on enforcing and enhancing current manufacturing procedures and complying with good manufacturing practice.
- Demonstrate that instruments, processes, and activities are regularly evaluated by documenting evaluations. Among the areas that require validation in the event of a good manufacturing practice audit are the following processes:
- Process validation
- Cleaning and sanitation validation
- Computer system validation
- Analytical method validation
- Conduct internal audits to gain insights into day-to-day operations and identify areas of non-compliance.
- Provide compliance training to help staff better understand good manufacturing practice and give them the tools and skills to continually improve operations or systems according to GMP standards.
Good Manufacturing Practice Benefits Consumers and Organizations
Good manufacturing practice guidelines do not provide specific manufacturing instructions; rather, they are recommended protocols to follow, even though doing so can sometimes be cumbersome and difficult.
This set of general principles helps organizations drive the most effective and efficient quality process. In addition to ensuring consistent, acceptable product quality and safety, a well-executed good manufacturing practice can:
- Reduce risks
- Save money
- Protect reputations
- Create a competitive edge
- Drive profits
Implementing and following good manufacturing practice benefits consumers and organizations.
Egnyte has experts ready to answer your questions. For more than a decade, Egnyte has helped more than 16,000 customers with millions of users worldwide.
Last Updated: 23rd February, 2022 | <urn:uuid:c7943c05-3843-42e5-a049-8e24c244eff8> | CC-MAIN-2022-40 | https://www.egnyte.com/guides/life-sciences/good-manufacturing-practice | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00618.warc.gz | en | 0.932962 | 1,791 | 3.25 | 3 |
Drones are often compared to mosquitos: the high pitch whine, the stigma, the general level of aerial annoyance felt by the public. But now the technology is being used to counter the insects, as part of a UNICEF project in Malawi that aims to boost the fight against malaria.
Last year, the government of Malawi and UNICEF launched an air corridor to put drones to the test for several humanitarian applications. At the time it was the first corridor of its kind in Africa, and one of the few chunks of airspace in the world dedicated solely to the development of humanitarian technologies.
Several drone projects have been developed in the area, including for the transport of medical supplies, and the mapping of cholera outbreaks. The next target is battling malaria, the mosquito-borne disease that kills more than one million people every year, with the majority of those being children under the age of five.
Mapping mosquitos to fight disease
The relationship between malaria and water is an important one. Female mosquitos search for bodies of water to use as breeding grounds. Finding these areas – particularly as the seasons change – is key to keeping mosquito numbers down and saving lives, via environmental management techniques.
In an article for The Conversation, a medical research council fellow at Lancaster University and a senior lecturer in vector biology at the Liverpool School of Tropical Medicine (a member of the Malawi-Liverpool-Wellcome Trust Clinical Research Programme) describe how mapping breeding sites could be a huge step in reducing mosquito numbers.
“Not only could mapping mosquito breeding sites determine which areas are prone to malaria transmission, they could also provide the information to reduce mosquito numbers in water bodies through environmental management. Prevent mosquitoes from breeding – especially in those sparsely available sites in the dry season – and we could make a significant impact on local malaria cases,” they write.
Together with UNICEF, the researchers have been studying whether aerial images captured by drones can simplify the search for mosquito breeding grounds; it goes without saying that drones can cover large areas with ease and gather huge amounts of data in a single flight.
To date, the team has been analysing this data using the human eye – a method that, though effective, can be time-consuming and requires significant training. As a result, there may be scope for automating that process in the future using AI. Meanwhile, surveys on the ground remain an important factor.
A machine learning algorithm trained to recognise potential mosquito breeding sites could be the next logical step in the programme and help teams on the ground take action more quickly.
Internet of Business says
The UNICEF project illustrates the huge potential that autonomous systems have in the fight against malaria and other diseases. Aside from mapping, the technology has also been used to introduce sterile mosquitos to populations in South America in an effort to slow down the Zika virus.
Throughout the world, drones have helped first responders and search and rescue teams to save lives, while in Rwanda, drones are being used to deliver critical blood supplies and medicines to remote rural areas – a technology that may be deployed worldwide. | <urn:uuid:e2916fd3-b51c-4cc2-8a36-4e2182799e99> | CC-MAIN-2022-40 | https://internetofbusiness.com/unicef-drones-stop-spread-malaria/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00618.warc.gz | en | 0.944391 | 630 | 3.5625 | 4 |
Artificial Intelligence (AI) and the Internet of Things (IoT) are two technologies that play a vital role in a smart future. Integrating them is worthwhile for ordinary people and experts alike.
IoT deals with devices that interact with the Internet, whereas AI enables those devices to learn from their data and experience. It is therefore worth understanding why the two need to work together.
Have you ever imagined a world where machines are not mindless engines but intelligent, creative partners working alongside human beings? This is not just imagination; it is a realistic outcome of combining two powerful technologies: Artificial Intelligence (AI) and the Internet of Things (IoT).
(Also read: Branches of AI)
So, let's begin by introducing the two technologies, AI and IoT. After that, this blog covers the advantages of combining AI with IoT, applications of AI in IoT, why IoT needs AI, and some examples.
About Artificial Intelligence (AI) and Internet of Things (IoT)?
Artificial Intelligence, or AI, also called machine intelligence, is essentially the ability of a machine to perform tasks that are generally carried out by human beings. Nowadays, AI is used in almost every field.
Artificial Intelligence techniques have become a fundamental part of the technology industry. They help tackle many challenging problems in computer science, software engineering, and operations research. Learn more about what AI is: types, uses and working.
On the other hand, the Internet of Things, or IoT, is a web of different devices that are linked over the internet and can gather and exchange data with one another. In simple words, it refers to a system of interrelated, internet-connected objects that can collect and transfer data over a wireless network without human intervention. Explore more in detail, what IoT is.
Applications of AI in IoT
Artificial Intelligence and the Internet of Things each have their own distinctive place in the technical world, but their real power is revealed when they are used together. There are several applications across different industries that use both AI and IoT. Here are some of them:
Smart Cities - Everything is getting smarter day by day, and cities are no exception. A smart city can be built with a network of sensors attached to the physical city infrastructure. These sensors can be used to monitor the city for factors such as energy efficiency, air pollution, water use, and traffic conditions.
Digital Twins - A digital twin pairs a real-world object with its digital replica. Digital twins are used to analyze the performance of products without relying on conventional physical testing, which reduces testing costs.
Collaborative Robots - People have long wished for robots that could do their work, and collaborative robots are exactly that. They are highly sophisticated machines designed to assist human beings, ranging from a robot arm built to accomplish specific tasks to complex robots designed for demanding duties.
Smart Retailing - Retailers use AI and IoT to understand customer behaviour by analyzing customers' online profiles, in-store activity, and more. They can then deliver real-time personalized offers while the consumer is in the shop.
(Related blog: Applications of AI)
Is AI required for IoT?
Well, yes! AI is required for IoT. As per Business Insider, there will be in excess of 64 billion IoT devices by 2025, up from around 9 billion in 2017. All of these IoT devices generate a great deal of data that must be collected and mined for meaningful results. This is where AI enters the picture.
IoT is used to gather and handle the enormous amount of data required by AI algorithms. These algorithms then convert the data into useful, actionable results that the IoT devices can act on.
Consequently, this is nicely summed up in the words of Maciej Kranz, Vice President of Corporate Strategic Innovation at Cisco:
“Without AI-powered analytics, IoT devices and the data they produce throughout the network would have limited value. Similarly, AI systems would struggle to be relevant in business settings without the IoT-generated data pouring in. However, the powerful combination of AI and IoT can transform industries and help them make more intelligent decisions from the explosive growth of data every day. IoT is like the body and AI the brains, which together can create new value propositions, business models, revenue streams and services.” - Maciej Kranz
(Must read: Top 10 IoT Examples)
Advantages of integrating AI with IoT
Artificial Intelligence and the Internet of Things together make a powerful combination; together they are often referred to as AIoT. Below are some advantages of integrating AI and IoT:
Increased operating efficiency and productivity - Companies that incorporate AI into IoT applications can achieve greater operating efficiency. AI can process data and make forecasts in ways people cannot, analyzing enormous data sets in a short timeframe.
Deeper customer relationships - The benefits of integrating AI and IoT are not limited to the business itself. Executed correctly, the direct outcome is a more personalized experience for customers.
Improved security and safety - AI and IoT can each improve security on their own, but merged together they contribute an extra layer of protection.
(Recommended blog: Role of technology in business)
Other advantages include cost savings, the potential to protect lives, and an improved work culture.
Recent Statistics at the Edge
Some statistics show why Artificial Intelligence and the Internet of Things are empowering the world.
According to a report by Gartner, 80% of all enterprise IoT projects will incorporate AI as a significant component by 2022.
A report by Statista predicts that, by 2025, there will be roughly 44 billion IoT devices around the world.
Some statistics to show AIoT integration
Now it's time to conclude, and hopefully any doubts about the integration of AI and IoT have been cleared up. AI and IoT each perform their tasks very well, but together they are far more capable. The day is not far off when we will see how AI and IoT are advancing the entire world.
AI and IoT are inseparable. A large part of the idea behind Artificial Intelligence is to extract greater insight from IoT device data. IoT is already disrupting different businesses and affecting human lives in several ways.
We have seen above how AI is required for IoT, its many advantages and applications, and there is much more that AI and IoT will introduce in the coming days. Real-life examples include self-driving cars, smart thermostat solutions, retail analytics, robots in manufacturing, and so on.
(Referred read: Role of IoT in Manufacturing Industry)
Altogether, IoT combined with AI technology can lead the way to a new level of outcomes and experiences. To get better value from your network and transform your industry, you should combine AI with the incoming data from IoT devices.
New Training: Security Concepts
In this 10-video skill, CBT Nuggets trainer John Munjoma covers fundamental security concepts including common vulnerabilities and access control models. Watch this new Cisco training.
Watch the full course: Cisco Certified CyberOps Associate
This training includes:
54 minutes of training
You’ll learn these topics in this skill:
The CIA Triad
Comparing Security Deployments Part 1
Comparing Security Deployments Part 2
Describing Security Terms Part 1
Describing Security Terms Part 2
Comparing Security Concepts
The Principles of Defense In-Depth Strategy
Comparing Access Control Models
Common Vulnerability Scoring System
The 5 Tuple Isolation Approach and Data Visibility
How Do Access Control Models Work?
There are three different models for describing how to control access in your network. Understanding them can be easier when you imagine your network as a real, physical building.
The first access control model is Discretionary Access Control (DAC). Imagine your computer network like a building, Discretionary Access Control would be like assigning every employee an office, and then telling them to hand out keys to their offices and filing cabinets. It’s up to each employee to decide who gets to come and go into their office and who can go into which drawers.
The second model is Role-Based Access Control. It starts with grouping everyone according to what work they do — finance, sales, IT, etc. Then, every door gets locked and managers and supervisors of each group are responsible for handing keys out to their team members — according to what rooms they might need to get into.
The last access control model is Mandatory Access Control (MAC). In our example, you’d still have employees grouped according to their job, and then you’d assign each room a security level plus a categorization of what sort of work relates to it. Then, you’d give each employee a security level. Each time an employee visits a room, a check gets done to confirm they have the right security level and work in the right category.
These are oversimplifications, but can help illustrate the different levels of effort each access control model requires from a cybersecurity perspective. | <urn:uuid:d164fe74-b29e-4702-a9df-f1a968dba5a9> | CC-MAIN-2022-40 | https://www.cbtnuggets.com/blog/new-skills/new-training-security-concepts | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00018.warc.gz | en | 0.90598 | 460 | 2.609375 | 3 |
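As a rough illustration of the difference between these models, the sketch below contrasts a role-based check with a mandatory access control check that compares security levels and categories. The roles, levels, and room names are invented for the example and are not drawn from the course.

```python
# Role-Based Access Control: access follows the role a user holds.
ROLE_PERMISSIONS = {
    "finance": {"finance_office", "records_room"},
    "it": {"server_room", "records_room"},
}


def rbac_allows(role, room):
    return room in ROLE_PERMISSIONS.get(role, set())


# Mandatory Access Control: access requires sufficient clearance
# and membership in the room's category (compartment).
ROOM_LABELS = {
    "server_room": {"level": 3, "categories": {"it"}},
    "records_room": {"level": 2, "categories": {"finance", "it"}},
}


def mac_allows(user_level, user_categories, room):
    label = ROOM_LABELS[room]
    return user_level >= label["level"] and bool(label["categories"] & set(user_categories))


if __name__ == "__main__":
    print(rbac_allows("finance", "records_room"))     # True: the role grants the room
    print(mac_allows(2, ["finance"], "server_room"))  # False: wrong level and category
    print(mac_allows(3, ["it"], "server_room"))       # True: level and category both match
```

In the DAC model, by contrast, the permission tables above would be owned and edited by the individual resource owners rather than defined by a central policy.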
As most people know, textbooks can be costly, especially since schools constantly need to update them as new editions are printed so frequently. iPads allow access to the resources and texts required in the classroom, all in one portable device.
Although an iPad is significantly more expensive than a textbook, it can be a better long-term investment.
Brookfield High School in Connecticut is one of many U.S. schools distributing iPads to their students instead of individual textbooks. Each iPad is equipped with all the readings and resources students need, so they don't have to lug a heavy backpack to school every day. In addition, calculators, translators, dictionaries and other teaching aids can all be found on an iPad, so parents have fewer school supplies to buy.
One thing that should not be undermined in all of this is the importance of teachers. While iPads in classrooms are efficient, they are not stand-ins for good teachers. The iPads are meant to be resources that will aid teachers, not replace them. Technology and intelligence are not synonymous, although they can co-exist very nicely!
Where is the New What
Spatiotemporal data describes where objects are and where they are moving. Prime examples are streams of IoT data from mobile devices, social platforms, static or moving sensors, satellites, wireless, and video feeds from drones and closed-circuit TVs. This data comes in the form of a reading, a timestamp (t), and location coordinates (x & y).
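As a minimal sketch of what one such record might look like (the field names below are illustrative, not a schema from any particular product):

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class SensorReading:
    value: float   # the sensed reading, e.g. temperature or speed
    t: datetime    # timestamp of the observation
    x: float       # longitude (or easting)
    y: float       # latitude (or northing)


reading = SensorReading(value=21.4, t=datetime.now(timezone.utc), x=-122.41, y=37.77)
print(reading)
```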
Gartner reports the line between IoT-enabled sensors and location tracking is increasingly blurring as providers face increasing demand from clients to address location data along with other sensing information, such as asset status, direction of movement, temperature, and humidity.
These changes in time & space data are driving digital transformation in the areas of Automotive (e.g., connected car), Public Health (e.g., monitoring spread of disease) and Safety (e.g., threat hunting), Security (e.g., common operational picture), Logistics (e.g., fleet monitoring), Environment and Climate (change detection), Retail (e.g., proximity marketing), and many others.
Existing databases (even with special object-relational extensions for spatiotemporal data) have struggled to keep up with the scale, speed, and specialized analytics required for modern location intelligence workloads. They were never designed to handle the variety of fusion steps and aggregations, within an acceptable latency profile, required to power downstream value-added location-aware services.
A new paradigm, commonly referred to as vectorized databases, is radically reducing the complexity and increasing the performance of spatiotemporal workloads. Vectorization is extremely efficient at calculating changing geometry over time.
Data is Changing
Real-time geospatial data is proliferating as prices continue to fall dramatically on the technology that generates this data.
| | Then | Now |
|---|---|---|
| Characteristics | Static, authoritative data | Streaming, noisy data |
| Sources | Surveys (e.g., census, polls, etc.), satellites, and transactions | Sensors, smartphones, telemetry, drones, closed-circuit TVs, satellite constellations, bluetooth tags |
| Volumes | Megabytes to gigabytes | Terabytes to petabytes |
Uses are Changing
New and innovative uses of real-time data were first pioneered by leading tech companies such as Uber and Tesla. But those demands are being felt by many other companies and industries.
| | Then | Now |
|---|---|---|
| Persona | GIS Specialists | Developers, Architects, Analysts, GIS Specialists |
| | Starbucks, Walgreens, Gallup Poll, Municipalities | Uber, Ford, Apple, Verizon, Tesla, U.S. Air Force, Amazon, Liberty Mutual, British Petroleum |
Tools are Changing
Modern real-time geospatial analytics tools are designed to make it easy to work with the volume, speed and noise of moving data.
Critical Capabilities for Real-time Location Intelligence
Sensors have evolved from taking readings over time to taking readings over space and time. Understanding this trend and its resulting impacts is essential for innovators seeking to create value in the next wave of IoT products and services. Learn how to harness modern location intelligence techniques for IoT. Get the White Paper
Kinetica: For the Next-Generation of Geospatial Applications
Real-time Ingest for IoT
Over 130 Geospatial Functions
Solve Routes & Relationships
Server Based Visualizations
Ease of SQL
Fast Lookup and High Concurrency.
What Can You Build with Kinetica?
Smart Cities Advanced Monitoring
Track Entities in Real-time
Connect Multiple Feeds through Time with ASOF Joins
Book a Demo!
Sometimes marketing copy can sound too good to be true. The best way to appreciate the possibilities that Kinetica brings to large-scale geospatial analytics is to see it in action, or try it with your own data, your own schemas and your own queries.
Contact us, and we can set you up with a demo and a trial environment for you to experience it for yourself. | <urn:uuid:f585785f-3e34-4369-a9a5-aabd881304ea> | CC-MAIN-2022-40 | https://www.kinetica.com/solutions/modernize-location-intelligence/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00018.warc.gz | en | 0.873273 | 916 | 2.703125 | 3 |
As Intel and Advanced Micro Devices (AMD) race toward smaller, faster chips, the fierce competition is taking its toll as both companies struggle to maintain profit margins by using more efficient manufacturing.
At the tiny scale of 45-nanometer transistors and wires, a single speck of dust on a 300-millimeter silicon wafer can destroy an entire processor. Lower manufacturing yields cut into a company’s profit margin, reducing the number of functional chips they can sell from each wafer.
“As an industry, we’re running out of gas at 45-nanometer geometry,” said Nick Kepler, vice president of logic technology development for AMD. “We’ve been unable to scale the size as much as we used to because of leakage. We’re still putting transistors closer together, but not shrinking the gates. So geometry shrinks can hurt you now, which they haven’t in the past.”
One problem is that 65-nanometer and 45-nanometer chip components have shrunk so far that they are now smaller than the wavelength of the light used to carve those features onto silicon chips. To fight that trend, AMD and IBM said Tuesday they will use three new technologies to boost manufacturing efficiency.
By the second half of 2008, AMD expects to be producing 45-nanometer chips using immersion lithography, ultra-low-k interconnect dielectrics and enhanced transistor strain. The companies made the announcement at the International Electron Device Meeting in San Francisco.
By inserting liquid instead of air between the projection lens and the wafer, AMD and IBM can reduce the wavelength of the light, giving them a 40 percent gain in resolution compared to dry lithography and allowing them to sell a higher percentage of the hundreds of chips on every wafer, Kepler said.
Likewise, IBM plans to produce 65-nanometer server chips by the second half of 2007, then shrink both server and gaming chips to 45 nanometers at some point in the future, said Gary Patton, vice president of technology development for IBM’s semiconductor research and development center. AMD and IBM have cooperated on chip development since 2003, and last year extended their contract through 2011 to reach the 32-nanometer and 22-nanometer chip generations.
In response, Intel says that it beat AMD to the market with 65-nanometer chip sales by more than a year, and has already begun producing samples of its 45-nanometer “Penryn” quad-core chip for notebooks, desktops and servers. Intel plans to ship those chips in the second half of 2007.
Even if AMD improves its manufacturing, Intel is poised to continue its lead in shrinking chip geometries because it has shorter gate lengths and static RAM (SRAM) cell sizes, said Rob Willoner, a technology analyst with Intel’s technology and manufacturing group.
“People have different meanings when they talk about the 65-nanometer process. There’s not an industry standard,” he said. AMD’s Opteron chip was a commercial success because its excellent design compensated for poor dimensions at the 90-nanometer scale, he said.
Chips with a greater cache of local memory stored in SRAM cells can avoid the time-intensive process of retrieving data from external sources. At the same conference in San Francisco this week, Intel announced plans to use “floating-body cells” to further increase the amount of on-chip memory, according to published reports.
Despite the manufacturing challenges, chip companies continue to build smaller features because of the tremendous performance benefits. By shrinking from 90 to 65 nanometers, Intel chips have double the transistor density while reducing leakage fivefold (at constant performance) and requiring 30 percent less switching power.
Check out our CIO News Alerts and Tech Informer pages for more updated news coverage. | <urn:uuid:c29bdf54-fc12-4d15-894a-d41939ac409d> | CC-MAIN-2022-40 | https://www.cio.com/article/265250/infrastructure-amd-seeks-efficiency-in-making-45nm-chips.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00018.warc.gz | en | 0.919802 | 801 | 2.609375 | 3 |
Cyber warfare involves weaponizing hacking skills to either initiate attacks or prevent different types of cyber attacks. Although hacking started out as localized, relatively modest attacks on certain individuals or systems, as profiteers, organized cyber crime conglomerates, and nation-states have noticed the unique strategic advantage cyberattacks create, more and more notorious attacks have been happening.
In many cases, hackers looking to get the respect of the hacking community have also launched high-profile attacks unilaterally, earning the respect of both cyber criminals and the public.
When Did Cyber Warfare Start? History of Cyber Warfare
Cyber warfare began in 2010 with Stuxnet, which was the first cyber weapon meant to cause physical damage. Stuxnet is reported to have destroyed 20% of the centrifuges Iran used to create its nuclear arsenal.
Then, between 2014 and 2016, Russia launched a series of strategic attacks against Ukraine and the German parliament. During the same period, China hacked 21.5 million employee records, stealing information from the U.S. Office of Personnel Management.
In 2017, the WannaCry attack impacted upwards of 200,000 computers in 150 countries. The attack targeted Windows computers with ransomware. Later in 2017, the NotPetya attack, which originated in Ukraine, destroyed files, resulting in more than $10 billion in damage.
The Most Notorious Cyberattacks in History
There have been countless cyberattacks throughout the years, but the following cyber warfare examples have had a significant impact on the cyberattack landscape, as well as how companies and countries defend themselves against attackers.
Robert Tappan Morris—The Morris Worm (1988)
Robert Tappan Morris created the first internet computer worm in history while he was a student at Cornell University. Although Mr. Morris claimed he released it to explore the size of cyberspace, the worm quickly spread out of control and caused between $10 million and $100 million in damage repair costs.
MafiaBoy's DDoS Attacks (2000)
A Canadian high schooler, known online as MafiaBoy, launched a distributed denial-of-service (DDoS) attack on several commercial sites, including big players like CNN, eBay, and Amazon. The hacks resulted in an estimated $1.2 billion of damage.
Google China Attack (2009)
In 2009, in an act of cyber espionage, hackers were able to get inside Google’s servers and access Gmail accounts belonging to Chinese human rights activists. Upon further investigation, authorities discovered that many Gmail accounts of people in different countries had been penetrated.
A Teenager Hacks the US Defense Department and NASA (1999)
A 15-year-old named Jonathan James was able to get inside the U.S. Department of Defense’s (DOD) computers and install a backdoor within its servers. He then used the backdoor to intercept internal emails, some of which had usernames and passwords inside.
James then used his access to the DOD’s system to steal NASA software used to support the International Space Station.
Hacking a Radio Phone System to Win a Porsche (1995)
A man named Kevin Poulsen heard of a radio station contest where you could win a sports car. He ended up winning a Porsche 944 S2 by being the 102nd caller. He accomplished this feat by hacking the phone system, locking out other callers, ensuring his victory. He ended up getting sentenced to five years in prison.
The Future of Cyber Warfare: Best Practices for Prevention
Cyber warfare is likely to continue and grow, particularly because of the interconnected nature of people’s lives. In addition to business systems, entertainment, and social media, the infrastructural components of cities and countries are also dependent on networks. When hacked, these can become an Achilles' heel—a weak spot that would not otherwise exist, which creates tempting opportunities for cyber warfare soldiers and the organizations and countries that support them.
Even though the opportunities presented by cyber war are vast—and likely to inspire new methods of attack—organizations can do a lot to minimize the chance of being impacted by an attack:
- Use available tools. It is no coincidence that phishing scams have become popular. Phishing involves an attacker tricking someone into divulging sensitive credentials. Because companies have been using next-generation firewalls (NGFWs), web application firewalls (WAFs), intrusion detection and prevention systems, antimalware, and other tools, stealing login credentials has become a go-to option. Using the latest tools immediately takes your organization off the list of cyberattackers’ low-hanging fruit.
- Increase cyber awareness. You can use famous cyberattacks and their methodologies, as well as the most recent cybersecurity statistics, to educate employees about what to look out for. An event does not have to be the biggest cyberattack in history to hurt your organization. If employees know the signs and how to be cyber-responsible, you can significantly reduce the chances of a successful attack.
- Segment your networks. Some of the most dangerous cyberattacks were successful only because the networks they targeted were not properly segmented. Keep sensitive data and anything else attractive to cyber criminals separate from the rest of the network and each other. This way, an east-west spread of an attack will do less damage.
How Fortinet Can Help
The FortiGate Next-Generation Firewalls (NGFWs) give your network and its users advanced protections that can prevent a cyberattack from being successful. Also, because FortiGate NGFW is integrated with the Fortinet Security Fabric, you can set it up as a central element of your network, making it possible to manage traffic with a FortiGate NGFW, keeping all users and devices more secure.
FortiGate NGFWs use advanced packet inspection, powered by a dedicated security processor. They can prevent zero-day attacks, as well as all those indexed by FortiGuard, the Fortinet threat intelligence system. Recent cyberattacks show the damage a successful incursion can inflict. With a FortiGate NGFW in your cyber defenses, you can thwart even some of the most advanced attacks.
What is the biggest cyberattack in history?
The biggest cyberattack in history was arguably the Jonathan James attack on NASA and the U.S. Department of Defense in 1999, especially due to the fact that the attack compromised such trusted, high-profile organizations.
What is the most famous cyberattack?
The most famous cyberattack is the Google China hack in 2009.
Where did cyber warfare originate?
Cyber warfare may have originated in the United States when Americans supposedly took out Iranian nuclear facilities. | <urn:uuid:f66f4b09-1756-431c-9ce1-ef7a314f4068> | CC-MAIN-2022-40 | https://www.fortinet.com/lat/resources/cyberglossary/most-notorious-attacks-in-the-history-of-cyber-warfare | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00018.warc.gz | en | 0.950726 | 1,362 | 3.359375 | 3 |
by Sam Raincock, IT and telecommunications expert witness
· Examine this laptop and establish if it has accessed the website http://www.forensicfocus.com.
· Examine this mobile telephone and determine if it sent a text message with the content “Forensic Focus”.
Let’s look at the first example. In the event there is “no evidence of access to http://www.forensicfocus.com found”, what remains is proving (or commenting on) a negative. However, just because you do not find any evidence of connections to the site, does this imply no connections ever occurred?
There are three main possibilities to consider. Firstly, the techniques used in your examination did not facilitate finding the evidence even though it is present. For example, if we simplistically relate this to an examination where only the live Internet history is examined initially, it is possible that a subsequent examination could determine some deleted Internet history and further evidence may be established.
Secondly, you did find the evidence but were unable to determine how to interpret it so you didn’t establish its meaning. For example, you found a partial registry file in deleted space but did not have the knowledge to interpret it and extract the evidence.
Thirdly, there is no evidence on the device of any connections occurring to http://www.forensicfocus.com. So no connection ever occurred?
Even given the last situation, with a computer, often the absence of any evidence is not evidence that it was never present. This is due to the fact that on a computer, data can be deleted and overwritten. Hence, it is possible that an event occurred but evidence of it is no longer available.
It’s not what you find, it’s what you don’t find
In the process of reviewing evidence reports, I often see statements made about something not being present or the inability to do something:
1. No video files were stored on the mobile telephone.
2. It’s not possible to determine how the files found in Shadow Copies came to reside there.
3. At 15:00 no activity was occurring on the computer.
4. There is no occurrence of the word “Forensic” on the memory card.
What do these statements actually mean? And more importantly, how will they likely be interpreted by a legal professional?
Let’s look at the first statement. “No video files were stored…..” It’s a strong statement that in its current wording would likely be interpreted as factual by a legal professional i.e. there are no video files. What happens when another examiner analyses the device using a different examination technique/software finding video files? It would give rise to an interesting case conference!
Let’s also consider points 2 to 4 from the above list:
· “It is not possible to determine how the files found in Shadow Copies came to reside there.” So why is it not possible? Because in the past, we didn’t know how to do it! However, it was not impossible – it was just that the writer did not know how to interpret the evidence they were examining.
· “At 15:00 no activity was occurring on the computer.” This statement may be true if you can prove it wasn’t switched on. However, what about a computer that is running, but you have not found any evidence (yet) around the time of interest? In this example, what would happen if a user was editing a Word document that they created at 13:00 and finished working on at 17:00?
· “There are no occurrences of the word “Forensic” on the computer.” What about if you search for “ForensicFocus”? Will the search terms you use return different results? In this example, depending on the search heuristic being implemented, will your results differ?
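To make the point about search heuristics concrete, the short sketch below (a hypothetical illustration, not the tooling used in a real examination) shows how the choice of search term and case handling changes whether a hit is reported at all:

```python
# Hypothetical extracted text from an exhibit.
data = "Report drafted after visiting forensicfocus.com for reference material."

for term in ["Forensic", "ForensicFocus", "forensicfocus"]:
    case_sensitive_hit = term in data
    case_insensitive_hit = term.lower() in data.lower()
    print(f"{term!r}: case-sensitive={case_sensitive_hit}, case-insensitive={case_insensitive_hit}")
```

A case-sensitive search for "Forensic" reports nothing here, while a case-insensitive search does; neither result, on its own, proves what is or is not present on the device.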
Dealing with a negative finding
The ability to deal with a negative finding is what is important. It is my belief that the report produced should use appropriate language to describe what is meant by not finding something. This makes it clear to the reader the significance of a negative finding as well as protecting the writer in the event their original statement is disproven. To do this you firstly need to consider what your negative finding means. Why have you not found the evidence? Could you examine the device further and find a partial file? Could someone else? Are the search terms you used the reason why you have not found what you were looking for? Do you trust the completeness of any scripts you are using?…
Let us take a case scenario where an examiner is asked to find sound recordings on a mobile telephone. Furthermore, let us say that the telephone was examined and it was concluded that it did not contain any sound files. The telephone was then re-examined by another examiner who, using different techniques, concluded a deleted sound recording was present but it is not possible to date its creation. Another examiner analyses the evidence and finds the sound recording and determines it was possible to date the original sound file. If the first two people have concluded a negative – they have both been disproved. What happens now to the evidence originally presented by the other two examiners?
So how can things be phrased to protect the examiner and also to provide a more objective view?
“I did not find any occurrences of ForensicFocus” may become “The searches X, Y, Z I performed using A did not find any occurrence of ‘ForensicFocus’.”. You could explain the search process in your background information section so that it is clear what this process may or may not find.
“No activity was recorded on the computer at 15:00” may become “The examinations I performed did not find any evidence of activity at 15:00. However, it should be noted that the way in which a computer operates means that……”. You could discuss how the absence of information does not prove an event did not occur – perhaps give an example that people can relate to, something like the editing of a Word document and the evidence this may produce.
There is nothing to see here, please move on!
Two things I personally consider before starting any statement: 1) there are people smarter and more knowledgeable than me, and 2) very few things are impossible – we just don’t know how to figure them out yet. I then start writing……
After that, my advice is to review the meaning and ensure the avowals you make (or are present in your report templates) are not open to misinterpretation.
So, the next time you are asked to consider if a device contains anything of evidential value and your examination fails to uncover anything of interest, would you really write “Nothing of evidential value was present on this device” in your report?
Click here to discuss this article.
Sam specialises in the evaluation of digital evidence from the analysis of telephones to determining the functionality of software systems (and almost anything in-between). She also provides overview assessments of cases, considering different sources of evidence in the context of a whole incident to highlight inconsistencies, particularly those due to digital devices. Sam can be contacted directly on +44 (0)1429 820131, [email protected] or http://www.raincock.co.uk.
This post explains the complete history of the ERP software system, the path of its evolution, and its future.
Even though ERP can be a little confusing to understand, it is worth understanding for all businesses, especially manufacturing.
ERP software systems in manufacturing industries have proven effective and efficient.
The history of ERP dates back to 1960!
Two aspects of the history of ERP
From a business perspective, ERP has expanded from coordinating manufacturing processes to integrating enterprise-wide back-end processes.
From a technological aspect, ERP has evolved from legacy implementations to a more flexible, tiered client-server architecture.
Inventory Management & Control (the 1960s)
The history of ERP begins with inventory management and control.
In the 1960s, manufacturing industries found that they required a system that should manage, monitor, and control their inventory.
Inventory Management and control combine information technology and business processes to maintain the appropriate stock level in a warehouse.
The activities of inventory management include,
- Identifying inventory requirements
- Setting targets
- Providing replenishment techniques and options
- Monitoring item usages
- Reconciling the inventory balances
- Reporting inventory status
Material Requirements Planning (MRP)(the 1970s)
The next stage of the history of ERP is material requirements planning.
In the 1970s, material requirements planning evolved to meet the manufacturing industries’ needs.
Material requirements planning (MRP) utilizes software applications for scheduling production processes. MRP generates schedules for operations and raw material purchases.
Scheduling is based on,
- Production requirements for finished goods
- Structure of the production system
- Current inventory levels
- Lot-sizing procedure for each operation
Manufacturing Resource Planning (MRP II) (the 1980s)
The last stage of the history of ERP is manufacturing resource planning.
In the 1980s, vendors added more manufacturing processes to MRP to make the process easier and more accurate. And this new system is named manufacturing resource planning (MRP II).
Manufacturing Resource Planning or MRP II utilizes software applications, Applications for coordinating manufacturing processes. Processes from product planning, parts purchasing, and inventory control to product distribution.
Enterprise Resource Planning (ERP) (the 1990s)
For the first time in the 1990s, The Gartner Group used the term ERP.
Enterprise resource planning or ERP uses a multi-module application software system. Software for improving the performance of the internal business processes.
ERP systems often integrate business activities across functional departments.
- Product planning.
- Parts purchasing.
- Inventory control.
- Product distribution, fulfillment, to order tracking.
ERP software systems may include application modules for supporting,
- Human resources.
During this period in the history of ERP, big corporations implemented it. However, most small and medium-scale businesses are left out due to the higher upfront costs.
Web Functionalities with Internet (ERP II) (the 2000s)
Interaction of ERP with other application suites is enabled in ERP II. An example is integrating with CRM systems.
Technological advancement accessing information using internet web-browsers and mobile devices was made possible.
ERP II adapted technological advancement with Services Oriented Architecture (SOA).
Cloud-based ERP (the 2010s)
Business applications are delivered as a Software as a Service (SaaS) model. Servers are deployed on the cloud and accessed with the rest APIs. Android, iOS, and browser applications are developed for delivering ERP software in the SaaS model.
It is helping businesses of all scales start using ERP systems since the upfront cost of cloud ERP systems is relatively minor.
Most of the prominent top ERP vendors are delivering services over the cloud.
Evolution of Open Source ERP solutions
Along with commercial vendors, open-source ERP systems are also evolved. These systems are mainly catering to the requirements of small and medium-scale businesses.
Since there is less upfront cost involved while implementing these systems, businesses with less budget could also afford it.
There is a surge in service providers who help implement and customize open source ERP solutions.
Difference between MRP and ERP
Here are some differences between MRP and ERP.
|It means material requirement planning||It means enterprise resource planning|
|It is a solo software||It can integrate with other systems or software easily|
|You can integrate it with other software, but that is challenging.||It combines with other software or modules without any difficulty.|
|It suits manufacturing industries.||It suits all industries and huge enterprises because it can fulfill the requirements of all the departments of large industries with its modules.|
|Types of its users are minimum because only the manufacturing department uses it.||Types of its users are maximum with extended users in different departments.|
|It is less expensive||It is more expensive|
What is the significant difference between open source ERP and commercial ERP?
The significant difference between open source ERP and commercial ERP is source code. In an open-source ERP system, source code is publicly accessible. But in a cloud system, you have to pay to get the source code license.
|Open Source ERP||Commercial ERP|
|You can customize the code, rewrite the code, and generate a new code version.||You can not edit the code.|
|It suites industries with less required functionalities||It suits big sectors that need a wide variety of features.|
|It is entirely free. You need to pay only for services.||Upfront costs and subscription charges are included.|
Future of ERP systems
Compared with the history of ERP, its future ERP trends are more dynamic due to the advancement in technology.
- Due to the reduction of computation and data storage costs, collecting every minute detail of business events is possible. In addition, it opens up the possibility of extensive data analysis and advanced reporting.
- Machine learning can help suggest better business decisions based on previous data and industry benchmarks.
- Automation of data-driven decision-making will take the front seat with the help of artificial intelligence.
- For business transactions between multiple parties, they are establishing data integrity with blockchain technology.
- To avoid frictions due to physical proximity, virtual reality for better interactions.
- Jobsite management using 5 G-enabled smartphones
- Internet of things (IoT) for better data exchange between human-to-machine and machine-to-machine
The advancement in technology has always accompanied the history of ERP. It continues to boost business growth.
With SaaS-based cloud ERP systems, more and more companies can start using enterprise resource planning solutions in their business operations.
ERP system evolved to fulfill the requirement of the manufacturing industry. It is vital for all types of business with its broad and flexible features.
Small, medium and enterprise industries require ERP software systems to get centralized, real-time data. | <urn:uuid:86d151b6-ee4a-4e29-a4d7-8db9be1c2324> | CC-MAIN-2022-40 | https://www.erp-information.com/history-of-erp.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00018.warc.gz | en | 0.893851 | 1,501 | 2.953125 | 3 |
The term “digital transformation” has become increasingly commonplace in recent years. With lockdowns in place, many people and organizations have had to rely on digital technologies for various aspects of their daily lives. These technologies have become essential, with global spending on digital transformation estimated to reach $6 trillion by 2023.
And as the world continues to digitize at an increasingly rapid rate, one question that develops is:
How do we manage the vast digital data we receive?
The answer to that question comes from Digital Asset Management (DAM). In this blog, we will explore the concept behind DAM, its features, the importance of DAM in an enterprise, and the software associated with it.
What is Digital Asset Management and Its Features?
Before we get into the meaning and explanation of Digital Asset Management, it is essential to know what a digital asset is.
A digital asset, in simple terms, is content data that can be created and stored digitally. Nowadays, many items are classified as digital assets – documents, photos, graphics, animations, audio, video, books, and even digital forms of currency.
However, it is essential to note that a digital asset is not regarded as an asset if you don’t own its rights.
DAM is the process of handling and controlling digital assets over a single central platform. The core concept of DAM is to bring together all the digital content under one roof to shelter and protect. A DAM system offers an efficient and streamlined approach to handling data that includes activities like
Sorting and Organising:
The first step in managing digital assets is to organize them into a centralized library. This organization involves sorting out the data and arranging it systematically.
A well-organized library helps users find the desired information quickly by providing various search options, such as keyword or attribute searches.
Sharing and Collaborating:
Since all digital assets are in one medium, sharing and collaborating can be easily done over multiple options like mail or brand portals.
Other features include creating user groups and adjusting data access control over the data, data version control, and flexible approval workflows. The versatility of these digital asset management system features makes it a robust infrastructure.
How Does Digital Asset Management Work?
Based on the Digital Asset Management definition, maximizing the value of an organization’s digital assets and enhancing its operational efficiency is the main objective of such a system. DAM is not just storage space but is a massive consolidated process that involves several activities.
Depending on the organization’s needs, DAM systems come in various shapes and sizes. Some organizations may require these systems to simply store and arrange vast amounts of data, while others may need it to improve communication and collaboration within their organization.
But while the set goals may differ from organization to organization, the core functionality of DAM remains the same.
- Collect digital assets, arrange, sort, and securely store and manage them.
- Make them searchable, easily accessible, and shareable to boost collaborative problem-solving.
The use of metadata and information of a digital asset like file format, date created, modified, name of the creator, etc. make these functions possible.
Why is Digital Asset Management Important for an Enterprise (Benefits)?
Data was and will continue to be a valuable asset in an enterprise set-up. Every year, a lot of digital data is lost per organization. With rapid digitization, a data asset management system must be necessary to protect the investment in creating digital assets.
A DAM system eliminates data loss and duplicate data, which are some reasons for data inconsistency leading to reduced efficiency.
Other than keeping digital assets well organized, secure, and easily searchable, reasonable digital asset management solutions also help improve workflows and facilitate fundamental data analysis.
The streamlined, straightforward approach improves communication and collaboration within an enterprise, building brand consistency and presence and resulting in much better ROI.
Top 3 Digital Asset Management Challenges and Its Solutions
Even the best digital asset management solutions must be well assessed and monitored to ensure your organization gets the full benefit. Here are the top three most common challenges when handling a DAM system:
1. Lack of Data Governance
Not instilling a sound data governance strategy can injure an organization’s data security infrastructure. It leads to inconsistent handling of digital assets, and without any security norms, it could lead to legal injunctions. Therefore, it is vital to set up fundamental data governance laws to ensure proper data usability and compliance.
2. Lack of Integration
Integration is one of the benefits of a digital asset management system that enables users to access and share content readily.
When building a DAM system, identifying the existing software and platforms must be looked into extensively. Suppose you are planning to work on a new platform or software or looking to switch. In that case, it is essential to ensure this change is updated on the DAM system.
3. Poor Metadata Tagging
Metadata is critical in organizing, arranging, and searching data. If the metadata of digital assets is poorly tagged, a DAM system would never execute to its full potential.
Maintaining version control would be difficult, leading to data redundancy, meaning employees would be wasting time working on the same assets.
Planning what keywords will be generated when creating and storing particular digital assets will help better manage and customize metadata later.
What is Digital Asset Management Software and Its Benefits?
Digital Asset Management Software is a centralized portal that helps create, manage, arrange, search and share digital assets. DAM software is a streamlined process management tool aiming to seamlessly access, secure, and collaborate with valuable organization-relevant files, be it documents, ppts, designs, graphics, audio and video, and many more.
Here are some more software-specific benefits –
- The centralized collection and storage of digital assets mean that files can be easily searched, accessed, and reused, making it a good resource allocation platform.
- Saves time and reduces production costs by preventing using outdated and irrelevant assets. This would eventually improve brand consistency and integrity.
- With all digital assets under one roof, DAM software eliminates the risk of losing valuable content.
- Due to version control, every employee using the software is up-to-date with the latest version of the content.
- Lastly, well-integrated DAM software will allow you to seamlessly share and move files across various digital platforms. This helps massively in managing workflow for a particular project.
With a wide variety of data being generated daily, organizations are looking for solutions and innovations to assist them in maneuvering through this digital playfield.
Suppose your organization deals with enormous amounts of data and content. In that case, ensuring you have a well-structured Digital Asset Management system is a no-brainer in today’s business world.
It makes sense to invest in a DAM, especially with data management’s increasing importance and benefits.
But rather than asking what the best digital asset management software is, you may want to ask which DAM software suits your specific requirements.
Vaultastic is the data management expert, with many big names trusting us with their critical business data. We have many solutions and services, including the best digital asset management tools and software. Get in touch with our experts to know more about our cloud data protection solutions and services. | <urn:uuid:a7707e9f-9f31-4d9d-bafa-aad32c59b981> | CC-MAIN-2022-40 | https://vaultastic.mithi.com/blogs/what-is-digital-asset-management-and-why-it-is-important/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00018.warc.gz | en | 0.917677 | 1,504 | 2.796875 | 3 |
More and more organisations today have some airgapped computers, physically isolated from other systems with no Internet connection to the outside world or other networks inside their company.
Security teams may have disconnected from other networks in order to better protect them, and the data they have access to, from Internet attacks and hackers.
Of course, a computer which can’t be reached by other computers is going to be a lot harder to attack than one which is permanently plugged into the net. But that doesn’t mean it’s impossible.
Take, for instance, the case of the Stuxnet worm, which reared its ugly head in 2010. Stuxnet is thought to have caused serious damage to centrifuges at an Iranian uranium enrichment facility after infecting systems via a USB flash drive and a cocktail of Windows vulnerabilities.
Someone brought an infected USB stick into the Natanz facility and plugged it into a computer allowing it to spread and activate its payload.
And it’s not just Iran. In the years since, we have heard of other power plants taken offline after being hit by USB-aware malware spread via sneakernet.
So, we accept that although it may be more difficult to infect isolated airgapped computers, it isn’t impossible.
But what about exfiltrating data from computers which have no connection with the outside world?
Researchers from Ben-Gurion University in Israel think they have found a way to do it, hiding data in radio emissions surreptitiously broadcast via a computer’s video display unit, and picking up the signals on nearby mobile phones.
And, to prove their point, they have released a YouTube video, demonstrating their proof-of-concept attack in action:
In the video, which has no sound, the researchers first demonstrate that the targeted computer has no network or Internet connection.
Next to it is an Android smartphone, again with no network connection, that is running special software designed to receive and interpret radio signals via its FM receiver.
Proof-of-concept malware, dubbed “AirHopper”, running on the isolated computer ingeniously transmits sensitive information (such as keystrokes) in the form of FM radio signals by manipulating the video display adaptor.
Meanwhile, AirHopper’s receiver code is running on a nearby smartphone.
“With appropriate software, compatible radio signals can be produced by a compromised computer, utilizing the electromagnetic radiation associated with the video display adapter. This combination, of a transmitter with a widely used mobile receiver, creates a potential covert channel that is not being monitored by ordinary security instrumentation.”
As the researchers revealed in their white paper, the phone receiving the data can be in another room.
Now, you may think that if AirHopper is fiddling with the targeted computer’s screen that this could be noticed by any operator in front of the device. However, the researchers say they have devised a number of techniques to disguise any visual clues that data may be being transmitted, like waiting until the monitor is turned off, waiting until a screensaver kicks in, or determining (like a screensaver does) that there has been no user interaction for a certain period of time.
It’s all quite ingenious—and although I have explained before how high frequency sound can be used to exfiltrate data from an airgapped computer, this new method could work even if a PC’s speaker has been detached.
No sound on a computer you can live with, but removing monitors seems impractical.
Of course, it’s important that no-one should panic. The technique is elaborate, and at the moment—as far as we can tell—only exists within research laboratories.
It’s important to understand the various steps that have to be taken to exfiltrate data from an airgapped computer.
Firstly, malware has to be introduced to the isolated PC—not a simple task in itself, and a potential hurdle that may prove impossible if proper defences are in place.
Secondly, a mobile device carrying the receiver software needs to be in close proximity to the targeted computer (this would require either an accomplice, or infection of an employee’s mobile device with the malware).
The data then has to be transmitted from the mobile phone itself, back to the attackers.
Finally, this may not be the most efficient way to steal a large amount of data. The AirHopper experiment showed that data could be transmitted from targeted isolated computers to mobile devices up to 7 metres (23 feet away), at a rate of 13-60 bytes per second. That’s equivalent to less than half a tweet.
Despite that, it’s still easy to imagine that a determined hacker who has gone to such lengths would be happy to wait for a sizeable amount of data to be transmitted, perhaps as the isolated computers are left unattended overnight or at weekends.
If this all sounds like too much of an effort, think again. Because the researchers’ paper says although complex, the attack isn’t beyond modern attackers:
“The chain of attack is rather complicated, but is not beyond the level of skill and effort employed in modern Advanced Persistent Threats (APTs)”
Which leads us to what you should do about it, and there is a familiar piece of advice to underline: tightly control who has access to your computers, and what software they are able to install upon them, and what devices they are permitted to attach.
The AirHopper attack cannot steal any data from your airgapped computers at all, if no-one ever manages to infect them in the first place.
It will be interesting to see if others take this research and devise more methods to counter this type of attack in the future.
This article first appeared on the Tripwire State of Security blog.
Found this article interesting? Follow Graham Cluley on Twitter to read more of the exclusive content we post. | <urn:uuid:3d1b15c2-37f0-4216-9cd8-ebe471d302f9> | CC-MAIN-2022-40 | https://grahamcluley.com/malware-can-steal-data-airgapped-computer-using-fm-radio-waves/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00018.warc.gz | en | 0.94736 | 1,238 | 2.90625 | 3 |
A new educational term is fully underway. Schoolchildren have now found their natural rhythm, while freshers are beginning to find their feet in the world of higher education. So, as the younger generation embarks on another year of learning, it may also be an opportunity for manufacturers to take stock of recent developments in the industry, and how they might employ the latest advances in technology to improve efficiencies and productivity, and meet evolving customer demands.
Indeed, although many manufacturing businesses have embarked on a form of digital transformation to do just this, the current outlook for the industry in the UK does not inspire. While the most recent Purchasing Manager’s Index (PMI) (opens in new tab) from IHS Markit suggests a ‘mild improvement’ in its performance on the previous quarter, conditions in the industry are described as remaining ‘relatively lacklustre’.
Manufacturers should, therefore, educate themselves on the current state of the sector, and what they can do to address the doldrums in which the industry finds itself, and turn the situation around.
Advanced manufacturing techniques
First, it’s worth noting the extent to which digital manufacturing technology has evolved over recent years, and how it allows manufacturers to meet customer demands for high quality products that meet their specific requirements, and that are delivered at the fastest possible speed.
The turnaround process is faster than could ever have been previously imagined, and it is now possible for customers to receive components or finished parts within just a matter of days of submitting a design via an online CAD upload system. What’s more, depending on their particular criteria, customers are able to choose the advanced manufacturing technique that best suits their needs.
CNC machining, for example, is a process in which computers are used to control high-speed milling and turning tools, and tends to be a popular choice for the manufacture of parts for commercial and industrial equipment and machinery. It may not be suitable for every business, however, when differing economies of scales, customer demands, part geometries and technical requirements might mean that 3D printing is a more appropriate technique.
More well known
3D printing is the most well-known manufacturing technique, with regular news stories illustrating its capacity to produce human organs, prosthetic limbs, and even food. Used to create intricate, complex geometrical shapes, many of which demand great dimensional tolerances, the technology offers manufacturers a level of flexibility that allows them to reimagine how they design the various components that make up their products. In addition, its potential for creating an almost limitless variety of finished parts and prototypes can eliminate the expense associated with producing a range of machine tools.
In certain circumstances, it can be advantageous to employ both techniques. CNC machining can be employed as an add-on to fine tune 3D printed objects, for example, and the two processes can be used in conjunction to meet increasingly tough design challenges, such as the demand for components and products to be ever more lightweight.
Ultimately, although one technique is better known than the other, both are equally important in addressing the needs of an industry required to create effective high-quality parts and products, faster and more efficiently than ever before.
Further opportunities for efficiencies and cost savings can be gained from the Internet of Things (IoT), the adoption of which, with an 84 percent annual growth (opens in new tab) in network connections, is being dominated by the manufacturing industry.
As this level of adoption continues to grow, factory floors will become increasingly connected, and valuable information around factors such as product usage, production capabilities, and market and customer requirements will be shared and analysed faster than was ever previously possible. Such information, and the insight its analysis provides, will enable manufacturers to transform their production processes and operating models, thereby improving the speed and quality of their offering in line with customer demand.
Capturing and analysing this critical information will also allow manufacturing businesses to predict the future trends and challenges that might impact factory floor operations. Indeed, by combining two of the biggest developments in digital technology, the IoT and big data, many businesses are already successfully enhancing the quality of their processes and products. What’s more, by embracing their benefits, they are unlocking the potential of Industry 4.0.
Pushing the boundaries
Probably the most important evolutionary step in manufacturing in recent years, the term Industry 4.0 refers to the trend towards web-connected manufacturing processes, based on robotics and automation to deliver unparalleled levels of productivity, quality, and efficiency.
Given the impact that this, in addition to digital manufacturing techniques and the IoT, will have on the factory floor, Industry 4.0 is more than just a different approach to manufacturing. By enabling them to both acknowledge and address the importance placed on short lead times, on-demand production, and mass customisation, for example, Industry 4.0 offers manufacturers of any size the opportunity to compete on a global stage.
To support these new capabilities, however, and capitalise on the opportunities they represent, businesses will need to consider additional investment. New software will be required, and integrated with existing processes; back-office systems may need upgrading, and customer-facing web applications developed. Employees will need to adapt and develop their current skills in order to keep up with and support the rapidly developing technology, gaining expertise in robotics, automation, and the latest digital manufacturing techniques.
Technology is continuing to push the boundaries of conventional manufacturing; production facilities are slowly being replaced by ‘smart’ factories, and employees can often find themselves spending more time at a computer than hands-on with traditional manufacturing equipment.
Digital manufacturing, with its ever growing and changing series of connections, processes, and advanced production technologies, is key to the transformation that the industry needs right now. Education is important to ensure that manufacturing businesses remain up to date with its evolution, and that they have the necessary techniques, connections and training in place to fully embrace Industry 4.0, and once again set the industry on an upward trajectory.
Stephen Dyson is head of Industry 4.0, Protolabs (opens in new tab) | <urn:uuid:88192bec-e955-43c1-83ec-0ffb8e440d14> | CC-MAIN-2022-40 | https://www.itproportal.com/features/educating-manufacturers-on-the-future-of-the-industry/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00018.warc.gz | en | 0.954478 | 1,257 | 3.078125 | 3 |
Saturday, September 24, 2022
Published 2 Years Ago on Saturday, Jul 11 2020 By Adnan Kayyali
Covid-19 mass testing should be an integral part of any plan to navigate the pandemic. The main objective across the board would be to incrementally ease lockdowns, curfews, and restrictions, open up and revive the economy, all while avoiding a second-wave. Testing is key to all of this. Without proper strategic testing, we cannot effectively isolate, contain and subdue any new pockets of infection.
Governments and institutions, especially those in developed countries, have all the tools they need to begin mass testing and start alleviating confinement. Ideally, restrictions wouldn’t be lifted until a vaccine or effective treatment is created, but that is sadly some time away, and so other measures must be implemented.
The questions to ask would be: What to test and how?
The answer to the first “what to test” is shorter: There are two types of tests, molecular diagnostic testing (RT-PCR), and serology tests. The first, is a standard test to identify whether the person is currently infected or not, and gaging the percentage of infected people within an area or community. The second, reveals whether the person has been infected before, and has developed antibodies. This is to allow people who have developed an immunity to return to work safely, and to provide samples and data that could help in vaccine development and better understand the virus.
The “how” is a slightly longer story. One of the most effective strategies that have been tried and tested by other nations such as South Korea is ‘Testing, Tracking, and Tracing’ – or TTT.
Another technique for COVID-19 mass testing is known as “Assurance Testing”. Simply put, organizations, communities or even entire towns can request that their members be tested as a whole. This means that testing kits can be supplied on demand for an entire group, easing the organization, logistical strain, and procurement of medical supplies. It is an effective way of opening up the economy slowly and methodically as each office building or company that gets tested all together can pretty much return to work. If infected individuals are found, measures are taken.
It seems like COVID-19 mass testing is the only way out of this mess. We can’t all sit at home; someone has to run all the machines and keep society marching on. But things cannot go back to normal so quickly and easily either. Strategic implementation is key.
The fastest-growing waste stream in America, according to the Environmental Protection Agency (EPA), is electronic garbage, yet only a small portion of it is collected. As a result, the global production of e-waste may reach 50 million metric tons per year. Sustainably manufactured green phone have, as a result, risen in popularity. When you purchase […]
Stay tuned with our weekly newsletter on all telecom and tech related news.
© Copyright 2022, All Rights Reserved | <urn:uuid:ef40c74f-ea2e-411a-bdb4-1cfa4d387a5e> | CC-MAIN-2022-40 | https://insidetelecom.com/covid-19-mass-testing-the-need-for-strategic-implementation/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00219.warc.gz | en | 0.953001 | 645 | 2.828125 | 3 |
Saturday, September 24, 2022
Published 2 Years Ago on Monday, Aug 17 2020 By Ranine Awwad
Amid the Covid-19 pandemic, Cybersecurity attacks in India may have increased by 500% according to Pavan Duggal, a Supreme Court and Cyber Law expert. Cyber threats have potential impacts on Indian society, economy, and development. The introduction of a new cybersecurity strategy is crucial for India, considered home to the second-largest internet user base.
“Threats from cyberspace can endanger all these aspects of Indian life. The government is alert on this and will soon introduce a new cybersecurity policy”, said Prime Minister Narendra Modi from the Red Fort during the 74th Independence Day speech on August 15, 2020, according to the Indian Express. Prime Minister Modi said that Cybersecurity is a very important aspect, which cannot be ignored. Early August, the Indian Government banned Chinese apps including Tiktok. These applications have been removed from both-Play store and App Store.
The Indian Government’s move to introduce the new policy goes back to the efforts made to secure internet connectivity around the country. “When the internet comes, there is always an increase in cybercrime risk. So we will soon come up with a new cybersecurity policy”, explained Modi, according to The Print.
The new policy aims to build capabilities to prevent and respond to cyber threats as well as enhancing the protection of India’s critical information infrastructure. “This policy will enable protection of information and also effectively safeguard citizen’s data, (thereby) minimizing chances of data theft and bringing down cybercrime in the process”, avowed a government official.
India has 36 central bodies to deal with cyber issues and it has already adopted a National Cyber Security Policy in 2013. However, since then, the current landscape poses serious cybersecurity threats due to advanced technology such as 5G and Artificial Intelligence. Thus, the so-called National Cyber Security Strategy 2020 – an upgraded version of the Cybersecurity Policy 2013 – is required. India is going through a digital revolution. Keeping this in mind, a new cyber policy is on the cards”, Ravi Shankar Prasad, Minister for Information and Technology told India Today.
“New challenges include data protection/privacy, law enforcement in evolving cyberspace, access to data stored overseas, misuse of social media platforms, international cooperation on cybercrime & cyber terrorism, and so on”, states the draft of the National Security Strategy that is likely to be finalized this year. The strategy is planned for 5 years to ensure that it will not be outdated.
Inside Telecom has already reported on the importance of legal reforms to secure the deployment of 5G technology across India. According to The Internet Crime Report for 2019, released by the Federal Bureau of Investigation (FBI), India stands third among the top twenty cybercrime victims. With security challenges posed by the deployment of fifth-generation technology, the cybersecurity policy has never been more important to the nation.
The fastest-growing waste stream in America, according to the Environmental Protection Agency (EPA), is electronic garbage, yet only a small portion of it is collected. As a result, the global production of e-waste may reach 50 million metric tons per year. Sustainably manufactured green phone have, as a result, risen in popularity. When you purchase […]
Stay tuned with our weekly newsletter on all telecom and tech related news.
© Copyright 2022, All Rights Reserved | <urn:uuid:b907e5b1-b622-4eeb-8f4a-0755def36efc> | CC-MAIN-2022-40 | https://insidetelecom.com/india-a-new-cybersecurity-policy-to-be-introduced-by-the-end-of-the-year/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00219.warc.gz | en | 0.929921 | 738 | 2.515625 | 3 |
We now live in a world where holding the door open for someone balancing a tray of steaming hot coffee—she can’t seem to get her access card out to place it near the reader—is something we need to think twice about. Courtesy isn’t dead, mind you, but in this case, you'd almost wish it were. Because the door opens to a restricted facility. Do you let her in? If she really can't reach her card, the answer is clearly yes. But what if there's something else going on?
Holding the door open for people in need of assistance is considered common courtesy. But when someone assumes the role of a distressed woman to count on your desire to help, your thoughtful gesture suddenly becomes a dangerous one. Now, you've just made it easier for someone to get into a restricted facility they otherwise had no access or right to. So what does that make you? A victim of social engineering.
Social engineering is a term you often hear IT pros and cybersecurity experts use when talking about Internet threats like phishing, scams, and even certain kinds of malware, such as ransomware. But its definition is even more broad. Social engineering is the manipulation or the taking advantage of human qualities to serve an attacker’s purpose.
It is imperative, then, that we protect ourselves from such social engineering tactics the same way we protect our devices from malware. With due diligence, we can make it difficult for social engineers to get what they want.
Know thy vulnerable selfBefore we go into the “how” of things, we’d like to lay out other human emotional and psychological aspects that a social engineer can use to their advantage (and the potential target’s disadvantage). These include emotions such as sympathy, which we already touched on above. Other traits open for vulnerability are as follows:
CarelessnessThe majority of us have accidentally clicked a link or two, or opened a suspicious email attachment. And depending on how quickly we were able to mitigate such an act, the damage done could range from minor to severe and life-changing.
Examples of social engineering attacks that take advantage of our carelessness include:
- Homograph attacks
- Blackhat SEO/SEO poisoning
- Tailgating or piggybacking
CuriosityYou seem to have received an email supposedly for someone else by accident, and it’s sitting in your inbox right now. Judging from the subject line, it’s a personal email containing photos from the sender’s recent trip to the Bahamas. The photos are in a ZIP-compressed file.
If at this point you start to debate with yourself on whether you should open the attachment or not, even if it wasn’t meant for you, then you may be susceptible to a curiosity-based social engineering attack. And we’ve seen a lot of users get duped by this approach.
Examples of curiosity-based attacks include:
- Malware campaigns in social networking sites (“Hot video” Facebook scam, celebrity scandals)
- Other scams that bait you with exclusive content (videos related to accidents or calamities)
- “Who visited your profile” social media scams
- USB attacks
- Snail mailed CD attacks
FearAccording to Charles E. Lively, Jr. in the paper “Psychological-Based Social Engineering,” attacks that play on fear are usually the most aggressive form of social engineering because it pressures the target to the point of making them feel anxious, stressed, and frightened.
Such attacks make participants willing to do anything they’re asked to do, such as send money, intellectual property, or other information to the threat actor, who might be posing as a member of senior management or holding files hostage. Campaigns of this nature typically exaggerate on the importance of the request and use a fictitious deadline. Attackers do this in the hopes that they get what they ask for before the deception is uncovered.
Examples of fear-based attacks include:
- Business email compromise (BEC)/CEO or CFO fraud
- Blackmail/extortion (sextortion, ransomware)
- Cold call scams
- Rogue software (fake AV)
- Malware campaigns that pretend to be fake software patches
DesireWhether for convenience, recognition, or reward, desire is a powerful psychological motivation that can affect one’s decision making, regardless of whether you’re seen as an intellectual or not. Blaise Pascal said it best: "The heart has its reasons which the mind knows nothing of." People looking for the love of their lives, more money, or free iPhones are potentially susceptible to this type of attack.
Examples of desire-based attacks include:
- Catfishing/romance fraud (members of the LGBTQ community aren’t exempt)
- Certain phishing campaigns
- Scams that bait you with money or gadgets (e.g. 419 or Nigerian Prince scams, survey scams)
- Lottery and gambling-related scams
- Quid pro quo
DoubtThis is often coupled with uncertainty. And while doubt can sometimes stop us from doing something we would have regretted, it can also be used by social engineers to blindside us with information that potentially casts something, someone, or an idea in a bad light. In turn, we may end up suspecting who or what we think we know is legit and trusting the social engineer more.
One Internet user shared her experience with two fake AT&T associates who contacted her on the phone after she received an SMS report of changes to her account. She said that the first purported associate was clearly fake, getting defensive and hanging up on her when she questioned if this was a scam. But the second associate gave her pause, as the caller was calm and kind, making her think twice if he was indeed a phony associate or not. Had she given in, she would have been successfully scammed.
Examples of doubt-based attacks include:
- Apple iTunes scams
- Payment-based scams
- Payment diversion fraud
- Some forms of social hacking, especially in social media
Empathy and sympathyWhen calamities and natural disasters strike, one cannot help but feel the need to extend aid or relief. As most of us cannot possibly hop on a plane or chopper and race to affected areas to volunteer, it’s significantly easier to go online, enter your card details to a website receiving donations, and hit "Enter." Of course, not all of those sites are real. Social engineers exploit the related emotions of empathy and sympathy to grossly funnel funds away from those who are actually in need into their own pockets.
Examples of sympathy-based scams include:
- Fake orphanages (prevalent in Cambodia)
- Disaster fraud, for which Fraud Magazine identified five primary categories: charitable solicitations, contractor and vendor fraud, forgery, price gouging, and property insurance fraud
- Cancer fraud
- Specific physical social engineering attempts, like this one
- Scams that take advantage of crowdfunding websites like Indiegogo, GoFundMe, or Kickstarter
Ignorance or naivetéThis is probably the human trait most taken advantage of and, no doubt, one of the reasons why we say that cybersecurity education and awareness are not only useful but essential. Suffice to say, all of the social engineering examples we mention in this post rely in part on these two characteristics.
While ignorance is often used to describe someone who is rude or prejudice, in this context it means someone who lacks knowledge or awareness—specifically of the fact that these forms of crime exist on the Internet. Naiveté also highlights users’ lack of understanding of how a certain technology or service works.
On the flip side, social engineers can also use ignorance to their advantage by playing dumb in order to get what they want, which is usually information or favors. This is highly effective, especially when used with flattery and the like.
Other examples of attacks that prey on ignorance include:
- Venmo scams
- Amazon gift card scams
- Cryptocurrency scams
Inattentiveness or complacencyIf we’re attentive enough to ALT+TAB away from what we’re looking at when someone walks in the room, theoretically we should be attentive enough to “go by-the-book” and check that person’s proof of identity. Sounds simple enough, and it surely is, yet many of us yield to giving people a pass if we think that getting confirmation gets in the way. Social engineers know this, of course, and use it to their advantage.
Examples of complacency-based attacks include:
- Physical social engineering attempts, such as gaining physical access to restricted locations and dumpster diving
- Diversion theft
Whether the person you’re dealing with is online, on the phone, or face-to-face, it’s important to be on alert, especially when our level of skepticism hasn’t yet been tuned to detect social engineering attempts.
Brain gyming: combating social engineeringThinking of ways to counter social engineering attempts can be a challenge. But many may not realize that using basic cybersecurity hygiene can also be enough to deter social engineering tactics. We’ve touched on some of them in previous posts, but here, we’re adding more to your mental arsenal of prevention tips. Our only request is you use them liberally when they apply to your circumstance.
- If bearing a dubious link or attachment, reach out and verify with the sender (in person or via other means of communication) if they have indeed sent you such an email. You can also do this to banks and other services you use when you receive an email reporting that something happened with your account.
- Received a request from your boss to wire money to him ASAP? Don’t feel pressured. Instead, give him a call to verify if he sent that request. It would also be nice to confirm that you are indeed talking with your boss and not someone impersonating him/her.
Phone (landline or smartphone)
- When you receive a potentially scammy SMS from your service provider, call them directly instead of replying via text and ask if something’s up.
- Refrain from answering calls not in your contact list and other numbers you don’t recognize, especially if they appear closely related to your own phone number. (Scammers like to spoof area codes and the first three digits of your phone to trick you into believing it's from someone you know.)
- Avoid giving out information to anyone directly or indirectly. Remind yourself that volunteering what you know is what the social engineers are heavily counting on.
- Apply the DTA (Don’t Trust Anyone) or the Zero Trust rule. This means you treat every unsolicited call as a scam and ask tough questions. Throw the caller off by providing false information.
- If something doesn’t feel right, hang up, and look for information online about the nature of the call you just received. Someone somewhere may have already experienced it and posted about it.
- Be wary when someone you just met touches you. In the US, touch is common with friends and family members, not with people you don’t or barely know.
- If you notice someone matching your quirks or tendencies, be suspicious of their motives.
- Never give or blurt out information like names, department names, and other information known only within your company when in the common area of your office building. Remind yourself that in your current location, it is easy to eavesdrop and to be eavesdropped on. Mingle with other employees from different companies if you like, but be picky and be as vague as possible with what you share. It also pays to apply the same cautious principle when out in public with friends in a bar, club, or restaurant.
- Always check for identification and/or other relevant papers to identify persons and verify their purpose for being there.
- Refrain from filling in surveys or playing games that require you to log in using a social media account. Many phishing attempts come in these forms, too.
- If you frequent hashtagged conversations (on Twitter, for example), consider not clicking links from those who are sharing, as you have no idea whether the links take you to destinations you want. More importantly, we’re not even sure if those sharing the link are actual people and not bots created to go after the low hanging fruit.
- If you receive a private message on your social network inbox—say on LinkedIn—with a link to a job offer, it’s best to visit the company’s official website and look up open positions there. If you have clicked the link and the site asks you to fill in your details, close the tab.
When it comes to social engineering, no incident is too small to be neglected. There is no harm in erring on the side of safety.
happy smart ending
So, what should you do if someone is behind you carrying a tray of hot coffee and can't get to her access card? Don’t open the door for her. Instead, you can offer to hold her tray while she takes out and uses her access card. If you still think this is a bad idea, then tell her to wait while you go inside and get security to help her out. Of course, this is assuming that security, HR, and the front desk have already been trained to respond forcefully against someone trying to social engineer their way in. | <urn:uuid:af040435-ec62-4a10-ac0c-e8fb718d067a> | CC-MAIN-2022-40 | https://www.malwarebytes.com/blog/news/2018/08/social-engineering-attacks-what-makes-you-susceptible | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00219.warc.gz | en | 0.946899 | 2,851 | 2.890625 | 3 |
Examining Trapped Ion Technology for Next Generation Quantum Computers
(Phys.org) computer scientists at Princeton University and physicists from Duke University collaborated to develop methods to design the next generation of quantum computers. Their study focused on QC systems built using trapped ion (TI) technology, which is one of the current front-running QC hardware technologies. By bringing together computer architecture techniques and device simulations, the team showed that co-designing near-term hardware with applications can potentially improve the reliability of TI systems by up to four orders of magnitude.
Their study was conducted as a part of the Software-Tailored Architecture for Quantum co-design (STAQ) project, an NSF funded collaborative research effort to build an trapped-ion quantum computer and the NSF CISE Expedition in Computing Enabling Practical-Scale Quantum Computing (EPiQC) project.
To build the next generation of QCCD systems with 50 to 100 qubits, hardware designers have to tackle a variety of conflicting design choices. “How many ions should we place in each trap? What communication topologies work well for near-term QC applications? What are the best methods for implementing gates and shuttling operations in hardware? These are key design questions that our work seeks to answer,” said Prakash Murali, a graduate student at Princeton University. | <urn:uuid:e9d2540a-3e46-42d1-953a-0b2d0eabc550> | CC-MAIN-2022-40 | https://www.insidequantumtechnology.com/news-archive/examining-trapped-ion-technology-for-next-generation-quantum-computers/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00219.warc.gz | en | 0.927711 | 278 | 3.453125 | 3 |
SMTP TLS: All About Secure Email Delivery over TLS
TLS stands for "Transport Layer Security" and is the successor to "SSL" (Secure Sockets Layer). TLS is one of the standard ways that computers on the internet transmit information over an encrypted channel. In general, when one computer connects to another computer and uses TLS, the following happens:
- Computer A connects to Computer B (no security)
- Computer B says “Hello” (no security)
- Computer A says, “Let’s talk securely over TLS” (no security)
- Computer A and B agree on how to do this (secure)
- The rest of the conversation is encrypted (secure)
As a result of this exchange:
- The meat of the conversation is encrypted
- Computer A can verify the identity of Computer B (by examining its SSL certificate, which is required for this dialog)
- The conversation cannot be eavesdropped upon (without Computer A knowing)
- A third party cannot modify the conversation
- Third parties cannot inject other information into the conversation.
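
To make this concrete, here is a minimal sketch (in Python, using only the standard library) of a client opening a TLS connection, verifying the server's certificate, and reporting what was negotiated. The host name example.com and port 443 are placeholders for illustration only; the same handshake applies to any TLS-capable service.

```python
import socket
import ssl

def tls_probe(host: str, port: int = 443) -> None:
    # The default context verifies the server certificate against the
    # system's trusted CAs and checks that it matches the host name.
    context = ssl.create_default_context()

    with socket.create_connection((host, port), timeout=10) as raw_sock:
        # Wrapping the socket performs the TLS handshake described above.
        with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
            print("Protocol:", tls_sock.version())    # e.g., TLSv1.3
            print("Cipher:  ", tls_sock.cipher())      # (name, protocol, bits)
            print("Peer certificate subject:", tls_sock.getpeercert().get("subject"))

if __name__ == "__main__":
    tls_probe("example.com")
```

Run against a real host, this prints the negotiated protocol and cipher suite; if the certificate does not match the host name or is not trusted, the handshake fails with an error instead of silently continuing.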
TLS and SSL are used for many different reasons on the internet and help make it a more secure place. One popular use of TLS is with SMTP, to securely transmit email messages between servers.
Secure Email Delivery over TLS with SMTP
The mechanism and language (i.e., protocol) by which one email server transmits email messages to another email server is called SMTP (Simple Mail Transfer Protocol). For a long time, email servers have had the option of using TLS (via the STARTTLS command) to transparently encrypt the message transmission from one server to the other.
Use of TLS with SMTP, when available, ensures that the message contents are secured during transmission between the servers.
Not all servers support TLS!
The use of TLS requires that the server administrators:
- purchase one or more SSL certificates
- configure the email servers to use them (and keep these configurations updated)
- allocate additional computational resources on the email servers involved.
For these reasons, many email providers, especially free or public ones, have not supported TLS. Over the last five years, however, the trend has been to add TLS everywhere. Now, most providers support TLS — 82.3% of domains tested as of July 2018.
For TLS transmission to be used, the destination email server must “advertise” support for TLS (see: How to Tell Who Supports TLS for Email Transmission), and the sending computer or server must be configured to use TLS connections when possible.
The sending computer or server could be configured for:
- No TLS — never use it.
- Opportunistic TLS — use it if it is available; if not, send insecurely.
- Forced TLS — use TLS or do not deliver the email at all
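
The three sending policies above can be sketched with Python's standard smtplib library. This is only an illustration of the decision logic; real mail servers implement the equivalent internally, and mail.example.com is a placeholder host.

```python
import smtplib
import ssl

def connect_with_policy(host: str, policy: str = "opportunistic") -> smtplib.SMTP:
    """Open an SMTP connection on port 25 and apply a TLS delivery policy.

    policy is one of "none", "opportunistic", or "forced".
    """
    smtp = smtplib.SMTP(host, 25, timeout=30)
    smtp.ehlo()

    # A receiving server "advertises" TLS by listing STARTTLS in its EHLO reply.
    offers_tls = smtp.has_extn("starttls")

    if policy != "none" and offers_tls:
        smtp.starttls(context=ssl.create_default_context())  # upgrade the connection to TLS
        smtp.ehlo()                                          # greet again over the encrypted channel
    elif policy == "forced" and not offers_tls:
        smtp.quit()
        raise RuntimeError(f"{host} does not advertise STARTTLS; refusing to deliver")
    # "opportunistic" with no STARTTLS falls through and delivers in plain text.

    return smtp

# Forced TLS: the message is sent over TLS or not at all.
# conn = connect_with_policy("mail.example.com", policy="forced")
```

The key point is the branch after the EHLO check: opportunistic delivery quietly falls back to plain text when STARTTLS is missing, while forced delivery refuses to send at all.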
How Secure is Email Delivery over SMTP TLS?
TLS protects the transmission of the content of email messages. It does nothing to protect the security of the message before it is sent or after it arrives at its destination. For that, other encryption mechanisms may be used, such as PGP, S/MIME, or storage in a secure portal.
However, transmission security is all that is minimally required of many organizations (e.g., banks and healthcare) when sending to customers. In such situations, enforced use of TLS is an excellent alternative to more robust and less user-friendly encryption methods (like PGP and S/MIME) and can prevent the insecure delivery of email.
The transmission itself is as secure as can be negotiated between the sending and receiving servers. If they both support strong encryption (e.g., AES 256), then that will be used. If not, a weaker grade of encryption may be used. The sending and receiving servers can choose what kinds of encryption they will support — and if there is no overlap in what they support, then TLS will fail (this is rare).
There are other deficiencies in how SMTP TLS is implemented in practice by most email servers on the internet. For example, TLS certificates are generally not validated, leaving SMTP TLS open to active man-in-the-middle attacks. For more information, see Stronger Email Security with Strict Transport Security.
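
If you want to see exactly what a given connection negotiates, you can upgrade an SMTP session with a strict, certificate-validating context and then inspect the socket. The sketch below (again with a placeholder host) enforces TLS 1.2 or newer and rejects ciphers under 128 bits, which is stricter than what most MTAs do by default.

```python
import smtplib
import ssl

def inspect_negotiated_tls(host: str) -> None:
    # Unlike most MTAs, this context validates the certificate and host name
    # and refuses protocols older than TLS 1.2.
    context = ssl.create_default_context()
    context.minimum_version = ssl.TLSVersion.TLSv1_2

    with smtplib.SMTP(host, 25, timeout=30) as smtp:
        smtp.ehlo()
        smtp.starttls(context=context)
        smtp.ehlo()

        # After starttls(), smtp.sock is an SSLSocket that we can inspect.
        name, protocol, bits = smtp.sock.cipher()
        print("Protocol:", smtp.sock.version())   # e.g., TLSv1.2 or TLSv1.3
        print("Cipher:  ", name, f"({bits} bits)")

        if bits < 128:
            raise RuntimeError("Negotiated cipher is weaker than 128 bits")

inspect_negotiated_tls("mail.example.com")
```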
What about Replies to Secure Messages?
Let’s say you send a message to someone that is delivered to their inbox over TLS. That person then replies to you. Will that reply be secure? This may be important if you are communicating sensitive information. The reply will use TLS for security only if:
- The recipient’s servers support TLS for outbound email (there is no way to test this externally).
- The mail servers (where the "From" or "Reply-To" email address is hosted) support TLS for inbound email.
- Both servers support overlapping TLS ciphers and protocols so they can agree on a mutually acceptable means of encryption.
Unless you are familiar with the providers in question, you cannot assume that such replies will use TLS. There are two ways of looking at this problem:
- Conservatively. If replies must be secure in all cases, then assuming TLS will be used is not a reasonable assumption. In this case, a service should be used (like SecureLine Escrow) whereby the messages are encrypted and stored in a secure portal. The recipient must go there to view the message and reply securely. Or set up PGP or S/MIME for additional security.
- Aggressively. In some compliance situations like HIPAA, it can be argued that it is up to the sender to send messages securely if needed. While doctors need to ensure that ePHI is sent securely to patients, patients are not beholden to HIPAA and can send their information insecurely to anyone they want. So, if the patient's reply is insecure, that could be okay. If the recipient is another organization that falls under the HIPAA umbrella, then it is up to them to ensure that everything they send (e.g., their replies) is secure. For these reasons, and because using TLS for email security is so "easy," many do not worry about the security of email replies. However, this should be a "Risk Factor" that you consider in any internal security audit: is the risk of insecure replies worth the possible data exposure given your organization's practices?
What is new with SMTP TLS?
SMTP TLS has been around for a long time and has recently seen a great deal of adoption. However, it has some deficiencies:
- There is no mandatory support for TLS in the email system;
- A receiver’s support for the SMTP TLS option can be trivially removed by an active man-in-the-middle because TLS certificates are not actively verified. In such cases, opportunistic TLS will deliver messages insecurely, and forced TLS will not deliver the message at all.
- Encryption is not used if any aspect of the TLS negotiation is undecipherable/garbled. It is very easy for a man-in-the-middle to inject garbage into the TLS handshake (which is done in clear text) and have the connection downgraded to plain text (opportunistic TLS) or have the connection fail (forced TLS).
- Even when SMTP TLS is offered and accepted, the certificate presented during the TLS handshake is usually not checked to see whether it is for the expected domain and unexpired. Most MTAs present self-signed certificates as a matter of course. Thus, in many cases, one has an encrypted channel to an unauthenticated MTA, which can only prevent passive eavesdropping. Why is this accepted? Because it is still better than plain text email delivery.
There are new solutions that help remedy these issues, for example SMTP Strict Transport Security (MTA-STS). SMTP STS enables recipient servers to publish information about their SMTP TLS support in DNS. This prevents man-in-the-middle downgrades to plain text delivery, ensures more robust TLS protocols are used, and can enable certificate validation. Unfortunately, SMTP STS is still only an internet draft specification and is not yet widely used. Fortunately, enabling SMTP STS does not hurt compatibility with systems that do not yet support it.
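As a rough illustration of how this scheme works, a sender can discover a recipient domain's policy by fetching a well-known HTTPS URL. The sketch below does that with Python's standard library; the domain is a placeholder, and a complete implementation would also check the domain's _mta-sts DNS TXT record and cache the policy for its max_age.

import urllib.request

RECIPIENT_DOMAIN = "example.com"   # placeholder

def fetch_mta_sts_policy(domain):
    """Fetch and parse a domain's published MTA-STS policy, if it has one."""
    url = f"https://mta-sts.{domain}/.well-known/mta-sts.txt"
    with urllib.request.urlopen(url, timeout=10) as response:
        text = response.read().decode("utf-8")
    policy = {"mx": []}
    for line in text.splitlines():
        if ":" not in line:
            continue
        key, value = (part.strip() for part in line.split(":", 1))
        if key == "mx":
            policy["mx"].append(value)   # allowed MX host patterns
        else:
            policy[key] = value          # e.g. version, mode, max_age
    return policy

# A policy whose "mode" is "enforce" tells the sender never to fall back to
# plain text and to require a certificate matching one of the listed MX hosts.
print(fetch_mta_sts_policy(RECIPIENT_DOMAIN))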
What about Secure Email Delivery over TLS at LuxSci?
LuxSci inbound email servers support TLS for encrypted inbound email delivery from any sending email provider that also supports it.
For selected organizations, e.g. Proofpoint, LuxSci also locks down its servers so that it only accepts email from them if it is delivered over TLS.
Outbound Opportunistic TLS.
LuxSci outbound email servers will always use TLS with any server that claims to support it and with whom we can talk TLS v1.0+ using a strong cipher. If the TLS connection to such a server fails (due to misconfiguration or no security protocols in common), the message will not be sent.
Outbound opportunistic TLS encryption is automatic for all LuxSci customers, even those without SecureLine.
Support strong encryption, up to AES 256 and better
LuxSci servers will use the strongest encryption supported by the recipient’s email server that is also considered strong. LuxSci servers will never employ an encryption cipher that uses less than 128 bits (they will fail to deliver rather than deliver via an excessively weak encryption cipher) and they will never use SSL v2 or SSL v3.
LuxSci servers use “Forced TLS” with recipient servers that support TLS if email is being sent to those servers from any SecureLine account using TLS-Only delivery services (outbound email or forwarding). This ensures that messages will never be delivered insecurely to such servers, even if they suddenly stop supporting TLS.
Forced TLS is also in place for all LuxSci customers sending to banks and organizations that have requested that we globally enforce TLS to their servers.
Does LuxSci have any other Special TLS Features?
When using SecureLine for outbound email encryption:
- SMTP MTA STS: LuxSci’s own domains support SMTP MTA STS, and LuxSci’s SecureLine encryption system leverages STS information about recipient domains to improve connection security.
- Try TLS: Account administrators can choose to have secure messages “try TLS first” and deliver that way. If TLS is not available, the messages would fall back and use more secure options like PGP, S/MIME, or Escrow. Email security is easy, seamless, and automatic when communicating internally or with others who support TLS.
- TLS Exclusive: This is a special LuxSci-exclusive TLS sending feature. TLS Exclusive is just like Forced TLS, except that messages that can’t go out over TLS are simply dropped. This is ideal for low-importance email that must still be compliant, e.g., marketing email in healthcare. In such cases, the ease of use of TLS is more important than the actual receipt of the message.
- TLS Only Forwarding: Account administrators can restrict any server-side email forwarding settings in their accounts from allowing forwarding to any email addresses which do not support TLS for email delivery.
- Encryption Escalation: Often, TLS is suitable for most messages, but some messages need to be encrypted using something better (e.g., forcing recipients to pick up the message in a secure portal). LuxSci allows users to escalate the encryption from TLS to Escrow with a click (in WebMail) or by entering particular text in the subject line (for messages sent from email programs like Outlook).
- When TLS delivery is enabled for SecureLine accounts, messages will never be insecurely sent to domains that purport to be TLS-enabled. I.e., TLS delivery is enforced and no longer “opportunistic.” The system monitors these domains and updates their TLS-compliance status daily.
- Double Encryption: Messages sent using SecureLine and PGP or S/MIME will still use Opportunistic TLS whenever possible for message delivery. In these cases, messages are often “double encrypted.” Encrypted first with PGP or S/MIME, that secure message may be encrypted again during transport using TLS.
- No Weak TLS: Unlike many organizations, LuxSci’s TLS support for SMTP and other servers only supports those protocol levels (e.g., TLS v1.0+) and ciphers recommended by NIST for government communications and which are required for HIPAA. So, all communications with LuxSci servers will be over a compliant implementation of TLS.
For customers whose security or compliance needs accept TLS as a form of email encryption, it enables seamless, “email as usual” security. SecureLine with Forced TLS enables clients to take advantage of this level of security whenever possible while automatically falling back to other methods when TLS is unavailable.
Of course, the use of Forced TLS as the sole method of encryption is optional; if your compliance needs are more substantial, you can disable TLS-Only delivery or restrict it so that it is used only with specific recipients. | <urn:uuid:56a75481-4ebb-4f3b-b1d8-2f3aa7ee5a44> | CC-MAIN-2022-40 | https://luxsci.com/blog/smtp-tls-all-about-secure-email-delivery-over-tls.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00219.warc.gz | en | 0.9132 | 2,753 | 3.546875 | 4 |
Using x-ray lasers, researchers at Stockholm University have been able to map out how water fluctuates between two different states when it is cooled.
At -44°C these fluctuations reach a maximum pointing to the fact that water can exist as two different distinct liquids. The findings will be published in the journal Science.
Water, both common and necessary for life on earth, behaves very strangely in comparison with other substances.
How water’s density, specific heat, viscosity and compressibility respond to changes in pressure and temperature is completely opposite to other liquids that we know.
We are all aware that matter generally shrinks when it is cooled, resulting in an increase in density.
We would therefore expect water to be at its densest near the freezing point.
However, if we look at a glass of ice water, everything is upside down: we might expect the water at 0°C, surrounded by ice, to sit at the bottom of the glass, but of course ice cubes float.
Strangely enough, liquid water is densest at 4°C, and therefore it stays at the bottom whether it is in a glass or in an ocean.
If you chill water below 4°C, it starts to expand again.
If you continue to cool pure water (where the rate of crystallization is low) to below 0°C, it continues to expand; the expansion even speeds up as it gets colder.
Many more properties such as compressibility and heat capacity become increasingly strange as water is cooled.
Now researchers at Stockholm University, with the help of ultra-short x-ray pulses at x-ray lasers in Japan and South Korea, have succeeded in determining that water reaches the peak of its strange behaviour at -44°C.
Water is unique, as it can exist in two liquid states that have different ways of bonding the water molecules together.
The water fluctuates between these states as if it can’t make up its mind and these fluctuations reach a maximum at -44°C.
It is this ability to shift from one liquid state into another that gives water its unusual properties and since the fluctuations increase upon cooling also the strangeness increases.
“What was special was that we were able to X-ray unimaginably fast before the ice froze and could observe how it fluctuated between the two states,” says Anders Nilsson, Professor of Chemical Physics at Stockholm University.
“For decades there has been speculations and different theories to explain these remarkable properties and why they got stronger when water becomes colder.
Now we have found such a maximum, which means that there should also be a critical point at higher pressures.”
Another remarkable finding of the study is that the unusual properties are different between normal and heavy water and more enhanced for the lighter one.
“The differences between the two isotopes, H2O and D2O, given here shows the importance of nuclear quantum effects,” says Kyung Hwan Kim, postdoc in Chemical Physics at Stockholm University.
“The possibility to make new discoveries in a much studied topic such as water is totally fascinating and a great inspiration for my further studies,” says Alexander Späh, PhD student in Chemical Physics at Stockholm University.
“It was a dream come true to be able to measure water under such low temperature condition without freezing” says Harshad Pathak, postdoc in Chemical Physics at Stockholm University.
“Many attempts over the world have been made to look for this maximum.”
“There has been an intense debate about the origin of the strange properties of water for over a century since the early work of Wolfgang Röntgen,” further explains Anders Nilsson.
“Researchers studying the physics of water can now settle on the model that water has a critical point in the supercooled regime. The next stage is to find the location of the critical point in terms of pressure and temperature. A big challenge in the next few years.”
Materials provided by Stockholm University.
- Kyung Hwan Kim, Alexander Späh, Harshad Pathak, Fivos Perakis, Daniel Mariedahl, Katrin Amann-Winkel, Jonas A. Sellberg, Jae Hyuk Lee, Sangsoo Kim, Jaehyun Park, Ki Hyun Nam, Tetsuo Katayama, Anders Nilsson. Maxima in the thermodynamic response and correlation functions of deeply supercooled water. Science, 2017; 358 (6370): 1589 DOI: 10.1126/science.aap8269 | <urn:uuid:4aabad13-20fc-4df8-a970-bc3203d833b0> | CC-MAIN-2022-40 | https://debuglies.com/2017/12/22/the-origin-of-waters-unusual-properties-found/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00219.warc.gz | en | 0.942254 | 973 | 3.78125 | 4 |
A telepresence robot, that is.
COLUMBUS, Ohio—Thomas Hatch noticed something unusual in a reflection on his laptop screen as he worked on a lesson one day in his pod at high school.
The teenager turned around. He was face to face with a teacher of an online course. Well, sort of. The teacher’s face was encased in a small video screen. His body was a 4-foot-tall plastic tower on wheels. He maneuvered the telepresence robot around the classroom and spoke to students using controls on his computer from a remote location.
“It was, um, different—definitely different,” Hatch said of his first encounter with the robot last year, when he was a junior at the Nexus Academy of Columbus.
The public high school in central Ohio blends online and in-person instruction in an open, office-style building located in a small industrial park. The school has some in-the-flesh teachers, but many teachers never set foot in the building, because they teach only online courses—some from locations quite far away. Most of the time, the remote teachers interact with their students through a computer screen or phone call. The new telepresence robot provides another means of communication with students and staff in the building. | <urn:uuid:3f459842-3628-46a5-a753-71319968f15b> | CC-MAIN-2022-40 | https://letsdovideo.com/what-its-like-to-have-a-robot-for-a-teacher/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00219.warc.gz | en | 0.977073 | 266 | 3.0625 | 3 |
Augmented reality is a technology which enriches the physical world with digital information in the form of text, images, and 3D models. One of the important aspects of augmented reality is its ability to overlay the same information collected from physical objects through IoT on the same physical object in real time and in a way suitable for easy and meaningful consumption by the user. For instance, a maintenance engineer can readily see the health parameters of the machine projected on the machine itself through the augmented reality interface along with the other relevant information such as the machine specs, parameter threshold levels, and generated alerts. This equips the engineer with the necessary information to take guided action quickly even before opening up the product or machine for repairs. Augmented reality facilitates the engineer during repairs with on-the-fly instructions manual. Such capabilities of augmented reality technology hold immense potential for its manifold application in the manufacturing industry.
Augmented reality (AR) can influence the different departments of a manufacturing organization, including employee training and safety; factory floor and field services operations; machine assembly, inspection and repair; manufacturing facility and product design; and warehousing and distribution of goods. Research shows that assembly error rates can be reduced by 82% using AR technology, and with AR-enabled training there can be up to 90% improvement in quality at the first attempt. According to Markets and Markets, the AR market is expected to reach $56.8 billion by 2020. AR technology is not just the technology of the future; it’s happening now, with several large manufacturing companies, such as Caterpillar, already using AR technology in their facilities and in the field to service equipment. Interesting use cases of AR are being piloted across different industries by leading organizations. Some of these are:
- The Chemical Process Engineering Research Institute (CPERI), a nonprofit research organization is pioneering AR in a use case for plant start-up procedures, in which an operator using an AR device completes a task which requires many sequential steps in several hours for starting up a plant.
- Comau, part of the Fiat group, is developing an AR-enhanced system to help a user assemble a robot wrist, a process that normally requires four hours and over 290 individual steps to complete, in less time and with fewer errors.
AR has evolved, but it still has several challenges to overcome before it is widely adopted, such as expensive hardware, the limited availability of AR content, and nonuniform requirements. Efforts are under way to help focus the direction of AR for use in industry. Recently, 65 organizations, including industrial companies, AR providers, universities and government agencies, came together for a workshop conducted by the Digital Manufacturing and Design Innovation Institute (DMDII) to offer insight into their challenges and needs and to help create the guidelines development process. This will provide a benchmark set of requirements that will help them develop a roadmap and source, select, evaluate, and deploy AR solutions. The guideline documents address AR features such as hardware (battery life; connectivity; field of view; on-board storage; onboard operating system; environmental aspects; inputs/outputs and safety) and software (authoring; AR content; creating 3D content; deployment of AR content; and IoT).
Round Robin Load Balancing Definition
Round robin load balancing is a simple way to distribute client requests across a group of servers. A client request is forwarded to each server in turn. The algorithm instructs the load balancer to go back to the top of the list and repeats again.
What is Round Robin Load Balancing?
Easy to implement and conceptualize, round robin is the most widely deployed load balancing algorithm. Using this method, client requests are routed to available servers on a cyclical basis. Round robin server load balancing works best when servers have roughly identical computing capabilities and storage capacity.
How Does Round Robin Load Balancing Work?
In a nutshell, round robin network load balancing rotates connection requests among web servers in the order that requests are received. For a simplified example, assume that an enterprise has a cluster of three servers: Server A, Server B, and Server C.
• The first request is sent to Server A.
• The second request is sent to Server B.
• The third request is sent to Server C.
The load balancer continues passing requests to servers based on this order. This ensures that the server load is distributed evenly to handle high traffic.
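As a minimal sketch of the idea (the server names are the placeholders from the example above), the rotation can be expressed in a few lines of Python:

import itertools

servers = ["Server A", "Server B", "Server C"]

# itertools.cycle walks the list and wraps back to the top,
# exactly like the rotation described above: A, B, C, A, B, C, ...
rotation = itertools.cycle(servers)

for request_number in range(1, 7):
    print(f"Request {request_number} -> {next(rotation)}")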
What is the Difference Between Weighted Load Balancing vs Round Robin Load Balancing?
The biggest drawback of using the round robin algorithm in load balancing is that the algorithm assumes that servers are similar enough to handle equivalent loads. If certain servers have more CPU, RAM, or other specifications, the algorithm has no way to distribute more requests to these servers. As a result, servers with less capacity may overload and fail more quickly while capacity on other servers lies idle.
The weighted round robin load balancing algorithm allows site administrators to assign weights to each server based on criteria like traffic-handling capacity. Servers with higher weights receive a higher proportion of client requests. For a simplified example, assume that an enterprise has a cluster of three servers:
• Server A can handle 15 requests per second, on average
• Server B can handle 10 requests per second, on average
• Server C can handle 5 requests per second, on average
Next, assume that the load balancer receives 6 requests.
• 3 requests are sent to Server A
• 2 requests are sent to Server B
• 1 request is sent to Server C.
In this manner, the weighted round robin algorithm distributes the load according to each server’s capacity.
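A naive way to sketch this in code is to repeat each server in the pool in proportion to its weight; production balancers usually interleave the choices more smoothly, but the proportions work out the same. The server names and weights below are taken from the example above.

import itertools

# Weights roughly proportional to capacity: A ~15 req/s, B ~10, C ~5.
weighted_pool = ["Server A"] * 3 + ["Server B"] * 2 + ["Server C"] * 1
rotation = itertools.cycle(weighted_pool)

for request_number in range(1, 7):
    print(f"Request {request_number} -> {next(rotation)}")

# Out of every 6 requests, 3 go to Server A, 2 to Server B, and 1 to Server C.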
What is the Difference Between Load Balancer Sticky Session vs. Round Robin Load Balancing?
A load balancer that keeps sticky sessions will create a unique session object for each client. For each request from the same client, the load balancer routes the request to the same web server each time, where data is stored and updated as long as the session exists. Sticky sessions can be more efficient because unique session-related data does not need to be migrated from server to server. However, sticky sessions can become inefficient if one server accumulates multiple sessions with heavy workloads, disrupting the balance among servers.
If sticky load balancers are used to load balance round robin style, a user’s first request is routed to a web server using the round robin algorithm. Subsequent requests are then forwarded to the same server until the sticky session expires, when the round robin algorithm is used again to set a new sticky session. Conversely, if the load balancer is non-sticky, the round robin algorithm is used for each request, regardless of whether or not requests come from the same client.
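A minimal sketch of the sticky variant, assuming the session ID comes from a client cookie and ignoring session expiry, looks like this: the round robin choice is made only once per session, and later requests reuse it.

import itertools

servers = ["Server A", "Server B", "Server C"]
rotation = itertools.cycle(servers)
sticky_table = {}   # maps session ID -> pinned server

def route(session_id):
    """Round robin on the first request of a session; reuse the pin afterwards."""
    if session_id not in sticky_table:
        sticky_table[session_id] = next(rotation)
    return sticky_table[session_id]

print(route("alice"))   # assigned by round robin
print(route("bob"))     # next server in the rotation
print(route("alice"))   # same server as before: the session is "sticky"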
What is the Difference Between Round Robin DNS vs. Load Balancing?
Round robin DNS uses a DNS server, rather than a dedicated hardware load balancer, to load balance using the round robin algorithm. With round robin DNS, each website or service is hosted on several redundant web servers, which are usually geographically distributed. Each server has its own unique IP address for the same website or service. Using the round robin algorithm, the DNS server rotates through these IP addresses, balancing the load between the servers.
What is the Difference Between DNS Round Robin vs. Network Load Balancing?
As mentioned above, round robin DNS refers to a specific load balancing mechanism with a DNS server. On the other hand, network load balancing is a generic term that refers to network traffic management without elaborate routing protocols like the Border Gateway Protocol (BGP).
What is the Difference Between Load Balancing Round Robin vs. Least Connections Load Balancing?
With least connections load balancing, load balancers send requests to servers with the fewest active connections, which minimizes chances of server overload. In contrast, round robin load balancing sends requests to servers in a rotational manner, even if some servers have more active connections than others.
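A rough sketch of the difference: a least connections balancer has to track how many connections are still open on each server and always picks the minimum, while round robin ignores that count entirely. The server names below are placeholders.

active_connections = {"Server A": 0, "Server B": 0, "Server C": 0}

def route():
    """Send the request to the server with the fewest active connections."""
    server = min(active_connections, key=active_connections.get)
    active_connections[server] += 1      # connection opened
    return server

def finish(server):
    active_connections[server] -= 1      # connection closed

print(route(), route(), route())   # spreads across all three servers
finish("Server A")                 # Server A frees up first...
print(route())                     # ...so it is chosen again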
What Are the Benefits of Round Robin Load Balancing?
The biggest advantage of round robin load balancing is that it is simple to understand and implement. However, the simplicity of the round robin algorithm is also its biggest disadvantage, which is why many load balancers use weighted round robin or more complex algorithms.
Does Avi Networks Offer Round Robin Load Balancing?
Yes, enterprises can configure round robin load balancing with Avi Networks. The round robin algorithm is most commonly used when conducting basic tests to ensure that new load balancers are correctly configured.
For more on the actual implementation of load balancers, check out our Application Delivery How-To Videos.
Is Data Erasure Really Secure?
While physical destruction methods such as drive shredding are certainly valuable in any IT security policy, they’re not always the best option.
Yes, shredding most traditional drives will render the data irrecoverable, but destroying newer technologies, such as SSDs, has been found to leave data on drive fragments, creating the possibility of a data breach while rendering the drive unusable.
Secure, certified data erasure has become a popular choice for organizations wanting to dispose of sensitive data records. Data erasure can add additional security to a physical destruction project. It can also be used as the sole means of removing data from drives, mobile phones, removable media and more.
But is data erasure secure enough to replace physical destruction?
To explore the security credentials of software-based data erasure, we must first look at the limitations of physical destruction. Physical destruction has been an industry stalwart for the history of IT hardware, particularly for hard disk drives. But it’s not the only, and often not the best, option for highly sensitive data stored on newer drive types.
SSDs and other IT assets can be physically destroyed with brute force, but because of the increasingly dense way data is stored, intact chips and the data they contain can remain on shards of shredded hardware. This vulnerability, plus drive replacement expenses, can be costly to business.
It’s also costly to the environment. As the “green” movement gains momentum and global technology needs skyrocket, there’s concern over the rapid consumption of natural resources for new devices, as well as the vast number of used devices (e-waste) going into landfills.
Given these two physical destruction concerns, organizations are taking a closer look at their bottom line and their role in sustainability while holding to strict standards of secure data protection. | <urn:uuid:b116598c-a601-49d1-bf00-6e0e5fd30a9b> | CC-MAIN-2022-40 | https://www.blancco.com/resources/sb-is-data-erasure-really-secure/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00419.warc.gz | en | 0.920347 | 430 | 2.71875 | 3 |
Many experts predict that machine learning (which many companies are currently investing in significantly) will be responsible for the most important breakthroughs in history. That includes being more important than the industrial revolution or the introduction of electricity, the computer, or the internet. Only time will tell whether these predictions prove to be correct, but machine learning is undoubtedly advancing at a significant pace.
What is a Machine That Learns?
A standard machine is programmed to do a particular task while a machine that can learn is programmed to learn how to do it. This learning is achieved through data, so the quality of the machine is dependent on the data.
In an article on Datanami, authors Hui Li and Fiona McNeill explain that machines learn in four main ways:
· Supervised learning – labelling the data that the machine uses to learn as well as defining the desired output.
· Semi-supervised learning – this uses some data that is not labelled and some data that is labelled.
· Unsupervised learning – the data used by machines that learn unsupervised is completely unlabelled. In this form of learning, the machine looks for patterns in the data.
· Reinforcement learning – this is a trial and error method of learning. Machines that learn in this way will try a scenario, get feedback from its environment, then adapt its approach based on that feedback.
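To make the distinction concrete, here is a minimal sketch contrasting the supervised and unsupervised cases using scikit-learn (assumed to be installed); the data is a toy example.

from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Toy data: two measurements per sample.
samples = [[1.0, 1.2], [0.9, 1.1], [3.0, 3.2], [3.1, 2.9]]

# Supervised learning: every sample comes with a label,
# and the model learns to predict labels for new samples.
labels = [0, 0, 1, 1]
classifier = LogisticRegression().fit(samples, labels)
print(classifier.predict([[2.8, 3.0]]))   # -> [1]

# Unsupervised learning: no labels at all; the model looks
# for structure (here, two clusters) in the data on its own.
clusterer = KMeans(n_clusters=2, n_init=10).fit(samples)
print(clusterer.labels_)                  # e.g. [0 0 1 1] or [1 1 0 0]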
Machine Learning’s Various Forms
Machine learning is a term used to describe a number of different types of technology, all of which learn in one or more of the ways listed above: | <urn:uuid:c189f68a-7c49-4522-b04c-20cface476b9> | CC-MAIN-2022-40 | https://www.grtcorp.com/content/machines-can-learn/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00419.warc.gz | en | 0.939195 | 323 | 3.71875 | 4 |
It may sound like something from a comedy show or comic book, but don’t be fooled!
The newly uncovered Wi-Fi security vulnerability is something that potentially affects millions of devices. These attacks can be used to read your internet communications, inject ransomware and malware, steal personal information and other sensitive data, including financial information and passwords.
The vulnerability affects both private and public Wi-Fi networks, specifically targeting the trusted Wi-Fi encryption protocol WPA2, which is designed to keep user internet activity private. Although the attacker must be within wireless range to exploit the vulnerability, this does not mean you are safe.
In order to best protect yourself from threats, regularly check, patch and update your Wi-Fi-enabled devices. In addition, reset passwords and upgrade your hardware and software to options that offer more comprehensive security features.
In businesses, education is a key factor in ensuring that employees are able to detect and report any security vulnerabilities within the organisation and to ensure that employees act in a safe manner online. Although rectification of vulnerabilities and protection from threats comes at a cost, it is almost always significantly less than the business would pay following a security breach.
Security threats are real, they are serious and they aren’t going away any time soon.
Sign up to our Cyber Security Basics mailing list for a bunch of resources that help you to learn more about the potential threats, how they could affect you and how you can act to prevent them. | <urn:uuid:10f3e7a2-2011-49d2-8eec-e95ad2f3fe95> | CC-MAIN-2022-40 | https://eventura.com/cyber-security/krack-attack/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00419.warc.gz | en | 0.946614 | 304 | 2.8125 | 3 |
One of the amusing and sometimes annoying things about technology is dealing with the inevitable hype cycle associated with any new device. Drones fly near the top of the list. Over the past few years, they’ve been highlighted in news stories, featured in promotional videos and billed as a solution to just about every problem.
Naturally, we’ve also witnessed the inevitable backlash. Politicians, public safety officials and others have railed against drones. There also have been clear public safety dangers, such as drones interfering with commercial aircraft and firefighters in helicopters.
A few extremists have also taken aim at drones literally: They’ve introduced bounties and shot them out of the sky.
Yet, drones have already revolutionized the way directors film movies, TV show and commercials. Insurance companies and engineering firms are now using them to inspect buildings, high-tension wires, bridges and other infrastructure. And agribusiness and food producers are using them to monitor land, crops and animals.
According to the Federal Aviation Administration (FAA), 5,537 petitions to use drones were granted as of July 19, 2016. Recently, the FAA finalized Part 107 for small Unmanned Aircraft Systems.
These first operational rules, which took effect in August, create national, uniform regulations for commercial drone operations. This represents a huge step forward because it allows drones to deliver products beyond visual line of sight.
The FAA, citing industry sources, reports that the rule could generate more than $82 billion for the U.S. economy and create more than 100,000 new jobs over the next 10 years. Although, like any emerging technology, there will be frivolous and sometimes bizarre uses of the technology (7-Eleven recently delivered a Slurpee via a drone), other applications are amazing and remarkable.
For example, California-based drone operator Zipline announced it will begin delivering medical supplies to small and remote communities—including Native American reservations—in Maryland, Nevada and Washington State. Meanwhile, The Weather Company (an IBM Business) and AirMap will tap real-time hyper-local weather data to provide information to drone operators. This is important because Part 107 requires pilots of unmanned systems to consult a weather forecast prior to a flight.
At this point, it’s apparent that drones are here to stay and will impact a wide swath of industries. The resulting disruption will be enormous—and this is just the beginning. | <urn:uuid:1043483a-9615-4d28-919a-03866b9280bb> | CC-MAIN-2022-40 | https://www.baselinemag.com/blogs/the-drones-are-here/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00419.warc.gz | en | 0.942582 | 493 | 2.515625 | 3 |
An exploit kit is a toolkit designed to facilitate the exploitation of client-side vulnerabilities most commonly found in browsers and their plugins in order to deliver malware on end users’ machines.
The first documented case of an exploit kit was found in Russian underground forums in late 2006 and called MPack.
From the beginning, authors of exploit kits made sure to build their program as a commercial package, often including support and providing regular updates.
By 2010, the market for such exploitation tools had blossomed and one of the most popular and revered exploit kits entered the scene with the infamous Blackhole EK.
EK writers started introducing new vulnerabilities at a faster pace and focused on the most deployed and unpatched applications like Java or Adobe Reader.
After the arrest of Blackhole’s creator (Paunch) in late 2013, there was uncertainty in the underground market but activity picked up again not very long after.
By 2015, a newer exploit kit called Angler was dominating and using zero-day vulnerabilities instead of already patched ones. A zero-day attack happens when no patch is available from the software manufacturer and yet an exploit already exists and may even be used on a large scale already.
The primary infection method with an exploit kit is a drive-by download attack. This term is used to describe a process where one or several pieces of software get exploited while the user is browsing a site.
Such attacks occur silently within seconds and most notably they do not require any user interaction. The simple fact of viewing a webpage is enough to trigger an attack.
Websites that have poor security often get hacked and injected with malicious code within their pages, for example iframes, which are HTML tags that allow the loading of an external site directly within the same page.
Other times, well-known and trusted websites are caught redirecting visitors to exploit kits via malicious advertisements, also known as malvertising.
From there, the browser loads the exploit kit landing page which is stuffed with code that fingerprints the victim’s machine for the type of software installed and the corresponding vulnerabilities. In other cases, such as with zero-days, the exploit is fired right away knowing that since there is no patch available, it will most likely succeed in its task.
Once the exploit has opened the door to the target computer, it can load the final piece, which is the malware itself.
For this reason, exploit kits are a means for malicious actors to distribute their malware without the user’s consent on tens of thousands of machines within minutes.
The top exploit kits as of 2015 are:
As mentioned earlier, exploit kits are a means to infect your computer and their code is hosted on remote servers, often housed with bullet-proof hosting providers. For this reason, one cannot remove the exploit kit itself, but rather focus on the payload that was dropped by it. This could be ransomware, a banking Trojan, or a spam bot just to name a few.
Once infected by an exploit kit, you will need to check your computer for the presence of malware using antivirus and anti-malware tools.
Of course, it is also important to identify the cause of the infection (i.e. an out-of-date Flash Player) in order to prevent future ones.
The best way to protect against exploit kits is to first and foremost keep your computer up-to-date but also remove any pieces of software that you no longer need in order to reduce the attack surface you are allowing the bad guys to exploit.
Since zero-days are becoming more and more prevalent, regular patching is no longer sufficient. A layered defense starting with anti-exploit and other mitigation tools is a must.
Just like the change of the seasons, it’s natural to think about the process of birth, growth, plateau, then decline. Technology goes through a similar cycle: from the initial invention, to wide-eyed hype and enthusiasm, to the realization of its shortcomings, and then to its inevitable downfall and replacement by something new. Yet not every technology goes through the exact same process. Some technologies are immediately adopted in a widespread fashion and don’t exhibit any sort of immediate decline or any misalignment of expectation with reality. For example, the light bulb, motor car, phonograph, airplane, Internet, mobile computing, databases, and the cloud have all been enthusiastically embraced, adopted, and retained by billions of people around the planet.
On the other hand, there seem to be waves of technologies that consistently have trouble with adoption. Immersive 3D realities seem to come and go without sticking. So too the concepts of flying cars, supersonic transport, and living in space all seem to be frustratingly beyond our grasp. We can see technically how we can implement these things, but in practice the challenges seem to get in the way from making them a reality.
The relevant question here is which future does Artificial Intelligence (AI) hold for us? Will it be one of those great ideas that we can never seem to attain, or will it finally conquer the technological, economic, and societal hurdles to achieve widespread adoption? In this research, we look at the previous cycles of adoption for AI technology, its ebbs and flows, and analyze whether we are at an incline, plateau, or decline in future adoption expectations.
The First Wave of AI Adoption, and the First AI Winter
The first major wave of AI interest and investment occurred from the early-to-mid 1950s through the early 1970s. Much of the early AI research and development stemmed from the burgeoning field of computer science, going through its rapid growth from vacuum tubes and core memory to the development of integrated circuits and the first microprocessors. In fact, the 20-ish years of computing from 1955 to 1975 were the heyday of computer development, resulting in many of the innovations we still use today. AI research built upon these exponential improvements in computing technology and combined with funding from government, academic, and military sources to produce some of the earliest and most impressive advancements in AI.
Yet, while progress around computing technology continued to mature and progress, with increasing levels of adoption and funding, the AI innovations developed during those heady decades of the early computer years ground to a near halt in the mid 1970s. This period of decline in interest, funding, and research is known in the industry as the AI Winter. This was a period of time when it was dramatically harder to get the funding, support, and assistance necessary to continue progress of AI.
Winter Reason #1: Overpromising, Underdelivering
The early days of AI seemed to promise everything: computers that could play chess, navigate their surroundings, hold conversations with humans, and practically think and behave as people do. It’s no wonder that HAL in 2001: A Space Odyssey didn’t seem so far-fetched to audiences in 1969. Yet as it turned out, those over-promises came to a head when backers’ expectations proved misaligned with reality.
Winter Reason #2: Lack of Diversity in Funding
Government institutions in the US, UK, and elsewhere, provided millions of dollars of funding with very little oversight and restriction on how those funds were used, an outgrowth of Manhattan Project and Space program style funding. This was especially the case with DARPA, which saw great gains from Space projects and nuclear research applicable in all areas of technology. However, they did not see the same sort of general, or even specific, returns from their AI investments. Indeed, it was a practical death-knell to the UK AI research establishment when Sir James Lighthill delivered a report to the UK Parliament in which he derided the attempts of AI to achieve its “grandiose objectives.” His conclusion was that the work in AI had complexities that resulted in “combinatorial explosion” or “intractability” in some instances (Artificial General Intelligence in particular), or were too trivial to be used in more specific (narrow) instances.
Furthermore, AI funding in general was too dependent on government and non-commercial sources. When governments worldwide pulled back on academic research in the mid 1970s fueled by budgetary cutbacks and changes in strategic focus, AI suffered the most. In research settings, this is made worse by the fact that AI tends to be very much inter-disciplinary, involving different departments in computing, philosophy and logic, mathematics, brain & cognitive sciences, and others. When funding drops in one department, it impacts the ability of AI research as a whole. This is perhaps one of the most learned lessons from this era: find more consistent and reliable sources of funding so that research won’t come to an end.
The Second Wave of AI Adoption, and the Second AI Winter
Interest in AI research was rekindled in the mid 1980’s with the development of Expert Systems. Adopted by corporations, expert systems leveraged the emerging power of desktop computers and cheap servers to do the work that had previously been assigned to expensive mainframes. Expert systems helped industries across the board automate and simplify decision-making on Main Street and juice-up the electronic trading systems on Wall Street. Soon people saw the intelligent computer on the rise again. If it could be a trusted decision-maker in the corporation, surely we can have the smart computer in our lives again.
All of a sudden, it wasn’t a dumb idea to assume the computer would be intelligent again. Over a billion dollars was pumped back into AI research and development, and universities around the world with AI departments cheered. Companies developed new software (and hardware) to meet the needs of new AI applications. Venture capital firms, which didn’t exist in the previous cycle, emerged to fund these new startups and tech companies with visions of billion dollar exits. Yet just like in the first cycle, AI adoption and funding ground to a near halt.
Winter Reason #3: Technological hurdles
Expert systems are very dependent on data. In order to create logical decision paths, you need data as inputs for those paths as well as data to define and control those paths. In the 1980s storage was still expensive, often sold in megabyte increments. This is compounded by the fact that each corporation and application needed to develop its own data and decision flows, unable to leverage the data and learnings of other corporations and research institutions. Without a global, connected, almost infinite database of data and knowledge gleaned from that data, corporations were hamstrung by technology limitations.
Compounded on these data issues was the still somewhat limited computing power available. While new startups emerged with AI-specialized computing hardware (Lisp / Symbolic Machines) that could process AI-specialized languages (Lisp, again), the cost of that hardware outweighed the promised business returns. Indeed, companies realized they could get away with less-intelligent systems for far cheaper with business outcomes that weren’t far worse. If only there was a way to get access to almost infinite data with much less cost, and computing power that could be purchased on an as-needed basis without having to procure your own data centers…
Winter Reason #4: Complexity and Brittleness
As the expert systems became more complex, handling increasingly greater amounts of data and logic flows, maintaining those data and flows became increasingly more difficult. Expert systems developed a reputation of being too brittle, depending on specific inputs to get desired outputs, and too ill-suited to more complex problem solving requirements. The combination of the labor required for updating with increasing application challenges resulted in businesses re-evaluating their need for expert systems. Bit by bit, other non-intelligent software applications such as the emergence of Enterprise Resource Planning (ERP) and various process and rules-based systems starting eating at the edges of what could previously only be done with expert systems. Combined with the cost and complexity of Lisp machines and software, the value proposition for continuing down the expert system path grew more difficult to justify. Simply put, expensive complex systems were replaced by cheaper, simpler systems, even though they could not meet overall AI goals.
One possible warning sign for the new wave of interest in AI is that expert systems were unable to solve certain, specific, computationally “hard” logic problems. These sort of problems, such as trying to predict customer demand or determine impacts on resources from multiple, highly variable inputs require vast amounts of computing power, that were simply unavailable in the past. Are new systems going to face similar computationally “hard” problem limits, or is the fact that the computationally hard game of Go was successfully surmounted by AlphaGo recently a sign that we’ve figured out how to handle computational “hardness”.
The Third Wave of AI Adoption… Where We Stand Now
Given these two past waves of AI overpromising and underdelivering, combined with increasing and then decreasing levels of interest and funding, why are we here now with resurging interest in AI? In our “Why Does AI Matter?” podcast and follow-on research, we come to the conclusion that the resurgence in interest in AI revolves around three key concepts: advancement in technology (big data and GPUs in particular), acceptance of human-machine interaction in our daily lives, and integration of intelligence in everyday devices from cars to phones.
Thawing Reason #1: Advancement in Technology
Serving as a direct answer to the Winter Reason #3, the dramatic growth of Big Data and our ability to handle almost infinite amounts of data in the cloud combined with specialized computing power of Graphical Processing Units (GPUs) is resulting in a renaissance of ability to deal with previously intractable computing problems. Not only does the average technology consumer now have access to almost limitless amounts of computing power and data storage at ridiculously cheap rates, but we also have the increasing access to large pools of data that allow organizations to share and build upon each other’s learnings at exceptionally fast rates.
With just a few lines of code, companies have access to ginormous data sets and training data, technologies such as TensorFlow and Keras, access to cloud-based machine learning toolsets from Amazon, Google, IBM, Microsoft, Facebook, and all sorts of capabilities that would previously have been ridiculously difficult or expensive to attain. It would seem that there are no longer long-term technical hurdles for AI. This reduction in cost for access to technical capabilities gives investors, companies, and governments increasing appetite for AI investment.
Furthermore, the emergence of Deep Learning and other new AI approaches is resulting in a revolution of AI abilities. Previous problems that seemed intractable for AI are now much more accessible. Indeed, computing and data capabilities alone can’t explain the rapid emergence of AI capabilities. Rather, Deep Learning and related research developments have enabled organizations to harness the new almost limitless amounts of compute and data to solve problems that have previously been difficult (Winter Reason #4).
Thawing Reason #2: Acceptance of Human-Machine interaction
In addition, ordinary non-technical people are getting accustomed to talking and interacting with computer interfaces. The growth of Siri, Amazon Alexa, Google’s assistant, chatbots, and other technology have proven that people are accepting of human-like intelligence and interactions in their daily experiences. This sort of acceptance gives investors, companies, and governments confidence in pursuing AI-related technologies. If it’s been proven that the average Joe or Jane will gladly talk to a computer and interact with a bot, then more development on that path makes sense.
Thawing Reason #3: Integration of Intelligence in Everyday Technology
Continuing on that theme, we’re now starting to see evidence of more intelligent, AI-enabled systems everywhere. Cars are starting to drive and park themselves. Customer support is becoming bot-centric. Problems of all shapes and sizes are being enabled with AI capabilities, even if they aren’t necessarily warranted. Just as in the early days of the Web and mobile computing, we’re starting to see AI everywhere. Is this evidence that we’re in another hype wave, or is this different? Perhaps we’ve crossed some threshold of acceptance.
The Cognilytica Take: Will There be another AI Winter?
It is important to note that the phenomenon of the AI Winter is primarily a psychological one. While it’s certainly the case that technology hurdles limit AI advancement, this doesn’t explain the rapid uptick and decline in funding in the first two cycles. As Marvin Minsky and Roger Schank warned in the 1980s, the AI winter is caused by a chain of pessimism that infects the otherwise rosy outlook on AI. It starts within the AI research community, percolates to press and general media, which then impacts investors who cut back, and eventually this has direct impact on the beginning of the cycle, dropping interest in research and development.
While we might have successfully addressed Winter Reasons #3 and #4, we still have Winter Reasons #1 and #2 to grapple with. Are we still overpromising what AI can deliver? Are we still too dependent on single-sources of funding that can dry up tomorrow when people realize AI’s limitations? Or are we appropriately managing expectations this time around, and are companies deep-pocketed enough to weather another wane in AI interest?
James Hendler in 2008 famously observed that the cause of the AI winters is not related to just technological or conceptual challenges, but to the lack of basic funding for AI research that would allow those challenges to be surmounted. He rings the warning bell now that we’re diverting much of our resources and attention away from AI research to AI applications and that we’ll once again hit a natural limit on what we can do with AI. This is the so-called AI research pipeline, which Hendler warns is already starting to run dry. This will then inevitably cause the next AI winter. We worry, similarly, that an over-focus on doomsday scenarios for AI could cause an AI Winter without even reaching those limitations.
Indeed, a recent article argued that much of our current AI advancement is based upon decades-old research, which, while delivering great value now, will soon begin to yield diminishing returns. The challenge is to invest again in basic AI research to discover new methods and approaches so that we can continue the current thawing of AI and reach a lasting summer of AI, rather than fall into the coldness of another AI winter.
Our expectation and hope is that the next AI winter will never come. Many companies are now taking an AI-first approach which we hope will continue to make advancements in AI research as well as continue to push the needle forward with practical AI solutions. With AI also becoming more integrated in everyday use cases, which was not the case in the past, it will become too difficult to just pull the plug on AI as had happened in AI Winters past. For these reasons, we expect and hope another AI winter will not come. | <urn:uuid:6ba235c9-b992-463a-a5a1-be8361e06c44> | CC-MAIN-2022-40 | https://www.cognilytica.com/2018/02/22/will-another-ai-winter/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00619.warc.gz | en | 0.956269 | 3,195 | 2.75 | 3 |
When a fiber optic connection is broken, we all know that fiber optic tester like visual fault locators or OTDR can help to find out where problems may occur and solve the problem. Then, what if a problem occurs in Ethernet networks? It’s time to use a network cable tester. This article is going to give a brief introduction to network cable tester and how to use as well as some abnormal connections that may appear in the testing process.
A network cable tester is a device that can identify whether the wires are paired correctly and whether there is a break in the insulation which allows cross-talk between two wires. It can even tell whether the cable has the proper level of resistance. A typical unit consists of a main tester (also called the master tester) and a remote tester, each with nine lights on its front panel. Most testers have either two or three connectors: RJ-45 for Ethernet, RJ-11 for telephone cable, and BNC for coaxial cable. When testing a Cat5e cable, the two testers are connected by the Cat5e cable, and if the cable works, the lights on both testers turn green sequentially from 1 to G. However, when testing a cable with RJ11 connectors, the sequence in which the lights shine will be different.
A network cable tester can test the two common types of cable: coaxial and twisted pair. A basic network cable tester may only test simple connectivity issues and cannot identify other problems that cause network failure. Generally, if a network doesn’t work normally, the problem frequently lies in user error or elsewhere in the network; it is rarely a faulty cable.
A network cable tester can tell whether an Ethernet cable is able to carry a signal when connected to it. So how do you use a network cable tester to check a cable correctly? Since network cable testers come in a number of different forms, here is a simple general guide.
Note: never connect a cable tester to a live circuit. Before testing, remember to unplug the cable from the computer or router.
Step one. Connect the testers. Connect the main tester and remote tester with the cable you want to test. Make sure your cable is plugged in fully and correctly.
Step two. Start the test. When you are ready, switch the tester on to send a signal up the cable, which lights up the LEDs on both testers. If something has gone wrong, the red lights will come on; in that case, check your test procedure or the cable carefully.
Step three. Read the test results. Which lights shine, and in what sequence, indicates different problems. Here are some common results you may encounter during the testing process.
Besides, here are some abnormal connections that can help you to use a network cable tester effectively.
- If a wire is short-circuited, for example wire No. 3, then the two No. 3 lights on both testers will not light.
- If several wires are not connected, the corresponding lights will be off. If fewer than two wires are connected, none of the lights will be on.
- If the wires at the two ends of the cable are in different orders, the lights on the two testers will shine in different orders.
- If three wires are short-circuited, none of the corresponding lights will be on.
A network cable tester is designed to determine how well your high-speed network cables are performing. Poorly performing cables can result in lost work, poor Internet access, and general disruption to the network. A number of problems can be sorted out with a network cable tester, and it will certainly save you time otherwise spent chasing software or hardware causes.
Determining the Value of Built-in I/O Functions
February 28, 2007 Hey, Ted
What follows is a question that I have paraphrased from the emails of various readers of this august publication, who all seem to be encountering a similar issue: “Is it possible to view the value returned by built-in functions, such as %EOF and %FOUND, when working in the green-screen, full-screen debugger?”
The short answer is “No.” The debugger will not show you the result of any function, built-in or user-defined, unless that function is defined to the debugger. (See Undocumented Debugger Function.) The reason for this behavior is that a function is not a variable. A variable is a section of memory, and as such, is easily queried for its current value. A function is executable code that runs when the function is invoked.
However, that does not mean that there is no way to determine the value that an I/O function returns. You just have to go about it another way.
Let’s consider the following trivialized example program snippet.
Fitembl    if   e           k disk
D Itnbr           s             15a
 /free
    chain itnbr itemblmc;
    if %found(itembl);
       DoWhatever();
    else;
       DoWhateverElse();
    endif;
After the chain operation, %FOUND is either true (*ON) or false (*OFF). The same is true for %ERROR.
Here’s another example:
FMyInfo    if   e           k disk
FQSysPrt   o    f  132        printer
 /free
    *inlr = *on;
    dow '1';
       read myinfo;
       if %eof(myinfo);
          leave;
       endif;
       except pline;
    enddo;
    return;
After the read, the %EOF and %ERROR functions are updated with true/false values.
I have heard it said that there’s no need to be able to view the returned values of the functions, since a person can determine how the functions behaved by watching the path of execution. That’s a good point, and that’s what I usually do. But there are times that I would not allow a section of code to execute if I knew what the function had returned, or I would have plugged a value in order to force the program to continue in some other direction. (I assume that those who have asked me this question have similar motives for wanting to know the result of a function call.)
I know of two methods you can use to see a function’s return value. Here’s the more obvious one first.
Method 1: Store the function’s return value in a variable, like this.
FMyInfo    if   e           k disk
FQSysPrt   o    f  132        printer
D eof             s               n
 /free
    *inlr = *on;
    dow '1';
       read myinfo;
       eof = %eof(myinfo);
       if eof;
          leave;
       endif;
       except pline;
    enddo;
    return;
 /end-free
OQSysPrt   e            pline          1
O                       name
O                       age                4 +0001
This method is by no means rocket science. Store the result of the %EOF function into variable EOF, then view the value of EOF. Besides being able to view the result of the %EOF function, you can alter the course of program execution by changing the value of the EOF variable before it is tested.
eval eof = '1'
Method 2: Check the status subfield of the file information data structure.
FMyInfo    if   e           k disk    infds(MyInfoDS)
FQSysPrt   o    f  132        printer
D MyInfoDS        ds
D MyInfoStatus          *status
 /free
    *inlr = *on;
    dow '1';
       read myinfo;
       if %eof(myinfo);
          leave;
       endif;
       except pline;
    enddo;
    return;
 /end-free
OQSysPrt   e            pline          1
O                       name
O                       age                4 +0001
Add an INFDS keyword to the F spec of the file whose I/O status you want to test, and key the name of a data structure as the keyword’s value. In this case, the data structure is MyInfoDS. Create the data structure and include a status subfield, defined with the literal *STATUS in the “from” entry field of the D spec.
When the program performs an I/O operation against MyInfo, the file information data structure will be updated. If the I/O is successful, the status subfield gets set to all zeros. If a READx operation hits beginning or end of file, the status becomes 11. A CHAIN (random read) that does not find a record returns a status code of 12. Status codes of 1000 or above indicate errors. See the iSeries Information Center for more about status codes.
iSeries Information Center, Status Code section | <urn:uuid:0474aa50-c812-4a5f-b9c3-902a386c65f2> | CC-MAIN-2022-40 | https://www.itjungle.com/2007/02/28/fhg022807-story01/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00619.warc.gz | en | 0.821402 | 1,043 | 2.734375 | 3 |
Organizations face a growing challenge: the need to integrate technological systems and align them with overarching corporate goals. As businesses consider the best way to achieve this, the Roman Empire can serve as a guide.
As the Romans used roads to unify economic and military efforts over a large empire, organizations can use business process management software to integrate technological systems to improve processes.
Looking at Rome as a model for businesses
The Via Appia, commonly referred to as the Appian Way, was one of the most famous and noteworthy roads in the Roman Empire. The saying "All roads lead to Rome" has been remembered through history because the empire was built on its system of roads. By building roads in every region that it conquered, the Roman Empire allowed trade to flourish in areas where technology and transportation were extremely limited. In some areas, especially during certain periods of the empire, this led to prosperity not only for Rome, but for the conquered groups as well.
During other periods, the focus of the road system was its ability to enable Roman leaders to distribute military units quickly and intelligently. This allowed Rome to expand and sustain its empire in even some of the most tumultuous regions, because the roads functioned as a central nervous system, enabling communication and logistical simplicity.
If Rome had not prioritized roads like the Via Appia, deer tracks, small paths through woods, rivers and other transit methods would have been needed to support expansion and economic stability. While it may have been possible to get by with those means of operations, it would have been incredibly complex, and slowed progress throughout the entire empire.
Businesses face a similar scenario. Business processes drive success, but technology often functions as the roads upon which workers transport the data and applications that allow them to complete processes efficiently. However, the rise of cloud computing, mobile devices and social technologies presents a landscape more like the rivers, paths and tracks. There are many ways to get to the desired process destination, but they need to be unified and optimized for maximum efficiency. BPM software can accomplish this by unifying various technological solutions and presenting data to end users in the most intuitive way possible.
As companies consider BPM investments, they may want to take a serious look at the Appian way of getting the job done. With tools like Worksocial, Appian is able to provide organizations with a holistic BPM solution that allows them to streamline processes and deliver major performance benefits to end users.
Vice President of Product Marketing
Appian is the unified platform for change. We accelerate customers’ businesses by discovering, designing, and automating their most important processes. The Appian Low-Code Platform combines the key capabilities needed to get work done faster, Process Mining + Workflow + Automation, in a unified low-code platform. Appian is open, enterprise-grade, and trusted by industry leaders. | <urn:uuid:bc987238-cb83-4c66-a843-8d8b8417c002> | CC-MAIN-2022-40 | https://appian.com/blog/2012/taking-the-appian-way-to-process-efficiency.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00619.warc.gz | en | 0.965831 | 576 | 2.59375 | 3 |
Risk management is emerging as a necessary practice for large enterprises and SMBs alike. It isn't enough to simply plug into a cloud provider, operate a few servers on-prem, install a firewall and malware protection, and call it a day. Risk management is a real process that requires insight into your systems and their operations, and practices like penetration testing and vulnerability scanning can help with that process.
What Is Risk Management?
In technical terms, risk management is the process of assessing, evaluating and attending to how your organization can or cannot respond to relevant cybersecurity threats. Central to the concept of risk management is that no IT system can be completely secure against all threats, for a few reasons:
- Limitations in technology: Not every security control or protection measure can protect against all threats, and different technologies or versions of technologies have different capabilities to prevent attacks.
- Complex IT infrastructure: As your systems become more complex and interconnected, new and unforeseen threats naturally arise.
- Business objectives: Your company must remain flexible, resilient and agile. Not every security configuration promotes that kind of operation, nor do they always fit into logistical or cost models.
- Compliance requirements: Outside of specific business goals, you will, depending on your industry, face the strict fact that compliance requirements must be addressed.
Risk management, therefore, is the process of analyzing your IT system to understand how different configurations and implementations introduce or mitigate the risk of attack. Accordingly, rather than expect security engineers to build a risk-free environment, you must take a hard, informed look at what level of risk you are willing to accept in light of other factors, like cost, business goals and other logistical questions.
To best understand their risk profile, organizations typically run risk management processes regularly: to understand how risk affects their systems, whether their level of risk has changed, and how to address or remediate changes in attack risk due to emerging threats or changes to technology or technical configurations.
There are several methodologies to address risk, with perhaps the most well-known process defined by NIST Special Publication 800-37 as the Risk Management Framework (RMF). Under RMF, risk management is defined as a six-step process:
- Identify: At this step, you identify all your relevant technical systems alongside any administrative or physical processes that could impact security. This includes all security systems, access points, user access permissions, network configurations… literally anything that could allow attackers or insider threats to compromise your systems. At this step, you’ll also determine the inherent risks of attack to these systems.
- Select: With an understanding of your system, you now must select the proper security controls necessary for your system. The selection of controls will adhere to the types of technologies at work, any compliance obligations you have and the level of risk that you will feasibly take on.
- Implement: At this stage, you will implement your selected controls, enact any policies or procedures around those controls and align them to your business operations and compliance standards.
- Assess: Create benchmarks, take measurements and observe the functioning of your implementations to determine the efficacy of both the controls and their use.
- Authorize: Using reports and documentation from the previous stage, authorize IT and business leaders to make risk-based decisions informed by the actual operations as they work in real-time.
- Monitor: Continuously monitor the system in the face of evolving security threats, technical upgrades and the addition of new or different technologies. Make continuing risk-based decisions to continue the process of identification, implementation and assessment.
As may be expected, accurate and timely information is paramount to the effectiveness of this model. Fortunately, there are several avenues to get this information.
What Are Penetration Testing and Vulnerability Scanning?
System risk assessment is, depending on the complexity of a system, often a discipline unto itself. While there are several methods to assess system vulnerability, two stand out as effective and accurate, particularly for systems handling sensitive information:
- Vulnerability Scanning: A test, often automated, that identifies and reports on existing vulnerabilities in a system and provides documentation on potential steps for remediation. Scans can be scheduled regularly, and higher quality scans can look for tens of thousands of potential security gaps based on different compliance frameworks and common best practices.
- Penetration Testing: A human-directed test wherein a professional security expert or benevolent attacker (colloquially known as a “white hat” hacker) performs simulated attacks on vulnerable systems to determine weaknesses. These tests are often more thorough and comprehensive, covering potential attack surfaces outside of technical systems (for example, targeting employees through phishing or social engineering alongside technical attacks). A tester, rather than simply cataloging vulnerabilities, will exploit them to see how deep into a system they can get–thus exposing several layers of interconnected systems.
Penetration testing is often the more involved test in that it will usually be more structured, focused and rigorous. Additionally, a pen test will usually include a solid picture of how connected applications, networks or devices can expose your systems to risk in unexpected ways.
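To make this concrete, the sketch below (Python) shows the kind of post-processing a scheduled vulnerability scan can feed into the "Assess" step: bucket findings by severity and surface the worst one first. The findings, hosts and field names are invented placeholders; a real scan export (CSV/JSON) will have its own identifiers and fields depending on the scanner you use.

```python
# Illustrative only: summarizing vulnerability-scan findings for a risk report.
from collections import Counter

findings = [
    {"host": "10.0.0.12", "issue": "outdated web framework",   "cvss": 9.9},
    {"host": "10.0.0.15", "issue": "remote desktop exposed",    "cvss": 9.8},
    {"host": "10.0.0.20", "issue": "weak TLS configuration",    "cvss": 5.3},
    {"host": "10.0.0.31", "issue": "missing OS security patch", "cvss": 8.1},
]

def severity(cvss: float) -> str:
    # Bucket CVSS base scores into the usual qualitative ratings.
    if cvss >= 9.0:
        return "critical"
    if cvss >= 7.0:
        return "high"
    if cvss >= 4.0:
        return "medium"
    return "low"

summary = Counter(severity(f["cvss"]) for f in findings)
print(dict(summary))                     # {'critical': 2, 'medium': 1, 'high': 1}
worst = max(findings, key=lambda f: f["cvss"])
print("Remediate first:", worst["host"], "-", worst["issue"])
```

A report like this does not replace a penetration test, but run on a schedule it gives risk owners a regular, quantifiable input for the assess and monitor steps.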
Defining Risk Management Objectives with Real Insight
As mentioned above, one of the most crucial areas of an effective risk management practice is information. More specifically, risk management at any level calls for your organization to plan a risk management strategy that outlines the acceptable types of risk you face, what your overall risk posture should be, how your processes, operations and technologies function within that posture and how you plan to regularly assess and re-evaluate your strategy.
Within the RMF process, several steps include some sort of evaluative process, particularly the “Assess” step. It stands to reason then that your organization has a robust assessment mechanism in place to support information gathering efforts at any stage of your risk management process.
Penetration testing offers an in-depth look at your system, its vulnerabilities and potential risks. With a solid penetration test, you will have information on hand to provide real insight into what your actual risk profile is. Rather than rely on product documentation or operational reports, you can get an “on the ground” view of what your real vulnerabilities are. That, in turn, gives you incredible insight to make risk-based decisions about your system.
Pen tests are often involved, however, and not something many organizations do in the short term. They take lengthy planning, structuring and deployment times to provide reliable results. In that case, vulnerability scans can provide a more regular “finger on the pulse” of your system.
Both vulnerability scans and penetration testing, used correctly, can support accurate assessment, monitoring and decision making around the risk inherent in your organization, which in turn ensures that you make better, more informed decisions about compliance, technology implementation and configuration updates.
Make Risk Management a Part of Your Business with Continuum GRC
We understand that many businesses can contribute to their industries in unique ways but may not have the time or expertise to manage cybersecurity, governance, or risk on their own. That’s why Continuum GRC offers automated GRC audits and expert consulting through our custom, cloud-based ITAM system. We can help you implement state-of-the-art security practices without compromising your business operations and turn compliance audits from costly endeavors to simple and streamlined aspects of your business.
Preparing for Risk Management and Compliance Audits?
Call Continuum GRC at 1-888-896-6207 or complete the form below. | <urn:uuid:c86519b2-01e6-405f-961a-42567571917c> | CC-MAIN-2022-40 | https://continuumgrc.com/how-can-penetration-testing-help-with-risk-assessment-and-management/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00619.warc.gz | en | 0.934705 | 1,535 | 2.625 | 3 |
A computer network is made up of software and hardware components that allow one device to communicate with another.
Hardware provides the physical equipment that carries data across the network, whereas software specifies the sequence of commands (the communication protocol) that uses that hardware for data transmission.
A basic data transfer is made up of numerous phases that take place at different layers of the computer network. The most established communication stack is the OSI 7-layer model.
A communication network can also be categorized into two broad models:
- Peer-to-peer network model
- Client-Server network model
In this article, we'll go through the peer-to-peer and client-server network models, learn how data is transmitted and received at the computer level, and compare and discuss the two network communication categories.
Table of Contents
Definition of Peer to Peer Model
A P2P (peer-to-peer) network is a decentralized collection of computers established to exchange information (such as documents, songs, movies, or software) with everyone or only with certain users.
In a p2p network, all computers on the network are considered equal, with each workstation offering access to resources and data.
This means that each node in the p2p network model can both request for services from the other peers or offer services to the other peers. Each node can be both a client and a server.
Peer-to-Peer can be huge networks in which computers may interact with each other and share what is on or linked to their machines with other people.
It is also one of the most straightforward architectures to construct, provided of course that you have the proper software on each node.
Definition of Client-Server Model
The client-server model is a centralized network structure in which the server hosts, provides, and maintains the majority of the data and services consumed by clients.
In this network model, a central server is a must and all the clients (computers) are connected to the central server for retrieving data or using its services.
The diagram above shows a server connected to the network (shown as Internet above but can be any other type of network) with various clients.
The server acts as the central point of the network. Servers wait for requests from clients to arrive before responding.
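This request/response pattern can be illustrated with a minimal Python socket sketch. The loopback address, port and message contents are arbitrary choices for the example, and a real server would of course loop to serve many clients concurrently.

```python
# A minimal sketch of the client-server pattern: the server waits, the client asks.
import socket
import threading

HOST, PORT = "127.0.0.1", 5050
ready = threading.Event()

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()
        ready.set()                          # tell the client we are listening
        conn, _addr = srv.accept()           # block until a client connects
        with conn:
            request = conn.recv(1024).decode()
            conn.sendall(f"server response to: {request}".encode())

threading.Thread(target=server, daemon=True).start()
ready.wait()

# The client initiates the exchange; the server only ever responds.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"GET /data")
    print(cli.recv(1024).decode())           # -> server response to: GET /data
```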
Comparison of Peer-to-Peer vs Client Server
| Characteristic | Client-Server Model | Peer-to-Peer Model (p2p) |
| Prime focus | Client-server networks are primarily concerned with data exchange. | Peer-to-peer networks are primarily focused on communication and connectivity. |
| Roles | The server provides all the services and data, while the clients request services and data. | All the members (peers) of the network act as both service providers and consumers. |
| Cost | Expensive to implement, because a central server has to be built and has to keep running constantly, otherwise the network will collapse. | Cheaper to build than a client-server network, as no central server is required. |
| Security | Provides better security because file access is controlled by the server, not the nodes. | More vulnerable: the peers act as server and consumer at the same time, so file access cannot be handled centrally and security is handled by the users. |
| Scalability | Client-server systems are more robust and can be extended as needed. | As the number of nodes in a peer-to-peer network grows, performance decreases. |
| Bandwidth | The bandwidth depends on the connection of the server to the rest of the network. | The full bandwidth is not allocated in advance; each node (peer) uses bandwidth according to what is available to it and releases it when it is no longer required. |
Examples of client-server architecture
Client-server networks are preferable for bigger networks, particularly if they are expected to increase in scale.
If your network contains sensitive data, you should employ a client-server structure as well.
When you use a web browser to go to a particular website, the browser (client) sends a request to the web server which is handling the website’s content.
Then the server responds to the request and sends data and cookies back to the browser which will show that data according to the configuration.
The same goes for database servers. The client sends a request/query to the server, which checks for the legitimacy of that request. If everything checks out, the server will send data back to the client.
Server peering is used in the Internet mail system, which is a distributed client-server framework.
Clients send and receive mail by communicating with servers, while servers communicate with one another.
An outgoing message can be sent directly to the server (MTA), which will transport it to the recipient’s inbox, or to another MTA, which will pass it on.
By organizing servers in layers, this system is designed to be extremely scalable. Another example of a distributed client-server system is the DNS network, which contains root servers and other second-level servers in the hierarchy.
Examples of Peer to Peer (p2p) model
The launch of Napster in 1999 was a pivotal moment in the history of P2P. People used Napster to exchange and download music via its file-sharing program.
Most of the music exchanged on Napster was copyrighted, making it unlawful to distribute it. That didn’t stop many from getting it, though.
Peer-to-peer technology has faced a lot of backlash because of its use in illegal file-sharing (torrents). But a lot of companies, and many day-to-day services we use, incorporate p2p technology.
Another example is Windows 10 updates. Microsoft’s servers and P2P are both used to deliver Windows 10 upgrades.
Some online gaming platforms make use of peer-to-peer (P2P) technology to allow players to download games. Diablo III, StarCraft II, and World of Warcraft are all distributed through peer-to-peer (P2P) by Blizzard Entertainment.
Because of blockchain's peer-to-peer construction, cryptocurrencies can be exchanged globally without the need for middlemen, mediators, or a central repository (server).
Anyone who wants to participate in the process of confirming and verifying blocks can set up a Bitcoin node on the decentralized peer-to-peer network.
In this process, the database is maintained by every member of the network, unlike banks where all the database is handled by a centralized server.
Pros & Cons of Peer to Peer Model
| Pros | Cons |
| Cheap because it does not need a central server. | It is generally slower because every user is accessed by other users. |
| The network does not need a unique operating system to function. | Backing up data and archiving is tough because the files are handled by every user, not by a central server. |
| The speed of your internet connection may have no bearing on the time it takes for your files to download. | It is less secure than other network models. |
| The system will not be disrupted if one of the computers crashes. | Possibilities of illegal data sharing. |
Pros & Cons of Client-Server Model
| Pros | Cons |
| Authorized users (clients) can access and modify data on a server, allowing for improved sharing. | When multiple client requests are made at the same time, servers become significantly saturated, resulting in traffic congestion. |
| Access and resources are better regulated on servers, ensuring that only authorized clients can acquire or alter data. | This network is vulnerable: if the server crashes, the whole network will collapse. |
| Any new user can be simply integrated into the network because the network is flexible enough. | Maintaining a central server can be costly and require a lot of manpower and time. |
| Backing up and archiving data is easier in this network. | The user policies in the network must be set by an expert network administrator. |
I bet this article will be of interest to security and network engineers, especially those engaged in the design, implementation, configuration or troubleshooting of network firewalls such as Cisco ASA, Juniper SRX and Palo Alto, among others.
Related – Firewall Security Level Guide
The recent coronavirus pandemic has changed the way we go through our day-to-day work schedule. Considering the big threat that looms over the world, it has become essential to attain a posture we call social distancing, the best bet to fight the covid19 virus. The Indian government has recently given some relaxations, which will be regulated based on the spread of the virus in each district. The present guidelines issued by the government describe what is allowed or restricted, basically categorized under RED, ORANGE and GREEN Zones. The colour schema of area zoning can be compared to how network firewalls separate areas or zones. Firewalls have a key feature of traffic zoning/classification based on security levels or levels of safety. Both the "India Lockdown" and "Firewall Lockdown" can be considered an outcome of virus attacks.
The spread of the corona virus is very high in some areas (regions with a high number of infections are considered unsafe), and such areas with a large infected population are categorized under the RED Zone (RED Zone = high number of infected people and high-risk areas). In the same way, the Outside Zone (also called the unsecured or external zone of a firewall) is an unsecured area which is vulnerable to attacks from viruses and related threats. Hence, the colour RED also corresponds to the Outside or unsecured zones of a firewall.
In the current lockdown, categorization under the ORANGE Zone (as per new government norms) is more relaxed compared to the RED Zone (since there are fewer infections and a medium level of risk). In the same way, the DMZ (demilitarized or semi-safe zone) can be considered a type of ORANGE zone which lies between the Unsafe (RED) and Safe (GREEN) Zones.
The third and safest zone is the GREEN Zone, with the fewest infections and the least risk. In the same way, the Inside Zone of a firewall is the most secured and protected from vulnerabilities and attacks.
"About 60% of the data on the Dark Web could be damaging to companies." (Boring.com)
One of the internet's most contentious transformations has been the growth of the so-called "Dark Web" since its inception in the 1990s.
While adults may be wary of young people venturing onto the 'Dark Web', we should remember that, like everything online, the technology itself is not to blame for any issues that arise. Instead, it is how people choose to use (or misuse) the tools at their disposal.
If you're worried that a young person is using the internet, knowing the crucial information about these areas of the web may assist you in giving practical and honest assistance to teens.
Yes, let's look closely into each one of it to have more clarity.
The open web, or surface web, is the publicly visible part of the internet that most people use daily; it can be accessed through search engines like Google or Bing.
The deep web is the less-explored inner sanctum of the internet, away from the prying eyes of search engines. It mainly comprises databases that can be accessed securely through the open web. Examples include hotel booking sites, online retailers, medical records and banking information, which are only accessible to authorized persons (i.e., employees) with a password.
Most people use a device with an IP address to go online. This unique identifier allows networks to deliver information, like emails, to the right place. You can use someone's IP address to trace and watch internet activity.
The Dark Web is notoriously difficult to access and even more challenging to trace. Users can remain anonymous by using dedicated software that conceals their IP address. The most popular software used is Tor, short for The Onion Router.
The dark web, sometimes known as the deep or hidden web, is a part of the internet where users may access unindexed online material anonymously via various encryption methods. It is a hidden network of websites that can only be accessed using specific software, configurations, or authorization and is often used for illegal or illicit purposes. The dark web is sometimes referred to as the "dark net" or "deep web".
The intelligence community, media workers, whistleblowers, and ordinary citizens are all users of the dark web who use it for lawful or unlawful reasons.
Each time you connect to the internet, your device is given a unique IP (Internet Protocol) address. This ensures that your data gets where it needs to go.
All it takes to track someone's internet behavior is their IP address: with it, an observer can learn a great deal about their online activities. On the dark web, sophisticated techniques are employed to disguise a user's actual IP address, making it challenging to identify the sites that a device has visited. One of the most popular anonymizing technologies is Tor (The Onion Router).
The term "onion" regarding software encryption means that the Tor network layers each message with protection before sending it through different nodes or computer relays operated by other Tor users. The message then bounces from node to node, and a layer of encryption is removed so that it can reach the next node until there are no more layers left and the receiver gets the decrypted message. This method makes tracing back to where the message originated much more complicated, if not impossible.
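The layering idea can be illustrated with a toy sketch in Python. This is not real cryptography - XOR merely stands in for encryption, and the node names and keys are invented - but it shows how the sender wraps one layer per relay and each relay peels off exactly one.

```python
# Toy illustration of onion-style layering (NOT real encryption).
def xor_layer(data: bytes, key: bytes) -> bytes:
    # XOR with a repeating key stands in for an encryption layer in this sketch.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

relays = [("node_A", b"key-A"), ("node_B", b"key-B"), ("node_C", b"key-C")]
message = b"request for example.onion"

# Sender: wrap innermost layer first, so the first relay's layer is outermost.
packet = message
for _name, key in reversed(relays):
    packet = xor_layer(packet, key)

# Each relay removes only its own layer; only the last one sees the plaintext.
for name, key in relays:
    packet = xor_layer(packet, key)
    print(f"{name} peeled one layer")

print(packet.decode())   # the original request emerges after the final layer
```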
Dark web sites are decentralized and not indexed like traditional websites, so you'll require an onion link—a combination of numbers and letters followed by a .onion extension—to access them. If you're planning to do any browsing on the dark web, you must take some precautions first:
By utilizing a VPN, you're giving yourself a more secure connection while browsing the dark web. Even casual surfing can attract scrutiny, and authorities may monitor activity closely if they believe criminal activity is afoot. A VPN helps you get past these roadblocks and surf anonymously.
If you're looking for anything on the dark web, don't use your default browser. They all have tracking systems, making it simple for authorities to observe your activities.
Tor is the best browser to preserve your anonymity and safety while browsing the internet, especially the dark web. When you connect to a VPN (a virtual private network) before using Tor, your request will be first encrypted by Tor. This method is known as "Tor over VPN." The IP address is concealed through a VPN server, which happens as the traffic passes from country to country. The request is distributed among numerous Tor nodes before being linked with the appropriate website. You may use a Tor Browser combined with a VPN to access the Dark Web securely and pseudonymously more effectively than just using Tor alone.
Malware is less likely to spread from a virtual environment to your local device, so it's best to use VMs when accessing the dark web. Examples of VMs you can use include Oracle VM Virtualbox, Red Hat Virtualization and Microsoft Hyper-V. If you're worried about any malicious activity while using these operating systems, consider disposable OSs that don't rely on physical storage devices--this will help reduce your risks.
After you've set up and configured Tor, you may now go online and explore. Although the material isn't indexed, Hidden Wiki and Grams are an excellent place to start when looking at the dark web. It is most well-known for illegal activities but has essential features such as news platforms, e-commerce sites, social media platforms, email services, and advocacy organizations.
When visiting the darknet, there are a few things to keep in mind. First, don't use your credit or debit cards for purchases. Second, only visit websites that appear to be trustworthy, so you're not involved in any unlawful behavior. Finally, remember that law enforcement is actively monitoring some of the darknet's channels due to criminal activity.
Cybercriminals' services on the dark web are vast and varied. In addition to traditional criminal services such as drug dealing and money laundering, some services facilitate identity theft, hacking, and other forms of cybercrime.
Several services support cybercriminals, such as forums where they can share information and tips or marketplaces where they can buy and sell stolen data. The anonymity of the dark web makes it a haven for scammers, and the range of services available means that there is something for everyone.
While the dark web can be risky, it is also home to several legitimate businesses and services. For example, there are a number of TOR-based email providers that offer encrypted email services, and there are also many legal websites that offer information and resources for those who want to stay anonymous online.
In addition to trafficking personal data and hijacked accounts, cybercriminals also peddle the tools required to launch digital espionage and other malicious activity on the Dark Web.
Examples of such dangerous software include ransomware, keyloggers, botnets for hire, and exploit kits, all of which may be used to attack your company's data, systems, and networks.
Even though dark web surfing isn't as simple as normal internet surfing, a few tools are available to assist you in following your progress. Dark web search engines and platforms like Reddit might help you find reputable dark sites, but you'll need a specialized dark web browser to visit them.
Tor Browser is the most popular dark web browser because it directs your browser traffic through the Tor network to access the darknet. Your data is encrypted and bounced between at least three relay points, known as nodes, during its passage through Tor. Because of this, browsing with the Tor Browser will be slower than with a standard web browser.
While popular internet search engines can't access the dark web, search engines mainly designed for the dark web can help you find what you're looking for. DuckDuckGo is a robust privacy-oriented search engine that lets you maintain your anonymity when you use it across the internet. Haystack, Not Evil, Ahmia, and Torch are other popular dark web search engines. The subreddit r/deepweb is a great place to start if you're looking for advice from more experienced users on how to find what you want on the dark web. And lastly, The Hidden Wiki is a compilation of links to sites on the dark web — but beware that many of these links may be broken or lead to dangerous websites.
Websites on the dark web can only be accessed with special software, and their addresses are usually long strings of random numbers and letters - unlike standard website URLs.
The Hidden Wiki is a website on the surface web offering links to dark websites. However, not all of these links work, and some may not be safe. Before visiting any dark websites, ensure you have robust cybersecurity software in place to protect against potential threats.
While some use the dark web for illegal activity, others use its anonymity for more innocuous means, such as journalism and whistle-blowing. Tor was explicitly created for anonymous communication and provided a vital service in countries that persecute free speech.
It's especially useful for law enforcement and cybersecurity professionals, since it can be used to monitor the dark web. These businesses may keep track of dark web technologies and strategies utilized by scammers by monitoring the dark web. The New York Times is one of many prominent media corporations that frequent the dark web to stay up to date on such sites.
On the dark web, people can buy and sell illegal goods or services, such as drugs or hacking services. This network is underground and not easy to access. One of the most well-known dark websites was The Silk Road, which became infamous for the variety of drugs that could be purchased on the site. In 2013, its founder, Ross Ulbricht, was arrested and the FBI shut down The Silk Road; Ulbricht was later sentenced to life in prison. Another famous dark web market with illicit content, AlphaBay, was also shut down by authorities.
Tor is legal in most countries, with the notable exception of those with authoritarian governments that restrict internet usage. The dark web is another story, though only about half of its sites offer illegal material, according to "The darkness online," a 2016 study by King's College London.
It's up to you to be cautious of what you access and who you interact with if you decide to utilize Tor and the dark web. This prevents you from unintentionally viewing or accessing illegal material, making several police departments angry.
The short answer is yes, the dark web is safe to use. However, there are certain risks that come with using any part of the internet, so it's important to take precautions when browsing.
When accessed through a secure connection, the dark web can be a safe place to browse and communicate anonymously. However, there are scammers and criminals who also operate on the dark web, so it's important to be aware of these risks.
It's pretty difficult to shop for anything on the dark web, especially illicit products. Besides the danger of jail time for purchasing unlawful material, the dark web trade lacks quality control. It's impossible to know who to trust when both the vendor and buyer are hidden. Even vendors with long track records and excellent feedback have been known to vanish with their would-be client's Bitcoin unexpectedly.
The answer is yes if you are wondering whether your personal data can be sold on the Dark Web. However, it is important to note that not all of the information on the Dark Web is accurate or up-to-date. In addition, some of the information sold on the Dark Web may be outdated or no longer accurate.
That being said, there is a market for personal information on the Dark Web. This market is fueled by criminals who are looking to use this information for identity theft, financial fraud, and other types of dark web fraud. If you have had your personal information compromised, it is important to take steps to protect yourself from these criminals.
If your personal information is found on the dark web, there are a few things you can do:
1. Change your passwords - criminals could gain access to your accounts and steal sensitive information if your password is compromised. Be sure to use strong, unique passwords for each account.
2. Monitor your credit report - keeping an eye on your credit report can help you spot identity theft early on and take steps to resolve it quickly. You're entitled to a free annual credit report from each central credit bureau (Experian, Equifax and TransUnion). Check for new accounts or activity that looks suspicious.
3. Place a fraud alert or security freeze on your files - this will make it more difficult for someone to open new accounts in your name, but it may also make it harder for you, since businesses will need additional verification before approving any applications made in your name.
What Are Cloud Communications?
Cloud communications occur in a hosting environment that provides servers, storage, data security, email, backup, data recovery, voice, and other communication resources. Basically, cloud communications are internet-based communications. The cloud’s environment is instant, flexible, scalable, and secure.
Cloud communications providers own and maintain servers, giving access to all the above through their services. Customer costs are lowered when utilizing cloud communication services because they are hosted, managed, and maintained by the provider.
For the purpose of this article, cloud communications cover methods such as voice, email, chat, video, and other collaboration efforts all integrated together in one application to eliminate communication lag, offer greater productivity, and bring people together from anywhere on nearly any device. | <urn:uuid:f3dd1334-3000-4697-9d0d-b92cc3ca6828> | CC-MAIN-2022-40 | https://firstdigital.com/products/voice-services/cloud-communications/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00619.warc.gz | en | 0.925898 | 155 | 2.796875 | 3 |
A DHS grantee created a web comic template for cyber training.
Having trouble getting those abstract cybersecurity concepts across to your students or employees? Try web comics.
During a Homeland Security Department conference Tuesday, the company Secure Decisions presented a new interactive tool it’s developed that allows companies and educators to lay out cybersecurity lessons into interactive web comics without hiring a developer or graphics designer.
The tool, Comic-Based Education and Evaluation, or Comic BEE, was developed with funding from DHS’ Science and Technology division.
Organizations that use the tool can storyboard various cybersecurity challenges, projects and dilemmas using cartoon figures and thought bubbles, Laurin Buchanan, Secure Decisions’ principal investigator for cybersecurity education research, said during a demo at DHS’ Research and Development Showcase.
They can also set up logic chains of correct and incorrect responses and track how well students and trainees perform, Buchanan said.
Buchanan showed a sample page in which a woman named Alice must choose whether to share her password with someone on the phone who claims to be "Bob in IT." (Hint: Bad idea, Alice.) Other tutorials can tackle more complex cybersecurity topics, Buchanan said.
The tool has been piloted at Stony Brook University in New York and at cyber-focused summer camps, including GenCyber, sponsored by the National Security Agency and the National Science Foundation, Buchanan said. It’s now available free to government agencies.
Training future generations of cybersecurity workers is one major goal of a cybersecurity executive order President Donald Trump released in May. That order calls on the Commerce Department to assess cyber education, curriculum, apprenticeships and training programs from grade school through universities.
The department’s cyber standards agency, the National Institute of Standards and Technology, plans to release a request for information Wednesday querying the public about the current state of education metrics for cybersecurity and the sort of cyber knowledge and skills employers value most.
The RFI will also ask whether the way educators currently categorize and organize cyber skills is effective and how prepared educators are to deal with cybersecurity concerns stemming from emerging technologies such as artificial intelligence and connected devices.
The RFI does not address general cybersecurity hygiene for non-technologists.
NIST plans to host a workshop on the RFI’s findings in August in Chicago. | <urn:uuid:cf284ec2-33da-4a29-bfe4-d5bb8061f043> | CC-MAIN-2022-40 | https://www.nextgov.com/cybersecurity/2017/07/heres-how-comics-can-boost-cyber-training/139345/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00619.warc.gz | en | 0.91538 | 500 | 2.796875 | 3 |
European Cyber Security Month (ECSM) takes place throughout October to promote the importance of information security and highlight the simple steps that can be taken to protect personal, financial and professional data.
ECSM’s main goal is to “raise awareness, change behaviour and provide resources to all about how to protect themselves online”.
The European Union Agency for Network and Information Security (ENISA), the European Commission DG CONNECT and Partners deploy the European Cyber Security Month (ECSM) every October.
This year’s campaign breaks down each week of October into different focusses. During each week, ENISA and its partners will be publishing reports, organising events and activities centred on each of these themes. Events will focus on training, strategy summits, general presentations to users and online quizzes.
Here’s what to expect throughout the month:
Week 1: Oct. 1-5
Practice Basic Cyber Hygiene
The theme seeks to assist the public in establishing and maintaining daily routines, checks and general behaviour required to stay safe online.
Week 2: Oct. 8-12
Expand your Digital Skills and Education
Transform your skills and security know-how with the latest technologies.
Week 3: Oct. 15-19
Recognise Cyber Scams
The theme aims to educate the general public on how to identify deceiving content in order to keep both themselves and their finances safe online.
Week 4: Oct. 22-26
Emerging Technologies and Privacy
Stay tech-wise and safe with the latest emerging technologies.
ECSM is inviting public and private sector organisations concerned with network and information security to get involved in this year’s programme. Find out more at https://cybersecuritymonth.eu/ | <urn:uuid:aa2af248-2910-43af-83e8-fb8d4bf00f58> | CC-MAIN-2022-40 | https://www.pcr-online.biz/2018/10/05/european-cyber-security-month-kicks-off-throughout-october/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00619.warc.gz | en | 0.908334 | 361 | 2.8125 | 3 |
Phishing is one of the most prevalent cyber attacks today. Phishing scams are generally fake email messages that appear to come from legitimate organizations, like a utility company, a bank, or the IRS. They direct you to a fake website or ask for personal information, especially account-related details. Phishing messages tend to be panic-inducing in tone, flagging an urgent issue to catch the reader off guard. So what can you do to prevent them?
First, understand what attackers count on: web users are over-saturated with information and will click on links or open emails without noticing that the sender domain is not exactly who they think it is. If you receive an email suggesting you need to go to a site to provide information, examine the URL you're being redirected to for inconsistencies. Reputable organizations will not request confidential information, like Social Security numbers, via email. If you're still unsure, go to the company website directly and call the contact number to verify the request.
A second useful tip comes from IT Toolbox.
Many successful phishing attacks – namely spear-phishing attacks that target specific individuals in an organization – exploit Internet domains and the inherent trust that users place on domains. Say that your business domain name is x~y~z.com. It’s simple for a criminal hacker to register very similar-looking domain names such as x~y~z-tech.com or x~y~z.us, and then attack your users by sending phishing emails that look like they’re coming from someone legitimate inside your organization. The reality is, with today’s information overload and weary eyes, a lot of computer users aren’t going to notice small nuances in the originating domain name such as this. I think people are getting more savvy in looking for domain names ending in .cn or .ru and not clicking on those links. But it’s human nature (a weakness of our brains?) to overlook something that’s very similar such as a domain name that has a slightly different spelling.
The solution is obvious and simple. Just register all domain names that are similar to your domain name(s). Use your domain registrar to search for similar spellings, adding words or acronyms onto the correct spelling, as well as any top-level domains that you might have overlooked when your original domains were registered such as .info and.net. You will probably spend less than $200 doing so. Imagine if you could reduce your email phishing risks by a significant percentage by spending a mere drop in the bucket of your overall security budget!
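As a rough illustration of that tip, the sketch below (Python) generates look-alike variants of your own domain that are worth registering or at least monitoring. "example.com" is a placeholder, and the substitution, TLD and suffix lists are deliberately short; a real typosquat generator would cover far more permutations.

```python
# Generate candidate look-alike domains to register or monitor (illustrative).
domain, tld = "example", ".com"
lookalike_chars = {"l": ["1"], "o": ["0"], "e": ["3"], "a": ["4"], "i": ["1"]}
extra_tlds = [".net", ".info", ".us", ".co"]
suffixes = ["-tech", "-support", "-login", "s"]

candidates = set()

# The same name under other top-level domains (example.us, example.info, ...).
candidates.update(domain + t for t in extra_tlds)

# Plausible-looking additions (example-tech.com, examples.com, ...).
candidates.update(domain + s + tld for s in suffixes)

# Single-character look-alike substitutions (examp1e.com, exampl3.com, ...).
for i, ch in enumerate(domain):
    for sub in lookalike_chars.get(ch, []):
        candidates.add(domain[:i] + sub + domain[i + 1:] + tld)

for name in sorted(candidates):
    print(name)   # feed this list into your registrar's availability search
```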
Your antivirus and antimalware software can only do part of the work involved in protecting your data. It is up to each user to be vigilant about cyber attacks. If you think you need a more robust security package, contact the data security experts at Great Lakes Computer. We offer a wide range of services designed to keep your data safe and give you peace of mind. If you suspect you’ve been a victim of cyber attacks, it may be wise to utilize our cyber forensic services. We can search your system for intrusions and eliminate them. | <urn:uuid:f435495f-f533-4301-9d06-b6c0699281d5> | CC-MAIN-2022-40 | https://greatlakescomputer.com/blog/malware-tip-protecting-yourself-from-phishing-hacks | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00019.warc.gz | en | 0.931056 | 647 | 2.9375 | 3 |
Microwave links using 512QAM, 1024QAM, 2048QAM & 4096QAM (Quadrature Amplitude Modulation)
What is QAM?
Quadrature amplitude modulation (QAM), including 16QAM, 32QAM, 64QAM, 128QAM, 256QAM, 512QAM, 1024QAM, 2048QAM and 4096QAM, is both an analog and a digital modulation scheme. It conveys two analog message signals, or two digital bit streams, by changing (modulating) the amplitudes of two carrier waves that are 90 degrees out of phase with each other (hence "quadrature"), using the amplitude-shift keying (ASK) digital modulation scheme or the amplitude modulation (AM) analog modulation scheme.
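In other words, the transmitted passband signal can be written as s(t) = I(t)*cos(2*pi*fc*t) - Q(t)*sin(2*pi*fc*t), where fc is the carrier frequency and the in-phase component I(t) and quadrature component Q(t) each take one of a discrete set of amplitude levels; every (I, Q) pair corresponds to one constellation point.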
Why are higher QAM levels used?
Modern wireless networks often require higher capacities. For a fixed channel size, increasing the QAM modulation level increases the link capacity. Note that the incremental capacity gain at low QAM levels is significant, but at high QAM levels the gain is much smaller. For example, increasing
From 1024QAM to 2048QAM gives a 10.83% capacity gain.
From 2048QAM to 4096QAM gives a 9.77% capacity gain.
What are the penalties in higher QAM?
The receiver sensitivity is greatly reduced. For every QAM increment (e.g. 512 to 1024QAM) there is a -3dB degradation in receiver sensitivity. This reduces the range. Due to increased linearity requirements at the transmitter, there is a reduction in transmit power also when QAM level is increased. This may be around 1dB per QAM increment.
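The trade-off can be checked with a quick back-of-the-envelope calculation. The sketch below (Python) uses the idealized assumptions that capacity scales directly with bits per symbol and that sensitivity degrades roughly 3 dB per QAM step; practical radios quote slightly different capacity gains once framing and coding overheads are taken into account.

```python
# Ideal bits-per-symbol capacity gain per QAM step, with the ~3 dB sensitivity cost.
import math

def bits_per_symbol(m: int) -> int:
    return int(math.log2(m))

steps = [64, 128, 256, 512, 1024, 2048, 4096]
for low, high in zip(steps, steps[1:]):
    gain = bits_per_symbol(high) / bits_per_symbol(low) - 1
    print(f"{low:>4}-QAM -> {high}-QAM: +{gain:.1%} capacity, roughly -3 dB sensitivity")
```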
Comparing 512-QAM, 1024-QAM, 2048-QAM & 4096-QAM
This article compares 512-QAM, 1024-QAM, 2048-QAM and 4096-QAM and describes the differences between these modulation techniques, along with the advantages and disadvantages of QAM over other modulation types. It also touches on 16-QAM, 64-QAM and 256-QAM.
Understanding QAM Modulation
Let us start with the QAM modulation process from the transmitter to the receiver in the wireless baseband (i.e. physical layer) chain, using 64-QAM to illustrate. Each symbol in the QAM constellation represents a unique amplitude and phase, hence symbols can be distinguished from one another at the receiver.
Fig:1, 64-QAM Mapping and Demapping
• As shown in figure 1, 64-QAM (or any other modulation) is applied to the input binary bits.
• The QAM modulation converts input bits into complex symbols, which represent the bits through variations in the amplitude/phase of the time-domain waveform. 64-QAM converts 6 bits into one symbol at the transmitter.
• The bits-to-symbols conversion takes place at the transmitter, while the reverse (symbols to bits) takes place at the receiver. At the receiver, one symbol yields 6 bits at the output of the demapper.
• The figure depicts the position of the QAM mapper and QAM demapper in the baseband transmitter and receiver respectively. Demapping is done after front-end synchronization, i.e. after channel and other impairments have been corrected in the received baseband symbols.
• Data mapping (modulation) is done before RF upconversion (U/C) and the PA in the transmitter. Because of this, higher-order modulation necessitates a highly linear PA (Power Amplifier) at the transmit end.
QAM Mapping Process
Fig:2, 64-QAM Mapping Process
In 64-QAM, the number 64 refers to 2^6.
Here 6 is the number of bits per symbol used in 64-QAM.
Similarly it can be applied to other modulation types such as 512-QAM, 1024-QAM, 2048-QAM and 4096-QAM as described below.
The following table shows the 64-QAM encoding rule; check the exact encoding rule in the respective wireless standard. The KMOD value for 64-QAM is 1/SQRT(42).
| Input bits (b5, b4, b3) | I-Out | Input bits (b2, b1, b0) | Q-Out |
QAM mapper Input parameters : Binary Bits
QAM mapper Output parameters : Complex data (I, Q)
The 64-QAM mapper takes binary input and generates complex data symbols as output, using the encoding table mentioned above for the conversion. Before the conversion, the data is grouped into sets of 6 bits. Here, (b5, b4, b3) determines the I value and (b2, b1, b0) determines the Q value.
Example: Binary Input: (b5,b4,b3,b2,b1,b0) = (011011)
Complex Output: (1/SQRT(42))* (7+j*7)
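For illustration, a minimal 64-QAM mapper can be sketched in a few lines of Python. The bit-to-amplitude rule below is one common Gray-coded mapping (an LTE-style formula), chosen because it reproduces the worked example above; other standards may order bits or levels differently, so always check the encoding table of the standard you are implementing.

```python
# Minimal 64-QAM mapper sketch (one common Gray-coded mapping; standard-specific).
import math

KMOD_64QAM = 1 / math.sqrt(42)   # normalizes average symbol energy to 1

def axis_level(c1: int, c2: int, c3: int) -> int:
    # Maps three bits to one of the amplitudes {-7, -5, -3, -1, 1, 3, 5, 7}.
    return (1 - 2 * c1) * (4 - (1 - 2 * c2) * (2 - (1 - 2 * c3)))

def map_64qam(bits):
    # bits = (b5, b4, b3, b2, b1, b0): 6 bits in, one complex symbol out.
    b5, b4, b3, b2, b1, b0 = bits
    i_amp = axis_level(b5, b4, b3)
    q_amp = axis_level(b2, b1, b0)
    return KMOD_64QAM * complex(i_amp, q_amp)

print(map_64qam((0, 1, 1, 0, 1, 1)))   # (7+j7)/sqrt(42), i.e. about 1.080+1.080j
```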
Fig:3, 512-QAM Constellation Diagram
The above figure shows the 512-QAM constellation diagram. Note that 16 points are omitted from each of the four quadrants (relative to a full square grid), leaving 128 points per quadrant and 512 points in total. 512-QAM carries 9 bits per symbol and increases capacity by 50% compared to 64-QAM.
The figure shows a 1024-QAM constellation diagram.
Number of bits per symbol: 10
Symbol rate: 1/10 of bit rate
Increase in capacity compare to 64-QAM: About 66.66%
Following are the characteristics of 2048-QAM modulation.
Number of bits per symbol: 11
Symbol rate: 1/11 of bit rate
Increase in capacity from 64-QAM to 2048QAM: 83.33% gain
Increase in capacity from 1024QAM to 2048QAM: 10.83% gain
Total constellation points in one quadrant: 512
Following are the characteristics of 4096-QAM modulation.
Number of bits per symbol: 12
Symbol rate: 1/12 of bit rate
Increase in capacity from 64-QAM to 4096QAM: 100% gain
Increase in capacity from 2048QAM to 4096QAM: 9.77% gain
Total constellation points in one quadrant: 1024
Advantages of QAM over other modulation types
Following are the advantages of QAM modulation:
• Helps achieve a high data rate, as more bits are carried by one carrier. Because of this, it has become popular in modern wireless communication systems such as LTE and LTE-Advanced. It is also used in the latest WLAN technologies such as 802.11n, 802.11ac and 802.11ad.
Disadvantages of QAM over other modulation types
Following are the disadvantages of QAM modulation:
• Though the data rate is increased by mapping more than 1 bit onto a single carrier symbol, a high SNR is required in order to decode the bits at the receiver.
• Needs a highly linear PA (Power Amplifier) in the transmitter.
• In addition to high SNR, higher-order modulation needs very robust front-end algorithms (time and frequency synchronization and channel estimation) to decode the symbols without errors.
For Further Information
For More Information on Microwave Links, Please Contact Us | <urn:uuid:0f436759-a5d3-4a97-9511-a14c8314ad31> | CC-MAIN-2022-40 | https://www.microwave-link.com/tag/modulation/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00019.warc.gz | en | 0.825426 | 1,672 | 3.125 | 3 |
As history turns the page on traditional methods of how energy is produced and distributed, utilities and energy suppliers are shifting toward Industry 4.0. The end game of this transformation will be more efficient energy management. Industrial markets will feel the ripple effects of this transition toward smart factories, culminating in cost savings and an eco-friendlier landscape.
Energy producers will have much more data to analyze thanks to IoT, helping them meet power demands with less strain on the grid. And smart technology will also refine operational processes based on data from sensors that monitor electricity needs.
Another compelling component of smart energy management is the inclusion of clean, renewable energy. Solar and wind energy can provide additional electricity to power companies when demand for energy exceeds supply of traditional resources. Backup stored renewable energy can supplement the grid, filling power gaps that occur during transmission.
Intelligence collected from IoT devices can be analyzed by machines to alert utility officials when adjustments are needed to account for system imbalances. AI technology can analyze system performance, predict power fluctuations, and make decisions on accounting for shortages and overloads. More energy will be conserved per day, as utilities will see increases in profit margins and more reliable output.
At the foundation of all smart technology is a system of interconnected IoT devices that measure system activity. Industry 4.0 is bringing qualitative changes to energy and utility management that point to more reliable and sustainable solutions, such as the following:
- Comprehensive Monitoring - Operational monitoring through IoT sensors can speed up the process of locating system vulnerabilities. Tracking various production factors, such as applied power and harmonics, will help bring greater stability to power generation at lower costs.
- Industrial Internet of Things (IIoT) - Just like IoT, similar IIoT technology designed for industrial use is the key to expansion of data gathering. IIoT is also a synonym for Industry 4.0, which consists of machine learning and automation to improve a factory's infrastructure. Sensors connect with other digital technology to measure processes and performance throughout the system.
- Big Data Analysis - Since a factory usually involves a wide range of processes, it takes thousands of meters to collect comprehensive data for system analysis. Machine learning is the root of this analysis, fueled by historical data that can generate intelligence on current conditions and carefully calculated future predictions.
- Streamlined for Sustainability - Quality data is a powerful force for improving overall sustainability for large power plants. Algorithms can be developed to reduce negatives while boosting positives as far as system speed, noise and energy consumption.
Ultimately, the benefits from this modern technology will help utility management reduce greenhouse gases and improve financial conditions. Managers will have a wealth of valuable system information at their fingertips that wasn't available to them until recently. It will put utilities in a position to be more competitive and achieve greater customer satisfaction.
Goals for Energy Efficiency
The new and improved direction for energy management opens the door to leaner costs for utilities and greener effects on the environment. Part of the push for these changes comes from stronger government regulations, driven by scientific and public concerns. Utilities are leading the way for the entire business world to follow as far as putting a greater emphasis on improving energy efficiency, | <urn:uuid:6fd7fb31-581a-4dd3-8528-d56144432b7c> | CC-MAIN-2022-40 | https://iotmktg.com/impact-of-industry-4-0-on-energy-and-utilities-management/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00019.warc.gz | en | 0.933297 | 648 | 2.953125 | 3 |
Employing Cloud Technology for Effective Learning
Benefits of Cloud Computing in the Education industry
Cloud computing offers a multitude of advantages for the education industry, perhaps the greatest of which is, access to large-scale solutions that would be otherwise unattainable for many medium and small-sized institutions.
A. Some of the earliest cloud-based solutions for education included email services (Gmail, Live@EDU, and more recently Office 365) that have in many cases made on-premise email solutions a commodity service. Once such services reach commodity status, outsourcing to a cloud-based provider often yields cost savings (staffing, software licensing, and operations).
B. Cloud-based storage solutions have experienced rapid adoption as the cost of storage plummets and the feature set available to subscribers’ have increased. Tight integration with mobile devices and mobile application has also driven widespread adoption of cloud-based storage solutions. Colleges and universities can effectively shift capital expenses, staffing, and some operating costs (back-up expenses, for example) to the cloud-based provider, allowing campus information technology teams to focus more on specialized and mission critical activities and support. When a cloud-based storage solution is combined with a consortium buying agreement, the advantages can be compelling. Internet2’s NET+ program offers such a program to both members and non-member institutions in partnership with box.com.
C. A considerable number of instructional tools and applications have evolved to a cloud-computing model, including learning management systems (LMS), e-portfolio solutions, and classroom response systems. Once these applications operate in the cloud, their commodity status becomes a factor. The option for colleges and universities to avoid recurring capital costs, downtime for upgrades, and staff time for on-premise system support becomes compelling. Additionally, many of these applications are integrated in such a way that they provide even greater value when operated from the cloud.
Philadelphia University's desire is to leverage data, which was driven by the need to produce an executive dashboard, providing overnight updates to several KPIs, and to help foster data-driven decisions
D. An unusual cloud computing solution that touches our campus involves software application rendering in the cloud. About half the students on our campus are in design disciplines that require local rendering of application files for final review, display or viewing. This process is computationally intense and can require dedicated workstations for many hours or even days. Several vendors and service providers are offering no charge or low cost cloud-based rendering for the most popular design applications (AutoCAD, 3dsMax, Revit, Rhino, and Maya ). This relieves the pressure on both campus computing facilities and students’ own laptops or desktops devices.
E. One very promising area for cloud computing is the next generation ERP (enterprise resource planning) systems. These complex and costly systems could possibly become more manageable, scale much better, offer improved access to compliance, better integration with data analytics tools, and leveraging of mobile access. Once again, the benefits of avoiding recurring capital costs, downtime for upgrades, and staff time for on-premise system support are apparent.
Our involvement with cloud computing involves the implementation of the first four areas (A, B, C, and D), all having been in place for between 2-6 years. As a result we have been able to expand services, provide greater control, and assign staff to more strategic information technology projects. The last area, ERP in the cloud, is one that we are monitoring closely with the expectation that we will aggressively focus our investigation in the next 3-5 years.
Data as the Driving force
Universities and colleges have made data and information gathering a priority on their campuses and see direct value in meaningful analysis of all information related to recruitment, retention persistence, and academic success for their students. There are numerous tools and approaches to the effective use of information and data that colleges and universities can leverage to provide value and insight for improved decision-making. Data warehouses can help provide a repository for institutional data without taxing core ERP systems. Information stored in data warehouses can be mined and exploited using many different tools and solutions. Philadelphia University’s desire is to leverage data, which was driven by the need to produce an executive dashboard, providing overnight updates to several KPIs, and to help foster data-driven decisions. One of the University’s most successful projects that evolved from this is the ‘Smart Growth’ study, which involved the analysis of several years of course section scheduling, identification of low enrollment sections, adjunct faculty hiring patterns, fulltime and adjunct teaching loads, institutional coordination of adjunct hiring, and the frequency of elective offerings. Through a careful process of data mining of the institutional data warehouse, this process enabled the application of a set of strategies that included matching section enrollment to pedagogy, optimization of room capacity, establishing master course sections with subordinate sections, holding additional sections in reserve until seats in the primary section are filled, optimizing the frequency of required courses and electives, and managing demand for adjunct faculty hiring. Ultimately this project has led to more refined scheduling of course sections, assignment of course loads, and the hiring of adjunct faculty. In the first full year of implementation following the study, the projected savings from reduced course section offerings, reduced need for adjunct faculty, and reduced number of low-enrolled course sections was $400,000 in ongoing savings to the academic units.
Retention is an area that can benefit from leveraging data and applying the resulting information through a customer relation management (CRM) application. Philadelphia University is in the third year of a student success-focused CRM that is designed to collect academic alerts and academic praise, instantly distributing those elements to faculty, support staff, academic advisors, and student affairs professionals assigned to react and respond to student deficiencies and/or signs of struggle. This provides a platform for intervening promptly to ensure student academic success, especially for freshmen. In the first two years, we have increased the freshman to sophomore retention rate by more than 7 percent. This success is due to the influence of compelling data, committed faculty/support staff, and timely advising. Although it is much more difficult to measure, there is a good reason to believe that in addition to helping a struggling student recover and succeed, the system encourages many students to achieve at higher levels.
Innovative Ways Employed by the University
Innovation can be fostered through diverse perspective and points of view both in the faculty and student populations. Philadelphia University has a signature academic approach called Nexus Learning, which is active, collaborative, real-world learning infused with the liberal arts. In the Kanbar College of Design, Engineering and Commerce, students in design, engineering and business have core courses together each year and approach learning and problem-solving in a more holistic and real-world way, which often leads to innovative solutions. This learning environment serves as the foundation for a campus culture that embraces innovation. The information technology team leverages this by building a diverse team of professionals committed to supporting students and faculty, identifying opportunities to construct creative solutions, and setting aside time each month to explore and experiment in areas beyond the typical work-related tasks. The IT team is open to failed experimentation and realizes the valuable learning that results from such activities in a culture of innovation.
Top Cloud Technology Solution Companies
Top Cloud Consulting/Services Companies | <urn:uuid:18149590-2f2f-4b23-a833-19bc1789477c> | CC-MAIN-2022-40 | https://itad.cioreview.com/cioviewpoint/employing-cloud-technology-for-effective-learning-nid-15186-cid-277.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00019.warc.gz | en | 0.948656 | 1,510 | 2.5625 | 3 |
A clever algorithm that has digested seven decades’ worth of articles in China’s state-run media is now ready to predict its future policies. The research design of this “crystal ball” can also be applied to tackling a variety of other problems.
Supervised learning — the most developed form of Machine Learning — involves learning a mapping from input data (such as emails) to output labels (whether they are “spams”) and, subsequently, applying the learned mapping to predict the labels for new data (i.e., new emails). A critical prerequisite to this approach, however, is a rich and representative set of training data, which are often hard to come by.
On the other hand, in the era of Big Data, there are ample data labels that are readily available but ostensibly unimportant for the problems we would like to tackle. But, are they really unimportant?
In a new research paper, “Reading China: Predicting Policy Change with Machine Learning,” we demonstrate that seemingly trivial labels can be used to uncover important underlying patterns. We build a neural network algorithm that “reads” the People’s Daily, China’s official newspaper, and classifies whether each article appears on the front page — an ostensibly trivial label. It turns out that such a simple algorithm can be used to detect changes in how the People’s Daily prioritizes issues, which, in turn, have profound implications for China’s government policies.
The algorithm tries to mimic the mind of an avid People’s Daily reader who reads its articles and tries to figure out how its editor places articles on different pages. Due to the official status of this newspaper, the way its editor selects articles for the front page reflect the newspaper’s issue priorities, which the avid reader will try to pick up. If the reader had read and thought through, say, five years’ worth of articles, they would have acquired a fairly good sense of what is in the editor’s mind and what kind of articles “should” or “should not” appear on the front page. But if the reader was then surprised by new articles in the following quarter — that is, their educated guess about the new articles turned out to work either particularly well or exceptionally poorly — it might constitute a signal of change from the reader’s perspective. While a small surprise may well be taken as noise, a strong signal would convince the reader that their existing understanding of the editor’s mind is no longer valid and that the priorities of the People’s Daily must have fundamentally changed.
Using the above reasoning, we construct a quarterly indicator, which we call the Policy Change Index (PCI) of China, that captures the amount of surprise to the algorithm in each quarter, compared to the paradigm the algorithm has acquired over the past five years’ data.
The namesake of the indicator comes from the fact that detecting changes in the newspaper’s priorities allows us to predict changes in the Chinese government’s policies. This is because the People’s Daily is at the nerve center of China’s propaganda system, an essential function of which is to mobilize resources to attain the government’s policy goals. Moreover, before major policy changes are made, the government often finds it necessary to justify to or convince the public that those changes are the right moves for the country. Hence, while the algorithm is detecting propaganda change in real time, the resulting index is really predicting policy changes for the future.
When put to the test against the ground truth — policy changes in China that did occur in the past — the PCI could have correctly predicted the beginning of the Great Leap Forward in 1958, that of the economic reform program in 1978, and, more recently, a reform speed-up in 1993 and a reform slow-down in 2005, among others. Furthermore, these events are widely recognized in the academic literature as among the most critical junctures in the history of China’s economy and reforms.
Our approach to learning underlying patterns from easily available labels has an obvious “context-free” feature; that is, the construction of the PCI does not rely on the researcher’s understanding of the Chinese context (it’s language, history, or politics). This feature opens the door to a variety of applications that have a structure similar to ours. Readers can find more details about China’s policy changes, methodology, and its potential applications in this research paper or the website of the project. The source code of the project is also released on GitHub, so that the academic, business and policy communities can not only replicate the findings but also apply this method in other contexts.
(This article is co-authored by Julian TszKin Chan and Weifeng Zhong)
Julian TszKin Chan is a senior economist at Bates White Economic Consulting. Weifeng Zhong is a research fellow in economic policy studies at the American Enterprise Institute.
Weifeng Zhong will be speaking on this subject at Data Natives 2018 in Berlin. The views expressed here and in Weifeng Zhong’s speech are and will be solely those of the authors and do not represent the views of the American Enterprise Institute, Bates White Economic Consulting, or their other employees. | <urn:uuid:be40ed30-a91b-406d-8b15-216ada2e384d> | CC-MAIN-2022-40 | https://dataconomy.com/2018/11/machine-learning-with-a-twist-how-trivial-labels-can-be-used-to-predict-policy-changes/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00019.warc.gz | en | 0.940409 | 1,099 | 2.625 | 3 |
Two different classes of identifiers must be tested to reliably authenticate things and people: assigned identifiers, such as names, addresses and social security numbers, and some number of physical characteristics. For example, driver’s licenses list assigned identifiers (name, address and driver’s license number) and physical characteristics (picture, age, height, eye and hair color and digitized fingerprints). Authentication requires examining both the license and the person to verify the match. Identical things are distinguished by unique assigned identities such as a serial number. For especially hazardous or valuable things, we supplement authentication with checking provenance — proof of origin and proof tampering hasn’t occurred. | <urn:uuid:71c6f6fe-7c78-406c-aad4-49c154a5b67d> | CC-MAIN-2022-40 | https://www.absio.com/tag/data-breach/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00219.warc.gz | en | 0.897965 | 134 | 3.375 | 3 |
There is a strong case to be made that artificial intelligence (AI) is now the most central topic in technology. While the computer science that underpins AI has been in development since the 1950s, the rate of innovation has gone through multiple step changes in the last ten years.
The technological reasons for this are well understood: the advent of neural networks; an increase in semiconductor processing power; and a strategic shift away from AI systems that rely on parameter-driven algorithms towards self-reinforced and multiplicative learning, machines that get smarter the more data they are fed and scenarios they negotiate.
Development has been open and collaborative. The benefits of AI in process efficiency and, potentially, accuracy are clear. For this reason, R&D activity, pilots and commercial deployments stretch to virtually every sector of the economy from healthcare to automotive manufacturing to telecom networks. A recent Vodafone survey indicated a third of enterprises already use AI for business automation, with a further third planning to do so. Take-up on this scale, at this rate, could put AI on a level with prior epochal shifts of electricity, the combustion engine and personal computing.
Two sides to each coin
Whether that actually happens depends on how the technology is managed. I spend a lot of time talking with major telecom and technology companies. While it’s clear AI is a major point of interest to nearly everyone, the discussion is still pitched in generalities. Paraphrasing:
AI is the Fourth Industrial Revolution
We know AI is big and we want to do something with it, but we don’t know what
We’re moving to be an AI-first company
How can we win with AI?
We’re a far more efficient company because of AI
The ebullient tone is to be welcomed.
Far less talked about, however, are the ethical and legal implications that arise from trading off control for efficiency. It’s fairly clear that cognitive dissonance is at work – the benefits blind us to the risks.
How do you answer these?
A crucial faultline is the balance between programmed and interpretive bias. That is to say, how much are machines programmed to act based on the way humans want them to act (reflecting our value sets) versus their own learned ‘judgement’? This has a direct bearing on accountability.
To make this point, let’s pose a series of questions that draw on how AI is being used in different industries.
If a self-driving car faces the inevitability of a crash, how does it decide what or who to hit? If that same self-driving car is deemed to be at fault, who bears responsibility? The owner? The car manufacturer? A third-party AI developer (if the technology was outsourced)?
If an algorithm is tasked with predicting the likelihood of reoffending among incarcerated individuals, what parameters should it use? If that same algorithm is found to have a predictive accuracy no better than a coin flip, who should bear responsibility for its use?
If Facebook develops an algorithm to screen fake news from its platform, what parameters should it use? If content subsequently served to people’s news feeds is deemed intentionally misleading or fabricated, does responsibility lie with the publisher or Facebook?
I chose these for a number of reasons. One, these are real examples rather than hypothetical musings. While they emanate from specific companies, the implications extend to any firm seeking to deploy AI. Second, they illustrate the difficulty in extracting sociological bias from algorithms designed to mimic human judgement. Third, they underline the fact that AI is advancing faster than regulations and laws can adapt, putting debate into the esoteric realms of moral philosophy. Modern legal systems are typically based on the accountability of specific individuals or entities (such as a company or government). But what happens when that individual is substituted for an inanimate machine?
No one really knows.
A question of trust
Putting aside the significant legal ramifications, there is an emerging story of the potential impact on trust. The rise of AI comes at a time when consumer trust in companies, democratic institutions and government is falling across the board. Combined with the ubiquity of social media and rising share of millennials in the overall population, the power of consumers has reached unprecedented levels.
There is an oft-made point that Google, Facebook and Amazon have an in-built advantage as AI takes hold because of the vast troves of consumer data they control. I would debunk this on two levels. First, AI is a horizontal science that can, and will, be used by everyone. The algorithm that benefits Facebook has no bearing on an algorithm that helps British Airways.
Second, the liability side of the data equation has crystallised in recent years with the Cambridge Analytica scandal and GDPR. This is reflected in what you might call the technology paradox: while people still trust the benevolence of the tech industry, far less faith is placed in its most famous children (see chart, below, click to enlarge).
In an AI world, trust and the broader concept of social capital will move from CSR to boardroom priority, and potentially even a metric reported to investors.
This point is of heightened importance for telecom and tech companies given their central role in providing the infrastructure for a data-driven economy. Perhaps it is not surprising, then, that Google, Telefonica and Vodafone are among a vanguard seeking to proactively lay down a set of guiding principles for AI rooted in the values of transparency, fairness and human advancement. The open question, given the ethical questions posed above, is how actions will be tracked and, if necessary, corrected. Big questions, no easy answers.
– Tim Hatt – head of research, GSMA Intelligence
The editorial views expressed in this article are solely those of the author and will not necessarily reflect the views of the GSMA, its Members or Associate Members. | <urn:uuid:9bec5d49-59cb-47ab-8317-0926d156ce70> | CC-MAIN-2022-40 | https://www.gsmaintelligence.com/2019/01/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00219.warc.gz | en | 0.940459 | 1,280 | 2.65625 | 3 |
What’s this Social Engineering stuff all about?
I’m writing this blog to help educate anyone who’s interested in social engineering. Full disclosure: much of my job is to ethically “steal” information or access sensitive areas. After the engagement, I’ll issue a report letting the client know where their security was lacking and how resistant they are to specific types of attacks. This type of work could be called social engineering and/or physical penetration testing. This is the first post in a series designed to give you an inside view of what goes into working in social engineering. These posts will include short stories of social engineering attacks, why the attacks worked or didn’t, and how to resist specific attack tactics. I hope that you will find these stories entertaining and educational, and enjoy reading them as much as I have enjoyed experiencing them. Before getting into a bunch of stories we should first define social engineering, and introduce a few key statistics.
Let’s define social engineering
“What is social engineering?” If you type that question into a search engine, you’ll find everyone has their own definition, and many of them sound extremely negative and scary. Kind of makes you think that you need to read their posts right away so you don’t fall victim to a social engineering attack too… see what they did there? It’s a bit ironic to see so many people using a few social engineering tactics when they define social engineering.
So, let’s remove some of this fear mongering and strive for a more neutral definition. Social engineering: The use of imagery, words, or body language to elicit a desired action from an individual or group. Social engineering isn’t inherently good OR evil. In most cases, the tactic that’s used has more good or evil attached to it than the actual intent. Everyone uses social engineering techniques and, in turn, are socially engineered just about every day. An example could be a kid asking a parent for some money. How they ask will dictate whether they get their desired outcome. A clever kid will know the right time and the right way to ask when trying to influence their parent. If it works, they walk away with the cash, if not they will just try again later. Social engineering is all about influence. Understanding that it the first step in resisting it.
As far as tactics go there are a lot of them, but most can be classified into three primary types. Some more advanced tactics use a combination of these types.
Social engineering types:
- Electronic – Attacks seen on computers through email or websites are the most common.
- Telephone – Calls from people impersonating someone else, in order to get your information.
- Physical – Someone or something you interact with, in person, that attempts to influence you.
Over the years, I’ve collected a lot of data on what works and what doesn’t. I should frame this by saying the clients that ask for these engagements come from every line of business. Banking, healthcare, manufacturing, retail, legal, the list goes on and on. Some tactics work better than others on businesses. So, let’s get into some numbers quick and see what’s happening out there.
Success rate averages by type:
Electronic Social Engineering:
- Phishing: click rate 14.13%
- Spear phishing: credentials obtained 23.23%
Telephone Social Engineering:
- Vishing: sensitive information obtained 9.38%
Physical Social Engineering:
- USB drop: software run on company system 8.33%
- Physical access: gained access to restricted or secure areas 100%
These numbers were the results as of 8/15/2017. As you can see some attack vectors are more successful than others. The last statistic on physical access is particularly striking. It didn’t matter if there was a security guard posted out front and two-factor authentication on a data center door, I still got in. But that’s a story for another post.
It seems that with enough time and determination, all physical and technical controls at a company can be bypassed. There’s a likelihood of success for each one of these attacks. The question is, what can you do to reduce that likelihood? What group of people has the time and ability to test how your organization responds to social engineering attempts? What tactics were used? Why did they work? Why didn’t they work? Those are the main questions I’ll be focusing on in the next articles. Please feel free to share your own experiences with social engineering in the comments section, or visit our social engineering page to learn more about FRSecure’s social engineering services. | <urn:uuid:13f400d5-d8d9-44b0-9e6e-b4326116043e> | CC-MAIN-2022-40 | https://frsecure.com/blog/lets-define-social-engineering/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00219.warc.gz | en | 0.9547 | 991 | 2.53125 | 3 |
What Is WLC in Networking and Why Is It Important?
How Does WLC Work. Benefits of Using a Wireless LAN Controller.
This post is also available in: Danish
While the demand for Wi-Fi access is increasing, more and more wireless Access Points (APs) are used in the network to ensure signal coverage in campuses, schools, or organization buildings, which makes the network operations & maintenance difficult for administrators.
Wireless Access Controllers (ACs) come into being to settle this bottleneck by running and administrating these multiple wireless access points. The wireless access point (AP) has lost the intelligent characteristic, while the wireless access controller turns into the new brain for WLAN.
In the case of the Wireless LAN network, also known as WLAN, you can use the WLC or Wireless LAN Controller, whose purpose is to centralize the control of Access Points (APs).
So that you can understand this better, let’s put it this way: what a wireless Access Point (AP) does for your network is similar to what an amplifier does for your home stereo. It takes the bandwidth coming from a router device and extends it so that multiple other devices can connect from farther distances away.
What Is a Wireless LAN Controller (WLC)?
A Wireless LAN Controller (WLC) is a centralized device in the network which is used in combination with the Lightweight Access Point Protocol (LWAPP) to manage lightweight access points in large quantities by the network administrator or network operations center.
Also called “fat” access points, these access points on the network are managed, operated, and configured independently. The WLC automatically handles the configuration of wireless access points.
Because of its centralized position and brainpower, the Wireless LAN Controller is aware of the wireless LAN environment. It provides services that can lower the price of deployment, ease the management process, and provide several layers of security.
Does My Company Need Wireless LAN Controller (WLC)?
The Wireless LAN Controller (WLC) – Lightweight Access Point (LWAP) setup is commonly utilized in the company environment to stretch an individual wireless network in a vast geographical region. This setup lets users stroll the office premise, campus, or building and still be connected to the network.
When deploying enterprise WLANs, every single wireless access point is initially created and managed separately from other APs on the same network. In other words, each AP must run individually, which makes centralized management difficult to realize.
Unfortunately, technical problems and unstable network conditions can be caused by the lack of communication between these Access Points (APs). The solution? Wireless LAN Controllers (WLC) meant to solve the mentioned problems above once for all. Accompanied by fit mode APs, Wireless LAN Controllers (WLC) can help to realize efficient and simplified network management.
Source: FS Community
Functions of Wireless LAN Controller
As we said before, the major function of a wireless LAN controller (WLC) is to maintain the configuration of wireless Access Points (AP), but it carries out multiple other functionalities:
#1. Traffic aggregation and processing for wireless devices function
It is important to know that this function is not all the time performed inside the WLC, for this it will depend on the network architecture used. When all traffic from wireless devices is routed via the controller, you can use it to encode it or divide it so that is sent to different networks or to be filtered to prioritize it according to the established quality policies.
#2. Management and operation function
These two functions enable you to utilize and manage the wireless local network in a much simpler manner. This way you don’t have to repeat the same operations in every one of the APs within the local networks anymore. These tasks allow you to configure, observe and identify problems in the network and they also permit you to send and receive notifications when problems are noticed.
#3. Local wireless function
In the case of the radio features of wireless technology, it is preferable to utilize the coordination and protection mechanisms in the radio spectrum for more efficient use in a particular area.
The mechanisms aimed at optimizing the distribution of traffic between APs and wireless devices can recognize interference and by using radio triangulation mechanisms can locate geographically the devices.
Heimdal® Threat Prevention - Network
- No need to deploy it on your endpoints;
- Protects any entry point into the organization, including BYODs;
- Stops even hidden threats using AI and your network traffic log;
- Complete DNS, HTTP and HTTPs protection, HIPS and HIDS;
Benefits of a Wireless LAN Controller
- It is secure. With all the daily news about hacking and data breaches, security is an essential factor to have in mind for any organization. Wireless LAN Controller (WLC) fights against all kinds of threats to your organization based on user ID and location thanks to built-in security characteristics.
- It is centralized. A centralized wireless controller provides malleability for deployment, which will lower the budget, planning instruments, and time spent organizing a wireless network in the business.
- It is simple. Having a Wireless LAN Controller (WLC) will help you to administer and supervise your access points in the centralized hub.
A Wireless LAN Controller (WLC) gives the network administrators the ability to see all the data and information linked to the network. They are able to observe on the device the hardware status, the situation of the physical ports, and a summary of the Access Points (APs) connected anytime they want. | <urn:uuid:15089558-53ad-439c-ba0a-b3ae1ecbd353> | CC-MAIN-2022-40 | https://heimdalsecurity.com/blog/what-is-wlc-in-networking/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00219.warc.gz | en | 0.920009 | 1,150 | 2.8125 | 3 |
The emergence of the novel coronavirus has left the world in turmoil. COVID-19, the disease caused by the virus, has reached virtually every corner of the world, with the number of cases exceeding a million and the number of deaths more than 50,000 worldwide. It is a situation that will affect us all in one way or another.
With the imposition of lockdowns, limitations of movement, the closure of borders and other measures to contain the virus, the operating environment of law enforcement agencies and those security services tasked with protecting the public from harm has suddenly become ever more complex. They find themselves thrust into the middle of an unparalleled situation, playing a critical role in halting the spread of the virus and preserving public safety and social order in the process. In response to this growing crisis, many of these agencies and entities are turning to AI and related technologies for support in unique and innovative ways. Enhancing surveillance, monitoring and detection capabilities is high on the priority list.
For instance, early in the outbreak, Reuters reported a case in China wherein the authorities relied on facial recognition cameras to track a man from Hangzhou who had traveled in an affected area. Upon his return home, the local police were there to instruct him to self-quarantine or face repercussions. Police in China and Spain have also started to use technology to enforce quarantine, with drones being used to patrol and broadcast audio messages to the public, encouraging them to stay at home. People flying to Hong Kong airport receive monitoring bracelets that alert the authorities if they breach the quarantine by leaving their home.
In the United States, a surveillance company announced that its AI-enhanced thermal cameras can detect fevers, while in Thailand, border officers at airports are already piloting a biometric screening system using fever-detecting cameras.
Isolated cases or the new norm?
With the number of cases, deaths and countries on lockdown increasing at an alarming rate, we can assume that these will not be isolated examples of technological innovation in response to this global crisis. In the coming days, weeks and months of this outbreak, we will most likely see more and more AI use cases come to the fore.
While the application of AI can play an important role in seizing the reins in this crisis, and even safeguard officers and officials from infection, we must not forget that its use can raise very real and serious human rights concerns that can be damaging and undermine the trust placed in government by communities. Human rights, civil liberties and the fundamental principles of law may be exposed or damaged if we do not tread this path with great caution. There may be no turning back if Pandora’s box is opened.
In a public statement on March 19, the monitors for freedom of expression and freedom of the media for the United Nations, the Inter-American Commission for Human Rights and the Representative on Freedom of the Media of the Organization for Security and Co-operation in Europe issued a joint statement on promoting and protecting access to and free flow of information during the pandemic, and specifically took note of the growing use of surveillance technology to track the spread of the coronavirus. They acknowledged that there is a need for active efforts to confront the pandemic, but stressed that “it is also crucial that such tools be limited in use, both in terms of purpose and time, and that individual rights to privacy, non-discrimination, the protection of journalistic sources and other freedoms be rigorously protected.”
This is not an easy task, but a necessary one. So what can we do?
Ways to responsibly use AI to fight the coronavirus pandemic
- Data anonymization: While some countries are tracking individual suspected patients and their contacts, Austria, Belgium, Italy and the U.K. are collecting anonymized data to study the movement of people in a more general manner. This option still provides governments with the ability to track the movement of large groups, but minimizes the risk of infringing data privacy rights.
- Purpose limitation: Personal data that is collected and processed to track the spread of the coronavirus should not be reused for another purpose. National authorities should seek to ensure that the large amounts of personal and medical data are exclusively used for public health reasons. The is a concept already in force in Europe, within the context of the European Union’s General Data Protection Regulation (GDPR), but it’s time for this to become a global principle for AI.
- Knowledge-sharing and open access data: António Guterres, the United Nations Secretary-General, has insisted that “global action and solidarity are crucial,” and that we will not win this fight alone. This is applicable on many levels, even for the use of AI by law enforcement and security services in the fight against COVID-19. These agencies and entities must collaborate with one another and with other key stakeholders in the community, including the public and civil society organizations. AI use case and data should be shared and transparency promoted.
- Time limitation: Although the end of this pandemic seems rather far away at this point in time, it will come to an end. When it does, national authorities will need to scale back their newly acquired monitoring capabilities after this pandemic. As Yuval Noah Harari observed in his recent article, “temporary measures have a nasty habit of outlasting emergencies, especially as there is always a new emergency lurking on the horizon.” We must ensure that these exceptional capabilities are indeed scaled back and do not become the new norm.
Within the United Nations system, the United Nations Interregional Crime and Justice Research Institute (UNICRI) is working to advance approaches to AI such as these. It has established a specialized Centre for AI and Robotics in The Hague and is one of the few international actors dedicated to specifically looking at AI vis-à-vis crime prevention and control, criminal justice, rule of law and security. It assists national authorities, in particular law enforcement agencies, to understand the opportunities presented by these technologies and, at the same time, to navigate the potential pitfalls associated with these technologies.
Working closely with International Criminal Police Organization (INTERPOL), UNICRI has set up a global platform for law enforcement, fostering discussion on AI, identifying practical use cases and defining principles for responsible use. Much work has been done through this forum, but it is still early days, and the path ahead is long.
While the COVID-19 pandemic has illustrated several innovative use cases, as well as the urgency for the governments to do their utmost to stop the spread of the virus, it is important to not let consideration of fundamental principles, rights and respect for the rule of law be set aside. The positive power and potential of AI is real. It can help those embroiled in fighting this battle to slow the spread of this debilitating disease. It can help save lives. But we must stay vigilant and commit to the safe, ethical and responsible use of AI.
It is essential that, even in times of great crisis, we remain conscience of the duality of AI and strive to advance AI for good. | <urn:uuid:5a135577-d0c0-458a-93d3-75bd52ae6082> | CC-MAIN-2022-40 | https://swisscognitive.ch/2020/04/03/using-ai-responsibly-to-fight-the-coronavirus-pandemic/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00219.warc.gz | en | 0.949755 | 1,448 | 2.78125 | 3 |
Impulse online shopping, downloading music and compulsive email use are all signs of a certain personality trait that make you a target for malware attacks. New research from Michigan State University examines the behaviors – both obvious and subtle – that lead someone to fall victim to cybercrime involving Trojans, viruses, and malware.
“People who show signs of low self-control are the ones we found more susceptible to malware attacks,” said Tomas Holt, professor of criminal justice and lead author of the research. “An individual’s characteristics are critical in studying how cybercrime perseveres, particularly the person’s impulsiveness and the activities that they engage in while online that have the greatest impact on their risk.”
Low self-control, Holt explained, comes in many forms. This type of person shows signs of short-sightedness, negligence, physical versus verbal behavior and an inability to delay gratification.
“Self-control is an idea that’s been looked at heavily in criminology in terms of its connection to committing crimes,” Holt said. “But we find a correlation between low self-control and victimization; people with this trait put themselves in situations where they are near others who are motivated to break the law.”
The research, published in Social Science Computer Review, assessed the self-control of nearly 6,000 survey participants, as well as their computers’ behavior that could indicate malware and infection. To measure victimization, Holt and his team asked participants a series of questions about how they might react in certain situations. For computer behavior, they asked about their computer having slower processing, crashing, unexpected pop-ups and the homepage changing on their web browser.
“The internet has omnipresent risks,” Holt said. “In an online space, there is constant opportunity for people with low self-control to get what they want, whether that is pirated movies or deals on consumer goods.”
As Holt explained, hackers and cybercriminals know that people with low self-control are the ones who will be scouring the internet for what they want – or think they want – which is how they know what sites, files or methods to attack.
Understanding the psychological side of self-control and the types of people whose computers become infected with malware – and who likely spread it to others – is critical in fighting cybercrime, Holt said. What people do online matters, and the behavioral factors at play are entirely related to risks.
Computer scientists, Holt said, approach malware prevention and education from a technical standpoint; they look for new software solutions to block infections or messaging about the infections themselves. This is important, but it is also essential to address the psychological side of messaging to those with low self-control and impulsive behaviors.
“There are human aspects of cybercrime that we don’t touch because we focus on the technical side to fix it,” he said. “But if we can understand the human side, we might find solutions that are more effective for policy and intervention.”
Looking ahead, Holt hopes to help break the silos between computer and social sciences to think holistically about fighting cybercrime.
“If we can identify risk factors, we can work in tandem with technical fields to develop strategies that then reduce the risk factors for infection,” Holt said. “It’s a pernicious issue we’re facing, so if we can attack from both fronts, we can pinpoint the risk factors and technical strategies to find solutions that improve protection for everyone.” | <urn:uuid:fd1f11f0-6f28-4494-b83b-36fe6d470875> | CC-MAIN-2022-40 | https://www.helpnetsecurity.com/2018/12/19/personality-cybercrime/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00219.warc.gz | en | 0.950464 | 744 | 3.125 | 3 |
You already know that cybercriminals hunt online accounts, including yours. They can gain access by bruteforcing or stealing the password. How to stay protected? And what NOT to do? Picture. A burglar with a set of skeleton keys in front of a door.
Let’s start with the most common errors when creating a password.
A simple word or short string of characters can be cracked either by guesswork or a dictionary attack. Accounts with passwords like 123456, password, or love2000 are practically defenseless against attacks.
If you use the name of your dog, or the birthday of your spouse, an attacker can easily glean it from a social network. Such passwords are the definition of weak.
But even a good password can turn bad if you use it for multiple accounts. If just one service is not sufficiently protected and its database leaks, cybercriminals will get hold of your username (usually an email address) and password. And they will
try the same pair for other accounts too. If you use the same password elsewhere, that account will be compromised as well.
Lastly, don’t give your passwords to anyone — not even friends, family, or colleagues. They might be less vigilant than you think! You don’t want your relationship with them to suffer as a consequence.
For starters, a genuinely strong password is at least 10 characters long. But to protect your most important accounts, we recommend at least 15.
Second, a strong password is a set of characters that is either random or non-obvious to an outsider. It should include letters (upper and lower case), numbers, and special symbols. Bruteforcing such a password can take years, and cybercriminals don’t
have that kind of time to play with.
Wait, you say, surely a Fort Knox password locks out not only villains, but also the account owner? Who can remember 15 random letters, numbers, and symbols for different accounts? And if I jot them down, someone will find them.
There’s a little trick here: The password should not be obvious to outsiders, but it can make perfect sense to you. For example, take the first few words of your favorite song, poem, or other text that you know off by heart, and create a password
from the first letter of each word, adding a special symbol and number at the end, and at the beginning add the first letter of the name or the main color of the site for which you’re creating an account. That’s just an example. You can work out
your own scheme and use it to create unique passwords for each account.
If that doesn’t appeal, use a password manager — a special program that does what its name suggests. It will create robust passwords and store them for you. You only have to remember only one master password.
So, your password is hard to guess. But some dastardly cybervillain still might try to steal it from you. We take an in-depth look at account protection in the Security course. There you can also find out about phishing and spyware, and how to protect
yourself against it all. Unfortunately, your credentials can be stolen not only from you, but straight from the service itself. How to protect yourself from the consequences of such thefts is the topic of the next lesson.
Which of these passwords is the strongest? | <urn:uuid:9eb18cb7-83b3-4b1f-b76a-65005242bce7> | CC-MAIN-2022-40 | https://education.kaspersky.com/en/lesson/16/page/69 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00219.warc.gz | en | 0.932872 | 711 | 2.875 | 3 |
Lesson 8. Console
security and jailbreaking
prefer consoles (PlayStation, Nintendo, Xbox, etc.) to computers, smartphones,
and tablets. Each of them requires an account with the manufacturer, which
needs to be protected. That's what we're going to look at now.
account protection starts with a password. You know from lesson 5 what it
should be like. As for changing it, that's done under Account Management in the
console settings. Or you can log into PlayStation Network through your browser,
and change the password in the Security section. There you can also set a
password reset security question and enable 2-Step Verification.
console for the whole family? Then you can set a four-digit passcode to log into
your profile so that no one logs in under your name by mistake and wipes your
saves. The passcode is set in the console menu under Login Settings.
In any case,
never log into your PSN account from other people's devices, especially in
public places. If you do, don't forget to log out, otherwise your account will
be at the mercy of outsiders. If you do happen to forget, open your account in
a browser, and in the Security section log out of the network on all devices.
PlayStation, here two-factor authentication is called two-step verification. It
is enabled by selecting Sign-in and security settings in a browser, and uses
Google Authenticator or a similar app. When the feature is enabled, you will
see a list of backup codes.
Sign-in and security settings, you can view the login history and sign out of
your account on all or some devices.
addition to a password and two-factor authentication, you have a personal
passkey. This is handy, for example, to prevent tech-savvy kids who know where
the parental bank card and smartphone are kept from buying Xbox Live games on
the sly. You can create this key in the console settings in the Account
jailbreaking a console is a bad idea
console games are expensive, some gamers try to hack the console, known as
flashing or jailbreaking. We do not recommend this, and not only out of ethical
considerations. First, console manufacturers do not like such users and block
their online accounts, which means no multiplayer or updates.
might break and brick the device. Even if you entrust the reflashing job to a
self-styled pro, you still might end up with a brick and/or lose your money.
You'll also void the warranty, because it's not an entirely legal procedure.
So our advice
is not to mess around with jailbreaking and flashing. As for saving money
safely, we talked about that in lesson 3. In the next installment, we put
consoles to one side and return to computers to discuss the causes of game lag
and how to fix it.
of the following is safe to do on game consoles?
Which of the following is safe to do on game consoles? | <urn:uuid:5b864d77-9074-465b-849c-8fe725110d58> | CC-MAIN-2022-40 | https://education.kaspersky.com/en/lesson/29/page/258 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00219.warc.gz | en | 0.904413 | 690 | 2.5625 | 3 |
In this video, Anthony Sequeira walks you through some key facts regarding the OSI model.
Remember, the OSI model breaks down networking functions into seven layers:
- Application – high end protocols for network applications; examples inlcude HTTPS and FTP
- Presentation – data represenations like JPG and ASCII
- Session – this layer maintains sessions between end stations
- Transport – this layer is where we have segments (the PDU term); data can be sent with reliability or unreliability using protocols at this layer
- Network – this layer is responsible for routing packets (the PDU term) throughout the network and beyond; it is at this layer where IP address information (source and dest) is encapuslated
- Data link – this layer takes care of encapsulating in addressing information (and other important info) as required to send data in the network; in an Ethernet network, this includes the encapsulation of source and destination MAC address information; the PDU term at this layer is frame
- Physical – this includes the stuff that sends the bits over the wire and the wire itself; it also includes things we cannot see with our eyes, for example, the radio frequency spectrum used to carry data in WiFi networks; here the PDU is termed simply bits
Learn to Love It…
The OSI model gets a really bad reputation in the world it seems. This is not some silly acedemic thing made up just to torture aspiring networking or cyber security students, it is a model that you can use throughout your entire career to help you learn new technologies, and to help you pass a crap ton of exams! Oh – and you will also most likely use it every time you are troubleshooting a network related problem.
In fact, you really should consider using it every time because you can avoid some really, really potentially embarrasing moments in your career. You know, like having someone trbleshoot for hours at the application layer, when the iossue was the physical layer all along. You know, the classic, it was not plugged in issue!
The OSI model should be studied, debated, loved, revered, made fun of, for many many decades to come.
Thanks as always for stopping by and reading and or watching.
For more information and other posts you will like – check out – | <urn:uuid:fe93edf8-42fe-4bee-a7f2-c746169d46f0> | CC-MAIN-2022-40 | https://www.ajsnetworking.com/tag/training/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00219.warc.gz | en | 0.92029 | 508 | 3.3125 | 3 |
September 28, 2017
About 20 years ago, Boeing, the world’s largest aerospace company, identified the need for a hands-free, heads-up technology in its operations. Flash forward to 2014, when a device fitting this vision (Google Glass) finally appeared on the scene. Today, the aviation and aerospace industries are experiencing a digital renaissance, and the timing is critical for several reasons:
Demand is high
Demand is being driven by two factors: 1) rapidly aging fleets that must be replaced or maintained at great cost; and 2) the need for newer, more technologically advanced aircraft to stay competitive. (Boeing, for one, has a backlog of some 5,000 planes it is under contract to build.) Next-generation aircraft boast features like advanced avionics, noise reduction capabilities, improved interior cabin designs, and greater fuel efficiency. Aviation and aerospace companies are under pressure to ramp up production to replace customers’ older fleets and supply them with state-of-the-art vehicles. And, of course, as demand for new aircraft rises, so too does the need to operate and maintain those aircraft.
A talent gap is creating a need for fast, low-cost training
As in pretty much all manufacturing sectors, the aviation and aerospace industries are dealing with a skilled labor crunch as experienced workers retire and leave the workforce, taking their careers’ worth of knowledge with them. By some estimates, the aerospace industry will need to attract and train nearly 700,000 new maintenance technicians alone by the year 2035. Jobs are being created, and baby boomers are retiring, faster than new workers can fill the openings or replace the lost expertise. Aerospace manufacturers and suppliers are therefore looking for innovative technologies to maximize the productivity of their existing workforces and quickly onboard new workers.
The stakes are high: Operations are complex, downtime is costly, safety is crucial, and the market is competitive
Building aircraft (commercial airplanes, military jets, spacecraft, etc.) and the engines and propulsion units that drive them involves extremely complex processes in which thousands of moving parts are assembled in precise order, carefully inspected, and maintained for years. Speed is desirable to meet demand and for competitive advantage, yet there can be no compromise or negligence when it comes to accuracy and safety—after all, we're talking about aircraft that transport hundreds of passengers across oceans or even dodge enemy missiles at over 1,000 mph. Boeing, Airbus, Lockheed Martin and other large firms are all vying to sell to the U.S. Department of Defense, NASA and large airlines (the aviation, aerospace and defense industries' biggest U.S. customers), so errors and downtime are, of course, expensive and bad for business, and can also greatly affect human lives.
To accelerate production, close the talent gap, reduce errors, limit downtime, and improve safety, the leading aviation and aerospace companies are employing wearable technology, especially smart (Augmented Reality) glasses. In general, smart glasses are good for complex industrial processes that are very hands-on, time-consuming, error-prone, and loaded with information—processes like wiring an electrical system or installing the cabin of an airplane. AR glasses and VR headsets are proving useful in aircraft assembly, quality and safety inspection, field maintenance and repair, and training. The technology is providing aviation and aerospace workers with instant, hands-free access to critical information, and reducing training requirements for technicians and operators alike. Here's how some of the aerospace giants are applying wearable tech in their operations:
In 2015, the French aerospace company teamed up with Accenture on a proof of concept in which technicians at Airbus’ Toulouse plant used industrial-grade smart glasses to reduce the complexity of the cabin furnishing process on the A330 final assembly line, decreasing the time required to complete the task and improving accuracy.
Sans smart glasses, operators would have to go by complex drawings to mark the position of seats and other fittings on the cabin floor. With Augmented Reality, a task that required several people over several days can be completed by a single worker in a matter of hours, with millimeter precision and 0 errors.
Airbus went ahead with this application: Technicians today use Vuzix smart glasses to bring up individual cabin plans, customization information and other AR items over their view of the cabin marking zone. The solution also validates each mark that is made, checking for accuracy and quality. The aerospace giant is looking to expand its use of smart glasses to other aircraft assembly lines (ex. in mounting flight equipment on the No. 2 A330neo) and other Airbus divisions.
Every Boeing plane contains thousands of wires that connect its different electrical systems. Workers construct large portions of this wiring – “wire harnesses” – at a time—a seemingly monumental task demanding intense concentration. For years, they worked off PDF-based assembly instructions on laptops to locate the right wires and connect them in the right sequence. This requires shifting one’s hands and attention constantly between the harness being wired and the “roadmap” on the computer screen.
In 2016, Boeing carried out a Google Glass pilot with Upskill (then APX Labs), in which the company saw a 25% improvement in performance in wire harness assembly. Today, the company is using smart glasses powered by Upskill's Skylight platform to deliver heads-up, hands-free instructions to wire harness workers in real time, helping them work faster with an error rate of nearly zero. Technicians use gesture and voice commands to view the assembly roadmap for each order in their smart glasses display, access instructional videos, and receive remote expert assistance.
Boeing believes the technology could be used anywhere its workers rely on paper instructions, helping the company deliver planes faster. AR/VR are also significantly cutting training times and assisting with product development. For instance, HoloLens is proving useful in the development of Starliner, a small crew transport module for the ISS.
Boeing’s Brian Laughlin will lead a thought-provoking closing brainstorm on Day One of EWTS Fall 2017
General Electric is using Augmented Reality and other IoT technologies in multiple areas of its far-ranging operations. At GE Aviation, mechanics recently tested a solution consisting of Upskill’s AR platform on Glass Enterprise Edition and a connected (WiFi-enabled) torque wrench.
The pilot involved 15 mechanics at GE Aviation’s Cincinnati manufacturing facility, each receiving step-by-step instructions and guiding visuals via Glass during routine engine assembly and maintenance tasks. At any step requiring the use of the smart wrench, the Skylight solution ensured the worker tightened the bolt properly, automatically verifying and recording every torqued nut in real time.
GE Aviation mechanics normally use paper- or computer-based instructions for tasks, and have to walk away from the job whenever they need to document their work. With smart glasses, workers were 8-12% more efficient, able to follow instructions in their line of sight and automatically document steps thanks to the device’s built-in camera. And reducing errors in assembly and maintenance saves GE and its customers millions of dollars.
In early 2015 it came out that Lockheed Martin was trialing the Epson Moverio BT-200 glasses with partner NGRAIN, to provide real-time visuals to its engineers during assembly of the company’s F-35 fighter jets and ensure every component be installed in the right place. Previously, only a team of experienced technicians could do the job, but with Augmented Reality an engineer with little training can follow renderings with part numbers and ordered instructions seen as overlay images through his/her smart glasses, right on the plane being built.
In the trial, Lockheed engineers were able to work 30% faster and with 96% accuracy. Those workers were learning by doing on the job as opposed to training in a classroom environment, which amounted to less time and cost for training. And although increased accuracy means fewer repairs, the AR solution could be used to speed up the repair process, too, from days- to just hours-long, with one engineer annotating another’s field of view. At the time, however, Lockheed acknowledged that getting the technology onto actual (secured) military bases would be difficult.
Lockheed is also interested in Virtual Reality, seeing AR/VR as key to lowering acquisition costs (all costs from the design/construction phase of a ship to when the vessel is decommissioned). The company is applying VR to the design of radar systems for navy ships. The challenge lies in integrating the radar system with a ship's other systems, which requires very precise installation. VR can help identify errors and issues during the design stage and prevent expensive corrections.
Using HTC Vive headsets, engineers can virtually walk through digital mock-ups of a ship’s control rooms and assess things like accessibility to equipment and lighting. Lockheed is also using Microsoft’s HoloLens to assist young naval engineers with maintenance tasks at sea—much more effective than a dense manual.
*Learn more about this application from Richard Rabbitz of Lockheed Martin Rotary Mission Systems (RMS) at EWTS Fall ‘17
Lockheed is allegedly saving $10 million a year from its use of AR/VR in the production line of its space assets, as well, by using devices like the Oculus Rift to evaluate human factors and catch engineering mistakes early. For the Orion Multi-Purpose Crew Vehicle and GPS 3 satellite system, Lockheed ran virtual simulations in which a team of engineers rehearsed assembling the vehicles in order to identify issues and improvements. A network platform allows engineers from all over to participate, saving the time and money of travelling.
Last but not least, Lockheed Martin is also actively developing and testing commercial industrial exoskeletons. Keith Maxwell, the Senior Product Manager of Exoskeleton Technologies at Lockheed, attested to this at the Spring 2017 EWTS. The FORTIS exoskeleton is an unpowered, lightweight suit, the arm of which – the Fortis Tool Arm – is available as a separate product for operating heavy power tools with less risk of muscle fatigue and injury.
While Augmented Reality has been around for decades in the form of pilots’ HMDs, only now has the technology advanced enough to become a standard tool of engineers, mechanics and aircraft operators across aviation and aerospace operations. In a high-tech industry like aerospace, AR/VR are critical for keeping up production during a mass talent exodus from the workforce. Workers won’t need years of experience to build a plane if they have on-demand access to instructions, reference materials, tutorials and expert help in their field of view.
The Fall Enterprise Wearable Technology Summit 2017 taking place October 18-19, 2017 in Boston, MA is the leading event for wearable technology in enterprise. It is also the only true enterprise event in the wearables space, with the speakers and audience members hailing from top enterprise organizations across the industry spectrum. Consisting of real-world case studies, engaging workshops, and expert-led panel discussions on such topics as enterprise applications for Augmented and Virtual Reality, head-mounted displays, and body-worn devices, plus key challenges, best practices, and more; EWTS is the best opportunity for you to hear and learn from those organizations who have successfully utilized wearables in their operations. | <urn:uuid:e2115a52-2f1a-44f8-8784-0c0de72e9e25> | CC-MAIN-2022-40 | https://www.brainxchange.com/blog/just-in-time-ar-vr-spark-a-digital-renaissance-in-aviation-and-aerospace | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00219.warc.gz | en | 0.942624 | 2,334 | 2.671875 | 3 |
Integrating technology into the classroom without barriers.
The biggest barrier to digital learning for teachers is gaining student access to technology. This is closely followed by a lack of time during the school day. For administrators, the top concern is securing relevant and effective professional development for their staff, followed by limitations and problems with technological infrastructure, such as Wi-Fi and security. Both roles found that the main obstacles to integrating technology into the classroom were a lack of time (43 percent) and an insufficient number of devices (40 percent).
These results surfaced in a survey undertaken by the education technology company Schoology. Responses came from 2,846 education professionals, specifically in K-12, a quarter of whom were users of the company's online service. Although the response was worldwide, a high volume came from the United States.
Among the other hurdles that came up in the survey were that there was ineffective professional development, lack of access at home, and difficulty in creating lesson plans. This survey highlighted where digital learning’s flaws were and how we can work together to improve them.
Unsurprisingly, almost everyone in the survey said that digital learning had a positive impact on student achievement (95 percent). However, most of the time, the resources they said they use tend to be ‘static,’ such as PDFs, Word documents, and videos. The report then noted that the institutions might be digitalizing traditional learning instead of enhancing it.
The survey also examined instructional approaches that integrate technology. The ones used most by respondents were differentiated instruction (75 percent), blended learning (54 percent), and individualized learning (45 percent).
Contact D&D Security by calling 800-453-4195 or by clicking here. | <urn:uuid:5af4a703-85ea-4125-a74d-2ae402343f52> | CC-MAIN-2022-40 | https://ddsecurity.com/2017/11/30/biggest-barriers-digital-learning/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00219.warc.gz | en | 0.97413 | 356 | 3.484375 | 3 |
Libraries provide people with free access to an otherwise obscure wealth of information. Because of this, they have been important parts of society for generations. Despite the growing use of modern technological resources to obtain information, libraries continue to be instrumental tools for retaining, protecting and sharing documents. With digital technology influencing our lives more every day, however, it is clear that libraries have to adapt. But what does this mean? Let's take a look at what modern libraries are doing right now for the future, and how your library can adapt as well.
Books provide you with access to content from times long past. Without them, we would have little more than stories, presumptions and lore to go by. Books do their job dutifully, with some books even enduring thousands of years of use.
The successful operation of a library is founded upon the ability to be able to effectively manage and archive a large set of records. Doing this the right way, however, is not as simple as snapping your fingers; in order to effectively archive your records and manage them, you have to look beyond simply placing them in alphabetical order. Here are the three best practices that you should take heed of when engaging in your archives and records management duties.
A simple truth about digital technology is that it is transforming the ways that library systems and services operate. Not only is technology itself advancing, but the expectations of users are also changing; today, people expect to be able to access all manner of information digitally.
Analog documents – books, newspapers, microfilm, microfiche, town records, historic information – are still important, but there is a recognized need to have them digitized in order to improve access and availability. And documents are not the only things being affected by the digital age. Increasingly, libraries and other informational institutions are rethinking the ways they provide services to their clientele. Below are some of the bigger transformations library systems and services are experiencing in the age of digital media.
With the internet surpassing other media as the primary source of information, digitization has become an increasingly salient topic. The ability to access information digitally is no longer a novelty – it is expected. Take for example the Google Books Library Project, in which Google is working with several major libraries and educational institutions (including Stanford University) to digitize library resources and create a vast digital library.
The trend isn’t ephemeral. The push towards the digitization of library resources and other sources of information will only accelerate. As Guy Berthiaume (who will become the new head of Library and Archives Canada on June 23) points out, libraries are in the process of reinventing themselves. He believes that digital technology is not threatening the place of libraries as centres of knowledge; rather, it should be embraced and leveraged to improve the availability of information and how people access it.
If you're a book enthusiast, news that libraries are going digital may leave you feeling wary. If you're a library administrator, the concerns of staunch traditionalists may have you shying away from new technology. Fear not, going digital won't turn your library into a cold, sterile environment from a sci-fi future. Instead, it could help to highlight and improve many of the services patrons use on a regular basis.
Historical books, new books, magazines, files, binders, contracts … the list goes on. There’s almost nothing you can’t capture or copy with the new zeta book scanner by Zeutschel. It is the ideal multifunctional system for scanning and copying in libraries, archives, universities, schools and more.
Plug and Scan
Have you considered giving patrons access to a walk up book scanning service in your library? Sure you've probably looked at it once or twice a few years back; between the Google book initiative and the popularity of e-book readers hitting the market, it would be hard not to consider offering the service in your library. However, chances are, the idea was dismissed for a variety of different reasons (cost, time, training effort, support, etc.). But times have changed, technology has advanced and the needs of the library have been heard and adopted. So let's take a look at 5 reasons now might be the time to consider book scanning in your library. | <urn:uuid:b28abb31-97d0-4040-a922-1ab28531a82f> | CC-MAIN-2022-40 | https://blog.mesltd.ca/topic/library-scanning | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00419.warc.gz | en | 0.951203 | 879 | 3.09375 | 3 |
After you determine where to set up the tripod, you must attach the target to it, and raise it to the proper height for camera alignment.
- Attach the three-arm knob to the tripod base using the provided wing nut.
- Attach the plastic thumb-head screw to the wing nut (both sides).
- Raise the tripod to its full length. The center of the target should be 36 in. (0.9 m) from the ground. | <urn:uuid:374af709-d454-4029-8356-f6129718e3af> | CC-MAIN-2022-40 | https://techdocs.genetec.com/r/en-US/AutoVuTM-SharpZ3-Deployment-Guide-13.1/Attaching-the-target-to-the-tripod | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00419.warc.gz | en | 0.871931 | 100 | 2.53125 | 3 |
MIT develops AI to measure stress exertion on materials
The Massachusetts Institute of Technology announced yesterday it has developed an artificial intelligence (AI) tool with the ability to measure stress forces on materials.
Developed by MIT researchers, the tool can estimate the stresses exerted on materials in real time.
Briefly explaining how it works, McAfee Professor of Engineering and Director of the Laboratory, Markus Buehler, said: "From a picture, the computer is able to predict all those forces: the deformations, the stresses, and so forth."
To turn the idea into reality, the researchers used a Generative Adversarial Network (GAN) trained on several thousand images showing a material's microstructure after stress had been exerted on it.
According to MIT, the network learns the connection between the appearance of the material and the forces placed on it, using game theory.
The AI is also able to replicate problems, such as developing cracks, that affect how the material reacts to stress.
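The announcement does not spell out the architecture, but the general recipe for this kind of image-to-image GAN can be sketched in a few lines. The toy PyTorch code below is purely illustrative: the layer sizes, data and training details are placeholders, not MIT's actual model.

```python
# Toy sketch of an image-to-image GAN: a generator maps a microstructure image to a
# predicted stress field, while a discriminator judges (microstructure, stress) pairs.
# Illustrative placeholders only -- not the architecture or data used by MIT.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),   # output: predicted stress map
        )

    def forward(self, micro):
        return self.net(micro)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.Linear(16 * 32 * 32, 1),  # real/fake score for 64x64 inputs
        )

    def forward(self, micro, stress):
        return self.net(torch.cat([micro, stress], dim=1))

gen, disc = Generator(), Discriminator()
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

# Stand-in batch; real training would use thousands of simulated image/stress pairs.
micro = torch.rand(8, 1, 64, 64)
stress = torch.rand(8, 1, 64, 64)

# Discriminator step: real pairs should score 1, generated pairs 0.
fake = gen(micro).detach()
loss_d = bce(disc(micro, stress), torch.ones(8, 1)) + bce(disc(micro, fake), torch.zeros(8, 1))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# Generator step: try to make the discriminator score generated pairs as real.
loss_g = bce(disc(micro, gen(micro)), torch.ones(8, 1))
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
```

The adversarial back-and-forth between the two networks is the "game theory" aspect the MIT team refers to.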
The GAN will run on consumer-grade computer processors once fully developed, making carrying out inspections easier and the AI more accessible in the field.
Talking about the physics of force exertion, Buehler said: "Many generations of mathematicians and engineers have written down these equations and then figured out how to solve them on computers.
"But it's still a tough problem. It's very expensive - it can take days, weeks, or even months to run some simulations. So, we thought: 'Let's teach an AI to do this problem for you,'" he concluded.
Java is a good language to write in because it has evolved slowly over the years, letting programmers master it before new iterations and functionalities are introduced. Yet, there are still some best practices that you might need. These best practices include:
- Don't be tricked by early SPI evolution assumptions. A Service Provider Interface (SPI) is a good way to let customers insert custom behavior into your code or library. The thing is, you might need to use context and parameter objects when you work with an SPI rather than writing more methods into your code. With dozens of methods written to anticipate your customers' every action, you would end up with bloated code.
- Avoid local, inner or anonymous classes. Using these classes might have some benefits, but you should not overuse them. This is because they keep a reference to their enclosing (outer) instance and can be a source of memory leaks.
- Keep your C++ destructors in check. Destructors are just the opposite of constructor functions, and are called when you destroy or de-allocate objects. If you are using Oracle's or Sun's garbage-collected JVM, you might never have to debug memory leaks caused by allocated memory that is not freed after an object is removed. When dealing with destructor-like logic, it is best to free resources in the inverse allocation order, meaning you free the last allocated resource first and work backwards. This also applies to semantics that are similar to destructors, such as @After and @Before JUnit annotations, freeing or allocating JDBC resources, or calling super methods. The general rule is to consider whether you should perform things in inverse order whenever you are dealing with free/allocate, before/after and return/take semantics.
- Avoid null from API. You should make sure that you avoid returning null from your API methods as much as possible. The only times you should return null are when you are dealing with absent or uninitialized semantics (see the sketch after this list).
- No returning null arrays from your API methods. While returning nulls is okay for some cases, you should remember that it is NEVER okay to return null collections or arrays!
- Use SAMs. With the coming of Java 8, it makes sense to write your API to accept single abstract method (SAM) types, also known as functional interfaces. This will help you and your customers make use of Java 8's lambdas, further simplifying the code that you have to write. The thing is, you and your API customers might want to use lambdas as often as possible, so you must write your API to let them do just that!
- Set methods to be final by default. Java programmers might not agree with this simply because they are quite used to doing it the other way. But it makes sense to make your methods final by default, because this ensures that you will never override a method by accident; if you do need to override a method, you can just remove the final keyword. This is ideal when you have full control of your source code, and for static methods, where shadowing does not make sense.
- Always be functional. Code in a more functional style rather than dealing with state. You can pass state via method arguments, rather than manipulating a lot of object state.
- Try to avoid accept-all signatures, such as methods that take Object or Object... parameters, as much as possible.
- Short-circuit your equals() to gain better performance, for example by checking whether the two references are identical (this == other) before doing a full field-by-field comparison. This is especially true if you have large object graphs.
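To make a couple of these points concrete, here is a small, hypothetical Java 8 sketch (the class and method names are made up purely for illustration). It returns empty collections and Optional instead of null, and accepts a functional interface so callers can pass lambdas:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Optional;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Hypothetical example class, only meant to illustrate the practices above.
public final class CustomerRepository {

    private final List<String> customers;

    public CustomerRepository(List<String> customers) {
        this.customers = new ArrayList<>(customers); // defensive copy
    }

    // Never return null collections: an empty, unmodifiable list is always safe to iterate.
    public List<String> findByPrefix(String prefix) {
        List<String> matches = customers.stream()
                .filter(name -> name.startsWith(prefix))
                .collect(Collectors.toList());
        return Collections.unmodifiableList(matches);
    }

    // Accept a functional interface (SAM) so callers can pass lambdas;
    // Optional expresses "absent" semantics without returning null.
    public Optional<String> findFirst(Predicate<String> condition) {
        return customers.stream().filter(condition).findFirst();
    }

    public static void main(String[] args) {
        CustomerRepository repo =
                new CustomerRepository(java.util.Arrays.asList("Alice", "Bob", "Anna"));
        System.out.println(repo.findByPrefix("A"));               // [Alice, Anna]
        System.out.println(repo.findByPrefix("Z"));               // [] -- never null
        System.out.println(repo.findFirst(n -> n.length() == 3)); // Optional[Bob]
    }
}
```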
Need help with your Java development? Contact Four Cornerstone today!
Photo courtesy of Heidi Ponagai. | <urn:uuid:77dfb457-bdb6-4a71-ae32-4dedf6c50fba> | CC-MAIN-2022-40 | https://fourcornerstone.com/coding-java-top-10-practices/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00419.warc.gz | en | 0.935456 | 710 | 2.578125 | 3 |
It is common to see technologists put artificial intelligence (AI) technology into a box and wax lyrical about their vision of how it will impact humanity. Elon Musk made headlines when he publicly stated that AI, in his opinion, posed a significant threat and was in need of regulation, going so far as to call it a “fundamental risk to the existence of civilisation”. Meanwhile, Mark Zuckerberg called such warnings “irresponsible”, and accentuated the benefits AI could provide in saving lives through medical diagnoses and driverless cars.
It is important to bear in mind that the form of AI being discussed by Musk and Zuckerberg relates primarily to artificial intelligence that has ‘human level’ cognitive skills, otherwise known as AGI or ‘Artificial General Intelligence’. Despite impressive progress in a range of specialities (from driving cars to playing Go), this technology is by no means imminent.
AI is in use and not science-fiction
What current debates tend to ignore is that AI is something that’s already in common use by many in a business context today, and that the associated risks are not about whether it will leave us all in devastation. Instead of worrying about such catastrophic scenarios, we should focus our energies on the very real risks posed by this technology in the here and now if it is used incorrectly. These dangers can include regulation violations, diminished business value and significant brand damage. Though not cataclysmic in their impact on humanity, these can still play a major role in the success or failure of organisations.
Artificial Intelligence vs. Artificial Intelligence
As a refresher, not all artificial intelligence is created equally. AI comes in two flavours – Transparent and Opaque. Both have very different uses, applications and impacts for businesses and users in general. For the uninitiated – Transparent AI is a system whose insights can be understood and audited, allowing one to reverse engineer each of its outcomes to see how it arrived at any given decision. Opaque AI, on the other hand, is an AI system that cannot easily reveal how it works. Not unlike the human brain, any attempt to explain exactly how it has arrived at a particular insight or decision can prove challenging. […] | <urn:uuid:e630ca21-21e6-425f-aff2-499102527abd> | CC-MAIN-2022-40 | https://swisscognitive.ch/2017/11/04/artificial-intelligence-choosing-the-right-flavour/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00419.warc.gz | en | 0.962729 | 452 | 2.953125 | 3 |
Once upon a time, cybersecurity news revolved around corporate hacks and credit card scammers. But today’s headlines have taken an alarming turn. News about nation-state hackers and government breaches dominate the media.
If there is a silver lining to these attacks, it’s that cybersecurity issues are hitting close to home for legislators, prompting lawmakers to take action and create policies that will keep both corporate and government organizations safer. In 2020 alone, over 280 cybersecurity bills and resolutions were introduced by U.S. lawmakers.
What does the future hold for cybersecurity regulation? Only time will tell. In the meantime, here are some of the most influential cybersecurity laws passed in recent years.
California Consumer Protection Act (CCPA)
California became the first state in the U.S. to pass broad privacy legislation when it passed the California Consumer Protection Act, effective January 1, 2020. Similarly to the E.U.’s General Data Protection Regulation, the CCPA was created to give everyday consumers more control over their data privacy.
The two pieces of legislation are similar, with a few key differences. While the GDPR’s main focus is to require prior consent from consumers, the CCPA focuses on the consumers’ right to opt out of data collection. The CCPA does not mandate consumer consent, but it does give users the right to access any data that has already been collected by an organization, as well as to request that personal data be deleted.
General Data Protection Regulation (GDPR)
On May 25, 2018, the European Union made history when they passed GDPR and changed data privacy forever. While the rule was passed to protect European citizens from data breaches, the blurred boundaries of the internet meant that GDPR affected organizations around the world.
In short, the GDPR established data protection rules for any company collecting data from an EU citizen — regardless of that company’s location. The GDPR covers a wide range of regulations, most notably about data collection and transparency. Any company that collects data from EU citizens needs explicit, informed consent.
The GDPR also impacted breach reporting. When data is breached, the GDPR gives companies just 72 hours to notify authorities, and requires that organizations notify consumers of high-risk data breaches “without undue delay.” The strict regulation mandates that businesses that don’t comply with GDPR may be penalized up to €20 million or 4 percent of annual global revenue — whichever is higher.
Internet of Things (IoT) Cybersecurity Improvement Act
Consumers and businesses alike have been quick to adopt smart devices, from voice tech to connected security cameras to intelligent cars. But with more connected devices comes more vulnerability to cyber attacks. Thankfully, the U.S. federal government has stepped in to up security on IoT technology with the Internet of Things Cybersecurity Improvement Act.
Signed into law on December 4, 2020, this act established security standards for IoT devices owned or used by the federal government. Despite that the law currently only applies to devices used by government entities, this act is expected to have a trickle-down effect. If tech companies want the buy-in of government bodies, they’re going to have to follow the minimum security standards set forth in this piece of legislation—which means they will likely follow the same standards to manufacture consumer-facing IoT devices as well.
State and Local Cybersecurity Improvement Act
Government entities are one of the most targeted industries for cyberthieves, which is why U.S. legislators will be prioritizing cybersecurity initiatives in 2021. One of these initiatives is the State and Local Cybersecurity Improvement Act, which would disperse more federal resources to smaller state and local governments.
This bipartisan act would grant $400 million to the Department of Homeland Security for the sole purpose of much-needed cybersecurity funding for state and local governments. It would also mandate the Department of Homeland Security's Cybersecurity and Infrastructure Security Agency (CISA) to create a defense-in-depth strategy to fortify the defense of local, state, territorial, and tribal governments.
Secure Your Organization Against Cyberattacks
With new regulations underway, government entities and corporations alike are seeing the light at the end of the tunnel. But you don’t need to wait for the next bill to pass before updating your cyber defenses.
Bluefin is here to help you secure your networks and keep consumers safe from data breaches. For more information on payment security solutions, P2PE encryption, tokenization and more, contact a Bluefin representative today and view our white paper to understand the benefits of using Bluefin P2PE technology. | <urn:uuid:4dff436a-5442-4eff-a4d9-4b37f6f0ac4d> | CC-MAIN-2022-40 | https://www.bluefin.com/bluefin-news/global-cybersecurity-laws-regulation-changes/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00419.warc.gz | en | 0.941189 | 936 | 2.78125 | 3 |
Following is an excerpt from Executing Data Quality Projects: Ten Steps to Quality Data and Trusted Information™, 2nd Ed. (Elsevier/Academic Press, 2021) by Danette McGilvray.
What can we do to ensure the quality of the data on which our world depends? One of the first steps is to recognize that data and information are vital assets to be managed as intentionally as other assets. As assets, data and information have value and are used by an organization to make profits. As resources, data and information are essential to performing business processes and achieving organizational objectives. Historically, organizations understand that people and money are assets with value and must be managed to be successful. Information, however, is often seen only as a byproduct of technology, with lip service paid to the data while actions focus only on technology. Let’s do some comparisons.
Managing Information vs. Managing Money
Every organization manages its money, often through a dedicated Finance department, with roles such as chief financial officers, controllers, accountants, and bookkeepers. Each role helps to manage financial assets, and no one would consider running their company without them. But when it comes to information, how many people know those with specialized skills to manage data quality even exist?
Everyone knows finance-related roles must be budgeted for and people hired. No one expects the person who supports the financial software to set the chart of accounts. So why should an organization resist the idea of hiring professionals whose expertise is centered on data? Most people know accountants are necessary and understand generally what they do. In the same way, I hope that in my lifetime most people will generally know what data professionals do and no organization would think of running the enterprise without their specialized expertise.
Managing Information vs. Managing People
Every organization has to manage its people. The Human Resources department oversees this process, but many roles are involved. When managers hire people and offer contracts, they must stay within the parameters of the job classes, job roles, titles and compensation guidelines set by the central Human Resources department. Everyone understands that a line manager does not have the authority to negotiate benefits packages on behalf of the whole company. Yet when it comes to information, how often do managers create their own databases or purchase external data without considering what company- wide information resources already exist to fill their needs? Is that wise management of information assets? Similarly, every person who creates data, updates data, deletes data, or uses data in the course of their job (just about everyone) affects the data. Yet how many of them understand the impact they have on this important asset called information? Are we really managing our data and information assets if people do not understand how they affect them?
Management Systems for Data and Information
Similarities between managing human and financial resources and information resources are clear. In the details, people are managed differently than money and differently than data and information. An appropriate management system is required to get the most value from a particular resource or asset type. Management system refers to how an organization manages the many interrelated parts of its business such as processes, roles, how people interact with each other, strategy, culture, and “how things get done.” We also need what Tom Redman (2008) calls a management system for data. Data and information quality management is an essential component of data management.
Managers and executives must lead the way by investing in data quality – to ensure data is properly managed, with enough money, time, and the right number of skilled people involved. Individual contributors can help others to understand the value of information assets and do their part to manage them appropriately.
Join Danette to learn about the fundamentals necessary to managing data assets by attending; Ten Steps to Quality Data. This 3-day virtual course will take place from 15-17 June 2022 and 30 November – 2 December 2022. To book your place visit: https://irmuk.co.uk/events/ten-steps-to-data-quality/
Watch Danette talk about the importance of data quality https://www.youtube.com/watch?v=awJYntMW8sA&t=4s
Bio: An internationally respected expert, Danette McGilvray is known for her Ten Steps™ approach, used by multiple industries as a proven method for increasing the value of data through quality and governance. It applies to operational processes and also to focused initiatives such as security, analytics, digital transformation, artificial intelligence, data science, and compliance. Danette guides leaders and staff as they connect business strategy to practical steps for implementation. As president and principal of Granite Falls Consulting, Inc., Danette is committed to the effective use of technology and also to addressing the human aspect of data management (communication, change management, etc.)
Danette is the author of Executing Data Quality Projects: Ten Steps to Quality Data and Trusted Information™, 2nd Ed. (Elsevier/Academic Press, 2021). The first edition (2008) is often described as a “classic” or noted as one of the “top ten” data management books. She is a co-author of “The Leader’s Data Manifesto”, and has overseen its translation into 20 languages (see www.dataleaders.org). | <urn:uuid:e7c6daf5-866f-41c2-8145-e309ce7a6c5d> | CC-MAIN-2022-40 | https://www.irmconnects.com/following-is-an-excerpt-from-executing-data-quality-projects-ten-steps-to-quality-data-and-trusted-information-2nd-ed-elsevier-academic-press-2021-by-danette-mcgilvray/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00419.warc.gz | en | 0.950658 | 1,088 | 2.625 | 3 |
The CRIME attack is a vulnerability in the compression of the Secure Sockets Layer (SSL)/Transport Layer Security (TLS) protocols and the SPDY protocol. The abbreviation stands for Compression Ratio Info-leak Made Easy.
It is an attack against secret web cookies sent over connections that use compression with HTTPS (SSL/TLS) or SPDY, Google's HTTP-like protocol. The attack can leave cookie data vulnerable to session hijacking.
Here are the basics about the CRIME vulnerability and how to prevent it from affecting your systems.
CRIME Vulnerability Security Assessment
CVSS Vector: AV:N/AC:H/AU:N/C:P/I:N/A:N
What Is the CRIME Vulnerability?
As noted, the CRIME attack can be executed against SSL/TLS protocols and the SPDY protocol to hijack a user’s session cookies while they’re still authenticated to a website.
This is possible only if the protocols have certain types of data compression enabled. While compression can be quite handy in general, it poses the risk of unintentionally revealing clues about the encrypted content. In particular, the TLS DEFLATE compression scheme was found to be problematic, because its compression algorithm eliminates duplicate strings.
The CRIME technique was categorized as CVE-2012-4929 by MITRE.
How Do CRIME Attacks Work?
To carry out a CRIME attack, cybercriminals abuse a weakness in the compression mechanism of the SSL/TLS and SPDY protocols to decrypt the HTTPS cookies set by a website. The attack typically requires the victim's browser to be lured to a malicious website, which then forces the browser to send HTTPS requests to the targeted site while the attack is executed. The attackers control the path of these new requests.
Cybercriminals can gain information about the size of the ciphertext that the client browser sends. They can then observe how the compressed request payload — the secret cookie sent by the browser plus the injected malicious content — changes in size. When the compressed content shrinks, it is likely that the injected content has matched some part of the secret content they want to gain access to. By observing the change in length — the variation in the compression ratio as the injected content varies — the attacker can potentially discover the value of the user's session cookie.
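To illustrate the underlying principle (not the attack against TLS itself), the toy Python snippet below uses zlib's DEFLATE as a stand-in compression oracle. The secret value, the guess alphabet and the request format are all made up; it simply shows why the compressed length shrinks when a guess matches part of the secret:

```python
# Toy demonstration of a compression oracle, the principle behind CRIME.
# When attacker-controlled text matches part of the secret, DEFLATE's
# duplicate-string elimination makes the compressed output shorter.
import zlib

SECRET = b"Cookie: session=7f3a9c"  # made-up value the attacker wants to recover

def oracle(attacker_guess: bytes) -> int:
    # Models a request where attacker-controlled data is compressed
    # together with the secret cookie before encryption.
    return len(zlib.compress(attacker_guess + SECRET))

def recover(prefix: bytes, alphabet: bytes, length: int) -> bytes:
    known = prefix
    for _ in range(length):
        # Pick the candidate byte that yields the smallest compressed size.
        best = min(alphabet, key=lambda c: oracle(known + bytes([c])))
        known += bytes([best])
    return known

guessed = recover(b"Cookie: session=", b"0123456789abcdef", 6)
print(guessed)  # with luck, prints b'Cookie: session=7f3a9c'
```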
Discovery of the Vulnerability
Adam Langley, a software engineer at Google, made the first hypothesis that such an attack could be executed. Then, the concept of the CRIME attack was officially demonstrated in 2012 by two security researchers, Juliano Rizzo and Thai Duong. They showed how it could impact a wide array of websites. The vulnerability was seen as a potential abuse technique by geopolitical criminals.
Rizzo and Duong presented a demo of the attack at the Ekoparty security conference in Buenos Aires, Argentina. Even before it, the security community had already theorized and discovered many details around CRIME and its relation to compression technique issues.
CRIME Vulnerability Impact
The security experts identified the following as vulnerable: TLS 1.0 applications that use TLS compression, Google's SPDY protocol, older versions of Mozilla Firefox that support SPDY, and older versions of Google Chrome that support TLS compression and SPDY.
Back in 2012, about 42% of servers supported the optional feature of SSL compression, with numerous popular sites being potentially affected. Only 0.8% of servers supported the explicitly embedded SPDY. About 7% of browsers supported compression.
While the vulnerability has a low risk and low probability, its impact can be of medium strength. This is because encryption protocols are at the heart of the top security mechanisms in our digital world. They safeguard the flow of network traffic, and without trusting them, we can’t have any guarantee for online safety.
As the major browsers, Chrome and Firefox, were vulnerable to the CRIME attack technique, Google and Mozilla created patches to address it by blocking the vulnerability. The patches were pushed through automatic updates, so only older versions remained potentially vulnerable.
The two security researchers demonstrated how the CRIME attack could be executed against websites like github.com, dropbox.com, and stripe.com through Chrome. The websites disabled the vulnerable compression in the meantime.
However, despite the timely measures of browsers and websites, the security experts Rizzo and Duong have warned that the CRIME exploit against HTTP compression has not been truly addressed. They believe it can be more prevalent than the TLS and SPDY compression vulnerability.
How to Prevent SSL CRIME Vulnerabilities?
To prevent the CRIME attack, disable SSL compression.
When using the standard settings, CRIME is only a problem for Apache version 2.4.3.
To disable SSL compression, set the following directive in your SSL settings (usually /etc/apache2/mods-enabled/ssl.conf, or /etc/letsencrypt/options-ssl-apache.conf when using Let's Encrypt):
- SSLCompression off
It’s also strongly recommended to upgrade Apache to the latest version.
With SSL compression enabled, Nginx is vulnerable to the CRIME attack in older versions.
To prevent the vulnerability, update to a recent Nginx and OpenSSL version.
The following Nginx versions are known to be secure against this attack:
- 1.0.9 (if OpenSSL 1.0.0+ used)
- 1.1.6 (if OpenSSL 1.0.0+ used)
How protected are your systems? You can use Crashtest Security’s holistic SSL/TLS scanner to check whether they’re susceptible to the CRIME attack and similar vulnerabilities. | <urn:uuid:e2dca008-7c94-47e6-85d2-24640f44af61> | CC-MAIN-2022-40 | https://crashtest-security.com/prevent-ssl-crime/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00619.warc.gz | en | 0.88752 | 1,225 | 3.109375 | 3 |
The coronavirus pandemic, which has swept the globe, has brought out the best in most, but the crisis has also given criminals and other threat actors on the Dark Web the opportunity to profit from the climate of fear caused by the virus.
While technology allows individuals to stay connected during global pandemics such as the coronavirus, threat actors on all levels of the internet, including the darknet, have also benefited tremendously from modern technology, using its accessibility to commit cybercrime.
Both cyber criminals and state-sponsored actors aiming to profit from the confusion and widespread fear of the coronavirus crisis have launched countless schemes which target everyone from the vulnerable consumer to unprepared government facilities.
In the few months since the start of the coronavirus crisis, thousands of scams and threats related to the global pandemic have emerged. Both government and cybersecurity officials have cautioned that social isolation and fear can cause clients to open themselves up to compromising positions by clicking malicious links or URLs.
Scams originating on the Dark Web include the sale of counterfeit masks, latex gloves, fake home testing kits for the virus, bogus preventative drugs and fake vaccines, which threat actors promote through text messages, emails, social media and more.
Threat actors have also begun to create new types of malware and ransomware, have been intercepting traffic from videoconferencing and have registered numerous domain names to run phishing campaigns that target people’s emails, passwords and personal information.
With millions of people working from home, there are more vulnerable points for cyber attacks as employees are not surrounded by their usual IT infrastructure and are highly susceptible to malicious cybercrime.
Using Cobwebs’ automated tools such as Webloc, the digital footprints of the threat actors behind the phishing attacks targeting individuals can be identified, traced, mapped and monitored in a non-intrusive way, allowing investigators to find the source.
Webloc meticulously races through and scans endless layers of the internet and Dark Web, analyzing all available information and providing effective and real-time intelligence to authorities.
Cobwebs' AI-driven search engines, capable of automatically sifting through an infinite amount of critical data across all layers of the internet, including open source and the dark web, optimize investigations and provide authorities with precise intelligence much faster than ever before.
Machine-learning and AI capabilities that scour the darknet can also unveil hidden criminal connections, potentially preventing additional crime. The AI-powered search engine allows authorities to extract critical insights from across the internet, and real-time alerts provide the ability to respond in optimal time.
With Cobwebs’ solutions like social network analysis and location based data, clients are able to instantly extract data to then analyze and verify the identity and location of the threat actors to prevent them from further criminal acts.
With web intelligence capabilities providing the algorithms to identify and mitigate threats, authorities are able to remain one step ahead of cybercriminals who aim to make big profits from the deadly coronavirus pandemic. | <urn:uuid:8d0b1d29-e757-4a15-8a9e-546eecc69ecf> | CC-MAIN-2022-40 | https://cobwebs.com/the-dark-web-and-the-coronavirus-pandemic/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00619.warc.gz | en | 0.931262 | 622 | 2.875 | 3 |
By now, we’re all a bit coronavirus weary–maybe even a bit anxious to find out how things are progressing as states and businesses begin to re-open. There are endless offerings of websites, apps, news stories, etc. to stay current. Now more than ever, it is critical to exercise caution in your consumption of news. There is a lot of malicious information on the internet, exploiting an already-tense situation. Here are some tips to help keep yourself safe in your digital space:
- Do your research! If you want to download a COVID-19 tracking app – or any app for that matter – find out if it's a legitimate download. This one in particular is very informative (and safe to click on)! However, someone has created a mimic of the tracker app containing malware that, if downloaded, will steal credentials and other personal data. If an app or website asks you to download something, be wary.
- Hover over hyperlinks to view their source. Example: You get an email with a link to an interesting article headline in the South Florida Sun Sentinel. You hover your cursor over the link and the URL says https://www.sunn-sentine1.com/. Then you look up the actual publication online and the real website is https://www.sun-sentinel.com/. Do you see the difference? Hackers will make very subtle differences in malicious links. Had you clicked on the link in the email, you likely would have compromised your data.
- If you receive a suspicious email, do NOT click on any links or share personal information. See tip #2!
U.S. cybersecurity officials are warning K-12 educators of an uptick in cyberattacks designed to exploit and disrupt distance learning during the COVID-19 pandemic.
Bad actors are targeting schools with ransomware, data theft and other attack methods, the FBI and Cybersecurity and Infrastructure Agency (CISA) said in a new advisory.
“Cyber actors likely view schools as targets of opportunity, and these types of attacks are expected to continue through the 2020/2021 academic year,” the alert says.
“These issues will be particularly challenging for K-12 schools that face resource limitations; therefore, educational leadership, information technology personnel, and security personnel will need to balance this risk when determining their cybersecurity investments.”
These attacks have not slowed, and cybercriminals are utilizing familiar attack methods and tools, according to officials.
The agencies, citing the Multi-State Information Sharing and Analysis Center, said 57% of reported ransomware incidents involved K-12 schools in August and September, up from 28% between January and July.
The most common ransomware strains targeting education are Ryuk, Maze, Nefilim, AKO and Sodinokibi/REvil, according to the advisory.
Cybersecurity officials have also observed malware attacks on state, local, tribal and territorial educational institutions over the last year. Zeus is highlighted as the most common type of malware hitting schools on Windows operating systems. Attackers use it to infect machines and send stolen information to command-and-control servers.
Meanwhile, Shlayer targets MacOS systems through malicious websites, hijacked domains and malicious advertising.
Phishing and social engineering
A frequent type of attack on the enterprise – phishing – is also becoming common in education, with cyber actors targeting students, parents, faculty, IT professionals and others involved in distance learning operations. These attacks masquerade as legitimate requests for information via email and trick users into revealing account credentials or other information.
Other attacks leverage fake domains that are similar to legitimate websites in an attempt to capture credentials.
Other disruptions mentioned in the advisory include DDoS attacks and videoconferencing hijacking.
To mitigate these attacks, the agencies recommend a long list of best practices and steps to take, like:
- Patching out-of-date software
- Regularly changing passwords
- Using multi-factor authentication on all accounts
- Setting security software to automatically update and conduct regular scans
- Disabling unused remote access/RDP ports and monitoring logs
- Implementing network segmentation
- Training for students, teachers and other staff
- Looking into a technology provider’s cybersecurity policies and practices before agreeing to a contract | <urn:uuid:adf6d258-f9d8-41b0-b347-1fb270db2674> | CC-MAIN-2022-40 | https://mytechdecisions.com/network-security/k-12-cybersecurity-increasing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00619.warc.gz | en | 0.927479 | 560 | 2.71875 | 3 |
Mobile devices are omnipresent within today’s society. In fact, according to Global Web Index, the average digital consumer now owns at least three different connected devices, which can include laptops, tablets, smartphones, wearables and consoles. Phones are by far the most widely adopted piece of hardware. Pew Research Center reported that 95 percent of Americans own a cellphone of some kind, with 77 percent of these being smartphones.
The mobile revolution has impacted how people complete personal tasks and how they expect to work. As the world becomes more connected and more tech savvy than ever, educators must ensure that their students learn how to use this equipment effectively. More schools are starting to adopt bring-your-own-device (BYOD) policies and even providing hardware in some cases. Before educational institutions start rolling out mobile devices, there are a few critical things to know and consider.
1. Infrastructure Adaptability
In most traditional school settings, items on the network are often under total control of the IT department. This might include staff hardware, the computer lab, routers and projection equipment. Introducing mobile devices into the mix makes the hardware much harder to manage, and makes it harder to ensure that students are using it effectively, particularly in BYOD environments. If students are allowed to leverage personal devices, there's a chance that they could download malicious applications and not maintain the equipment correctly. This opens up vulnerabilities within the school network and can impact an institution's ability to meet federal and industry regulations.
School infrastructure must be able to support devices throughout the campus.
Additionally, mobile devices require a lot more bandwidth capacity than traditional equipment. iPads, a popular choice for many schools, can hog bandwidth, especially if an institution hasn’t thought ahead to upgrade its capacity. Scholastic noted that school administrators and IT professionals must consider usage during certain times of the day and ensure that accessibility is provided throughout different parts of the building. For example, students and faculty might use their mobile devices during lunch time, meaning that there will be a spike of activity and that the cafeteria and teacher’s lounge must facilitate reliable internet connections. Relying on the infrastructure in place will inhibit mobile initiatives and cause frustration across the board. Administrators need to determine the best way to set up the network and configure filters to keep students and their devices safe.
2. MDM Strategy Planning
Businesses across every industry are facing challenges with governing mobile hardware, and educational institutions must learn from these experiences to ensure their rollout goes smoothly. While a BYOD approach offers the opportunity for teaching digital citizenship, schools will still require a capable mobile device management (MDM) solution alongside solid policies. EdScoop contributor Stephen Noonoo suggested talking with educators in other school districts about their MDM experience as well as defining goals early on in the process. This will help choose the best MDM solution for your school’s particular needs and ensure that the system plays well with active operating systems and device platforms.
Device management plays a critical role in adhering to child privacy laws, filtering out explicit content and keeping parents informed about what software is being used. IT administrators need to thoroughly assess applications and ensure that student information remains secure. It will be particularly important to read agreements, data sharing policies and age requirements. With application control and strong mobile management, schools can advocate appropriate programs and prevent students from utilizing unauthorized apps.
“Teachers will need know how to incorporate the devices into the learning process.”
3. Training and Adoption
Although students and faculty are increasingly interacting with mobile devices in their daily lives, it’s not safe to assume that they will understand how to use them appropriately for learning purposes. If mobile hardware use isn’t given the right guidance, the equipment can be expensive to support and not worth the investment. Teachers will need to be trained on how to incorporate the devices into the learning process and differentiate instruction through a range of apps and Web tools, Edudemic contributor Tom Daccord wrote. Even workflow basics like sharing materials and passing student work back can be complicated if teachers are unfamiliar with how these activities are performed online. By establishing workflow plans and implementing training sessions, teachers can address these challenges.
“Schools that share a common vision for learning, extensive support for teachers in learning to use these new devices, and a willingness to learn from the teachers around the country who have already piloted these tools are much more likely to reap the benefits of their investments in iPads,” Daccord wrote.
Students will also have to learn the best practices of using mobile devices effectively. This could include implementing security, how to update hardware and how to use approved applications. Information can be imparted during lessons as students progress in their school years, but training will certainly be required as technology becomes more sophisticated and capable.
Mobile device use is becoming the norm, and people must understand how to use this equipment effectively for a variety of purposes. More schools are rolling out mobile devices to teach students how to become productive members of society, but these considerations will be necessary prior to implementation.
To learn more about what your school can do to enable productive mobility in education, and keep devices secure, contact Faronics today. | <urn:uuid:d1b43288-4612-42d4-90e2-c87afd697b8a> | CC-MAIN-2022-40 | https://www.faronics.com/news/blog/mdm-for-schools-what-schools-need-to-know-before-rolling-out-mobile-devices | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00619.warc.gz | en | 0.954749 | 1,059 | 2.875 | 3 |
Being aware of one’s surroundings is the greatest form of self-defense.
Here are six security awareness training topics you should consider reviewing with your team in order to bolster your security strategy.
1. Network Security
A secure network involves two facets: strong user credentials and controlled access.
More than 60 percent “of all network intrusions are due to compromised user credentials,” according to Microsoft.
Use strong passwords, and be on the lookout for veiled attempts to reveal those passwords (see the section on social engineering below).
Additionally, comprehensive training should address network access.
“Organizations allowing third-party access were 63 percent more likely to experience a cybersecurity breach,” say the authors of The State of Industrial Cybersecurity 2017 report, “compared to 37 percent of those who did not.”
2. Cloud Security
More than one-third of engineering firms reported a scarcity of cloud security skills, “yet they are continuing with their plans anyway,” according to a Forbes analysis of an Intel Security survey.
The same analysis also found that this lack of skills has slowed down cloud adoption plans.
3. Application Security
When cloud providers “expose a set of software user interfaces (UIs) or APIs that customers use to manage and interact with cloud services,” they open apps to security risks.
The publication CSO analyzed a report from the Cloud Security Alliance, who recommends designing APIs “to protect against accidental and malicious attempts to circumvent policy.”
In 2015, apps contained a median of 20 vulnerabilities, which was up more than three times from 2013.
4. Social Engineering
Social engineering is when people “take advantage of human behavior to pull off a scam,” says CSO.
People may get links they think come from a Facebook friend or LinkedIn connection, but in reality, that link is coming from a social engineer. Clicking or tapping that link can provide scammers with a password they can then use to explore a network.
5. Phishing

Phishing, says TechTarget, is “a form of fraud in which an attacker masquerades as a reputable entity or person in email or other communication channels.”
Three-quarters of organizations experienced phishing attacks in 2017, according to TripWire’s analysis of the 2018 State of the Phish report. Phishing attacks most often resulted in malware infection, compromised accounts, and loss of data.
6. Social Media
The Pew Research Center says nearly 70 percent of Americans use social media. That opens up people and businesses to security vulnerabilities.
“When someone neglects their privacy settings or publicly posts personal notes and photos,” says CSO, “they can leave cybercriminals free to use their information to launch targeted phishing emails containing malware links.”
Cybersecurity involves so many facets, and it will only continue to grow.
However, covering these security awareness training topics with your team is a great start. | <urn:uuid:d9faf304-8c6a-4339-a06b-c29f98417a76> | CC-MAIN-2022-40 | https://blog.integrityts.com/6-security-awareness-training-topics-to-review-with-your-team | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00619.warc.gz | en | 0.945942 | 622 | 2.875 | 3 |
October 4, 2017 | Written by: IBM Research Editorial Staff
The deadliest skin cancer is melanoma, which will be responsible for over 9,000 deaths in the United States in 2017¹. Melanoma is unique among cancers in that it arises as a visible and identifiable mark on the surface of the skin – unlike cancers of the breast, lung, or colon that develop hidden from our view. This would suggest that computer vision, which has demonstrated human equivalency in visual recognition tasks such as facial and object identification, would be ideally suited to aid in early detection of melanoma. However, physicians and patients continue to rely upon their naked eye to recognize melanoma. This begs an obvious question: why aren’t computers aiding the human eye in melanoma detection?
Figure 1 – An example of a melanoma skin lesion (left) and a benign mole (right)
The reason, in my opinion, is not due to a deficiency in computer vision technology or an innate complexity of melanoma detection. Rather, the biggest roadblock to date has been the inability of the medical community to generate large, well-designed, public datasets of skin images with requisite metadata to train systems for accurate detection. This dataset bottleneck has prohibited the study of computer-aided melanoma detection on a large and meaningful scale and prevented comparative studies of the few algorithms developed by those researchers fortunate to have access to non-public skin image datasets. Studies published in this environment contribute to the ongoing “replication crisis” that exists in medicine today; results are impossible to reproduce (or improve upon) by independent researchers if datasets are hidden in private silos.
The International Skin Imaging Collaboration (ISIC) is beginning to address this unmet need though the creation of a large, open-source, public archive of high quality, annotated skin images. At present, the ISIC Archive contains over 13,000 images of skin lesions, including more than 1,000 images of melanomas, with a long-term goal of housing millions of images from multiple imaging modalities for use by: (a) physicians and educators to improve teaching and identification of skin cancer, (b) the general public for self-education, and (c) computer vision scientists to develop and test algorithms for skin cancer detection.
Using a dataset curated from the ISIC Archive, our academia-industry team from Memorial Sloan Kettering Cancer Center, Emory University, IBM Research, and Kitware, Inc. organized the first international melanoma image detection challenge at the 2016 International Symposium on Biomedical Imaging in Prague, Czech Republic. Twenty-five teams participated and we recently published our results in the Journal of the American Academy of Dermatology, comparing the performance of the automated computer algorithms to dermatologists who specialize in skin cancer detection. In this challenge, the average performance of the dermatologists equaled the melanoma diagnostic accuracy of the top individual computer algorithms, but was surpassed by a machine learning fusion algorithm using predictions from 16 algorithms.
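To make the idea of a fusion algorithm concrete, here is a minimal illustration, not the method actually used in the challenge, of one simple way predictions from several classifiers can be combined. The probability values below are invented purely for the example.

import numpy as np

# Each row holds one algorithm's predicted probability that each lesion is melanoma.
# Shape: (n_algorithms, n_images). The numbers are made up for illustration.
predictions = np.array([
    [0.92, 0.10, 0.55],   # algorithm 1
    [0.85, 0.20, 0.40],   # algorithm 2
    [0.88, 0.05, 0.61],   # algorithm 3
])

# Simple fusion: average the probabilities, then apply a decision threshold.
fused = predictions.mean(axis=0)
labels = (fused >= 0.5).astype(int)   # 1 = flag as suspicious for melanoma

print(fused)    # approximately [0.88 0.12 0.52]
print(labels)   # [1 0 1]

In practice, fusion methods can also weight each algorithm by its validation performance or train a second-stage model on the individual predictions, but the averaging above captures the basic idea.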
Based on these results, do I anticipate being replaced by a computer over the next 5-10 years? No, for two reasons: 1) the study had a number of limitations, including not having a fully diverse representation of the human population and possible diseases, and 2) clinicians employ skills beyond image recognition. Our study was conducted in a highly artificial setting that doesn’t come close to everyday clinical practice involving patients.
For example, when examining a suspicious skin lesion, a dermatologist would not only consider relevant clinical data, such as age, lesion history/symptoms, past personal or family history of skin cancer, and context of the lesion relative to the appearance of the patient’s other skin lesions, but might also palpate its texture, wipe it with rubbing alcohol, adjust lighting, or re-position the patient. The contribution of these additional historical and physical examination factors to melanoma diagnosis is unknown, but likely to be significant, and unfortunately we were not able to include these data in our study. Dermatologists also consider dozens of possible diagnoses (as well as the potential medical, psychosocial, cosmetic, financial, and legal ramifications of their decisions) during an examination of a patient and we tested only two diagnoses, melanoma and moles, in the computer challenge.
Nonetheless, having made our dataset available to the broader scientific community, I hope that our efforts represent a new, transparent path forward that spurs interest in melanoma detection among the computer vision community. In the meantime, I will continue to work with my colleagues to build larger, more varied datasets in the ISIC Archive that will accelerate the development of deep learning methods for melanoma detection and more closely replicate the challenges encountered when examining skin lesions on patients. Our recently concluded 2017 challenge is a small step in this direction but there is a lot of work left to do.
1 – https://seer.cancer.gov/statfacts/html/melan.html | <urn:uuid:96ad8b25-c2a6-4966-bf84-1582688fe586> | CC-MAIN-2022-40 | https://www.ibm.com/blogs/research/2017/10/computers-to-aid-melanoma-detection/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00019.warc.gz | en | 0.936797 | 1,027 | 2.78125 | 3 |
Every so often, it’s good practice to change passwords. I think that everyone in IT is aware of that. One of the most overlooked passwords is the local Administrator’s password on every machine on the network.
Sure, your users are required to change theirs every 90 days, and you change your domain Administrator’s password at times as well. The local Administrator on your member servers and all your PCs is usually overlooked, or avoided, because you don’t have time to touch all those machines…
Well, before Windows 2008, we would have to create a script to change the local Administrator’s password, and assign that script in a Group Policy under the Computer Startup Scripts. Usually, the script looks something like this:
Set WshNetwork = WScript.CreateObject("WScript.Network")
strComputer = "."
Set objUser = GetObject("WinNT://" & strComputer & "/Administrator,user")
objUser.SetPassword "N3wP@ssw0rd!"   ' placeholder password - note that it sits here in clear text
This would then apply the new password to the Local Administrator account once the machine got the new policy and rebooted. The problem with using a script, however, is that the new password is in clear-text for anyone to see. (Assuming they dig through the sysvol share.)
With Windows 2008 comes Group Policy Preferences. Policy Preferences can be used to configure things like:
- Folder Options
- Drive Mappings
- Scheduled Tasks
- Local Users
When using it to change the Local Administrator’s password; the password is not stored in clear-text for anyone to read snooping through the sysvol share.
To change the Local Administrator’s password for all machines assigned this Group Policy, edit the policy and choose:
<Computer Configuration> –> <Preferences> –> <Control Panel Settings> –> <Local Users and Groups>
Right click in the white space and select New –> Local User.
Configure the Action for Update, and the username of Administrator, and then your new password twice. You can also change the expiration options, etc.
Once saved, it will now show in the list. You can use this area to add local users if you needed to as well. Some companies may want to set the Local Administrator to disabled, and create a custom Local Administrator with a different username.
That’s it. Once all the PCs get the new policy applied, your local administrator password will be changed. | <urn:uuid:4a7792e3-908b-4f0f-9851-14eae46677a3> | CC-MAIN-2022-40 | https://tsmith.co/2011/changing-local-admin-passwords-on-the-network/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00019.warc.gz | en | 0.872159 | 550 | 2.578125 | 3 |
The world of technology is dramatically changing how every sector and industry conducts business, and that is especially true of universities. Learning on the average college campus has shifted from taking place in large lecture halls and libraries to computer labs and via online classrooms. In order to better prepare pupils with the skills needed in the rapidly evolving technology marketplace, universities across North America are putting a greater focus on computer science education and on arming students with the latest tech-related knowledge and tools available.
New partnership for boosting enrollment rates in computer science programs
To make sure there are enough computer science professionals in the years ahead, the National Science Foundation last month awarded a $6.24 million, five-year grant to the University of Massachusetts Amherst and the Georgia Institute of Technology. The funds will go toward the Expanding Computing Education Pathways (ECEP) Alliance, a new collaboration that seeks to improve best practices for educating young people about computer science and technology in elementary schools, high schools and universities.
According to Georgia Tech, the program will look at previously existing models of teaching to see if a format exists that can be used in other schools in the United States to make computer science more appealing to young people.
“Computing is the world’s newest great science,” Mark Guzdial, Georgia Tech School of Interactive Computing professor and one of the people helping to spearhead ECEP, said in a November statement. “Yet, even though enrollments in U.S. computer science programs are on a four-year rise, it’s still not enough to satisfy the workforce demands of a technology-driven global economy. This new collaboration will drive the discipline forward, enabling states to replicate recent successes in Georgia and Massachusetts that enhanced computing education, grew the pipeline of interested students, and facilitated systemic change to the educational system.”
The program will seek to build on the previous success of a joint collaboration between the two institutions that brought a large number of high school students in Georgia and Massachusetts into college preparatory computer science classes. The ECEP initiative will start off working with pupils and teachers in South Carolina and California.
Ivy League university puts greater emphasis on tech education
In order to better prepare its engineering students for the working world, Columbia University recently changed its course offerings so that pupils in the School of Engineering and Applied Science now have to take a Python programming course. In addition, the Columbia Spectator reported that the school wants to encourage greater classroom computer usage and training among its future engineers by adding five new optional courses.
“I suspect most of the students will take computer science courses with these [new electives],” said professor Adam Cannon, the computer science department’s associate chair for undergraduate education, according to the Spectator. “However, if a student knows that they are going to be applying to grad school in physics or medicine or something else, then they have the opportunity to hit the ground running in that field by taking up to five courses in that area.” He added that “at the end of the day, we think that we’re providing a really solid foundation in computer science and then giving students the opportunity to decide where they want to go from there.”
The news source reported that students enrolled in other programs at Columbia are also clamoring for more classroom software and computer learning, but some recent engineering graduates said that the new courses may not go far enough in regard to providing the skills they need in the workplace.
Are schools and universities doing enough to encourage computer science education? What computer skills should colleges teach to students in order to adequately prepare them for the working world? Leave your comments below to let us know your thoughts on the current state of computer education in North America! | <urn:uuid:35bbebb0-fa92-4724-8765-a112ed2f68eb> | CC-MAIN-2022-40 | https://www.faronics.com/news/blog/latest-trends-in-the-college-computing-world | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00019.warc.gz | en | 0.950955 | 764 | 3.171875 | 3 |
What Is Lean Manufacturing and Do You Need It?
Lean manufacturing definition: What is lean manufacturing? At its core, lean manufacturing is the practice of reducing waste and improving productivity, helping organizations offer a better product or service more efficiently to their customers.
Though the concept of lean manufacturing has been around since the 1930s, the ideas behind it are still relevant today and widely practiced, particularly with the advent of affordable digital technology for small and midsize businesses.
Japanese auto manufacturer Toyota is credited with coming up with the concept, focusing their efforts on two distinct aspects:
- Jidoka: Refers to the process of human oversight of technology—automation with human assistance.
- Just-in-Time: Refers to improving productivity by creating only what is needed, when it is needed, and in the quantity needed.
Naturally, most manufacturing businesses will be familiar with the concept of just-in-time production, but its applicability in the context of modern technology has never been greater.
Today, we’ll be taking a look at the key areas of lean manufacturing and how technology can help address these areas.
Core Methods of Lean Manufacturing
What makes up a lean manufacturing strategy? There are five core methods of lean manufacturing that attempt to streamline operations as effectively as possible. They are:
1. Improving processes
Any process where labor goes in and an output is expected falls under this. Of course, this covers almost everything, but in manufacturing terms it typically applies to processes that can be improved through automation.
These processes are not restricted to any one department; there are several within any manufacturing company—or any company for that matter—that can benefit by using tech to improve their processes. A simple example would be automating payment invoices, order fulfillment, or bulk actions for large orders.
Suggested solution: workflow automation
When we talk about improving processes in an organization, it’s common to pursue automation as a means to do so.
By using robotic process automation (RPA), companies can utilize software bots that are either attended or unattended.
Attended bots require a human to personally trigger the action, whereas unattended bots operate on their own without the need for any input.
By using the appropriate mix of attended and unattended bots, working processes can be streamlined significantly during the manufacturing process, from production to fulfillment.
2. Identifying value
The value of your product is determined by the value that your customers find in it. While this may seem obvious, a large number of manufacturers fail to dig into the valuable data that their customers provide.
A study by Deloitte shows that many manufacturing enterprises are lagging when it comes to broader enterprise-wide initiatives such as customer-centric innovation and human resources.
In other words, many businesses in the industry are not taking advantage of the technology at their disposal to identify value. When they do, they are able to more effectively assess what works and what doesn’t, providing a better overall product.
Suggested solution: big data analytics
In today’s digitally-driven world, big data makes or breaks organizations.
Data is everywhere, in every facet of our lives, and yet so few businesses leverage this information to help them identify valuable insights for growth.
Of businesses that adopt the cloud, 87% of them report business growth from their cloud use. 41% of businesses are able to directly attribute business growth to their use of cloud services.
Related Post: What Is Smart Manufacturing?
The simple fact of the matter is that while many businesses profess a desire to make more use of their data, the reality is that very few of them do.
Data can be useful for practically any job, whether it’s determining how you can better track inventory levels using predictive analytics, monitoring the health of machines in the production line so problems can be routed out before they become a larger issue, or having the ability to more effectively assess your factory’s efficiency for individual processes.
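As a toy illustration of the machine-health idea above, and not a depiction of any particular ERP or analytics product, the sketch below flags sensor readings that drift well above their recent average. This is the kind of simple rule a predictive-maintenance pipeline might start from; the temperature values are invented.

from statistics import mean, stdev

# Invented hourly bearing-temperature readings (in Celsius) from one machine.
readings = [61, 62, 60, 63, 61, 62, 64, 71, 78, 83]

WINDOW = 6            # how many past readings form the baseline
THRESHOLD_SIGMA = 2   # how many standard deviations counts as "abnormal"

for i in range(WINDOW, len(readings)):
    history = readings[i - WINDOW:i]
    baseline, spread = mean(history), stdev(history)
    if readings[i] > baseline + THRESHOLD_SIGMA * spread:
        print(f"Reading {i}: {readings[i]} C looks abnormal "
              f"(recent baseline {baseline:.1f} C) - schedule an inspection.")

Real systems would of course draw on far richer data and models, but even a simple threshold like this shows how continuously collected data can surface problems before they become failures.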
3. Continually improve
Lean manufacturing today, as with most digital initiatives, is about continually improving your processes, rather than treating it as a one-off solution.
This means that once the technology is in place, like a new ERP system, it should be used to consistently look for issues in product quality and waste, helping the business remove factors—often previously unknown—that are slowing processes.
Suggested solution: Managed IT
Being able to scale tech solutions is important for organizations today, as we continue to move away from the traditional approach of one-off software purchases; these older systems are commonly referred to as legacy solutions.
While there’s nothing wrong with legacy solutions per se, software developers and distributors have shifted to cloud-based subscription models for their apps.
Cloud-based solutions offer the best pathway for technology implementation as far as future-proofing is concerned, chiefly because these solutions are very scalable for businesses experiencing growth or decline, alleviating some of the issues that might be present when encumbered with an old system.
Learn more about what a Managed IT program looks like here.
4. Create flow
Flow is highly contingent on improving processes. Flow is when a business improves its processes to the point where an order process, from placement to delivery, runs as smoothly as possible with the tools available.
If there are unnecessary barriers that slow this process down, then your flow is disrupted and you’re losing money, whether it’s through labor costs or the cost of not being able to deliver the kind of service you’d like to.
Suggested solution: process mapping/automation
Establishing good flow begins with understanding exactly how effective your working processes are, which is done through process mapping.
This also ties into automation once more, as after the process mapping is complete, stakeholders can identify where they are having issues with flow and adopt automation as a means to improve it.
5. Standardize processes
Finally, we have standardized processes. It’s impossible to improve your processes and achieve flow if your processes are not standardized to some degree.
Standardization removes guesswork in processes and ensures a defined degree of quality. It also allows you to have a documented system for your processes so you can compare it to your improved processes down the line.
Suggested solution: enterprise resource planning (ERP)
We will go into a little more detail about ERPs later in the post, but ERPs in the context of standardizing processes is extremely important.
This is particularly the case for organizations that still operate with different departments using different solutions that are not connected to one another.
Because of this disconnect, it’s common for businesses to inadvertently create data silos—in effect shutting off their own ability to comprehensively assess their own data across the organization.
Related Post: Breaking Down Data Silos: Unify Your Business Data
By adopting an ERP, companies can integrate all their solutions into a single dashboard, making data accessible at any time from anywhere.
This allows for far more standardization, and is particularly important for businesses that operate across multiple offices or locations.
Eight Wastes of Traditional Operations
In addition, there are the so-called “eight wastes of traditional operations” that prevent lean manufacturing. These are:
- Unnecessary transportation
- Excess inventory
- Unnecessary motion of people or equipment
- Idle employees or equipment
- Over-producing a product
- Over-processing a product, such as adding unnecessary features that add no value
- Defects that require costly rework or scrap
- Workers, in terms of whether they’re being effectively used according to their skillsets
The eighth factor is a newer addition, but is nonetheless an important waste that should be removed.
If any of these wastes are familiar, then it’s time to start considering how you can address these issues and improve the processes with your operations to achieve lean manufacturing.
What’s the Solution to Waste?
Businesses can assess their issues through an audit, either by themselves or through a third party like an MSP.
Once you’ve identified the waste issues that are holding your operations back, it’s time to put in place digital solutions that can help you address them.
Enterprise resource planning systems
An ERP is a must for manufacturing businesses. ERPs are systems that unify data across a business, and can cover a wide range of areas, such as:
- Inventory and supply chain
- Automated reports
- Project management
- Human resource functions
- Sales and marketing
The amount of data in the world, and by extensions in businesses, is truly staggering, but what if you’re not using that data?
Over the last two years alone 90% of the data in the world was generated.
Well, to put it bluntly, the organization as a whole suffers. The key characteristic of a modern ERP platform is its ability to examine big data sets automatically and provide you with actionable data.
Analyzing data sets has always been a part of manufacturing, but the scale and speed at which it can be done today is unmatched—one of the reasons the use of predictive analysis grew 76% from 2017 to 2019.
For manufacturers, this is crucial for understanding weaknesses and fixing them, creating leaner operations.
For more information about how businesses can use analytics with ERPs to streamline their supply chains, click the related post below and see what the frontrunners are doing to cut waste and implement leaner operations.
Customer relationship management
CRMs can come either standalone or as a module of an ERP system. Similarly, they use data sets—but relating solely to clients.
Your CRM will be able to give you actionable data on addressing customer satisfaction with your service and product.
By keeping your customer data all in one place and using the tool for analysis, you’re able to better understand what your customers like and dislike and change your service accordingly.
It’ll also help you identify upselling and retargeting opportunities within your consumer base, amongst other things.
If you came here asking, “What is lean manufacturing?”, we hope this blog post gave you an understanding of what it is, how it’s deployed by organizations, and why it’s important for manufacturers today.
Lean manufacturing is a term that has been used around the United States since the late 20th century and it’s as important today as it ever was—just in a very different context.
The principles of cutting waste, improving processes, and streamlining manufacturing operations are hardly new, but the tools with which to achieve these goals are today more viable for SMBs than they have ever been.
There’s a reason organizations across the country are implementing cloud ERP systems and similar modules on a large scale; it’s to take advantage of the wealth of data that companies have at their disposal, and to use that data to drive meaningful change.
If you’re taking a look at the list of wastes, and any of them are ringing true, it’s definitely time to think about getting the correct solutions in place so that you can address the issues head-on, creating a leaner manufacturing operation that will reduce expenditure and maximize productivity.
In light of recent events, many organizations have found themselves playing catchup, trying to implement makeshift cloud solutions to make up lost ground while their workforces see drastic transformations.
To find out more about how the cloud can ensure your business is in good shape for the future, download our eBook, “Which Cloud Option Is Right For Your Business?” | <urn:uuid:26bca15d-afe8-4687-845c-0a634e7937e5> | CC-MAIN-2022-40 | https://www.impactmybiz.com/blog/what-is-lean-manufacturing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00019.warc.gz | en | 0.949154 | 2,427 | 3.015625 | 3 |
Sometimes you may need some additional guidance while working through our lessons, in these scenarios, you can add a print statement to your code and get output to help you debug! Keep in mind, you will need to interact with the target application to trigger output and that interaction is dependent on the lesson instructions (login, add post etc.)
For printing debug statements, do not use the browser console for output. This tab will act like your development terminal/console.
Also helpful: Why don't I see output during tests?
See below for an example of printing output in each of our supported languages:
C# / .NET

Console.WriteLine("Login");

Go

import ( "fmt" )
// then, inside the function you are debugging:
fmt.Println("Login")
SQL Injection Part 1 Example:
First, put a print statement in the code editor. In this case, we are putting a print statement in the login function and having it print "Login".
Next, go back to the Target Application and log in since this is the functionality your print statement exists in.
After you log in you will see the Sandbox Output will now have output in it.
If you open the Sandbox Output you will see your print statement has been posted. | <urn:uuid:3b73567d-9485-4c4d-956c-4faf69710794> | CC-MAIN-2022-40 | https://help.hackedu.com/en/articles/4789265-tips-for-getting-output | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00019.warc.gz | en | 0.828623 | 315 | 2.71875 | 3 |
In this blog post, I will talk about a Live CD Linux distribution geared towards preserving privacy, anonymity and circumventing censorship.
Who would use Tails? Journalists interested in keeping their sources private, people who reside in an oppressive regime, people using internet cafés, tourists in foreign nations where there is surveillance, business people conducting affairs in foreign countries where competitors might seek to intercept their communications to gain an economic advantage, law enforcement, and spies.
Basically anyone who would like to minimize the digital trail they leave when using the Internet.
What is Tails? Tails is a Linux distribution that can be run off a live CD or a live USB stick.
Where would you use Tails? Anywhere you suspect your Internet traffic is monitored. (That’s pretty much everywhere recently…)
When would you use Tails? When your communication really must be private. While using Tails does not guarantee anonymity, the system is geared towards securing communications by default, as well as erasing any tracks generated while in use (The system even wipes memory at shutdown, to circumvent cold boot attacks)
How does Tails work? “Tails is a live system that aims to preserve your privacy and anonymity. It helps you to use the Internet anonymously and circumvent censorship almost anywhere you go and on any computer but leaving no trace unless you ask it to explicitly.”
The makers of Tails, who choose to remain anonymous presumably to avoid being pressured into including backdoors in their system, also include instructions on how to download their operating system, as well as links to tutorials on how to verify the integrity of your ISO, to avoid man-in-the-middle attacks.
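Tails’ own documentation recommends OpenPGP signature verification; as a simpler illustration of the same idea, the sketch below compares a downloaded image’s SHA-256 checksum against a known-good value published over a trusted channel. The file name and expected hash here are placeholders, not real values.

import hashlib

def sha256_of(path, chunk_size=1024 * 1024):
    # Compute the SHA-256 digest of a file without loading it all into memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder values - substitute the real file name and the checksum
# published by the Tails project.
iso_path = "tails.iso"
expected = "0" * 64

if sha256_of(iso_path) == expected:
    print("Checksum matches - the image is intact.")
else:
    print("Checksum mismatch - do NOT use this image.")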
I proceeded to burn this ISO image onto a blank DVD and booted from the optical drive. Achieving this will be different on each system, but here is a link to a wikihow page explaining this process.
The steps to achieve this may differ with the equipment you are using. While it is possible to use the ISO and create a bootable USB stick, I found the old optical drive technique to be the easiest.
This method also has the added security of being “read only” media, as once the data has been written to the optical disk, it cannot be modified. I tried to get it to work on an SD card, but my initial target system did not have a "boot to SD-card" option. I also tried to make a bootable USB stick directly from the ISO with no success. I suspect that having created it on a different system than the one I wanted to use it on may have played a part in it not working.
My initial attempts to get Tails to work on an Eeepc were unsuccessful (the screen was too small, so it was not really usable).
I was able to boot to the Tails operating system on an older laptop using the optical drive, and I was greeted by a desktop environment that is pretty much the same as Debian, from which Tails is derived.
After Tails has synchronized the clock, it automatically connects to the TOR network to ensure your internet traffic is anonymized. I discovered you cannot create persistent storage, as I was on a live CD. (doh) From the applications menu, if you select the "Tails" sub menu, you can start the "Tails Installer" to begin creating a live USB stick that will boot to Tails. (It took me a couple of tries with different USB sticks until I found one that worked.)
Once the USB version of Tails was created, I changed the boot order in my BIOS and booted from it. The live USB stick version is considerably faster to start, and affords me the option to create a persistent partition, on which you can save documents between sessions.
Tails comes with a small set of applications, all geared towards maintaining your privacy.
TOR enabled by default.
KeePassX, a cross-platform password manager.
Pidgin Chat client, cross-platform compatible, supporting OTR messaging.
MetaData Anonymization toolkit, to strip metadata from documents.
I2P client, a network similar to TOR.
Whisperback to send feedback via encrypted e-mail.
So after some fiddling around, I have a working copy of Tails, on a live USB stick. This is a useful portable operating system to add to your toolkit, for when you need your communications to remain private. | <urn:uuid:60d633ea-21f7-4b5e-be03-95574f23ef60> | CC-MAIN-2022-40 | https://www.malwarebytes.com/blog/news/2014/05/tails-the-amnesic-incognito-live-system | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00220.warc.gz | en | 0.951899 | 996 | 2.515625 | 3 |
By: R. Mehla
Advancements in technology and digitalization in the healthcare industry have promoted the growth of tools like e-health and m-health, which are used at various levels in health promotion programs. The use of information technology and telecommunications in healthcare is referred to as e-health. It includes prescription renewals, online appointments, exchanging healthcare data and medical records via a dedicated program, and many more features. M-health, on the other hand, is a sub-domain of e-health in which mobile devices are utilized to deliver healthcare services. It involves using communication technologies such as smartphones, PDAs, tablets, and wearable devices such as smartwatches for treatment and for maintaining health and medical records. E-health and m-health have several advantages, including the availability and accessibility of healthcare services, low distribution costs, customization, and real-time treatment of patients, among others. However, all of this comes at the expense of the massive amounts of data generated by these devices, which brings us to the term big data.
Big data refers to the collection of enormous amounts of data, which could be structured or unstructured. However, in big data it is the knowledge derived from the data that is useful, not the volume itself. It necessitates the employment of tools and analysis methodologies capable of extracting meaningful insights from it at higher abstraction levels. Deep learning is a multi-layered framework that can learn from unstructured data, making it an effective tool for big data analytics. Deep learning’s characteristics like self-training, self-learning, and adaptability make it ideal for processing and analyzing extensive data created by e-health and m-health applications. Patient monitoring, healthcare information technology, intelligent assistance, diagnosis, and information analysis and cooperation are only a few uses of deep learning for big data in e-health and m-health. Apart from these basic functionalities, the systematic use of deep learning in a healthcare system is shown below in Figure 1 [1]. It consists of three main phases: i) creation of the digital knowledge base, ii) application of deep learning strategies in diagnosis and selection of treatment, iii) development of the clinical decision support systems.
Health data isn’t just for archiving and passing on to future medical fellows or professionals. It must be thoroughly analyzed to detect patterns, a task well suited to deep learning. The identified trends could be used to spot symptoms of a specific disease ahead of time, which may not be possible with human review alone. Deep-learning models have attained accuracy equivalent to that of health professionals in various diagnostic tasks, such as distinguishing melanomas from moles [2], breast lesion detection in mammograms [3-4], and spinal analysis with magnetic resonance imaging [5].
There are different deep learning techniques like convolutional neural networks (CNNs) and recurrent neural networks (RNNs), which are becoming quite popular in the field of healthcare. CNNs are mainly used in medical imaging to perform complicated diagnoses [6]. They are employed in various medical areas like dermatology, radiology, ophthalmology, and pathology. Image data is fed into CNN models, which iteratively transform it via a series of convolutional filters until the raw data is converted into a probability distribution across all possible image classes. RNNs, another deep learning methodology, popular for analyzing large text and speech datasets, also play an important role in the healthcare domain. With the advent of e-health and m-health, electronic health records (EHR) are increasingly prevalent. Doctors might end up spending more than half of the workday on EHR paperwork, leading to fatigue and less time with patients. This could be mitigated by automated transcription, using RNN-based speech-to-text models in any language.
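As a rough illustration of the kind of CNN classifier described above, the sketch below defines a very small model in PyTorch. It is a toy, two-class example (for instance, lesion versus no lesion) and is not any of the published diagnostic systems cited here; the input size and layer widths are arbitrary assumptions.

import torch
import torch.nn as nn

class TinyMedicalCNN(nn.Module):
    # A deliberately small CNN: two convolutional blocks followed by a classifier head.
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # RGB input
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 128 -> 64
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64 -> 32
        )
        self.classifier = nn.Linear(32 * 32 * 32, num_classes)

    def forward(self, x):
        x = self.features(x)
        x = x.flatten(start_dim=1)
        return self.classifier(x)

model = TinyMedicalCNN()
dummy_batch = torch.randn(4, 3, 128, 128)   # four fake 128x128 RGB images
logits = model(dummy_batch)
print(logits.shape)                          # torch.Size([4, 2])

A real diagnostic model would be far deeper, typically pre-trained on large image collections and then fine-tuned on carefully curated and labeled medical images.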
Notwithstanding these efforts, deep learning-based systems contain several errors and inefficiencies because the data in the healthcare field is exceptionally diverse, confusing, noisy, and incomplete. It’s challenging to train an effective deep learning model with such large and diverse data sets . Apart from that, there are several other challenges which are stated as follows:
- Understanding ailments and their variations are far more complicated than tasks like image or speech recognition.
- Diseases are continuously evolving and changing in unpredictable ways over time. However, most of the deep learning models assume static inputs, which become ineffective with time.
- The diseases are incredibly diverse, and we still don’t know everything there is to know about the etiology and progression of the majority of them.
- Furthermore, in a real-life clinical situation, the number of patients is generally restricted, hindering the professionals from collecting more data and training an effective deep learning model.
All of these problems present several opportunities and future research options for the profession to develop. As a result, keeping all of these in mind, we propose the following directions, which we feel will pave the way for deep learning in health care in the future.
- Norgeot B, Glicksberg BS, Butte AJ. A call for deep-learning healthcare. Nat Med. 2019;25(1):14‐15. doi:10.1038/s41591-018-0320-3
- Haenssle, H. A. et al. Man against machine: diagnostic performance of a deep learning convolutional neural network for dermoscopic melanoma recognition in comparison to 58 dermatologists. Ann. Oncol. 29, 1836–1842 (2018).
- Cheng, J.-Z. et al. Computer aided diagnosis with deep learning architecture: applications to breast lesions in us images and pulmonary nodules in CT scans. Sci. Rep. 6, 24454 (2016).
- Kooi, T. et al. Large scale deep learning for computer aided detection of mammographic lesions. Med. Image Anal. 35, 303–312 (2017).
- Jamaludin, A., Kadir, T. and Zisserman, A. Spinenet: automatically pinpointing classification evidence in spinal mris. In International Conference on Medical Image Computing and Computer-Assisted Intervention 166–175 (Springer, 2016).
Cite this article:
R. Mehla (2021) Application of Deep Learning in Big Data Analytics for Healthcare Systems, Insights2Techinfo, pp. 1 | <urn:uuid:46ba6279-53a0-4976-984e-4107500f0ac5> | CC-MAIN-2022-40 | https://insights2techinfo.com/application-of-deep-learning-in-big-data-analytics-for-healthcare-systems/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00220.warc.gz | en | 0.92672 | 1,303 | 3.5625 | 4 |
By: MD T. Chishty
A smart city is a concept that various private organizations and governments use to change and modernize how a city runs, making cities a better place for humanity to live. It is built on information technologies and communication technologies [1-3].
A core aim of a smart city is the use of ICTs to create a sustainable integrated living environment, as represented in figure 1. The focus across each municipal operation, from transportation to utility management services, to emergency management, should be centered on how new technologies can remove inefficiencies and add value. It means that IT should be embedded into the fabric of daily life, leading to a better quality of life for all inhabitants [4-6].
Present scenario of smart cities
These technologies build an intelligent network that connects all systems into one extensive whole. Cloud-based IoT is also used to modernize the things around it. Many things can be improved, like trash disposal, energy distribution, traffic congestion on roads, and air pollution levels. Cities like New York, Paris, and Tokyo are continuously developing strategies to change or adapt their existing systems to the infrastructure of smart cities. The Indian government has already planned for 100 such smart cities to boost economic growth. Countries like China also have numerous ongoing smart city projects with the potential to strengthen city economies and to educate residents and workers, since they increase the number and proficiency of people using smart city technologies. These smart city projects require substantial funding as well as technical expertise.
The Pros and Cons of smart cities
A smart city is defined as a city that utilizes broadband to connect different service providers and citizens in an effort to improve the quality of life for individuals who live there. The idea is to use the information gathered from the sensors and control systems around the city in order to predict and manage factors such as traffic, energy consumption, pollution levels, and public safety. While this concept has its benefits due to increased efficiency and reduced risk factors, many critics say that creating a controlled environment could be detrimental to residents’ ability to think critically and make uniquely informed decisions about what they want for themselves.
How to make a city smarter?
As you can probably ascertain from the blog title, cities will get smarter as time goes on. But how do we make them so? One of the first steps is to eliminate, as much as possible, the by-products that we consider waste. For example, rain that falls on the roofs of buildings washes metals and chemicals out of our urban environments and runs into storm drains. Another step is to use sensors and networked technology such as wi-fi, desalination plants, and even smart cars and public transportation systems.
Technology used in smart cities
Smart Cities are based on the idea that technology can improve the quality of life for everyone. Networks like smart grids, transportation systems, emergency services, communications, and more will all work together to make cities more accessible and livable for all residents. The possibilities are endless with this area of development!
Objectives of smart cities
A smart city has various objectives, among them efficiency of services, sustainability, mobility, safety and security, economic growth, and city reputation. When we talk about security in smart cities, there are a number of potential threats and vulnerabilities. These threats include man-in-the-middle attacks, data and identity theft, device hijacking, distributed denial of service (DDoS) attacks [7], and permanent denial of service (PDoS) attacks.
Each of these threats has corresponding countermeasures:
- Data and identity theft: authentication, encryption, and access control.
- Device hijacking: device identification, access control, and security lifecycle management.
- Permanent denial of service (PDoS): authentication, encryption, access control, application-level DDoS protection, and security monitoring and analysis.
- Distributed denial of service (DDoS): device identification, access control, and security monitoring and analysis.
- Man-in-the-middle: authentication, encryption, and security lifecycle management.
An example of a control system being attacked by hackers is the control system of a train. If a train’s control system is hacked into, the hacker can drive the train and cause a railroad accident, which is a massive security problem.
Multi-layered protection and authentication: Building security in by design can help ensure that a smart city network is well positioned to withstand any attempted breach or cyber attack. Ensuring that the overall network, and every device on it (or at least the ones owned and controlled by the authority), deploys a multi-layered security framework makes it a far more daunting and less appealing target for would-be intruders. Likewise, secure, multi-factor authentication can act as a deterrent in itself and is a general improvement over basic one-step network logins.
Regular updates: The issue with many current IoT networks is that updates are a manual task, and many users simply neglect to run them. This can leave critical holes within the network that set the stage for cyber attacks. In a large, city-wide IoT network, it would obviously be impractical to update every device manually. Building in automatic updates would ensure that every device monitors its own health and auto-installs any security patches or new software from authorized, trusted developers.
Smart cities statistics
- According to Grand View Research, the worldwide market for smart cities will be valued at $676.01 billion by 2028. The report grounds its projections on the notion that “demand for smart cities is expected to rise largely as a result of reasons such as urbanisation, the increasing need to manage scarce natural resources effectively, and environmental sustainability.”
- The NIST SCCF is a useful framework that lays out best practices for the development of smart cities, from planning through to securing them.
- The United States accounts for 26% of all smart city initiatives.
- China has deployed 800 smart city programs all over the country.
- According to ABI Research, there will be 1.3 billion WAN connections by 2024, and annual investment in cyber security is expected to reach 135 billion dollars.
Challenges faced by smart cities
- Updating and maintenance are an issue for most cities, which leaves systems insecure.
- Network protocols are still vulnerable; attacks on them remain easy to mount.
Work needed to be done
- Encrypted data: Information should be encrypted uniformly. Encryption is a technique for encoding data in such a way that it is useless and unintelligible to anybody except those who possess the key that can decrypt it. Two-factor verification should also be used to protect the encryption key. Because smart city infrastructure handles very sensitive data, encryption should be used as a matter of course; then, even if attackers get close enough to sensitive data, they will be unable to use it. (A minimal code sketch of this idea follows this list.)
- Constant security monitoring: Security auditing needs a dedicated team capable of monitoring traffic and looking for irregularities. This may be automated using security software that can sift through large amounts of data and look for indicators of compromise. Once potential risk areas are identified, they can be disconnected, preventing any information leaks.
- Support platform: Any new service platform should be capable of securing a broad range of connected environments and devices. Given that smart cities are made up of several businesses, SaaS, IaaS, and cloud environments, a unified security architecture should be implemented to ensure the security of all components of a connected city.
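As a minimal sketch of what "encrypt sensor data by default" can look like in practice, the example below uses Fernet from the third-party Python cryptography package, which provides authenticated symmetric encryption. The sensor reading is invented, and real deployments would also need key management, key rotation, and hardware-backed storage, none of which are shown here.

from cryptography.fernet import Fernet, InvalidToken

# In a real system the key would come from a secure key-management service,
# not be generated inline like this.
key = Fernet.generate_key()
cipher = Fernet(key)

reading = b'{"sensor_id": "traffic-cam-17", "vehicles_per_min": 42}'
token = cipher.encrypt(reading)          # safe to store or transmit

try:
    original = cipher.decrypt(token)     # decryption also verifies integrity
    print(original.decode())
except InvalidToken:
    print("Data was tampered with or the wrong key was used.")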
The development of smart cities can bring benefits for businesses, city services, and individuals. However, the security of the underlying digital infrastructure is critical to success. Organizations and device manufacturers should adopt emerging standards and guidance to ensure systems are ‘secure by design’ and perform testing before and after installation to address any flaws. Furthermore, operators of live smart city technology must seek to understand the security issues facing their environments and systems if they are to mitigate the risks before incidents occur. The cities of tomorrow will undoubtedly be smarter as the years go on, but getting IoT security right will be the difference between a smart city and a secure city.
- Gharaibeh, A., Salahuddin, M. A., Hussini, S. J., Khreishah, A., Khalil, I., Guizani, M., & Al-Fuqaha, A. (2017). Smart cities: A survey on data management, security, and enabling technologies. IEEE Communications Surveys & Tutorials, 19(4), 2456-2501.
- Xie, J., Tang, H., Huang, T., Yu, F. R., Xie, R., Liu, J., & Liu, Y. (2019). A survey of blockchain technology applied to smart cities: Research issues and challenges. IEEE Communications Surveys & Tutorials, 21(3), 2794-2830.
- Dowlatshahi, M. B., Rafsanjani, M. K., et al. (2021). An energy aware grouping memetic algorithm to schedule the sensing activity in WSNs-based IoT for smart cities. Applied Soft Computing, 108, 107473.
- Chui, K. T., et al. (2021). Handling Data Heterogeneity in Electricity Load Disaggregation via Optimized Complete Ensemble Empirical Mode Decomposition and Wavelet Packet Transform. Sensors, 21(9), 3133.
- Batty, M., Axhausen, K. W., Giannotti, F., Pozdnoukhov, A., Bazzani, A., Wachowicz, M., … & Portugali, Y. (2012). Smart cities of the future. The European Physical Journal Special Topics, 214(1), 481-518.
- Chourabi, H., Nam, T., Walker, S., Gil-Garcia, J. R., Mellouli, S., Nahon, K., … & Scholl, H. J. (2012, January). Understanding smart cities: An integrative framework. In 2012 45th Hawaii international conference on system sciences (pp. 2289-2297). IEEE.
- A. Dahiya, B. Gupta (2021) How IoT is Making DDoS Attacks More Dangerous?, Insights2Techinfo, pp.1
Cite this article as:
MD T. Chishty (2021) Smart Cities: Future of Mankind, Insights2Techinfo, pp.1 | <urn:uuid:3e2739d1-4831-4d8f-ac01-c2033d36b219> | CC-MAIN-2022-40 | https://insights2techinfo.com/smart-cities-future-of-mankind/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00220.warc.gz | en | 0.907772 | 2,240 | 3.484375 | 3 |
How Is Technology Transforming The Classroom Learning Experience?
According to a new report by Global Industry Analysts Inc., the global e-learning market is likely to reach US$107 billion in 2015. Factors driving this growth include increased use of the internet and decreased telecommunication costs.
That is how much technology is influencing the education industry these days. Even in classrooms, the use of technology is on the rise. Another study conducted by Futuresource Consulting, revealed that the global expenditure on technology in classrooms is likely to reach US$19 billion by 2018.
Technology use in classrooms has even led to the evolution of a whole new learning experience known as flipped learning or blended learning.
That said, here are 3 of the latest technology trends in the education industry this year:
- Flipped learning – This is a kind of a blended learning technique in which students learn content online by watching video lectures etc. usually at home and also do work in class with the teachers like discussing and solving questions etc. The basic idea is to engage the learners inside and outside the classroom and thereby provide a dynamic learning atmosphere. This makes learning more effective and fun. For this purpose, video distribution tools and streaming devices will be used widely and more cloud-based learning systems will come into use.
- Personalized learning – There is a wide variety of learning tools which can be personalized according to the needs of the learners. Various approaches like project-based learning, game-based learning etc. can be used with the learners depending on their learning styles. Moreover, with technology, teachers are now able to track the progress of their students in individual subjects or even lessons and find out the areas in which they might need help. Hence, they can further personalize their teaching.
- Online learning – The online revolution in education was brought about by Massive Open Online Courses (MOOCs). They are free online courses offered by many leading universities across the world. They have been revolutionary in providing students around the world with free and quality education that they can finish anytime and anywhere. They offer courses on a wide range of topics like humanities, business, medicine etc. Usually they let learners take their own time in completing a course, but there are timeframes similar to traditional university courses.
With technology becoming such a huge part of the education industry, more schools have started to adopt the blended learning technique. Classroom technology has almost become inevitable for effective learning. Even mobile based learning techniques are on the rise now. Technology has basically become a necessity as far as the future of education is concerned. | <urn:uuid:041ee0a0-97c1-4809-844b-43f69b9032a2> | CC-MAIN-2022-40 | https://www.fingent.com/blog/how-technology-transforming-classroom-learning-experience/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00220.warc.gz | en | 0.966936 | 519 | 2.90625 | 3 |
What is Video Resolution? What are its Applications?
Basics of Video Resolution
Video resolution is defined as the number of pixels that can be displayed in a given area. Video resolution is generally calculated by using width and height. For example, when the resolution of a particular video is 1024 X 768, that means that there are 1024 pixels in the width of the video, and 768 pixels in the height of the video. Video resolution is what allows people to see video images within a video recording. The higher a video’s resolution value, the clearer and sharper the video will be to viewers.
Due to improvements in technology, many monitors currently on the market today, including those on popular smartphones and tablets, can generate 4k and even 8k video resolution offerings. Whether it is for professional reasons or personal use, having a basic understanding of video resolution is well worth the effort. What’s more, those who create videos regularly also need a basic set of knowledge of resolution concepts.
When recording video content, the resolution is a setting chosen from a menu at the time of shooting. When processing or editing that video, other pieces or clips can then be added as needed. In the end, all recorded video segments must be in the same resolution so as to avoid distorted areas in the finished film. Even when uploading a video online, a user is again asked to select the resolution in which the video should be uploaded. Along the same lines, many consumers will have noticed that movies and television shows are offered in both standard definition and high definition.
To illustrate this point further, standard definition is typically a video resolution of 640 x 360 or 640 x 480 for online video, while the standard definition for DVDs is 720 x 480 or 720 x 576. These resolutions are fine for viewing and will still give consumers a satisfying experience, but some people want sharper images, brighter colors, and more clearly defined detail, all of which come from viewing a video in a higher resolution. Furthermore, some movies, such as Star Wars, are much better viewed in high definition (HD). The values typically used for HD video are 1280 x 720 (720p) or 1920 x 1080 (1080p); 1080p is also known as "full HD."
Is Video Resolution the Same as Video Compression?
In short, no. Video resolution refers to the size of the frame in pixels, while compression is about file size. Due to the size of some video files, there may be instances in which a file needs to be reduced. When working with large files in video editing, they can be zipped into a folder to make them smaller, which allows them to be shared over the internet more compactly. Video compression is very similar in spirit: the goal is to reduce the size of a video file to make sharing it easier. For instance, if a person had a 1080p video that was too large to work with effectively, they might shrink or compress it to 720p or even down to SD.
Consumers may also want to reduce the size of a file when uploading a film or video to their computer or phone. However, simply increasing video resolution is not enough to improve video quality: the file size will grow, but the clarity of the film will not. Changing the size of a video changes the file size, but this alone is not the same as video compression. Video compression proper involves reducing the amount of data in a video, that is, its file size. The usual approach is to decrease or remove redundant or unwanted data from the video, which ensures that consumers do not lose any parts of the video they specifically wanted to keep. In summary, compression reduces the data into a smaller file size.
To this end, there are a variety of benefits to using video compression. For instance, data compression is essential in day-to-day tasks like storing or sharing video files. To provide another example, when looking to send a video to a friend or colleague, it can be frustrating to receive an error message that reads “file size exceeds limitations”. As large video files can be burdensome to transfer across the internet or email, compressing a particular video file can clear up available free space on a particular device.
While industry standards vary, the most commonly used standard for video compression is High-Efficiency Video Coding, or HEVC for short. HEVC roughly doubles the amount of information that can be compressed into a video without compromising its quality, allowing resolutions up to 8K UHD (8192 x 4320). While video resolution and compression are often discussed together, each is essential in its own way to understanding how video files work, are compressed, and are displayed. The difference breaks down to this: video resolution is the numerical value of the output display in pixels, and compression refers to reducing the file's size.
Which Resolution Works Best with Video?
While many consumers may be wondering which form of video resolution is ideal for creating, editing, or viewing video content, there is no single answer to this question. The resolution that works best when recording a video depends significantly on how the video will be displayed. How do you want to watch the finished film or video file? Will this video be displayed on a mobile device, a desktop monitor, a big screen TV, or a movie theater? Here are some different resolutions that work best with their corresponding output device.
- 360p – This is a low resolution that works well on smaller screens such as mobile devices.
- 480p – This is an industry-standard resolution for burning video onto a CD.
- 720p – This resolution is when you hit high definition (HD) video playback and is standard for television viewing.
- 1080p – This value is considered “Full HD.” It is also used for television viewing.
- Ultra HD 4K – This is a 16:9 resolution on televisions used for 4K broadcast.
- Cinema 4K – The resolution here is noted as 1.9:1 and is used in cinema projection with larger screens.
So, which is best? The one that plays back best for its intended viewing. If the recording is meant to be played on television, such as a new company commercial, then it should be recorded in at least 720p. However, you may get a better picture at a higher resolution such as Ultra HD 4K.
Factors That Impact Uncompressed Video Quality
There are a variety of factors that can impact the quality of a particular video. These factors include the following (a short worked example follows the list):
- Video Resolution – The primary factor, expressed as the number of pixels in each frame.
- Video Frame Rate – The FPS, or frames per second, shown during the video's playback; this can vary and affects perceived quality.
- Macroblock – The basic processing unit used by many image and video compression schemes.
- Bit Rate – The number of bits processed per second; the sketch below shows how resolution, frame rate, and color depth determine it for uncompressed video.
- Color Depth – The number of bits used to represent the color of each pixel.
- Bit Rate Control Mode – The mechanism that regulates how many bits are spent on a given frame.
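As a rough illustration of how these factors combine, the sketch below (in Python, with illustrative numbers only) multiplies pixels per frame by frames per second and bits per pixel to estimate the raw, uncompressed data rate that a codec such as HEVC must then reduce.

```python
def uncompressed_bitrate(width, height, fps, bits_per_pixel):
    """Estimate the raw (uncompressed) data rate of a video stream in Mbit/s."""
    bits_per_frame = width * height * bits_per_pixel
    return bits_per_frame * fps / 1_000_000  # bits per second -> megabits per second

# Illustrative comparison of common resolutions at 30 fps with 24-bit color.
resolutions = {"480p": (640, 480), "720p": (1280, 720),
               "1080p": (1920, 1080), "4K UHD": (3840, 2160)}
for name, (w, h) in resolutions.items():
    print(f"{name:>7}: {uncompressed_bitrate(w, h, 30, 24):,.0f} Mbit/s uncompressed")
```

Even at 480p the raw stream runs to hundreds of megabits per second, which is why compression, rather than resolution alone, determines whether a file is practical to store or share.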
How should consumers go about choosing a particular video resolution?
As there are so many factors that determine the quality of a given video recording, choosing a particular resolution will depend largely on the goals of the project at hand. For instance, a consumer recording a video for recreational or educational use can likely get away with rendering at a lower quality, as such projects rarely require higher resolution. Alternatively, a professional editing a video for work, such as pitching a TV pilot to a studio, will want to record at the highest resolution possible so that their work is viewed with clarity and precision. Whatever a consumer's particular needs regarding video resolution, advancements in video editing technology have produced solutions at every skill level, from expert to novice. | <urn:uuid:04975712-f66b-47bf-a483-0fe25e5c3bdd> | CC-MAIN-2022-40 | https://caseguard.com/articles/what-is-video-resolution/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00220.warc.gz | en | 0.937256 | 1,728 | 3.453125 | 3 |
Everyone remembers being a teenager and having to choose between right and wrong. How often did you pick the latter – a forbidden party for instance – because it was more exciting than the alternative? Youngsters are impressionable and often make decisions to boost their reputation or get a quick kick. Hackers are no different, but their poor decisions can have far-reaching consequences.
A report by the National Crime Agency (NCA) found that UK cybercrime suspects are on average just 17 years old, with some as young as 12. This is far lower than the average age of those arrested for drugs (37) and financial crime (39). Many of those young suspects wouldn’t have broken the law before, and would be unlikely to do so in the real world. So what drives them to commit crime digitally, and can those forces be overcome?
How young people fall into cybercrime
Youngsters who commit cybercrime typically have a passion for tech which stems from legitimate interests, such as gaming. Their competitiveness (a desired cyber trait) may lead them to forums where they can share tips and learn to hack games. But these forums are often hotbeds for malicious activity, with like-minded youngsters encouraging each other to perform increasingly daring stunts, perhaps unaware of their gravity. Cybercriminals also use them for recruitment, though most youngsters turn to the dark arts not for financial gain, but to earn kudos among their peers.
Many technically minded teens are neurodivergent (one in seven people have conditions linked to neurodiversity), meaning they have atypical ways of thinking. Their skill sets differ from their neurotypical counterparts, with attention to detail, logical thinking and problem-solving – desirable cybersecurity traits – all frequently present. These curious individuals are often unchallenged and underwhelmed by their school syllabus material, forcing them to hone their skills unsupervised elsewhere. And with easy-to-use hacking tools so readily available, this absence of leadership can be dangerous.
However young people fall into cybercrime, most consider their teenage kicks harmless fun. Those who do understand their actions often have no intention of committing ‘serious’ crime and consider their chances of encountering law enforcement slim. But things can and do escalate. Take TalkTalk hacker Daniel Kelley for example, who began hacking maliciously when he failed to get the grades to study a college computer course. Kelley first targeted the college that rejected him before moving onto global companies and high level crime. In 2019, he was jailed for four years.
Why we need young hackers on side
Engaging young cyber talent is as much a social issue as an economic one. We don’t just need to help the 30% of UK businesses lacking advanced cyber talent; it’s also our duty as an industry to protect young people. Hackers currently have so few places to practice legitimately that it’s unsurprising some turn to cybercrime – where else can they exploit services or escalate privileges?
Offensive cyber techniques are important (we understand that as well as anyone), so the industry must start building a positive narrative around hacking. The term might be interwoven with criminality at present, but hackers simply do what others thought impossible. Whether for positive or nefarious ends, being a hacker requires an innovative mindset. By damning those involved in hacking from an early age, we risk driving them to the fringes and perhaps even criminality. We should encourage, not admonish, those with the technical abilities to transform industries – or else the next Marcus Hutchins, the bedroom hacker who committed crime but eventually derailed WannaCry, might never be on our side.
How Immersive Labs is helping keep young hackers on track
Creating a secure, exciting space for young hackers to equip themselves with skills is essential, or else they could be tempted to practice illegitimately. Bug bounty providers such as HackerOne are excellent playgrounds for advanced hackers, with one teenager Santiago Lopez making millions through the platform. Even he admits, however, to being tempted by cybercrime – and most teens aren’t at the level required for ethical hacking.
Immersive Labs creates a safe proving ground for young people to test out cyber tools and problem solving techniques. Our approach to skills development puts real malware and threat actor techniques in the hands of those who will eventually be tasked with opposing them. Content is designed to progress users through the concepts, tools and techniques required for a career in cybersecurity, making it an invaluable resource for aspiring hackers. This is why we provide the Students’ Digital Cyber Academy and the Neurodivergent Digital Cyber Academy to qualifying individuals for free.
We also recently ran a fantastic event as part of Unlock Cyber, who aim to open up cyber careers to young people in the South West (UK). Immersive Labs powered the June competition, which saw various schools complete a combined 1,412 labs. All participating students have been granted access to the Students’ Digital Cyber Academy.
James Webber, who teaches at a participating school, said: “The Unlock Cyber competition in partnership with Immersive Labs was a truly fantastic experience for our pupils. The tasks were challenging but pitched at the right level, with the practice labs released beforehand very useful in concentrating pupils’ efforts on key areas of cyber. It promotes cyber to those interested in more than coding and is fantastic for schools in the South West.”
We have also partnered with the National Crime Agency (NCA) to provide Cyber4Summer, a summer training and learning opportunity for teenagers. This will give young people the opportunity to develop cyber skills online over the summer holidays and forms part of the NCA’s strategy to train today’s youth via the UK Government’s Cyber Essentials scheme. And this means – when it comes to cyber at least – young people don’t have to choose between what’s fun and what’s right. | <urn:uuid:5896299c-d2a7-4125-9524-1ff618c0b19e> | CC-MAIN-2022-40 | https://www.immersivelabs.com/blog/what-does-it-take-to-keep-young-hackers-on-the-right-path/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00220.warc.gz | en | 0.961466 | 1,202 | 2.9375 | 3 |
It wasn’t that long ago that Pacific Gas & Electric Co. (PG&E) implemented rolling blackouts throughout California in an effort to prevent wildfires — an unprecedented move in the U.S. My office in North San Jose was spared, but several other San Francisco Bay Area locations were met with power outages. Critics of the move say that PG&E was in the wrong for shutting off power, as it quickly became a public safety issue. It raised the question of how companies that rely on networks to run their businesses prepare for power outages.
As more organizations move their data to the edge, networks have become more vulnerable to outages caused by natural disasters than when they were run mostly out of a few centralized locations with robust infrastructure and dedicated staff. But as the trend of moving to the edge shows no signs of slowing down, and as natural disasters such as wildfires, hurricanes, and tornadoes increase in both regularity and severity, network engineers must create strategies to protect and sustain edge networks during natural disasters. The following best practices are a good place to start.
- Be strategic at the edge
In the past, companies would focus disaster recovery plans on their headquarters and other major data centers. But now, they also need to think about the edge and evolve their strategies to ensure recovery plans exist for every site.
- Develop a resilient network
After an outage — whether from a natural disaster or not — network engineers must ensure that a plan is in place to quickly and securely bring networks back online. As the move to the edge continues, creating a resilient network that can withstand and recover from a service disruption becomes mission critical. Network resilience allows companies to continue normal operation, even in the face of an outage.
- Have a direct link to core infrastructure
Companies can implement out-of-band management (OOB) at edge sites, giving administrators access to devices remotely and providing visibility into the network. In the event of a disaster, the network administrator has the power to reconfigure network devices remotely, bypassing failed equipment to recover the network without an IT expert on the ground.
As natural disasters increase, organizations will need to rethink their strategies to ensure they’re able to weather the storm. By creating a strong disaster recovery plan, companies can be better prepared to react and resume business quickly. | <urn:uuid:549a7a2d-6b09-4247-9244-4bf8bd2be5b9> | CC-MAIN-2022-40 | https://www.missioncriticalmagazine.com/articles/92793-disaster-recovery-at-the-edge | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00420.warc.gz | en | 0.960427 | 475 | 2.515625 | 3 |
There’s a glimmer of good news amid the ever-evolving IT threat landscape – although it’s come about as a result of worrying illegal activity. Even though recent changes to data privacy laws have placed consumers in control of their personal information, the Federal Trade Commission (FTC) has found that some apps are, in fact, collecting data they don’t need. For example, tracking users when they’re not actively using the app and going against the permissions they have set.
As a result, and in a positive move for consumers, the FTC published a stern warning that it will take action against organizations using or sharing data illegally. Tony Pepper, Egress CEO, celebrates this promise, saying, “The FTC’s commitment to enforcing privacy laws across smart devices and apps is fantastic news for consumers, and any company found in violation can expect to face the consequences set out in the law they’ve broken, such as threat of injunction and financial penalties.”
The risks of data misuse
Some apps have not only been misusing personal data but have also been reidentifying individuals for financial gain. “For example, a health or fitness provider could use geo data combined with health app data to target specific individuals with local services or offers,” Pepper explains.
The examples of data we’ve highlighted so far – location and health – are two of the most sensitive types, according to the author of FTC’s Location, health, and other sensitive information article, Kristin Cohen. She highlights how ironic it is that many of us are sharing delicate information with unknown entities, completely unaware of the risks we’re creating for ourselves.
“The extent to which highly personal information that people choose not to disclose even to family, friends, or colleagues is actually shared with complete strangers,” Cohen says. “These strangers participate in the often shadowy ad tech and data broker ecosystem where companies have a profit motive to share data at an unprecedented scale and granularity. The marketplace for this information is opaque, and once a company has collected it, consumers often have no idea who has it or what’s being done with it.”
Cohen explains that misuse of location and health information “exposes consumers to significant harm,” including phishing scams and identity theft. Leaked location data can lead to stalking or robbery, and stolen health information can cause many other issues, including discrimination or stigma – especially if that information concerns reproductive health.
Sharing data can be incredibly risky for users, and the concern isn’t merely theoretical – the FTC has dealt with multiple cases of an app illegally using data first-hand. Recently, the FTC settled a case with the Flo Health app – a popular menstruation tracker – after alleging that it shared personal information with third parties despite promises of privacy.
And there are even more real-world examples where sensitive information has been leveraged illegally. “Besides financial gain, the misuse of consumers’ personal information has been used to influence elections (Cambridge Analytica), persuade the judicial system (protests at the homes of Supreme Court Justices), and impact health choices,” Pepper adds.
The FTC is on your side
This is why the FTC has promised to crack down on organizations breaking the rules, continuing the work it’s been doing behind the scenes to punish companies misusing information to protect consumers. “We will vigorously enforce the law if we uncover illegal conduct that exploits Americans’ location, health, or other sensitive data,” promises Cohen. “The FTC’s past enforcement actions provide a roadmap for firms seeking to comply with the law.”
Cohen’s advice is to bear in mind that sensitive data is protected by multiple federal and state laws, that claims of anonymity in data use can be deceptive, and, vitally, that the FTC won’t tolerate over-collection, indefinite retention, or misuse of consumer data. The FTC has already dealt with hundreds of cases for the sake of protecting individuals’ personal data, and some of these have led to substantial civil penalties.
Pepper, referring to the FTC’s commitment to the law, concludes: “The benefit to people is simple: consumer confidence that their data really is private and only ever used in ways they’ve consented to.” | <urn:uuid:f7507f6c-e91c-4770-b9f0-6c9bc5cdc8fc> | CC-MAIN-2022-40 | https://www.egress.com/blog/compliance/the-ftc-are-cracking-down-on-illegal-data-sharing | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00420.warc.gz | en | 0.93704 | 910 | 2.578125 | 3 |
Ransomware is considered one of the most dangerous kinds of malware. Companies are afraid of losing their data, so they try to do their best, spending a lot of money on security improvements. But what if I told you that any anti-malware program with constantly updated detection databases is able to deal with 95% of ransomware samples?
Such a fact goes against the common picture of ransomware attacks. Is that picture wrong? Not so much wrong as obsolete. The ransomware attack model really was close to what we imagined, as recorded in a great number of reports, articles, and other literature. But since 2019, the attack vector has changed sharply.
What’s new in ransomware?
Since its appearance in 2013 in the form we are used to, ransomware was distributed chaotically, without any strict targeting. Of course, ransomware developers could not maintain such a massive distribution campaign on their own, so there were plenty of darknet offers to become part of a ransomware injection scheme. Relying on "external" people in this scheme meant a lower operating margin (if we can call it that) and lower fault tolerance.
Company-specific attacks were first detected long before ransomware targeting became a mass phenomenon. But the separate ransomware families used only against companies (and likely never used to infect individuals) became really significant only around 2019. The distribution methods of this "new" type of malware are exactly the same as those other ransomware uses. But a significant difference appears after the injection: the ransomware may stay inactive for a long period of time, about 56 days on average. What do cybercriminals do during this time?
The answer is both simple and complex. In short, they make the infected corporate network much more accessible to other malware and to wider ransomware spreading. The detailed answer is best shown as a list:
- Inspecting the network to understand its weak spots and figure out which computers and servers, once encrypted, will be most harmful to the target company.
- Injecting additional malware that gives more control over the infected network and decreases the chance that antivirus programs will detect the ransomware.
- Collecting important data about the attacked corporation that may later be sold on the darknet – credentials, interim financial/operational reports, balance sheets, etc.
- Studying the backup mechanisms in use, so that specific changes can be made to the ransomware to make restoring from backup impossible.
A few more words about how security is degraded after the injection. The first infected computer can be used to spread the malware through the whole network. If that PC has no antivirus software on board and runs under an administrator account, ransomware operators can easily obtain the passwords for the whole network using hacking tools. Once such control is obtained, cybercriminals are able to disable the security tools on all other computers in the network, so no one is notified about the malicious items on board.
Ways of ransomware penetration
The mechanism of ransomware spreading through the network is quite clear, but what about its initial injection? As mentioned, corporation-targeted ransomware uses the same distribution methods as other ransomware families. The two most popular ways, established not so long ago, are email spam and dubious applications. And while the second method may raise suspicion among system administrators, email messages are usually treated as something legitimate.
Human error plays the key role in such attacks. Someone with an administrator account gets a message that looks like an internal company document: an invoice, a consignment note, an approval of operations, or the like. He or she will likely not ask anyone whether this message was really expected, and will just open it, injecting the ransomware.
There is also another method, one that needs no accomplice inside the company. Network vulnerabilities, together with weak passwords, can give cyber burglars the chance to do everything they need without any accidental help from inside the company. Ransomware distributors may simply scan your network for the remote desktop protocol, which is enormously vulnerable, and break in using brute force.
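As a purely defensive illustration, the minimal sketch below shows how easily an exposed remote desktop service can be found: it checks a range of addresses for an open TCP port 3389, the default RDP port. The subnet used here is a made-up example, and checks like this should only ever be run against networks you administer.

```python
import socket

RDP_PORT = 3389  # default Remote Desktop Protocol port

def rdp_exposed(host, timeout=1.0):
    """Return True if the host accepts TCP connections on the RDP port."""
    try:
        with socket.create_connection((host, RDP_PORT), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical internal subnet; substitute an address range you are authorized to audit.
for last_octet in range(1, 255):
    host = f"192.168.1.{last_octet}"
    if rdp_exposed(host):
        print(f"{host}: RDP port is reachable - enforce strong passwords, MFA, or close it")
```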
How to make your corporate network safe?
There is no universal answer to this question. The list of proper measures depends on dozens of factors that differ from one company to another. There are also many actions that are essential for cybersecurity but are not implemented properly, or not implemented at all.
The first thing to think about is reducing the possible damage of a malware attack. Dividing your corporate network into segments allows you to risk only part of your workstations and the information stored on them, instead of risking all machines simultaneously.
Then you need to deal with the considerable work of making your network less exploitable. The first and obvious step is closing every exploitable opening that may exist in your system. Remote Desktop Protocol, Windows Script Host, local user profiles, MS Office macros: all of these are actively used by attackers. Not all of them can be closed, but at the very least you can force the user to enter a password when launching the exploitable component, making that person think twice. You should also take care to install every security patch for your network as soon as possible, to be sure the network is not compromised through known holes in its configuration.
Minimizing the chance of human error is the last measure, and the least clear-cut. The lack of clarity comes from the human factor: a person may unintentionally reveal a password or other credentials while drunk in a bar on a Friday night. Hence, it is better to restrict user access to business-critical credentials and to make those credentials long and hard to remember, so that even a "spy" cannot carry away too much information. Antivirus software on every PC that touches the important data is also something to take care of. The lower the chance of a keylogger or stealer being launched, the lower the chance of a successful attack. | <urn:uuid:3109804d-196b-4e3a-a628-135f92e8db5b> | CC-MAIN-2022-40 | https://gridinsoft.com/blogs/ransomware-trends-2021/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00420.warc.gz | en | 0.952194 | 1,267 | 2.6875 | 3 |
In April, Xudong Zheng, a security enthusiast based in New York, found a flaw in the way some modern browsers handle domain names. While Chrome, Firefox, and Opera already had security measures in place to cue users when they might not be visiting the destination they believed was legitimate, at the time these browsers did not flag a fake domain name made up entirely of Latin look-alike characters taken from another language. Zheng demonstrated this when he created and registered a proof-of-concept (PoC) page for the domain аррӏе.com, which was written in pure Cyrillic characters.
What is a homograph attack?
A homograph attack is a method of deception wherein a threat actor leverages the similarities of character scripts to create and register phony versions of existing domains to fool users and lure them into visiting. This attack has some known aliases: homoglyph attack, script spoofing, and homograph domain name spoofing. Characters, i.e., letters and numbers, that look alike are called homoglyphs or homographs, hence the name of the attack. Examples are the Latin small letter O (U+006F) and the digit zero (U+0030). Hypothetically, one might register bl00mberg.com or g00gle.com and get away with it. But in this day and age, such simple character swaps are easily detected.
In an internationalized domain name (IDN) homograph attack, a threat actor creates and registers one or several fake domains using at least one look-alike character from a different language. Again, hypothetically, one might register gοοgle.com, but not before swapping the Latin small letter O (U+006F) with the Greek small letter Omicron (U+03BF).
Zheng's PoC is another example of an IDN homograph attack, so let's list each character he used to illustrate how this particular attack can be highly successful and dangerous if used in the wild. Interestingly, an operating system's typeface of choice can make it easy or difficult for users to visually differentiate non-Latin characters from Latin ones.
Table 1: We used Segoe UI, Microsoft's system-wide typeface, here. To the human eye, these Cyrillic glyphs can easily be confused with their Latin counterparts. Computers, however, read these confusables differently, as we can see from the different hex codes assigned to them.
Table 2: We used San Francisco, Apple's system-wide typeface, here. It's worth noting that OSX distinguishes the Cyrillic small letter Palochka from the Latin small letter L; however, it cannot show the difference between the Latin small letter L and the Latin capital letter I, as per the text "Cyrillic small letter Ie". According to this bug report, it seems that even the system-wide font for Linux doesn't distinguish confusable characters either.
The use of all-Cyrillic glyphs—or any other non-Latin characters for this matter—for domain names isn't the problem. IDN has made it possible for internet users around the globe to create and access domains using their native language scripts. The problem is when these glyphs are misused to deceive internet users.
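As a rough sketch of how such look-alikes can be caught programmatically, the following example (using only Python's standard library; the sample domains are illustrative) inspects each character of a domain and reports any that fall outside plain ASCII, together with its Unicode name. Real browsers and registrars apply far more elaborate confusable-detection rules than this.

```python
import unicodedata

def non_ascii_characters(domain):
    """Return (character, code point, Unicode name) for every non-ASCII character."""
    return [(ch, f"U+{ord(ch):04X}", unicodedata.name(ch, "UNKNOWN"))
            for ch in domain if ord(ch) > 127]

for domain in ["apple.com", "аррӏе.com"]:  # the second is built from Cyrillic look-alikes
    findings = non_ascii_characters(domain)
    if not findings:
        print(f"{domain!r} is plain ASCII")
    else:
        print(f"{domain!r} contains non-ASCII characters:")
        for ch, code, name in findings:
            print(f"  {ch}  {code}  {name}")
```

Tools that convert internationalized domain names to their ASCII-compatible Punycode form, and browser heuristics that display that form when scripts are mixed or unexpected, build on the same basic idea.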
Is this a new form of online threat?
Homograph attacks have been around for years. As far as we know, Zheng's PoC was the first of its kind to make headlines and spark a conversation among internet users.
Below are other examples of homographed domains and how they were used:
- To raise awareness, a security consultant highlighted the common misconception that sometimes a Latin capital letter I (U+0049) looks similar to a Latin small letter L (U+006C) by registering a fake Lloyds Bank website and adding an SSL certificate to it to make it look as legitimate as the real one.
- A security researcher from NTT Security shared his experience about a friend of his who received several Google Analytics spam containing the domain, secret[DOT]ɢoogle[DOT]com. The "ɢ" there wasn't the Latin capital letter G (U+0047) but a Latin letter small capital G (U+0262).
- A security researcher from NewSky Security found an impersonated Adobe website serving the Betabot malware, pretending to be an Adobe Flash Player installer file. The threat actor used the Latin small letter B with Dot below (U+1E05) to replace the Latin small letter B (U+0062) in "adobe.com".
How is this different from typosquatting?
Although typosquatting also uses visual tricks to deceive users, it relies heavily on users mistyping a URL in the address bar, hence the "typo" in its name.
Are all homograph attacks just phishing attacks?
Not necessarily. Although homograph attacks usually involve phishing, threat actors could create fake yet believable websites for other fraudulent purposes or to introduce malware onto user systems, as is the case of the bogus Adobe website we mentioned earlier.
In this in-depth report about IDN homograph attacks, our friends at Symantec have noted that several homographed domains they found were either part of a malvertising network, hosting exploit kits and malicious mobile apps, or generated by botnets.
How can we protect ourselves from homograph attacks?
Browser tools have been created, such as Punycode Alert and the Quero Toolbar, to alert users to potential homograph attacks. Users have the discretion of adopting them alongside the built-in security mechanisms in today's browsers. However, no tool can replace vigilance when browsing online and solid cybersecurity hygiene. This includes:
- Regularly updating your browser (They may be your first line of defense against homograph attacks)
- Confirming that the legitimate site you're on has an Extended Validation certificate (EVC)
- Avoid clicking links from emails, chat messages, and other publicly available content, most especially social media sites, without ensuring that the visible link is indeed the true destination.
- ICANN Statement on IDN Homograph Attacks and Request for Public Comment
- Unicode Security Considerations and Mechanisms
- The Homograph Attack [PDF] by Evgeniy Gabrilovich and Alex Gontmakher | <urn:uuid:8b82ae86-c820-4d03-8c46-8a5e0c5e6793> | CC-MAIN-2022-40 | https://www.malwarebytes.com/blog/news/2017/10/out-of-character-homograph-attacks-explained | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00420.warc.gz | en | 0.921976 | 1,338 | 3.21875 | 3 |
Jargon permeates the software development industry. Best practices. Artifacts. Scope Creep. Many of these terms are so common as to be called overused, and it is easy to assume we understand them because they seem so obvious. Still, we sometimes find new depth when we examine them closely. In this post, let us muse on the “Pattern,” and its somewhat lesser known counterpart, the “Anti-Pattern.”
We all know what patterns generally are in common language, but to understand their importance in software engineering it’s important to first discuss algorithms. An algorithm is simply a way of performing a common task, such as sorting a list of items, storing data for efficient retrieval, or counting occurrences of an item within a data set.
Many of these algorithms were catalogued decades ago in Donald Knuth's The Art of Computer Programming. The text would be nearly unrecognizable to a modern programmer, as it mainly emphasizes calculus-based proofs of its solutions and its only code examples are provided in obscure, outdated languages such as Algol or MIX Assembly. Despite this, much of what was covered is still used today: singly and doubly linked lists, trees, garbage collection, etc. The details are often buried in convenient libraries, but the concepts are the same. These algorithms have remained valid solutions to common software engineering problems for more than five decades and are still going strong.
A “pattern” can be considered a more general form of an algorithm. Where an algorithm might focus on a specific programming task, a pattern might consider challenges beyond that realm and into areas such as reducing defect rates, increasing maintainability of code, or allowing large teams to work more effectively together. Some common patterns include:
- Factories – An evolution of early object-oriented programming concepts that eliminated the need for the creator of an object to know everything about it ahead of time. A flowchart application might support extensible stencil libraries by focusing on creating and organizing “shapes,” allowing the stencils themselves to manage the details of creating a simple square vs. a complex network router icon.
- Pub/Sub – A mechanism for “decoupling” applications. Rather than having a sender directly send messages to a receiver, the sender “publishes” the messages to a topic or queue. One or more receivers can “subscribe” to receive those messages, and the message queue handles details such as transmission errors or resending messages. This simplifies both the sending and receiving applications.
- Public-key Cryptography – A mechanism by which two parties can communicate securely and without interception, yet without the need to pre-arrange an exchange of secret encryption keys. Each party maintains a pair of keys (public and private), and the public key can often be obtained as needed rather than exchanged in advance.
- Agile – A philosophy that encapsulates a set of guiding principles for software development that emphasize customer satisfaction, embrace the need for flexibility and collaboration, and promote the adoption of simple, sustainable development practices.
These are just four of the many common patterns in the industry, and even in this mix we can see how they range from highly technical to broader, more process-oriented points. Factories are a very code-oriented pattern, while pub/sub is more architectural in nature. And while public-key cryptography has broad implications, libraries to support its operations are available for nearly every programming language in common use today, making it generally straightforward to implement.
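To make the contrast concrete, here is a minimal, hypothetical sketch of the factory pattern applied to the flowchart-stencil example above; the class and shape names are invented for illustration, not taken from any real library.

```python
from typing import Callable, Dict

class ShapeFactory:
    """Creates shapes by name, so the flowchart code never needs to know the
    concrete classes that a stencil library registers."""

    def __init__(self) -> None:
        self._creators: Dict[str, Callable[[], object]] = {}

    def register(self, kind: str, creator: Callable[[], object]) -> None:
        self._creators[kind] = creator

    def create(self, kind: str) -> object:
        if kind not in self._creators:
            raise ValueError(f"Unknown shape type: {kind}")
        return self._creators[kind]()

class Square:
    def draw(self) -> None:
        print("drawing a simple square")

class NetworkRouterIcon:
    def draw(self) -> None:
        print("drawing a complex network router icon")

factory = ShapeFactory()
factory.register("square", Square)             # built-in stencil
factory.register("router", NetworkRouterIcon)  # added later by an extension stencil
factory.create("router").draw()
```

The application works purely in terms of registered names, so a new stencil library can add shapes without the calling code changing.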
At the other end of the spectrum, “Agile” remains somewhat elusive: simultaneously a rallying point and an instrument of divisiveness among developers, project managers, and other stakeholders about exactly what it means and how it should be implemented. It is a great example of an overused yet poorly understood term. Seeing the terms “Waterfall” or “Stand ups” in the same sentence as “Agile” is almost always an example of misuse. Agile is a philosophy, not a software development methodology, so it cannot be directly compared to Waterfall, nor does it directly spell out process components such as stand ups. (Those are a component of Scrum, a methodology that implements Agile principles, but does not represent Agile itself.)
Narrow or broad, technical or process-oriented, a good working knowledge of these patterns is an essential component in a technologist’s toolbox.
What is an Anti-Pattern?
If a “pattern” is simply a known-to-work solution to a common software engineering problem, wouldn’t an “anti-pattern” simply be the opposite? A non-Agile development methodology, or a tightly-coupled application?
Actually, anti-patterns do not just incorporate the concept of failure to do the right thing, they also include a set of choices that seem right at face value, but lead to trouble in the long run. Wikipedia defines the term “Anti-pattern” as follows:
“An anti-pattern is a common response to a recurring problem that is usually ineffective and risks being highly counterproductive.”
Note the reference to “a common response.” Anti-patterns are not occasional mistakes, they are common ones, and are nearly always followed with good intentions. As with regular patterns, anti-patterns can be broad or very specific, and when in the realms of programming languages and frameworks, there may be literally hundreds to consider. Here are just a few of this author’s high-level, personal favorites:
Whiteboard programming challenges in software interviews
David Hansson, creator of Ruby on Rails and the Founder and CTO of Basecamp, once tweeted “Hello, my name is David. I would fail to write bubble sort on a whiteboard. I look code up on the internet all the time. I don’t do riddles.” The anti-pattern here is evaluating the wrong metrics during an interview, such as where a typical task assignment will be “Add zip code lookup during registration” but interview questions sound like “Sort this array in pseudocode using functional programming concepts.”
Remember the “good intentions” aspect of anti-patterns? It seems as if we are testing the candidate on a valuable principle: knowledge of fundamentals. However, programming is often a ruthlessly pragmatic practice, and this focus on theoretical knowledge over practical skills and experience might cause us to choose a candidate that meets our cultural ideals, but lacks the actual skills required to be successful in the position.
Put another way: if StackOverflow will be a regular resource used by the developer in the position, it should be available (and used) during the interview. Homework assignments and pair programming challenges may also be worth exploring.
All patterns and anti-patterns have valid exceptions. A developer whose job will be to make libraries of algorithms for others to use may very well need to know the Calculus behind a mechanism. The error here is applying this expectation universally, even to developers who will not be doing so.
In philosophical contexts, Moral Hazard is the separation of individuals from the consequences of their decisions. This sounds like an obvious behavior to avoid, but this anti-pattern is the root cause of many SDLC inefficiencies.
Consider the traditional QA process, in which “tickets” are addressed by developers, then passed to QA for review before being deployed. There are two problems here. First, staffing ratios are almost never “1 developer to 1 QA analyst,” and even a handful of developers can easily exceed the capacity of the QA team. Second, this insulates developers from the consequences of their mistakes by making it another individual’s responsibility to find them before they are released – a moral hazard.
The effects of this anti-pattern can be subtle: if the QA team is effective, it may not directly lead to lower quality output. It is more likely to show up in other areas such as complaints about estimation accuracy and missed targets. Quality and estimation accuracy suffer because developers instinctively focus on “getting things through QA” rather than shipping high quality software. Even with a modest defect rate of 20-30% (a number which even might be optimistic in many organizations), the churn this produces can significantly impact team productivity.
Additional anti-patterns often arise in the attempt to solve the problem. In Scrum, it may be tempting to make sprints longer or hold them open. But a sprint is meant to be a measure of time, not a measure of output. This act reverses that nature, which destroys the value of other tools such as “velocity” metrics that are based upon it. It is also common to see longer sprint planning or pre-planning meetings to more deeply review tickets. But this attempts to convert an instinctive process into a scientific one, forgetting that the purpose for implementing a methodology like Scrum was to acknowledge this impossibility in the first place.
Two patterns that are often effective at resolving this issue include:
- Embracing a culture of continuous improvement: “ship it when it’s better, not when it’s right.” (Also see “Polishing the Cannonball” below). Developers encouraged and empowered to do this can make better decisions about how they address their tasks, and also experience a more tangible sense of personal accomplishment.
- Make developers responsible for their work product all the way through to Production deployments. Facebook, Google, and other industry titans have all reported success with this approach.
Polishing the Cannonball
Sometimes also known as “gold plating” or “boiling the ocean,” trying to ship perfect products often significantly increases project timelines and costs without actually increasing the value delivered. A closely related anti-pattern is the “zombie ticket,” the plaque on the arterial walls of the Backlog. Zombie tickets are never a high enough priority to get cleaned out, but are never closed for fear of losing the documentary record of the task.
The problem with both habits is that the metrics that support them are phantoms. Unshipped features have zero value to customers, and tasks that do not cause enough pain to become priorities may never be worth addressing. It is almost always better to focus available resources on regularly delivering new, valuable features rather than on constantly looking backward on small issues that affect very few users.
The “pattern” counterpart here is the minimally viable product (MVP), which often ends up being a bit of a phantom itself. (MVPs are almost never as small as planned or hoped for.) However, the act of attempting to ship an MVP is itself often an antidote to the problems listed above, so even if some slippage does occur it is still worth the effort. Iterative development processes also address this by emphasizing regular, predictable delivery of incremental value, reinforced by feedback from actual end users.
There are enough patterns and anti-patterns in the industry to fill books, and indeed many have been written about them. In the end, it is usually not necessary to memorize lists of them, although developers specializing in certain languages or frameworks should be encouraged to research those specifically targeted at those areas.
If you are interested in learning about more patterns and anti-patterns, I have found these resources to be valuable in my own reading:
- Wikipedia’s “Software Design Patterns” and “Anti-Patterns” pages provide good examples of high level topics.
- SourceMaking’s “Design Patterns” and “AntiPatterns” contain useful specifics about general software engineering tasks.
- Enough Rope to Shoot Yourself in the Foot: Rules for C and C++ Programming, Alan Holub, McGraw Hill, 1995. Although this book is focused on C and C++, most of the rules it covers still apply to nearly any modern language.
In addition, nearly every language or framework has dozens of resources available if you simply search for “NodeJS Patterns” or similar in any search engine. I would encourage every developer to do this for their particular fields: even when we think we know good vs. bad practices, it can sometimes be surprising when we find ourselves following an anti-pattern – always with the best of intentions, of course! | <urn:uuid:4e2e258c-9c52-4153-8d78-45a5b073d08d> | CC-MAIN-2022-40 | https://www.bmc.com/blogs/anti-patterns-vs-patterns/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00420.warc.gz | en | 0.946007 | 2,559 | 3.25 | 3 |
Fog Helps Make Remote Decisions Quickly
Cloud computing still generates confusion, with some workers certain this distributed resource resides in the sky. Despite its inaccuracy, the comparison is useful for energy sector modeling: The relative distance of large-scale cloud networks to offshore rigs or remote power plants puts them similarly out of reach.
For companies to capture and process data on demand, they need a way to keep compute tasks closer to home. One solution is edge computing, which effectively offloads small tasks onto Internet of Things devices, allowing them to selectively send or discard data points based on preprogrammed rules. Fog computing moves the focus outside devices but grounds it near IoT networks, providing a platform to aggregate data from multiple devices and then make intelligent decisions.
For example, if connected sensors report a consistent increase to power plant core temperature, fog networks can direct safety measures to engage and then send a report back to cloud servers to prompt further action. Without fog, decision time is slowed — and in the case of failing equipment or emergent environmental concerns, this could have disastrous effects.
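As a purely illustrative sketch (the threshold, sensor names, and reporting function are hypothetical), a fog node's decision logic might look something like this: aggregate recent readings from several sensors, act locally the moment a trend crosses a limit, and only then report a summary upstream to the cloud.

```python
from statistics import mean

CORE_TEMP_LIMIT_C = 600.0  # hypothetical safety threshold in degrees Celsius

def engage_safety_measures():
    print("Engaging local safety measures immediately")

def report_to_cloud(summary):
    # Placeholder: in practice this would queue a message to the central cloud service.
    print(f"Reporting to cloud: {summary}")

def evaluate(readings_by_sensor):
    """readings_by_sensor maps a sensor ID to its most recent temperature samples."""
    averages = {sensor: mean(samples) for sensor, samples in readings_by_sensor.items()}
    if any(avg > CORE_TEMP_LIMIT_C for avg in averages.values()):
        engage_safety_measures()  # local decision, no round trip to the cloud required
        report_to_cloud({"event": "core_temp_exceeded", "averages": averages})
    # Otherwise only periodic summaries need to leave the site.

evaluate({"core-sensor-1": [590.2, 601.7, 604.3],
          "core-sensor-2": [595.0, 599.8, 602.1]})
```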
Delivering Data in the Growing ‘Internet of Types’
The term “fog computing” was originally coined by Cisco. The company recognized that as growing numbers of IoT devices created “an unprecedented volume and variety of data,” the delay between acquisition and action meant “the opportunity to act on it might be gone.”
This becomes especially problematic as the number of connected devices skyrockets; already, these devices generate more than 2 exabytes of data per day. Add in the expanding range of device "types", everything from connected cameras and wireless access points to machine controllers and temperature sensors, and it becomes critical for energy companies to both aggregate and analyze disparate data at speed.
And despite its ability to handle large data volumes at scale, the cloud simply can’t keep up. Tackling the Internet of Things to deliver reliable, real-time information demands a middle ground.
The Cost Benefit of Fog
Along with the ability to analyze data onsite and increase decision-making speed, fog solutions also offer benefits in scale and scope.
Fog computing offers substantive cost benefits over typical clouds when it comes to data transmission, computing, data storage and power consumption at scale. And according to work from the Department of Information Engineering and Telecommunication at the Sapienza University of Rome, Big Data analytics “can be done faster and with better results” across geographically diverse networks by using fog computing. In addition, fog computing naturally supports heterogeneous devices, allowing companies to unify disparate sensor and monitoring data at speed.
Leveraging these advantages is critical to realizing the long-term benefits of fog computing: SCADA networks, mobile endpoints and storage resources, all connected to a middle-ground fog network capable of both taking action on demand and facilitating the downstream benefits of Industry 4.0 in the cloud for energy organizations. | <urn:uuid:605888d8-c5b0-42e9-ab79-f01235a5ea54> | CC-MAIN-2022-40 | https://biztechmagazine.com/article/2019/10/how-fog-computing-can-optimize-energy-sector | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00620.warc.gz | en | 0.920907 | 603 | 2.984375 | 3 |
Primary school students are more likely to eat a nutritional breakfast when given 10 extra minutes to do so, according to a new study by researchers at Virginia Tech and Georgia Southern University.
The study, which is the first of its kind to analyze school breakfast programs, evaluated how students change their breakfast consumption when given extra time to eat in a school cafeteria.
The study also compared results of these cafeteria breakfasts to results of serving in-classroom breakfasts to the same group of students.
“It’s by far the most sophisticated, accurate measurement of school breakfast intake ever done,” said Klaus Moeltner, a professor of agricultural and applied economics in the Virginia Tech College of Agriculture and Life Sciences. “We know exactly how much the students consumed and how much time they had to consume it.”
Using food weighting stations developed by co-author Karen Spears of Georgia Southern University, the researchers collected information on the number of students who ate a school breakfast, how much they ate, and their exact nutritional intake.
The findings, recently published in the American Journal of Agricultural Economics, revealed that the number of school breakfasts consumed increased by 20 percent when students were given 10 extra minutes to eat in the cafeteria, and an additional 35-45 percent when breakfasts were served inside classrooms, bringing the overall rate of breakfast consumption close to 100 percent.
“The percent of students that go without breakfast because they didn’t eat at home and they didn’t have time to eat at school goes from 4 to 0 percent when given 10 minutes more to eat, so the most vulnerable segment is taken care of,” said Moeltner.
And while the results suggest that more students eat breakfast when it is served inside classrooms, the researchers acknowledge the extra costs associated with in-classroom breakfasts.
“When you move breakfast into the classroom, you have to serve all the students for free, and the associated costs needed to feed all the students must be covered by low income subsidies,” said Moeltner.
“But many schools don’t have a large enough proportion of subsidized students and therefore cannot afford to serve in-classroom breakfasts because they lack the subsidies to offset the costs.”
Thus, the findings have significant implications for schools that cannot afford classroom breakfasts, but could allow more time for cafeteria breakfasts.
The study also provided additional insights into student breakfast consumption habits.
Third- and fourth-grade students from the three Reno-area schools that participated in the study were given wristbands as they arrived on campus that tracked their arrival time as well as individual consumption and nutrition data.
In addition, students completed a daily questionnaire to gain further insight into whether they ate breakfast at home, how hungry they were upon arrival at school, which transportation method they used to get to school, and whether they liked any of the food offered.
Analysis of the data showed that the transportation method used to get to school did not impact whether or not students ate breakfast, and that students did not overeat because of the extra time provided.
“Our results show that there’s no change in average consumption, which is reassuring,” said Moeltner. “Kids aren’t overeating because of the extra time.
Instead, they’re substituting—if they used to eat breakfast at home, now they eat it at school.”
The researchers are now analyzing the breakfast waste data collected during the study with hopes of publishing further research on the topic.
With the rich data provided by the study, researchers can examine many school breakfast questions. For now, the results on breakfast consumption when given extra time are clear, and researchers advise educational institutions and policymakers to consider implementing additional time for school breakfasts.
More information: Klaus Moeltner et al, Breakfast at School: a First Look at the Role of Time and Location for Participation and Nutritional Intake, American Journal of Agricultural Economics (2018). DOI: 10.1093/ajae/aay048
Provided by: Virginia Tech | <urn:uuid:c9c39ee4-9f6b-4a16-a5ed-a9a8a7751a48> | CC-MAIN-2022-40 | https://debuglies.com/2018/08/18/primary-school-students-are-more-likely-to-eat-a-nutritional-breakfast-when-given-10-extra-minutes-to-do-so/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00620.warc.gz | en | 0.966504 | 854 | 2.890625 | 3 |
ITEM: Three years ago, Google’s Sycamore quantum computer solved a problem no ordinary computer could do. Now an ordinary computer has solved it via a clever new algorithm.
In 2019, Sycamore performed a task that involves verifying that a sample of numbers output by a quantum circuit has a truly random distribution. Google claimed that the world's most powerful supercomputer at the time, IBM's Summit, would take 10,000 years to solve this particular task; Sycamore did it in 3 minutes and 20 seconds.
Google declared that Sycamore had achieved "quantum supremacy" by doing something that only a quantum computer could do. However, a team at the Chinese Academy of Sciences in Beijing says they've created an algorithm that allows a non-quantum computer to solve the same task.
New Scientist reports:
The researchers found that they could skip some of the calculations without affecting the final output, which dramatically reduces the computational requirements compared with the previous best algorithms.
The researchers ran their algorithm on a cluster of 512 GPUs (graphics processing units), completing the task in around 15 hours.
That’s not quite as fast as Sycamore. But it’s a lot shorter than the 10,000 years Google claimed it would take a cutting-edge supercomputer to do. The researchers noted that the algorithm could beat Sycamore’s time by running it on an exascale computer. That said, such computers are rare, and require a lot of performance overhead.
Quantum computing will win eventually
While the results are impressive on paper, they don't spell doom for Sycamore or quantum computing in general. One expert notes that the algorithm pits modern exascale computing against quantum computer technology from three years ago.
Quantum computing has made considerable progress since then – and Sergio Boixo, principal scientist at Google Quantum AI, pointed out in a statement that quantum technology “improves exponentially faster” than classical computing:
“… So we don’t think this classical approach can keep up with quantum circuits in 2022 and beyond, despite significant improvements in the last few years.”
Research leader Pan Zhang agrees that classical computers are unlikely to keep pace with quantum machines for certain tasks, according to New Scientist:
“Eventually quantum computers will display overwhelming advantages over classical computing in solving specific problems,” he says.
Related article: Quantum computing is further away than marketing hype makes it look | <urn:uuid:539d71e8-d4af-412f-9fe7-4ffa76c61aa3> | CC-MAIN-2022-40 | https://disruptive.asia/googles-quantum-computer-skunked-by-ordinary-algorithm/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00620.warc.gz | en | 0.929729 | 521 | 2.890625 | 3 |
Nowadays, with the rapid development of network technology, there is growing demand for a technology that can carry data and supply power at the same time. Power over Ethernet switches answer that need with flexibility and reliability, and they have been deployed in many applications to keep networks running at peak utilization. So what is a power over Ethernet switch and how does it work? This post will give you the answer.
What Is a Power Over Ethernet Switch and How Does It Work?
A power over Ethernet switch, also called a PoE switch, is a type of network switch or hub that not only transmits network data but also supplies power to connected devices over a single Ethernet cable, which greatly simplifies cabling and cuts costs. PoE technology is used to power many kinds of devices, such as IP cameras, wireless access points and voice over IP (VoIP) phones.
When a PoE switch is connected to a PoE-capable device, it automatically detects that the device can accept power. The power can be injected onto the cable's spare wire pairs, with each pair treated as a single conductor; alternatively, it can be carried on the data pairs by applying a common-mode voltage to each pair. Because twisted-pair Ethernet uses differential signaling, this voltage doesn't interfere with the data transmission.
Figure 1: How Does a Power over Ethernet Switch Work?
Common Power Over Ethernet Types
According to the number of ports, power over Ethernet switches can be grouped into three common types: 8-port, 24-port and 48-port. The types differ in switching capacity, price and other aspects. For example, different PoE switches from FS have different switching capacities and prices (Figure 2). Different types of PoE switches also suit different applications.
Figure 2: Comparison of Different PoE Switches from FS
Confusing Questions About Power Over Ethernet Switch
How does a PoE switch differ from a normal switch? They differ in reliability, function, cost and management. A normal switch only transmits data, while a power over Ethernet switch transmits data and supplies power. Devices connected to a PoE switch need no separate power wiring, which saves costs and simplifies the whole network management.
What is the difference between PoE and PoE+? Firstly, the Institute of Electrical and Electronics Engineers (IEEE) standards themselves differ: PoE is 802.3af, while PoE+ is 802.3at. 802.3af can deliver up to 15.4 W per port (roughly 12.95 W available at the powered device), while 802.3at can deliver up to 30 W per port (roughly 25.5 W at the device). Secondly, the maximum current of PoE is 350 mA, while PoE+ allows up to 600 mA. A short calculation after these questions shows where the per-port wattage figures come from.
Can I connect a non-PoE device to a PoE switch? Yes, you can. Standards-compliant PoE switches have auto-sensing PoE ports, meaning the port detects whether the connected device can accept power and supplies none if it can't. It is still important to check whether your PoE devices support 802.3af or 802.3at, because non-standard (passive) PoE switches lack auto-sensing ports and are more likely to damage a non-PoE network port.
Do I need special cabling for PoE? No: nearly all Ethernet cables support PoE. PoE will work over existing cable, including Category 3, 5, 5e or 6.
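As promised above, here is a rough sketch of where those per-port figures come from and of how a switch's shared PoE power budget can be checked. The voltages and currents follow the stated standard limits, while the device wattages and the 130 W budget are illustrative assumptions rather than the specifications of any particular switch.

```python
# Rough PoE power-budget sketch; the figures are illustrative, not vendor specs.
PSE_MIN_VOLTAGE_AF = 44.0      # volts, 802.3af lower bound at the switch port
PSE_MIN_VOLTAGE_AT = 50.0      # volts, 802.3at lower bound
MAX_CURRENT_AF = 0.350         # amps (PoE)
MAX_CURRENT_AT = 0.600         # amps (PoE+)

print(PSE_MIN_VOLTAGE_AF * MAX_CURRENT_AF)   # 15.4 W per port for 802.3af
print(PSE_MIN_VOLTAGE_AT * MAX_CURRENT_AT)   # 30.0 W per port for 802.3at

def fits_budget(device_watts, total_budget_watts):
    """True if the listed devices fit within the switch's shared PoE budget."""
    return sum(device_watts) <= total_budget_watts

cameras = [6.5] * 8          # eight IP cameras at ~6.5 W each (assumed)
access_points = [12.0] * 2   # two wireless APs at ~12 W each (assumed)
print(fits_budget(cameras + access_points, total_budget_watts=130))  # True: 76 W <= 130 W
```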
There is no doubt that power over Ethernet switches significantly improve the efficiency of network devices. After reading this post, you should have a general idea of PoE switches. FS provides different types of PoE switches for Ethernet PoE power supply and data communication. For more information, just reach us via [email protected]. | <urn:uuid:7c68e709-4b50-46cc-bf9a-9ab4f6e29f22> | CC-MAIN-2022-40 | https://www.fiber-optic-components.com/tag/power-over-ethernet-switch | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00620.warc.gz | en | 0.901726 | 763 | 3.40625 | 3 |
Indonesia could be described as simply a huge collection of volcanoes, many of which regularly erupt, many of which are extremely (symmetrically) beautiful, aka – must-see!
Some sources say there are around 300 volcanoes, some – around 400, others – around 500! That’s quite a margin of error! But it’s to be expected: they’re difficult to count. Example: a volcanic mass with three or four distinct conical peaks: does that count as one, or three/four?
‘Active’ volcanoes are counted separately, but again there are differences in the totals as there’s no fixed definition of an active volcano. Anyway, in Indonesia there are around 75 to 130, depending on the source on the internet you look up.
Whichever total you take, there’s no denying Indonesia is one seriously volcanic country. But then, of course it is: Indonesia is a segment of the islands (and peninsula) that help make up the Ring of Fire (together with its volcanic siblings like Japan, Kamchatka, the Kurils, New Zealand, etc.)
But enough of volcanic theory; time for some actual volcanic experience. All righty: in at the deep end – Mount Merapi: the most active volcano of Indonesia…
Those aren’t clouds; they’re fumarolic emissions. Clearly a very active volcano, without a doubt.
The beaut in the following pic isn’t Merapi, it’s the next-door volcano:
No matter what direction you look in here it’s other-worldly, enchantingly volcanic:
While taking these pics I was reminded of my 12 reasons volcanoes are way better than mere mountains. Ok: let’s go through the list and find out using Merapi as an example…
Yep, it’s beautiful. No one’s disputing that.
Crater at the top – check (a fresh one at that). Awesome panoramic views all around looking outward, plus the bonus when looking inward too.
Celestial observations. Though I didn’t see any observatories on Merapi, I did see these volcanological instruments:
Volcanoes = riots of color. No disputing that either!
Volcanoes are often warm. Merapi is no exception. We made it up to the summit (~3000 meters above sea level) at dawn and, though nippy, at least the thermometer showed a ‘+’ temperature. In some places the rock underneath our feet wasn’t just warm, it was piping hot! Hollows in the rock made nice volcanic body-warmers to take a breather in!
Volcanoes erupt. Merapi erupts. Merapi is the most active volcano in Indonesia, and one of the most destructive. It has also taken thousands of lives throughout its history.
“Typically, small eruptions occur every two to three years, and larger ones every 10–15 years or so. Notable eruptions, often causing many deaths, have occurred in 1006, 1786, 1822, 1872, and 1930. Thirteen villages were destroyed in the latter one, with 1400 people killed by pyroclastic flows.
“The very large eruption in 1006 is claimed to have covered all of central Java with ash. The volcanic devastation is claimed to have led to the collapse of the Hindu Kingdom of Mataram; however, the evidence from that era is insufficient for this to be substantiated.” – Wikipedia.
The last time Merapi erupted was in October-November of 2010: 353 people died, mostly from pyroclastic flows. These eruptions took 40 meters off the top of the volcano, and left a lop-sided crater up there – somewhat resembling the crater of Mount St. Helens, which I visited in August 2013, and Bezymianny on Kamchatka. A new, smaller cone is growing up inside the lop-sided crater, which sooner or later will also blow its top.
The cone is constantly monitored by volcanologists. Btw, eruption forecasts are pretty accurate; they're based on the monitoring of mini-earthquakes, which accompany lava rising toward the surface.
Btw, the 2010 eruption was predicted in good time, and 350,000 local inhabitants were evacuated. However, some decided to return – only to be killed by the pyroclastic flows. These deadly flows btw look like this:
That was then; this is now:
And up we climb it…
That’s T.G., V.K., I.K., D.K., O.R., A.I., S.S., and your humble servant E.K. – the intrepid Indonesian itinerants!
At half-past midnight we awoke, and 30 minutes later we were already on our way to conquer Merapi. Btw, practically all our volcano climbing took place at night. And there's a good reason for that.
First, at night it isn’t so hot. Ok, so two kilometers up from the sea it’s hardly going to be scorching, but still: cool is cool ); then, three kilometers up the temperature gets down to around 10°C and warm clothes are needed (see the photo above at the peak). It’s the first leg of a volcano-scaling trek down in the stiflingly hot jungle that makes a night-ascent the way to go.
Another thing: at night it’s not so horribly humid. Mercifully, at night in this part of the world the skies are mostly clear – crystal clear so all the hot air from the day has somewhere to escape of a night. (These clear night skies also mean every night is a very starry one: oh my galaxy-gawp!) By day, come noon the peaks of volcanoes are normally shrouded in cloud, and after lunch it’s normally raining, so that’s two more H2O-based reasons not to climb volcanoes by day: you may never see much because of the cloud and then you’ll get drenched by the daily equatorial tropical rainstorm.
So we had to force ourselves to observe strict scheduling for our climbs: a light dinner at 7-8pm, then bed. Four hours later up and off. Sounds harsh, but you quickly get used to the new rhythm.
Night-ascents have another bonus: you can’t see signs such as these along the paths:
Technically, practically anyone can clamber up this volcano – even someone not in great shape. It’s a mere 2930 meters high – not very tall at all. Besides, it’s well signposted and the paths going up it are all good. All the same, I wouldn’t call it a doddle for all: office workers who take little exercise will be huffing and puffing and generally struggling with all the physical exertion when the paths become steep. Btw, besides a torch and trekking sticks it’s a good idea to take some gloves that you don’t mind messing up while clambering up the rockiest steepest stretches on all fours!
Next, a short stay of a few hours in the hotel in New Selo village at 1800 meters above sea level, then a car drops us off some 3km from the start of the volcano’s path, and that’s where the trekking starts. The walk is 3.5km long, rising some 1200 meters vertically. This takes around four or five hours taking it steady – in time for the sunrise around 5.30am.
3.5km in 4-5 hours? Yes – you can tell it’s going to get steep and the going’s going to get tough. Indeed, many a pair of untrained legs among our group was ready to just give way by the time we reached the top.
The ascent starts nice and easy with this here nice concrete path:
Later: no concrete, loose rocks, and clambering up lava!…
We pause to catch our breath at Gerbang Tnom, then it’s onward up to Pos I, then Pos II, and on like that up to the peak. The signs are very informative: they show the distances, the altitudes and the approximate walking times:
It goes without saying that all these pics of mine were taken on the descent; going up there was nothing to see for the pitch black night.
The last stretch is the trickiest: loose rubble underfoot; clambering – not walking. And already there are folks up there waiting for the sunrise. We too made it in time.
Going back down was a lot more fun: tons easier and with outstanding views.
Three photos of the hotel we stayed at in New Selo:
Merapi will probably erupt again soon and spew out its ash in all directions, and the shape of the crater and the lava slopes will be completely different next time…
That’s your lot from Merapi folks. Back with more Indonesian-isms tomorrow… | <urn:uuid:81a282d7-3b5a-4454-9539-2764d7f0b19f> | CC-MAIN-2022-40 | https://eugene.kaspersky.com/2018/01/25/1-volcano-climb-merapi/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00620.warc.gz | en | 0.946027 | 1,958 | 2.546875 | 3 |
Voice over IP (VoIP) is a technology that allows you to transmit voice over an IP network. This enables both telephony and data to be transmitted over the same packet-based network infrastructure.
There are two fundamental protocols used to transmit the signaling that is necessary to make IP telephony operate correctly. These protocols are Session Initiation Protocol (SIP) and H.323.
These are two very different protocols that emerged from very different beginnings. In this article, we’ll examine them more closely to understand how they operate and where they are typically employed.
So, let’s discuss and compare SIP vs H.323, starting with a high-level comparison table:
H.323 vs SIP – Comparison Table
The following table compares the H.323 and SIP protocols:
| Feature | SIP | H.323 |
| --- | --- | --- |
| Origins | IETF, Internet-based | ITU-T, based on ISDN |
| Control systems | SIP server/IP PBX, mandatory | Gatekeeper, optional, but necessary for larger deployments |
| Endpoint addressing | Uses SIP URLs | Uses aliases mapped by Gatekeepers |
| Natively compatible with IP and the Internet | Yes | No |
| Design | Modular, flexible | Monolithic, inflexible |
| Additional features | Instant messaging, presence | None provided beyond voice and video |
| Interoperability with traditional telephony | Not readily interoperable, but can be implemented with appropriate adaptors/voice gateways | Backward compatible |
VoIP, IP Telephony, and Signaling
In order to understand SIP and H.323, as well as their differences and operation, let’s take a brief look at what VoIP is and what role it plays in IP telephony.
VoIP is a set of technologies and methodologies that digitize and packetize voice at the source device and prepare it to be sent over the network in the form of IP packets.
These packets are received and reassembled at the destination device and the original voice is reproduced and heard by the receiver.
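To put rough numbers on that packetization, consider a typical G.711 call: the codec generates 64 kbps of audio, commonly sent as a 160-byte payload every 20 ms, with RTP, UDP and IP headers added to each packet. The quick back-of-the-envelope sketch below (which ignores layer-2 framing; the 20 ms interval is a common default, not a requirement) shows why such a call consumes roughly 80 kbps per direction on an IP network.

```python
# Back-of-the-envelope bandwidth for one G.711 voice stream (one direction).
payload_bytes = 160          # 20 ms of G.711 audio at 64 kbps
rtp, udp, ipv4 = 12, 8, 20   # header sizes in bytes (no layer-2 framing counted)
packets_per_second = 1000 / 20

packet_bytes = payload_bytes + rtp + udp + ipv4
bps = packet_bytes * 8 * packets_per_second
print(f"{bps / 1000:.0f} kbps")   # ~80 kbps per direction, versus 64 kbps of raw audio
```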
IP telephony leverages VoIP to enable users to call each other using the well-known process of picking up the handset of a phone and dialing a number.
Like traditional telephony, IP telephony requires signaling mechanisms. Signaling is involved in initiating, maintaining, modifying, and terminating a call.
When you pick up the handset of a phone, the phone goes off-hook, which sends a signal. Dialing a number sends signaling to the server responsible for routing telephone calls.
Functions such as making a phone ring, hearing ringtone, dialing a number, implementing call display, call waiting, call hold, and various other advanced telephony features, all use signaling to successfully operate.
Signaling is achieved in conventional telephony using a physically separate data channel. For VoIP and IP telephony, signaling is achieved using protocols such as SIP and H.323 that create communication sessions between devices that are separate and distinct from the actual exchange of voice packets.
Session Initiation Protocol (SIP)
SIP was conceived quite early on in 1996 and by 1999 it was published as a standard by the Internet Engineering Task Force (IETF) in RFC 2543.
The goal of its founding developers and of the standardization organization that has adopted it since has been to provide a signaling and call setup protocol for IP-based communications that can mimic and reproduce the call processing functions and features of the Public Switched Telephone Network (PSTN).
At the same time, SIP was designed to be extendable to support additional multimedia services such as video conferencing, and media streaming as well as specialized functionalities that include presence, instant messaging, file transfer, fax over IP, and even online gaming.
Born out of the Internet
Unlike other telephony protocols, SIP is lauded by its proponents for having roots in the Internet community rather than the telephony industry.
This is demonstrated by the fact that SIP has been standardized by the IETF whereas other voice protocols such as H.323 and ISDN have been traditionally associated with the International Telecommunications Union (ITU).
SIP, as its name suggests, is involved in the control mechanisms related to the initiation and termination of sessions needed to allow voice and video applications to function.
It defines the format of the control messages (and once again, not the voice packets) transmitted between participants in a media exchange.
Call setup, call teardown, and Dual Tone Multi-Frequency (DTMF) signals are just some of the call control messages that SIP transmits.
These are features that have been employed in traditional telephony for decades and that SIP essentially duplicates within the VoIP domain. SIP was designed to mimic the functionality of the PSTN and conventional PBXs to avoid the need to retrain users when moving from conventional to IP telephony.
The goal was to allow a user to use a SIP-enabled telephone without any change in the tones, functionality, and general feel of the calling experience that users have become so familiar with over the years.
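Under the hood, that familiar experience is driven by plain-text signaling messages. As a rough illustration, the following sketch sends a SIP OPTIONS request (a simple "are you there, and what do you support?" probe) to a SIP server over UDP. The host names, IP addresses, tags and Call-ID are made-up placeholders, and a real client would normally use a proper SIP stack rather than hand-assembled strings.

```python
import socket

# Minimal, illustrative SIP OPTIONS request. All addresses and identifiers are placeholders.
local_ip, local_port = "192.0.2.10", 5060        # would need to be an address this host owns
server = ("sip.example.com", 5060)               # placeholder SIP server

request = (
    "OPTIONS sip:sip.example.com SIP/2.0\r\n"
    f"Via: SIP/2.0/UDP {local_ip}:{local_port};branch=z9hG4bK776asdhds\r\n"
    "Max-Forwards: 70\r\n"
    f"From: <sip:probe@{local_ip}>;tag=49583\r\n"
    "To: <sip:sip.example.com>\r\n"
    f"Call-ID: 845623@{local_ip}\r\n"
    "CSeq: 1 OPTIONS\r\n"
    f"Contact: <sip:probe@{local_ip}:{local_port}>\r\n"
    "Content-Length: 0\r\n\r\n"
)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind((local_ip, local_port))
sock.settimeout(2.0)
sock.sendto(request.encode(), server)
try:
    reply, _ = sock.recvfrom(4096)   # a live server typically answers 200 OK with an Allow: header
    print(reply.decode(errors="replace").splitlines()[0])
except socket.timeout:
    print("no SIP response")
```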
An advanced protocol
Even so, SIP was designed to be modular and flexible, to be able to deliver much more than what the traditional PSTN offered.
This modularity allows SIP to continually be developed to incorporate advanced features and functionalities that take advantage of the IP infrastructure upon which SIP is based.
VoIP systems based on SIP can easily expand VoIP network services by adding video and mobile users to their existing infrastructure with very little intervention into the existing system.
This increases the options an organization is provided with, as the addition of features is often implemented by simply obtaining a license, a software package, or a system server, depending on the type of feature in question.
H.323 VoIP Protocol Suite
The most popular alternative signaling protocol to SIP is H.323, which was developed by the ITU Telecommunication Standardization Sector (ITU-T).
Although it can be used for strictly voice conversations, it is most often applied today in video conferencing equipment and leverages the Q.931 standard, the call-signaling protocol used on legacy ISDN circuits.
The original intention was to allow video conferencing systems to use ISDN infrastructure which seemed to be a very promising technology at the time, although it has since been adapted to run over IP networks as well.
H.323 is more properly referred to as a system specification or standard that includes various protocols providing multiple services. These protocols are further described below:
H.225.0 Call Signaling – This is essentially SIP’s counterpart as it is the fundamental signaling protocol in the H.323 suite.
H.245 Control protocol – This protocol standardizes the methodology of the exchange of capability information between endpoints and opens and closes logical channels for voice and video.
H.225.0 Registration, Admission and Status (RAS) – This feature is unique to H.323 in that it doesn’t have a counterpart in a SIP environment. Where SIP is highly flat in its architecture, the H.225.0 RAS protocol provides a hierarchical structure to call signaling, with a device called a Gatekeeper at its center. This is especially useful for large enterprises with multiple campuses, locations, and branch offices requiring a centralized and interconnected telephony network.
As all communications systems are slowly converging towards leveraging a single IP-based communication infrastructure, SIP is quickly becoming the protocol of choice.
More and more businesses, vendors, and providers are adopting the use of SIP in their telephony and telecom infrastructures. H.323, although still prevalent, is slowly waning. | <urn:uuid:bc1953d9-be4e-4ad6-9bce-9e677c2af8d4> | CC-MAIN-2022-40 | https://www.networkstraining.com/h323-vs-sip-protocols/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00620.warc.gz | en | 0.936977 | 1,627 | 3.6875 | 4 |