Manufacturing Sites

Our company specializes in the mass production of drugs and healthcare products for millions of customers. Orders can be placed online via the public website, and offline via phone or paper contracts. Shipments are delivered to customers from all locations. As manufacturing is automated, the sites consist of many digitally controlled production devices, operated locally or remotely, as well as a large number of monitoring and controlling stations for ensuring quality. The update process should be such that workstations are updated within a deadline of one day, and employees are informed through pop-ups if a reboot is required. Besides this, the growing trend is to support laptops as workstations due to their ease of mobility. Hence these devices must also be protected from theft [6]. Laptops may be a soft target for stealing confidential data, so encryption should be enabled for all data residing on laptop hard disks. When the laptop boots, it should ask for a PIN that only the owner is aware of. This way, even if the laptop is stolen, confidential data remains protected.
Code Server

The code server holds the data for the research (such as chemical reaction simulations) being undertaken at the various facilities. The first level of security is to restrict access to the employees of research units who work on this data. The second level is to back up the data on the code servers regularly (discussed in detail later). The third level is to station the servers separately in a secure and monitored network, with a network firewall and IDS monitoring the traffic to the servers. The fourth level is to physically station the servers in a restricted area that is well monitored by security cameras and accessible only to administrators.
Mobile Devices

Mobile devices aid employee productivity by allowing employees to connect to the internal network through Wi-Fi from any location in the company. They can keep track of their simulations, etc., at their convenience from anywhere within the facility. Since mobile devices access multiple wireless networks inside and outside the company, some of which are unsecured, they are more prone to infection than other devices. More and more rootkits are surfacing that target mobile devices specifically [7]. Hence there should be a well-defined security policy on what is allowed on the wireless networks. Highly critical data, such as code access, should not be allowed through these devices. Also, the wireless network should be configured to allow only devices that have the VPN software (discussed later) installed. The phones should also have the company-recommended anti-virus software installed. The VPN server residing behind the access points should check for the presence of this software on a device before granting access to the network; if the device does not meet the requirements, it should be denied access. To allow access, a registration policy should be adopted: only devices registered with the network may access it. This could be done by one-time registration of the IMEI number, and then reading and matching the number against the database every time a device tries to connect to the Wi-Fi. This would not inconvenience employees, since they do not change equipment very often and would probably use the same phone most of the time.
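The registration policy above amounts to a one-time enrollment followed by a lookup on every connection attempt. A minimal sketch (the class name and sample IMEIs are invented for illustration; a real deployment would back this with the network access controller's database):

```python
# Sketch of one-time device registration checked on every Wi-Fi association.

class DeviceRegistry:
    def __init__(self):
        self._registered = set()

    def register(self, imei: str) -> None:
        """One-time registration of a device's IMEI."""
        self._registered.add(imei)

    def is_allowed(self, imei: str) -> bool:
        """Called on every association attempt: match against the database."""
        return imei in self._registered


registry = DeviceRegistry()
registry.register("356938035643809")           # employee registers once

print(registry.is_allowed("356938035643809"))  # registered phone -> True
print(registry.is_allowed("490154203237518"))  # unknown device   -> False
```

The lookup cost is negligible per connection, which is why the policy causes little inconvenience in practice.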
Email & IM Server (Spam, Malware)

Emails and IMs are bound to be one of the primary forms of communication as offices span different geographical locations. Also, since emails are documentary evidence of communication, they may be preferred over other forms in certain contexts. Emails reside on the email server, and any security breach of the server can leak confidential information. A link clicked in a malicious email by an unsuspecting employee may result in drive-by download malware landing on their machine. This may then spread across machines in the network and breach security to carry out attacks. An employee could receive a malicious email that looks like a genuine internal broadcast asking them to change their password; by acting on it, they may leak their username/password, which hackers may then use to gain access to the internal network. Hence it is important to have a good spam filter and an anti-virus on the email server that scans all attachments before they are downloaded to the client machine, to save employees from getting phished. Suspected emails should be blocked and investigated before being passed on to an employee, especially when they come from an outside network. Advanced file inspection as part of the anti-virus is necessary so that attachments are scanned based on their contents. Recommendations on anti-virus and spam scanners are discussed later.
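A gateway rule of the kind described (hold suspect external mail for investigation) can be sketched as below. The domain, blocked-extension list, and the phishing heuristic are all assumptions for illustration, not a real product's policy language:

```python
# Illustrative mail-gateway quarantine decision: external messages carrying
# executable attachments or password-change lures are held for review.

BLOCKED_EXTENSIONS = (".exe", ".js", ".scr", ".vbs")
INTERNAL_DOMAIN = "example-pharma.com"   # hypothetical company domain

def should_quarantine(sender: str, subject: str, attachments: list) -> bool:
    external = not sender.endswith("@" + INTERNAL_DOMAIN)
    risky_attachment = any(a.lower().endswith(BLOCKED_EXTENSIONS)
                           for a in attachments)
    phishing_hint = "password" in subject.lower()
    # Only mail from outside networks is held; internal broadcasts pass.
    return external and (risky_attachment or phishing_hint)

print(should_quarantine("it@example-pharma.com", "Maintenance window", []))  # False
print(should_quarantine("x@evil.org", "Change your password now", []))       # True
print(should_quarantine("x@partner.org", "Invoice", ["invoice.pdf.exe"]))    # True
```

A production filter would of course combine many more signals (reputation, content inspection, sandboxing); this only shows where the block-and-investigate decision sits.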
Network/IP Phones

The telephone network helps in managing communication across the various centers distributed globally. It helps in conducting conference calls for interacting with other teams. These conference calls may be used to communicate confidential information, and thus they must be secured like any other digital communication. Since our company uses IP phones for all its phone communication, it is important that the software on the phones be protected. Phones are a form of embedded device with limited memory, processing power, and software. They do not have any anti-virus or monitoring software installed, and hence they can be soft targets. Upgrading the firmware on the phones is not a regular activity and may happen rarely. Hence, we recommend that the phones have a provision to hard-lock the firmware upgrade, with unlocking and upgrading done only when an upgrade is absolutely essential. This will protect the phones from having malicious code installed on them. Also, upgrades should be carried out by the administrators over a secure network, and should require additional login credentials to prevent employees from unlocking a phone and triggering upgrades (an insider threat).
Printers

Our company's premises have a large network of printers, with each facility having multiple printers stationed on every floor. The investment in printers has been made because employees need print-outs of various documents to assist in their daily work. The documents may contain company-confidential information and thus need to be secured. Besides this, most of the printers are multi-functional: they can print, copy, and scan documents. Since the scanner needs to save a digital copy of the data, the printers have a small hard disk and support emailing. If not well protected, they could be used to carry out attacks such as sending spam through the printers' email facility, similar to the refrigerator attack [8] carried out recently. Again, as with phones, software upgrades on the printers should be hard-locked. Software should be installed on the printers to purge data from the hard disk once it has been retrieved by the user/employee, so that no traces remain. Before decommissioning a printer, all information must be deleted, and the hard disk removed and cleaned. There should be well-defined processes for all these activities so that any new administrator also knows the process to follow.
Lab Equipment

Various lab equipment may be used to carry out testing of drugs and research. Examples include centrifuges, chromatography testing equipment, analyzers, and software used to analyze the results from the equipment. To secure them, they should not be connected to the network. Much of this equipment may be digitally controlled but need not be remotely controlled; we recommend that, for security reasons, it not be network operated, controlled, or monitored. Individual offline workstations should be connected to the equipment if needed. Also, the lab premises should be well secured with restricted access and monitored via cameras.

Internal network equipment can be categorized into the following:
Access Points

Wireless networks may span the entire facility so that the network is easily accessible from any location within the company. They can also be an easy medium for gaining access to the internal network, especially if the wireless network is not well protected. To address this, all access points should be password protected, and default passwords must be changed to something secure. They should have access control lists that allow only registered devices.
Routers

Routers form the backbone of the wired network. Plenty of them would be deployed within the company to build a large internal network connecting all the departments at a particular facility. A compromised router can send false route advertisements to adjacent ones in an attempt to route traffic towards the malicious routers. To make routers secure, the first thing to keep in mind is that any default passwords must be changed to something strong. Routers are often advanced enough to support fine-grained control or even firewalling, which should be properly utilized to disable any form of traffic from an unlikely source. To address the OSPF and BGP vulnerabilities, it is a good idea to have a PKI system for routers, so that every route advertisement can be signed and the source of every advertisement is known and authenticated. This scheme has limitations, in that it involves a complex certificate management system which must scale to a large number of certificates for a big firm. Also, it provides no security against misconfigured routers, and hence requires additional monitoring stations to verify that routes are topologically correct.
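The signed-advertisement idea can be sketched as follows. The real scheme would use per-router key pairs under a PKI; to keep the example self-contained we stand in an HMAC over a shared key for the signature (an assumption for illustration, not the recommended deployment), and the router ID and prefix are invented:

```python
# Minimal sketch of authenticated route advertisements: sign on announce,
# verify on receipt, reject anything tampered with in transit.
import hashlib
import hmac
import json

ROUTER_KEY = b"per-router-secret"   # hypothetical; a PKI would use a private key

def sign_advertisement(router_id: str, prefix: str) -> dict:
    payload = json.dumps({"router": router_id, "prefix": prefix}, sort_keys=True)
    sig = hmac.new(ROUTER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def verify_advertisement(adv: dict) -> bool:
    expected = hmac.new(ROUTER_KEY, adv["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, adv["sig"])

adv = sign_advertisement("r1.hq", "10.20.0.0/16")
print(verify_advertisement(adv))                           # True: authenticated

adv["payload"] = adv["payload"].replace("10.20", "10.99")  # tampered route
print(verify_advertisement(adv))                           # False: rejected
```

Note that, exactly as the text warns, a verifier like this only authenticates the source; a correctly signed but misconfigured route still passes, which is why topological monitoring remains necessary.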
Internal Websites

Companies usually care less about internal website security than external. We suggest that the same level of security be applied to internal websites too, since they can host confidential data and/or their servers may be used to gain access to other network resources.

Physical security can be categorized into the following:
IDs

Access to the facilities is usually managed by RFID cards. Since the access is digital, it needs to be well protected; unauthorized access would cause a security breach which may damage the company. Security access should be of different levels, i.e., only required personnel should have access to any given restricted area. To avoid spoofing of ID cards, each employee can have a key pair registered with the company, and the data on the ID card can be signed using their private key for authentication. This also ensures that, once registered, the same card will work at any office within the company.
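The multi-level access requirement reduces to comparing a badge's clearance against the level assigned to each area. A minimal sketch (area names, levels, and badge records are all invented for illustration; the real system would read clearance from the badge's signed payload):

```python
# Level-based physical access check: entry is granted only when the badge's
# clearance meets or exceeds the area's required level.

AREA_LEVELS = {"lobby": 0, "office": 1, "code-server-room": 3}

EMPLOYEE_CLEARANCE = {"E1001": 1, "E2002": 3}   # badge id -> clearance level

def may_enter(badge_id: str, area: str) -> bool:
    clearance = EMPLOYEE_CLEARANCE.get(badge_id, -1)   # unknown badge -> deny
    return clearance >= AREA_LEVELS[area]

print(may_enter("E1001", "office"))            # True
print(may_enter("E1001", "code-server-room"))  # False: insufficient level
print(may_enter("E9999", "lobby"))             # False: unregistered badge
```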
Biometrics

Biometrics could be used to further enhance the level of security provided by ID card access. This additional level of security would be useful in protecting the restricted areas within the company premises. But biometrics come with their own weaknesses: e.g., facial recognition can be fooled or may read incorrect data if the face is altered naturally, and voice recognition can fail in case of illness. Similarly, fingerprints can be spoofed using gels, etc. The solution is to use biometrics under supervision so that spoofing by physical means can be avoided. Also, biometrics should be used in conjunction with another identifier, such as a name, so that matching time is reduced, instead of using the digitized biometric itself for a lookup.
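The "identifier first, biometric second" point can be sketched as below: the claimed name selects a single stored template for a 1:1 comparison, instead of scanning every template (1:N). Templates here are plain strings purely for illustration; real systems compare feature vectors against a similarity threshold:

```python
# 1:1 verification: one dictionary lookup by claimed identity, then a single
# template comparison, rather than matching against the whole database.

TEMPLATES = {"alice": "fp-template-a", "bob": "fp-template-b"}

def verify(claimed_name: str, presented_template: str) -> bool:
    stored = TEMPLATES.get(claimed_name)      # one lookup, not a full scan
    return stored is not None and stored == presented_template

print(verify("alice", "fp-template-a"))  # True: 1:1 match
print(verify("alice", "fp-template-b"))  # False: wrong person's template
```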
Security Cameras

Security cameras stationed at all critical places can act as a deterrent to people trying to snoop. They can also provide critical evidence in case of a break-in, and hence the camera feeds should be well secured. Additional security staff should manage security at the facilities. Security cameras should be installed in tandem so that there are no blind spots. Access to the camera feed should be restricted, and the videos should be backed up in addition to being saved. Access to those tapes can be restricted based on biometrics for the security personnel.
Digitally Controlled Equipment

There would be mechanical equipment, controlled digitally, to manufacture and package medicines. An attack on such equipment could potentially bankrupt the company: if, for example, the manufacturing equipment were compromised to print the wrong labels on medicines, it could affect people's lives, and the company could be sued and may have to close down. There can be other similar attacks, such as contaminating a particular medicine with chemicals used to manufacture other medicines, and so on. Hence very strict security policy measures are required to protect such equipment. These measures are discussed in detail in the section on the impact of Stuxnet and how to prevent it from disrupting manufacturing sites and their digitally controlled equipment.
Workstations

Being an advanced facility, each manufacturing site may have multiple workstations for employees and personnel, as well as multiple monitoring/controlling stations to regulate operations. It is important for the company to ensure that these workstations are not compromised. All of the measures discussed for workstations at research centers apply here as well. However, some additional measures need to be taken into account, which are discussed along with the impact of Stuxnet.
Public Facing Network

Since the public website is accessible from anywhere, it is a highly sought-after target for attackers; there is no need to be part of the internal network to attack the servers. The attacks on web servers take the form of both traditional (generic) attacks and advanced ones targeted at specific web servers [9]. Some of the known forms are explained below:

• SQL injection attacks: insertion of malicious SQL statements into an entry field for execution. This is mostly used against data-driven applications, which is relevant to our company as it maintains various databases for customer data, credentials, company data, etc. If user input is not strongly validated, unexpected SQL code may be embedded in it and executed in this manner.

• URL interpretation attacks: possible in situations where an attacker can adjust the parameters of a request. The syntax of the URL is maintained, but its semantic meaning is altered; for example, changing the email address parameter in the GET request of a password-reset page.

• Input validation and buffer overflow attacks: a very common type of attack against web servers, made possible when the scripts/programs handling user-entered data are not written securely and do not perform sanitization or bounds checking, allowing execution of malicious code.

• Cross-site scripting: this vulnerability allows attackers to inject client-side scripts into web pages. It may also be used to bypass access controls. It typically exploits known vulnerabilities in web applications, their servers, or plugins.

• Attacks on the medical professionals' portal: communication on this portal requires login credentials, which makes it a good target for attackers. Leakage of this data can enable attackers to listen in on confidential discussions or to masquerade as another user.
• Attacks on the customer transaction site: since this operation has a financial facet, it is also an attractive target for attackers. Needless to say, a successful compromise of this subsystem can lead to theft of users' financial details, as seen in many recent attacks, such as the one on the Sony PlayStation network, which led to a breach of data as well as loss of credit card information, putting customers at risk [10].

The attacks related to inputs and code injection can be handled by proper input validation and sanitization before passing the input on to backend scripts or databases. Server applications should start as a non-privileged user so they cannot compromise protected files. Scripts should be allowed only in certain directories that can be maintained easily, and suEXEC (Apache Web Server) or a similar feature should be used to switch IDs before running scripts. Proper patches must be applied to the server software so that it remains free of known bugs. Access to the server must be over TLS, with the key stored in encrypted form, readable only by the administrator.

For the professionals' portal, login must use credentials stored securely in a database. This portal can preferably be hosted on a separate server. Firewalls should be in place to ensure that there is no way to contact the server while bypassing the security filtering layers; thus, the company should have a DMZ-minded layer of firewalls to ensure network isolation.

For the customers' financial transactions, the traffic should be over TLS and authenticated. It is best to keep the server hosted on a separate machine so that the compromise of any other service does not weaken the security of this part. The help of a 3rd-party vendor can also be taken for this; a good example is OAuth.
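The SQL-injection mitigation above boils down to never interpolating user input into query strings: parameterized queries keep the input as data, not SQL. A minimal sqlite3 illustration (the table and data are hypothetical):

```python
# Contrast of a vulnerable string-built query with a parameterized one.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

malicious = "alice' OR '1'='1"

# Vulnerable pattern: the attacker's quote breaks out of the string literal,
# turning the WHERE clause into a tautology.
vulnerable = f"SELECT email FROM users WHERE name = '{malicious}'"
print(len(conn.execute(vulnerable).fetchall()))        # 1: injection succeeded

# Safe pattern: the placeholder binds the whole input as a single value.
safe = "SELECT email FROM users WHERE name = ?"
print(len(conn.execute(safe, (malicious,)).fetchall()))  # 0: no such user
```

The same principle applies regardless of database engine: use the driver's placeholder syntax, never string formatting, for anything that touches user input.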
Taking the Backup

The following parameters need to be taken care of while backing up data:

• Content to be backed up.
• Source to get the data from.
• Frequency of the backup operation.
• Location of storage (backup server).
• Duration for which the backup has to be kept.

Since backing up involves sending data over the network to a backup server, the operation is vulnerable to network attacks, and a vulnerability in the backup mechanism may make it a soft target. As a counter-measure, the company can use certificate-based encryption for securing backup data in transit. Validation of the backup server ensures that the data is backed up to an authentic server and not leaked to some other location. Validation of the client certificate ensures that the source of the data is genuine and from within the company, and does not contain anything malicious. Backup data is likely to be large, and the longer it stays in transit, the more vulnerable it gets. To minimize the transfer duration, it may be a good idea to compress the backup before sending. Less sensitive or less important data may be backed up at lower frequencies to reduce transfers. To further minimize transit, each location can have its own backup server, backing up independently.
It makes more sense to have backup servers at the same location where R&D takes place, instead of routing data over to the central headquarters (which may be done at a much lower frequency). Blind backup, which involves backing up everything indiscriminately, should be avoided. Priority should be given to more sensitive data and data that changes frequently, like codebases and research data. Some data changes infrequently, such as databases of employee and customer details. To maintain consistency, such data may be updated at the backup node along with the primary node whenever there is an update to the database; this ensures consistency on the backup node and also avoids the need to back up such data separately. Incremental backups could be adopted as a general measure. However, backups should be taken frequently enough, either daily or semi-weekly, to minimize data loss in case of an eventuality. Moreover, there should be a policy that rotates the validity of backups, so that before a backup expires, a new one must be taken to replace it. Backed-up data has an advantage over live data: since backups are written far more often than they are read, a backup versioning system could be used, with older backups removed after a certain period. Even if reading from backups (which is rare, and only in case of damage to live data) is extremely slow due to the highest levels of encryption, this can be accepted as a reasonable trade-off.
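The incremental-backup idea above can be sketched by hashing each file and shipping only those whose digest changed since the last cycle. File names and contents are invented for illustration:

```python
# Select only new or changed files for this backup cycle by comparing
# content digests against the previous run.
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def files_to_back_up(current: dict, last_digests: dict) -> list:
    """Names of files that are new or changed since the previous backup."""
    return sorted(name for name, data in current.items()
                  if last_digests.get(name) != digest(data))

previous = {
    "sim_results.csv": digest(b"run-1 data"),
    "policies.txt":    digest(b"v1"),
}
current = {
    "sim_results.csv": b"run-2 data",   # changed since last backup -> ship
    "policies.txt":    b"v1",           # unchanged -> skipped
}
print(files_to_back_up(current, previous))   # ['sim_results.csv']
```

Combined with compression before transfer, this directly reduces the time the data spends in transit, which is the window the text identifies as most vulnerable.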
External Access via VPN

VPN access allows employees to reach the company's secure network through a VPN tunnel. It is used to connect to the research center and the manufacturing sites. Being run over an insecure public network makes VPN communication an attractive target for attackers. Issues compromising the VPN tunnel:

• VPN fingerprinting: while not an attack in itself, it gives the attacker useful information by identifying the type, version, model, etc. of the device.
• Insecure storage of credentials by clients: storing user credentials in unencrypted or weak forms, whether in memory or in the registry.
• Username enumeration: the use of pre-shared keys enables this kind of attack.
• Offline password cracking: made possible when a valid username is obtained via the previous vulnerability, and the hash for that username can be obtained from the VPN server to launch an offline password-cracking attack.

The issue of username enumeration is quite new and can be addressed from the vendor's side; addressing it will take care of the password-cracking attack too. To address the remaining threats, it is imperative to use the strongest possible encryption methods, like EAP-TLS and IPsec. Use of two-factor authentication products like RSA SecurID is also recommended. VPN access should be made available only to personnel with a valid business reason. A strong password policy should be implemented and enforced, different from the one used on internal networks, so that compromised credentials only allow access to what is available through the VPN. The limitation of this security is that the remote user may himself be compromised in the first place, which can render some of these protections less effective. A safe approach is to ensure that remote users themselves have a strong anti-virus, anti-spam, and personal firewall, and they must be required to use them. One method is to allow only company-issued devices for VPN access. The devices can be validated through the serial keys of the hardware. Also, the system could be checked for meeting the security requirements every time a user logs in, before granting access to the network.
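The per-login posture check described above can be sketched as below. The field names and registered serials are assumptions for illustration; a real gateway would query an endpoint-management agent rather than trust self-reported values:

```python
# Posture check run at each VPN login: company-issued hardware only, with
# the required protections active, before network access is granted.

REGISTERED_SERIALS = {"SN-4411-A", "SN-7802-C"}   # company-issued devices

def grant_vpn_access(device: dict) -> bool:
    return (device["serial"] in REGISTERED_SERIALS   # hardware validation
            and device["antivirus_running"]          # required protections
            and device["firewall_enabled"])

laptop = {"serial": "SN-4411-A",
          "antivirus_running": True,
          "firewall_enabled": True}
print(grant_vpn_access(laptop))   # True: compliant company device

laptop["antivirus_running"] = False
print(grant_vpn_access(laptop))   # False: fails the per-login check
```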
Insider Attacks

We mention insider attacks [11] separately, as they apply to almost all aspects of the company's security. The following observations are notable:

• It is important to identify the most important entities that need to be protected within the company and secure them with the highest priority. This means not just encryption, but also restrictions on access and rigorous monitoring of who interacts with them, and under what circumstances. The circumstances can be used as a basis for a set of policies that clearly outline when access is allowed. Physical security of assets is also an aspect that should not be ignored at any cost.
• The company should be extra careful around a resignation or employment termination, especially for someone in a high post. Suspicious behavior must be brought to notice.
• Centralized logging tools can be used to track all data exfiltration. Proper auditing should be enforced so that no operation on sensitive data goes unmonitored.
• Access to sensitive data should be allowed on a need-to-know basis only, and privileges must be rescinded once the project/operation is done. An employee must be asked to submit a valid, signed (digitally or otherwise) reason for accessing sensitive data.
• It should be possible to remotely erase the disk of a smartphone or laptop in case of theft.
• There should be a proper training course, based on general security guidelines and best practices, that teaches users the importance of strong password policies, proper emailing protocol for different situations, browsing recommendations, sharing of information with outsiders or colleagues, etc.
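The need-to-know point can be sketched as time-boxed access grants that lapse automatically, so rescinding privileges does not depend on anyone remembering to do it. Timestamps are passed explicitly to keep the example deterministic; the user and resource names are invented:

```python
# Time-boxed grants: access is checked (and expires) on every request.

grants = {}   # (user, resource) -> expiry timestamp

def grant(user: str, resource: str, now: float, ttl: float) -> None:
    """Grant access for the duration of the project/operation only."""
    grants[(user, resource)] = now + ttl

def allowed(user: str, resource: str, now: float) -> bool:
    return grants.get((user, resource), 0.0) > now

grant("alice", "trial-data", now=1000.0, ttl=3600.0)
print(allowed("alice", "trial-data", now=2000.0))  # True: within the grant
print(allowed("alice", "trial-data", now=5000.0))  # False: privilege lapsed
print(allowed("bob",   "trial-data", now=2000.0))  # False: never granted
```

In a real system, the grant record would also carry the signed justification the text calls for, and every `allowed` decision would be written to the central audit log.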
Peripherals and Removable Devices

Devices like USB sticks, flash drives, external hard disks, smart cards, and mobile phones are examples of entities that are very effective as tools for attackers. In many cases, they turn out to be either the starting or ending point of a successful attack, mostly involving theft of data by carrying it off on one of these devices. Since they can be used with a variety of devices, in different environments, and without the need for prior installation of any software, they prove to be an effective out-of-band channel for attacks. They are also harder to track because of their removable nature and the human factor involved. We mention them separately, as they apply to all of the divisions of the company presented above. We recommend the following guidelines for dealing with them:

• In general, it is best to avoid the use of removable devices as much as possible.
• If the use of removable devices cannot be avoided, it is best to have them registered so that they can be tracked. Unregistered or ad-hoc devices must be rejected.
• Some removable devices, such as USB sticks, have native password protection. It is recommended to use it to protect data against theft.
• Workstations, and any other devices with USB ports or CD trays, must have the auto-play option permanently disabled.
• The anti-virus software on the host device must be configured to scan a removable device as soon as it is attached.
• The HIDS must monitor ALL data travelling to and from a removable device, even if it is registered.
• It may be possible to encrypt data when it is copied out to one of these removable devices.
• For mobile phones, registration should be recommended, and the user must be required to have sufficient protective measures on the phone, such as anti-virus and GPS locators. We also discussed mobile phone registration earlier while discussing threats to research center devices.
The limitation of the above strategies is that they might turn out to be too cumbersome to implement and carry out in a large company. The trade-off between security and usability means that if users face many delays or difficulties working within these restrictions, they may tend to bypass them. Hence, employees should be trained about the harmful effects of such negligence.
Measurement of Security Posture

Measurement of security posture refers to the steps the company can take to measure how secure it is at any given point in time. It can be approached systematically by looking at the different things the company can examine to determine the state of its security. These "things" can include the security alarms generated by its:

• host-monitoring systems
• physical security mechanisms

The best way to look at these alarms is to use some kind of IDS visualization tool. Since the company has a distributed architecture containing many locations, each with many machines, all connected through LANs and WANs, a good example for visualization is the IDSradar tool [12]. The tool can visually depict each node from each location, and their network connections, within a single representation, which gives a holistic view of what is going on with the company in terms of security threats. The following properties of IDSradar can prove very helpful in obtaining this representation:

• Servers and workstations: nodes are shown arranged in circles indicating a common corporate network, with bigger nodes being those of higher priority. Locations are ordered by IP address. This can be used to look at any node in the whole network and to see any security-relevant statistics.
• Alert types: each alert type is shown in a separate color, and the width of each arc corresponds to its percentage. This gives a visually proportionate size to the importance of an attack.
• Interactive design: interactive filtering is provided in the form of clicks on hosts, servers, alert types, etc., as well as the ability to zoom in/out, play, stop, etc. Detailed information about any entity can be seen on mouse-hover. Being visual, it is easy to interact with, even for personnel who may not know the setup, commands, or similar technicalities of a particular system.
Apart from visualization, the company can consolidate the security alarms and reports from different locations and use behavior-based learning methods to glean patterns or statistics from the reports. The statistics can be collected by location, time, type of attack, or any relevant metric that can help detect a trend in the attacks; the company can then take steps to address that particular concern proactively. This methodology of proactive threat assessment will help ensure that the company's resources are secure and that breaches are avoided before they take place.

There are elements on which products from different vendors can be compared, which can be taken as a baseline. Multiple criteria must be considered before selecting a security product to be installed in the company. Each criterion should be matched against the network requirements of the company, and it should be checked whether any criterion would become a bottleneck if network resources were expanded. Also, the projected network growth and the pricing of security products must be considered before purchasing/deploying a product, to ensure the least security cost to the company. Other parameters include:
Selecting a Security Product

• Network speed supported by the device when the security features are enabled.
• Number of concurrent sessions supported.
• Interoperability with other vendors' security appliances already installed in the network, or those which will be installed.
• Interoperability with other TYPES of appliances; for example, interoperability of vendor X's network IDS with vendor Y's host IDS.
• Support for required features.

A list of security features (not exhaustive, but informative) provided by various security devices on the market follows; they can be at the host level, the network level, or both. In order to arrive at this list, we referred to the Juniper Data Sheet [14], Cisco Data Sheet [15], McAfee Data Sheet [16], Palo Alto Networks Data Sheet [17], and some other articles on product comparisons [13] [18] [19] [20] [21]. [22] is an informative resource on intrusion detection technologies, and [23] can be consulted to learn more about anomaly-based intrusion detection systems.

As attackers get more and more sophisticated, even anti-viruses (specialized to thwart viruses/malware/spyware) are becoming ineffective against the latest threats [24]. Hence, a package of products functioning and collaborating at different levels must be used.

There are organizations, such as NSS Labs, which evaluate IDS and other security products from different vendors and publish their reports annually. Such third-party reports may be more reliable in reporting actual figures than the companies' own data sheets, which usually report best-case figures to beat the competition. These reports are available by subscription [25]. We recommend relying on at least one such report before finalizing the product.
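The multi-criteria comparison above can be sketched as a weighted scoring matrix. The weights, vendor names, and ratings are invented for illustration; real inputs would come from the data sheets and 3rd-party test reports already cited:

```python
# Weighted scoring of candidate products across the selection criteria.

WEIGHTS = {"throughput": 0.4, "sessions": 0.2, "interop": 0.3, "features": 0.1}

PRODUCTS = {
    "VendorX": {"throughput": 8, "sessions": 7, "interop": 9, "features": 6},
    "VendorY": {"throughput": 9, "sessions": 8, "interop": 5, "features": 8},
}

def score(ratings: dict) -> float:
    return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)

best = max(PRODUCTS, key=lambda name: score(PRODUCTS[name]))
print(best, round(score(PRODUCTS[best]), 2))   # VendorX 7.9
```

Adjusting the weights makes the company's priorities explicit: here interoperability outweighs feature count, so VendorX wins despite lower raw throughput.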
Similar Threats Faced by Others

We looked at some threats faced by other companies and organizations. Quite a few of them related to the healthcare industry, which we enumerate first, followed by some other relevant examples:

• Utah Department of Health, March 2012: a breach caused by a weak password policy (the default password was not changed) on a network server exposed protected information of 780,000 individuals, including Social Security numbers. This is a classic case of negligence/misconfiguration brought about by ignoring user training. Enforcement of proper regulations (like password format restrictions in the company) would have avoided this easily.

• Emory Healthcare, Georgia, April 2012: lost 10 backup disks containing more than 300,000 patient records, two-thirds of which contained Social Security numbers. This strengthens the claim made earlier in this document stressing the security of backups; in fact, backups may need even more protection than live data.

Insider threats result when a firm makes the mistake of trusting any or all entities within its physical/network boundaries and fails to monitor their activities. To avoid or detect this kind of attack, it is imperative to track all operations performed on sensitive documents or machines. This may impose a heavy burden on the monitoring tool, as the number of potentially sensitive entities to be monitored can be very large in a big company. The company can then compartmentalize documents using security levels and monitor only those at the highest levels.
• Stuxnet, Iran: Stuxnet was propagated into a high-security facility containing only PLCs through a USB drive inserted into devices/workstations outside the facility. Stuxnet works in the following way: after infecting a less secure workstation (through a USB drive), it propagates to other networked computers and scans for specific Siemens software used for controlling a PLC. If a computer is found with the software AND it is controlling the PLC, it introduces the rootkit by infecting both the software and the PLC; otherwise it stays dormant on the system. Once both the software and the PLC are infected, it can send unexpected commands to the PLC while the software still reports normal operation. The design of the networks infected by Stuxnet is similar in nature to the manufacturing facility of our company, so it is a good idea to learn from the way Stuxnet propagates and operates when designing our security systems.
First of all, it is important to highlight that traditional security methods, including anti-virus and IDP, would not have prevented Stuxnet, since it exploited 0-day vulnerabilities and no signatures were available for this attack before it was discovered.
The above examples cover a good variety of threats, such as weak protection policies, insufficient protection of backups and insider attacks, all of which attest that the healthcare industry requires proper security no less than any other industry. They also cover highly targeted attacks using malware like Stuxnet, which are relevant for healthcare firms involved in mass production of drugs using specialized devices and sophisticated machinery.
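The password-policy enforcement that would have prevented the Utah incident (a default password left in place) can be sketched minimally. The specific rules and default-password list below are illustrative assumptions, not the company's actual policy:

```python
import re

# Minimal sketch of password-format enforcement: reject well-known default
# passwords and require length plus character-class diversity. The rule set
# here is an example, not a recommendation of exact thresholds.

DEFAULT_PASSWORDS = {"admin", "password", "changeme", "default"}

def password_acceptable(pw: str) -> bool:
    """Reject known defaults, then enforce length and character classes."""
    if pw.lower() in DEFAULT_PASSWORDS:
        return False
    if len(pw) < 12:
        return False
    # Require lowercase, uppercase, digit and a special character.
    classes = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^A-Za-z0-9]"]
    return all(re.search(c, pw) for c in classes)
```

Such a check, applied at account creation and on first login to any new server, directly addresses the "default password wasn't changed" failure mode.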
Impact of Snowden's Disclosures about the NSA
Snowden's disclosures about the NSA [26] reveal a nexus of international agencies operating an elaborate network of global surveillance. Many intelligence agencies, the NSA among them, spy on the citizens of their own countries through their digital footprints: emails, phone conversations and so on. The agencies are able to spy on individuals with relatively simple methods using various kinds of tools. This is alarming, because other agencies could use similar tools to spy on their own targets. For example, a competing pharmaceutical company could develop means of spying on our company, carrying out espionage to steal critical information from the company's network. With Advanced Persistent Threats, such espionage can be carried out for a long period before it is even detected; these threats are not a one-time affair and can last for months or years. Snowden's disclosures also highlight some of the tools which can be used to carry out targeted attacks. Some of these tools are listed below:
Computer Implants
1) Sparrow II: a small device that can be implanted at a strategic location to spy on wireless networks and collect data.
2) Firewalk: capable of filtering and egressing (outputting) network traffic over a custom RF link, and can inject traffic into the network as commanded.
3) SWAP: can be used to exploit the motherboard BIOS before the operating system loads.
4) CottonMouth I, II, III: USB implants that provide a wireless bridge into a target network, plus the capability to load exploit software onto target PCs in that network.
Likewise, there are about 18 hardware/software tools listed as part of the disclosure which can be used as computer implants to spy on the network/data. Other organizations may well be able to develop such tools, so it is important to be aware of their existence and take action to prevent their use against the network. Since all of the above equipment is a computer implant, it would need to be physically placed at a strategic location within the company premises. This cannot be done remotely and thus needs an insider to plant the devices; refer to the section on insider threats for strategies to mitigate them.
Many such software implants can be used to install backdoors in servers and firewalls, which can then be used to leak data from the networks. This highlights the importance of protecting servers and firewalls from such tools. There should be a policy to protect the firewalls themselves, especially when updating their operating system or software, changing the firewall rules, or in general carrying out any management tasks on the devices.
Covert Listening Devices
Another set of devices listed in the disclosures are covert listening devices like LoudAuto, NightWatch, CTX4000, PhotoAnglo and Tawdryyard, which can be used to listen in on wireless data. There may be many similar devices which could be used to pry on the wireless networks within our company. Hence it is important to use wireless routers whose range is limited to the company premises. It is better to use multiple wireless routers with small ranges than a few high-range routers whose signal extends beyond the premises, where the traffic can be listened to but cannot be monitored.
Mobile Phone Implants
Mobile phone implants like Picasso, Genesis, Crossbeam, Candygram, DropoutJeep and GopherSet show that mobile phones can be compromised either during the manufacturing phase or during operation. It is possible that certain employees might bring in modified hardware devices to help in spying on the networks. Alternatively, a mobile phone user could be the target of an attack which leads to software like DropoutJeep getting installed on their phone. This compromises the privacy of the user and may also leak company information exchanged through conference calls, emails and SMS messages on the phone. It is worth noting that such devices are used on multiple networks (e.g., public hot-spots) and are thus more susceptible to attacks unless the user is careful with them. Hence it is important for the company to highlight best practices to employees regularly, so that they are careful in using their devices, opening links in emails, etc. It is also important to restrict the data access available through these wireless devices on the company's wireless network.
Measures to Avoid Sophisticated Attacks like Stuxnet
Our recommendations are as follows:
• Isolate the network of the manufacturing units from the internal LAN of the company. This would not have prevented Stuxnet from spreading (it used USB drives to inject itself initially), but it is still an important practice to follow.
• The computer systems connected to the manufacturing units and machines should have very limited connectivity to the outside world. Security measures should be built in to require additional credentials for connecting USB drives, CD-ROMs, etc. to these systems. This will prevent anyone who merely has operational access to the systems from trying to modify them.
• Modifying the firmware on the manufacturing units should not be easy; it should have multiple levels of security. Even after downloading the firmware onto the computer systems from (say) a USB drive, additional privileges should be required to upgrade the firmware. Upgrades should go through a specific set of computers placed in a high-security zone which are NOT used for the operational activities of the units.
• There should be mandatory integrity checks on any new version of firmware before it is cleared for installation. If the integrity checks fail, the systems used for upgrading should lock out. The integrity checks should be carried out on a separate system/network from the one used for upgrading. Some experts recommend using cloud services to carry out the integrity checks, which can be considered too.
• To ensure the tightest security, firmware upgrades for mechanical units should be locked in hardware through jumpers. Only when upgrades are mandated should they be unlocked and upgraded. Although this is difficult to implement in large-scale manufacturing facilities with thousands of units, firmware upgrades are not carried out frequently and the equipment runs for months or years without any change in operational logic. The potential damage far outweighs the inconvenience of upgrading the firmware, so we strongly recommend this method.
• Any change in the software of the systems (besides firmware) should use the same level of security. Consider the extent of damage that could be caused by printing wrong labels on medicines and sending them out to hospitals/stores.
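The mandatory integrity check recommended above can be sketched as follows. This is a minimal hash-based illustration, assuming the vendor publishes a SHA-256 digest over a separate channel; a production system would use full cryptographic signature verification against vendor keys:

```python
import hashlib
import hmac

# Sketch: the upgrade station refuses to proceed unless the firmware image's
# SHA-256 digest matches the value published by the vendor. A failed check
# locks the station out, per the recommendation above.

def firmware_ok(image: bytes, expected_sha256_hex: str) -> bool:
    digest = hashlib.sha256(image).hexdigest()
    # Constant-time comparison avoids leaking how much of the digest matched.
    return hmac.compare_digest(digest, expected_sha256_hex)

def upgrade(image: bytes, expected_sha256_hex: str) -> str:
    if not firmware_ok(image, expected_sha256_hex):
        return "LOCKED_OUT"       # integrity failure: lock out the station
    return "INSTALL_APPROVED"     # passed: installation may proceed
```

The check itself should run on a separate system from the one performing the upgrade, as argued above, so a compromise of the upgrade station alone cannot forge an approval.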
Proactive Threat Protection
Besides a reactive strategy for security threats, i.e., taking action once an attack occurs or is attempted, as most organizational measures do, we recommend proactive threat-prevention strategies along with the strategies discussed so far. In this section we look at innovative measures that can be used not only to prevent attacks but also to catch attackers and identify their intentions by monitoring their activities on the company network. Although at first glance the recommendations might appear costly to implement, the cost is certainly less than the risks involved in security breaches. As a pharmaceutical company, our company manufactures and delivers medicines promptly, which helps save lives. Any delays or errors in these procedures could affect human lives, and lawsuits could potentially lead to the closure of the company. Considering this, the recommendations in this section must be taken seriously.
Our first recommendation is to deploy multiple HoneyPots at significant locations in all the facilities of our company. A honeypot is a system that is put on a network so it can be probed and attacked. Because the honeypot has no production value, there is no "legitimate" use for it; any interaction with the honeypot, such as a probe or a scan, is by definition suspicious [27].
Computer systems which appear to contain confidential data (research information, payroll information, financial bills, etc.) but actually hold fake data could be placed at strategic locations within the company and set up to monitor any network activity on them. A legitimate employee would not access these systems, having no need to. However, an insider trying to steal data may find these systems while scanning the network and could easily be caught by the monitoring on the system; this counters insider threats. A masquerader, i.e., an outsider accessing the network through an insider's credentials or otherwise posing as an insider, may also try to access the resources on honeypots and get caught through their activities on them. Honeypots may be used to fingerprint the activity of attackers and identify what they are looking for, why they are attacking the network, and perhaps even who they are. Recognizing the identity of attackers would help our company catch them and thus reduce the threats they pose.
Honeypots can help prevent attacks in several ways. The first is against automated attacks, such as worms or auto-rooters. These attacks are based on tools that randomly scan entire networks looking for vulnerable systems; if vulnerable systems are found, the tools attack and take them over (worms self-replicate, copying themselves to the victim). One way honeypots defend against such attacks is by slowing the scanning down, potentially even stopping it. Called sticky honeypots, these solutions monitor unused IP space. When probed by such scanning activity, they interact with the attacker and slow them down using a variety of TCP tricks, such as a TCP window size of zero, putting the attacker into a holding pattern. This is excellent for slowing down or stopping the spread of a worm that has penetrated the internal organization [28].
Our recommendation is to deploy multiple HoneyPots throughout all the facilities of our company, at strategic locations in the network. Such a network of HoneyPots is usually termed a HoneyNet. A honeynet is a type of honeypot: specifically, a high-interaction honeypot designed to capture extensive information on threats. High-interaction means a honeynet provides real systems, applications and services for attackers to interact with, as opposed to low-interaction honeypots, which provide emulated services and operating systems. It is through this extensive interaction that we gain information on threats, both external and internal to an organization. What makes a honeynet different from most honeypots is that it is a network of real computers for attackers to interact with. These victim systems (honeypots within the honeynet) can be any type of system, service, or information the company wants to provide [29].
The second recommendation is to deploy decoy documents, usually also referred to as HoneyFiles. These are documents that look enticing, i.e., appear to contain important information, but actually contain bogus information. They won't be accessed by regular employees, either because of their permissions and/or because they are not useful to them. However, an insider or a masquerader trying to steal information would look at them. These documents can be embedded with beacon code through the Decoy Document Distributor system, so that whenever they are accessed they trigger (say) an email alert, alerting the administrators to suspected malicious activity [30] [31]. This way the attackers can be caught.
Since the deployment of HoneyFiles, HoneyPots and HoneyNets may require advanced security knowledge, our company may consider hiring security experts full-time to manage its security, or consulting them for recommendations on a regular basis.
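The decoy-document alerting described above can be sketched minimally. The decoy paths and the in-memory alert list are illustrative assumptions; a real deployment would use beaconed documents and an out-of-band alert channel such as email:

```python
# Sketch of HoneyFile alerting: any access to a decoy path is by definition
# suspicious, so it raises an alert for the administrators.

DECOY_PATHS = {
    "/research/formula_backup.xlsx",   # hypothetical decoy locations
    "/finance/payroll_2018.pdf",
}

alerts = []  # stand-in for an email/SIEM alert channel

def record_access(user: str, path: str) -> bool:
    """Return True (and raise an alert) when a decoy document is touched."""
    if path in DECOY_PATHS:
        alerts.append(f"ALERT: {user} accessed decoy {path}")
        return True
    return False
```

Because legitimate employees have no reason to open these files, the false-positive rate of such a tripwire is naturally very low.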
Counter Measures for Future Threats
We looked at several aspects of security which are based on the current state of technology, both software and hardware, and which may change in the near future. Certain areas may need to be addressed separately in the future. Here is our view, with a few examples:
• Legacy systems, hardware and code: many systems use legacy code or software, e.g., Windows XP, or software written in languages like C/C++, which may become outdated or lose support in the future. Care should be taken to keep them on their latest patched versions, or to replace them with secure versions. This process may be difficult because, in order to keep the network homogeneous, the same changes may have to be made on a large number of machines, which may cause delays or downtime, or may not be immediately feasible in cases like the email server.
• IPv6: as the IPv4 address space is slowly being exhausted [32], the networking world is switching over to IPv6. Many contemporary security features may not work with IPv6 without some form of modification. The company should look into making its security software IPv6-compliant as soon as possible. With the advent of IPv6, there is a possibility of using stronger security features like encryption, key-passing and signatures.
• Cryptographic capabilities: a lot of protection depends on the security capabilities of the present generation of algorithms like RSA. Day by day, attackers come up with newer attacks, and newer hardware is coming closer to crossing the computational barrier on which many secure algorithms rely. In the future, security experts should look at methods based on multiple factors rather than computational difficulty alone. Also, older, insecure protocols should be replaced by newer, more secure ones as they arrive. The difficulty lies in the scale of the transition and the fact that not all machines can support all the newer cryptographic operations; some may require replacement as well.
• Bypassing of learning methods by "wrong training" of IDS systems: this is another trick used by attackers and takes place over a really long time, so it is not immediately noticed by monitoring systems. The attacker persistently repeats a particular, consistent pattern of activity until, over a long period, the behavior engine in the IDS raises its score for that pattern much higher relative to other attacks. Later, the attackers launch an attack which is not much different in nature and is classified by the IDS as low severity. The company's IDS should have features that watch out against this type of "wrong" learning. Periodic evaluation/audit of threat scores of various types can help the company find these kinds of anomalies.
• Biometrics: the state of biometrics has evolved in the past few years, with technologies like facial recognition, retina scans and fingerprint readers starting to be used for authentication in devices like smartphones. They are accurate in reading the biometric input; however, the technology, being fairly new, is prone to spoofing. As biometrics starts getting used at a wide scale, attackers will try to come up with new ways to spoof the input. The technology may work well for a single-user phone, but altogether replacing password authentication and certificate signing on large networks may take a while. Nevertheless, the company should be aware of this technology and be prepared to adopt it in its systems where evaluations prove it better than conventional methods.
Most of the latest laptops and mobile phones have a front camera. The company could mandate using ONLY devices with a front camera. The VPN software could ask for permission to use the front camera and be disabled if permission is not granted. While a user logs into the official network, the VPN software could take a picture of the user and use it as an additional validation. Further, the camera could be accessed randomly during the user's session, ESPECIALLY when dubious activity is detected. This would help catch insiders attacking the system remotely, who would otherwise evade traditional systems. A similar approach may be adopted for fingerprint scanners. With the (remote) use of biometric hardware on the user's device, the security of access from remote locations would be similar to access on-site.
Alert visualization can also help:
• timeline & histogram: time is depicted using animation, and the histograms below each alert type are drawn clockwise along the alert-type arc in real time, updated every few minutes. This can help the company keep track of attacks/alerts over time and figure out a pattern, if any exists.
• attack correlation: triangles connect the source IP, the destination IP and the top of the histogram bar below the alert type in the current time span, with the nodes in an alert highlighted. This can be especially helpful when hosts/networks in different physical locations are being targeted systematically; the correlation can help filter out random attacks from deliberately targeted ones.
D. Tandon, P. Parimal DOI: 10.4236/jcc.2018.63010136 Journal of Computer and Communications
It is just as important to secure the backup as it is to secure the live servers. Stealing the backup from central backup servers gives attackers more advantage, as there is then no need to steal the same data individually from its respective sources (which is more difficult). Security of the backup is essentially a case of host-based security and involves mostly the same parameters. Apart from being compressed, the backup data must be encrypted even in stored form. The individual subparts of the backup may be encrypted using separate keys (one for each source) so that the compromise of a particular key doesn't put the security of the whole backup at risk. The backup server must have a good host-based IDS which can monitor any operation performed on the backup files within the server. Physical security of the backup server should be ensured too, and any physical access to the servers must be allowed only when absolutely necessary and restricted using biometrics.
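The host-based integrity monitoring recommended above for the backup server can be sketched with a simple digest baseline. This stdlib-only sketch omits the encryption with per-source keys and tamper-proof baseline storage a real deployment would need:

```python
import hashlib

# Sketch: keep a baseline of SHA-256 digests per backup file and report any
# file whose contents have changed since the baseline was recorded.

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def baseline(files: dict) -> dict:
    """files maps backup name -> bytes; returns name -> digest baseline."""
    return {name: digest(data) for name, data in files.items()}

def changed_files(files: dict, base: dict) -> list:
    """Names whose current digest no longer matches the recorded baseline."""
    return sorted(n for n, d in files.items() if digest(d) != base.get(n))
```

Any name reported by `changed_files` outside a scheduled backup window would be grounds for an alert, since nothing should be rewriting stored backups.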
A long-term sampling system, called the Adsorption Method for Sampling of Dioxins and Furans (AMESA), was used for long-term sampling (up to 168 hours) of an electric arc furnace (EAF) to obtain representative flue gas samples. In order to have a comprehensive view of the emissions of persistent organic pollutants (POPs) from EAFs, six classes of POPs, including polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs), polychlorinated biphenyls (PCBs), polychlorinated diphenyl ethers (PCDEs), polybrominated dibenzo-p-dioxins and dibenzofurans (PBDD/Fs), polybrominated biphenyls (PBBs) and polybrominated diphenyl ethers (PBDEs), were investigated. Tests showed that the breakthroughs of PBDD/Fs and PBDEs are much larger (8.9%–43%), while those of the others are less than 3%. A significant increase in breakthrough with increasing halogen number is observed, because highly halogen-substituted POPs tend to partition to the particulate phase. Except for PCBs and PCDEs, whose percentages of the POPs in the rinses relative to the total POPs collected from the long-term samples (cartridges + rinses) are less than 7%, those of the other POPs are all greater than 30%. Therefore, the solvents from the rinses of the sampling probe and other components need to be combined with the XAD-2 cartridge for analyses. PCBs and PBDEs are the most abundant pollutants in the stack flue gases of the EAF, and their mass concentrations are one to three orders of magnitude higher than those of the other POPs. With regard to POPs with dioxin-like toxicity, the percentages of the toxicity contributed by PCDD/Fs, PCBs and PBDD/Fs are 87.1%, 11.7% and 1.2%, respectively. The close association that PBDEs and PBDD/Fs show in their concentrations and congener profiles is not present for PCDEs and PCDD/Fs, suggesting the need for further studies of the formation mechanisms of these analogues.
INTRODUCTION
Electric arc furnaces (EAFs) are used to produce carbon and alloy steels, primarily by melting iron and steel scrap, and play an important role in iron/steel making. A typical EAF includes the stages of feeding, smelting, oxidation, reduction and steel discharge. The melting of scrap ferrous material contaminated with varying amounts of chlorinated compounds (i.e., PVC plastics, cutting oils, coatings and paints) provides conditions favorable to the formation of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) (Wang et al., 2003b; Lee et al., 2005). EAFs have been reported as one of the major PCDD/F emission sources (European Commission, 2000; Lee et al., 2004). Together with sinter plants (Wang et al., 2003c), they contribute 99% of the aggregate PCDD/F health risk to residents in densely populated areas of a city in southern Taiwan (Kao et al., 2007).
In addition to PCDD/Fs, EAFs have also been recognized as important emitters of polybrominated diphenyl ethers (PBDEs) (Odabasi et al., 2009; Wang et al., 2010b, 2011a, c) and polybrominated dibenzo-p-dioxins and dibenzofurans (PBDD/Fs) (Wang et al., 2008, 2010c). PBDEs and PBDD/Fs are emitted from EAFs when they are not completely destroyed in the feeding scrap, and are also formed during the combustion process. This suggests that it is necessary to further characterize the other analogues contained in the emissions of EAFs, such as polychlorinated biphenyls (PCBs), polybrominated biphenyls (PBBs) and polychlorinated diphenyl ethers (PCDEs). Except for PCDEs, the pollutants noted above (PCDD/Fs, PBDD/Fs, PCBs, PBBs and PBDEs) all appear on the list of persistent organic pollutants (POPs) in the Stockholm Convention. These POPs are bio-accumulative, toxic, and susceptible to long-range transport (LRT) (Wania, 2003; Wang et al., 2011b). For example, PBDEs can have various harmful effects on humans, such as developmental neurotoxicity,
hepatotoxicity, embryotoxicity, and decreased reproductive success (Chao et al., 2010, 2011; Shy et al., 2012).
PCDEs are a group of halogenated aromatic compounds which are structurally similar to PCBs and PCDFs. The difference in chemical structure between PCDEs and PBDEs, which have been extensively used as brominated flame retardants (BFRs) in a large variety of consumer products, is that the bromine substitutions are all replaced by chlorine. Similar to their analogues, PCDEs, with their lipophilic and persistent properties, tend to bioaccumulate and biomagnify in food webs (Domingo, 2006). Some PCDEs may have biochemical and toxic effects similar to those of PCBs and PCDD/Fs (Becker et al., 1991). Furthermore, PCDEs may be converted to or form toxic PCDD/Fs by photolysis or pyrolysis (Norström et al., 1976; Lindahl et al., 1980; Liu et al., 2010), and have been observed in the flue gas and fly ash of waste incinerators (Kurz and Ballschmiter, 1995; Nakao et al., 2006). Nevertheless, as a product of incomplete combustion, PCDEs have attracted little attention compared to related combustion-originated POPs, such as PCDD/Fs (Wang et al., 2007; Lin et al., 2008; Li et al., 2011), PCBs (Chang et al., 2014), PBDD/Fs and PBDEs (Wang et al., 2010a, c).
The regulated methods for sampling PCDD/Fs in stack flue gases, such as US EPA Modified Method 23 and EN 1948-1,2,3, are designed for manual short-term sampling, with the time for one stack flue gas sample usually ranging from three to six hours. Therefore, the PCDD/F emissions that occur during the sampling time are of little relevance to the overall emission level, especially for EAFs, which have significant variations in the properties of the raw feeding scrap (Lee et al., 2005). Another issue is that some POPs, such as PBDD/Fs and PBBs, have very low concentrations in the stack flue gases of combustion sources, so several stack flue gas samples usually have to be combined into one for the PBDD/F measurement in order to meet the detection limits (Wang and Chang-Chien, 2007; Wang et al., 2010c).
To solve the representativeness issue resulting from spot PCDD/F measurements using manual short-term sampling, which covers only a few hours each year, three long-term sampling systems and two semi-real-time continuous monitoring systems have been developed (Mayer et al., 2000; Lee et al., 2008; Vicaretti et al., 2012). One of these is the Adsorption Method for Sampling of Dioxins and Furans (AMESA), a fully automatic long-term sampling system for industrial processes based on isokinetic flue gas sampling and PCDD/F adsorption on an exchangeable resin-filled cartridge (Lee et al., 2008). This system has been tested, undergoing certification procedures (Mayer et al., 2000; Idczak et al., 2003; Lee et al., 2008; Vicaretti et al., 2012), and is obligatory for incinerator plants in Belgium, France and the Lombardy region of Italy (Rivera-Austrui et al., 2012). The European Committee for Standardization (CEN) is currently developing a standard for long-term sampling of PCDD/Fs and dioxin-like PCBs (prEN 1948-5). While AMESA has been widely installed in incinerators, especially in Europe, and some related studies with regard to PCDD/Fs and PCBs have been reported
(Mayer et al., 2000; Idczak et al., 2003; Lee et al., 2008; Vicaretti et al., 2012), few works have discussed its application to EAFs or to sampling other POPs, such as PCDEs, PBDD/Fs, PBBs and PBDEs.
In this study, AMESA was used for long-term sampling (up to 168 hours) of an EAF to obtain representative POP flue gas samples. In order to have a comprehensive view of the POP emissions from EAFs, six classes of POPs, including three chlorinated ones (PCDD/Fs, PCBs and PCDEs) and three brominated analogues (PBDD/Fs, PBBs and PBDEs), were investigated. Breakthrough tests of the XAD-2 cartridges and rinses of the sampling probe were conducted to evaluate the feasibility of long-term POP sampling of an EAF by AMESA. Furthermore, the concentrations, emission factors and congener profiles of the six POP classes were compared with each other to clarify which is the most influential atmospheric pollutant, as well as the similarities or differences among their formation mechanisms.
Basic Details of the Investigated EAF and AMESA
The EAF investigated in this study is operated intermittently, with scrap (104 tonnes/hr), alloying agents (1.1 tonnes/hr), flux (2.1 tonnes/hr) and coke (1.8 tonnes/hr) as its raw feeding materials, and bag filters as its air pollution control devices (APCDs). It can produce carbon steel at a rate of 100 tonnes/hr.
The operation of AMESA complies with the cooled-probe method of EN 1948. The flue gas is sampled isokinetically using a titanium probe which cools the flue gas down to less than 50°C before it is introduced into the XAD-2 cartridge. Fifty grams of XAD-2 resin are filled into the cartridge, more than is used in the manual short-term sampling method. Instead of using filters to collect particles, quartz wool is placed in front of the XAD-2 cartridge. AMESA can thus collect flue gases for up to four weeks at a time. In this study, an additional XAD-2 cartridge was mounted in series to check for breakthrough of the POPs.
Sampling Procedures
A total of six stack flue gas samples were collected from the EAF using AMESA, with the sampling and analyses performed by an accredited laboratory in Taiwan. Prior to sampling, the XAD-2 resin was spiked with isotopically labeled PCDD/F surrogate standards. The AMESA system was manually started and stopped to be consistent with the batch times of the processes. Each stack gas sample accumulated ~168 hours of sampling time (about a week and a half of elapsed time, since sampling ran only during batch operation). The sampled flue gas volumes were normalized to the dry conditions of 760 mmHg and 273 K, and are denoted as Nm3.
The sampling probe and other components of the sampling train were rinsed after each sampling. The nozzle, probe and probe lines were brushed while rinsing three times with acetone, three times with methylene chloride and then three times with toluene. The rinsates were collected and analyzed for POPs. To ensure that the collected samples were free of contamination, one field blank was also taken during field sampling.
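The normalization of the sampled gas volumes to dry conditions at 760 mmHg and 273 K (Nm3) follows directly from the ideal gas law. Here is a small sketch; the optional water-vapor fraction for the wet-to-dry correction is an illustrative parameter, and the input values in the usage test are invented:

```python
# Convert a measured gas volume to dry normal cubic meters (Nm3), i.e.,
# the volume the dry gas would occupy at 760 mmHg and 273 K (ideal gas law).

def to_normal_m3(v_meas_m3: float, p_mmhg: float, t_kelvin: float,
                 water_fraction: float = 0.0) -> float:
    """Return the dry volume at 760 mmHg and 273 K.

    water_fraction is the volume fraction of water vapor removed in the
    wet-to-dry correction (0.0 if the measured volume is already dry).
    """
    dry = v_meas_m3 * (1.0 - water_fraction)
    return dry * (p_mmhg / 760.0) * (273.0 / t_kelvin)
```

For example, doubling the absolute temperature at constant pressure halves the normalized volume, as expected from V/T = const.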
Analytical Procedures
In contrast to other POPs, there is little data on PCDEs emitted from combustion sources and in the environment, because native and mass-labeled PCDE standards have only recently become commercially available (Domingo, 2006). In this study, each sample was analyzed for seventeen 2,3,7,8-substituted PCDD/F, twelve dioxin-like PCB, six PCDE, twelve 2,3,7,8-substituted PBDD/F, five PBB and fourteen PBDE congeners. Internal standards were spiked into the samples before Soxhlet extraction with toluene, and were used to monitor the extraction and cleanup procedures. After each sample was extracted in a Soxhlet extractor with toluene for 24 hours, the extract was concentrated and then treated with concentrated sulfuric acid, followed by a series of cleanup and fractionation procedures, including a multi-layered silica column, an alumina column and an activated carbon column. During the alumina column cleanup, non-planar PCBs and PBBs were eluted with 15 mL hexane, and elution then continued with 25 mL DCM/hexane (1/24, v/v) for activated carbon column use. The activated carbon column was sequentially eluted with 5 mL toluene/methanol/ethyl acetate/hexane (1/1/2/16, v/v) for PCDEs, PBDEs, planar PCBs and PBBs, followed by 40 mL of toluene for PCDD/Fs and PBDD/Fs. Before instrumental analyses, the planar and non-planar PCB/PBB eluates were combined to represent the PCB and PBB samples. The eluate was concentrated to approximately 1 mL, transferred to a vial, and further concentrated to near dryness using a stream of nitrogen. 10 μL of the standard solution for recovery checking was added to the sample extract immediately prior to injection, to minimize the possibility of loss. The detailed analytical procedures are given in our previous works (Wang et al., 2010a, b, c; Chang et al., 2013; Chang et al., 2014).
Instrumental Analysis A high-resolution gas chromatograph/high-resolution mass spectrometer (HRGC/HRMS) was used for the POP analyses. The HRGC (Hewlett-Packard 6970 Series gas chromatograph, CA) was equipped with a silica capillary column (J&W Scientific, CA) and a splitless injector, while the HRMS (Micromass Autospec Ultima, Manchester, UK) was equipped with a positive electron impact (EI+) source. The selected ion monitoring (SIM) mode was used with a resolving power of 10,000. The electron energy and source temperature were set at 35 eV and 250°C, respectively. The detailed instrumental analysis parameters for PCDD/Fs, PCBs, PBDD/Fs, PBBs and PBDEs are given in our previous works (Wang et al., 2010a, b, c; Chang et al., 2014). The instrumental parameters of the PCDE analyses are the same as those of the PCDD/Fs.
Quality Assurance and Quality Control (QA/QC) Prior to sampling, the XAD-2 resin was spiked with isotopically labeled PCDD/F surrogate standards, including 37Cl4-2,3,7,8-TCDD, 13C12-1,2,3,4,7,8-HxCDD, 13C12-2,3,4,7,8-PeCDF, 13C12-1,2,3,4,7,8-HxCDF and 13C12-1,2,3,4,7,8,9-HpCDF. The recoveries of the precision and recovery (PAR), surrogate, and internal labeled standards of the POPs all met the relevant criteria. Field and laboratory blanks were carried out for each batch of sampling and analyses. The total amounts of PCDD/Fs and PBDEs in the field and laboratory blanks were all < 0.1% and < 0.3%, respectively, of those in the real stack flue gas samples. As for the other POPs, the blanks were all below the detection limits. Furthermore, breakthrough tests of the XAD-2 cartridges and determination of the POP levels accumulated on the surface of the sampling probe were conducted. The detailed results are discussed in the following sections.
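The blank-acceptance check described above amounts to comparing the total amount of each POP in a blank with that in the real sample. A minimal sketch (function names and the threshold parameterization are ours; the text reports blank levels of < 0.1% for PCDD/Fs and < 0.3% for PBDEs):

```python
def blank_fraction(blank_amount, sample_amount):
    """Fraction of the real-sample POP amount found in a field/lab blank."""
    return blank_amount / sample_amount

def blank_acceptable(blank_amount, sample_amount, limit=0.001):
    """True if the blank contributes less than `limit` (default 0.1%) of the sample."""
    return blank_fraction(blank_amount, sample_amount) < limit
```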
Breakthrough Tests Two XAD-2 cartridges were mounted in series and analyzed individually to evaluate the POP breakthrough during the long-term sampling of the EAF by AMESA. The breakthrough of POPs is calculated as follows:

Breakthrough (%) = B / (A + B) × 100

where A is the mass (toxicity) adsorbed by the first XAD-2 cartridge, and B is the mass (toxicity) adsorbed by the second XAD-2 cartridge. The results for the chlorinated and brominated POP breakthroughs are listed in Tables 1 and 2, respectively. Breakthroughs of less than 3% are found for the mass and toxicity of PCDD/Fs, PCBs, PCDEs and PBBs. The breakthroughs are lowest for PCBs (only 0.0533% for mass and 0.236% for toxicity). However, the breakthroughs of PBDD/Fs and PBDEs are much larger: 43.4% and 14.9% for PBDD/F mass and toxicity, and 8.91% for PBDE mass. As for the individual congeners of these POPs, the breakthrough increased significantly with the number of halogen substituents for PCDD/Fs, PCBs, PBDD/Fs and PBDEs. As the halogen numbers of POPs increase, their boiling points rise and their vapor pressures drop (Gajewicz et al., 2010). To clarify the relation between breakthrough and halogen number, the breakthroughs and the subcooled liquid vapor pressures (P_L) of these POPs, an important property affecting the gas-particle partitioning of POPs in the environment, were analyzed by Pearson correlation analysis. The logarithms of the breakthroughs of these POPs are significantly and negatively correlated with the logarithm of P_L (r = -0.904, p < 0.0001) (see Fig. 1). This strong negative correlation implies that the highly halogen-substituted POPs with lower P_L tend to partition to the particulate phase, resulting in their greater breakthrough in the AMESA. This is because AMESA uses quartz wool instead of filters to collect particles, so a higher percentage of the smaller particles emitted in the stack flue gases pass through the first cartridge to the second one.
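The breakthrough definition and the log-log correlation above can be reproduced with a short script (the Pearson helper is a plain stdlib implementation; the actual data values would come from Tables 1-2 and the P_L estimates):

```python
import math

def breakthrough_percent(a_first, b_second):
    """Breakthrough (%) = B / (A + B) x 100 for the two XAD-2 cartridges in series."""
    return 100.0 * b_second / (a_first + b_second)

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)
```

In the study, the correlation was computed between log10(breakthrough) and log10(P_L), giving r = -0.904.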
Rinse of the Sampling Probe The operation of AMESA complies with the cooled probe method of EN-1948, and thus the particulate- and gaseous-phase POPs in the flue gas adhere and condense onto the surface of the sampling probe and other components of the sample train. To evaluate the accumulated POP levels, four rinses of the sampling probe and other components of the sample train were performed after each sampling, and each rinse was then individually analyzed for POPs. The percentages of the POPs in each rinse relative to the corresponding total POPs collected during long-term sampling (cartridges + rinses) are listed in Table S1 of the supporting information. The amounts of POPs in the first rinse of the sampling probe and other components of the sample train are one to three orders of magnitude higher than those in the second. Except for the first rinse, the POPs in each subsequent rinse of the sampling probe and other components contributed less than 3% of the total POPs. The results show that the rinse procedures adopted in this study are effective at removing POPs from the surface of the sampling probe and other components after long-term sampling by AMESA. Table 3 lists the percentages of the POPs in all four rinses combined relative to the total POPs collected during long-term sampling (cartridges + rinses). The percentages of the rinses based on total mass are 58.0% for PCDD/Fs, 2.6% for PCBs, 2.9% for PCDEs, 55.1% for PBDD/Fs, 31.6% for PBBs and 32.1% for PBDEs, while those based on total toxicity are 40.8% for PCDD/Fs, 7.0% for PCBs and 49.8% for PBDD/Fs. Except for PCBs and PCDEs, whose total percentages are less than 7%, those of the other POPs are all greater than 30%.
A significant trend seen in these figures is that the percentages increase with the number of halogen substituents for these six POPs. This is attributed to the fact that the highly halogen-substituted POPs in the gaseous phase condense onto the surface of the sampling probe more easily than the lower halogen-substituted ones. Furthermore, the particles that adhere to the surface of the sampling probe contain more of the highly halogen-substituted POPs. Therefore, it is crucial to perform the cleaning procedures carefully, and the solvents from the rinses of the sampling probe and other components need to be combined with the XAD-2 cartridge extracts for analysis to obtain the real POP concentrations in the stack flue gases. Although performing rinses can remove most of the POPs adhering to the surface of the sampling probe and components, residues might cause memory effects if the next sampling is a short-term one, because the difference between the sampled flue gas volumes could reach about 200-fold.
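The percentages in Table 3 are simply the rinse totals divided by everything collected (cartridges plus rinses). A sketch with illustrative numbers (the function name is ours):

```python
def rinse_contribution_percent(rinse_amounts, cartridge_amount):
    """Share (%) of the total collected POPs found in the probe/train rinses."""
    rinse_total = sum(rinse_amounts)
    return 100.0 * rinse_total / (cartridge_amount + rinse_total)
```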
Concentrations and Congener Profiles of POPs in the Stack Flue Gases The POP concentrations in the stack flue gases of the EAF are listed in Table 4. The mass concentrations are 0.439 ng/Nm3 for PCDD/Fs, 7.53 ng/Nm3 for PCBs, 0.0115 ng/Nm3 for PCDEs, 0.163 ng/Nm3 for PBDD/Fs, 0.145 ng/Nm3 for PBBs and 8.03 ng/Nm3 for PBDEs. The PCBs and PBDEs are the most abundant pollutants, and their mass concentrations are one to three orders of magnitude higher than those of the other POPs. Regarding POPs with dioxin-like toxicity, the toxicity concentrations are 0.0601 ng I-TEQ/Nm3 for PCDD/Fs, 0.00804 ng WHO-TEQ/Nm3 for PCBs and 0.000869 ng TEQ/Nm3 for PBDD/Fs, and the sum of the dioxin-like toxicity concentrations of these three POPs is 0.0690 ng TEQ/Nm3. The toxicities contributed by PCDD/Fs, PCBs and PBDD/Fs are 87.1%, 11.7% and 1.2%, respectively. Table 5 lists the POP concentrations in the stack flue gases/exhausts of various emission sources, such as incinerators (Kim et al., 2004; Wang et al., 2010a), sinter plants (Kuo et al., 2012; Wang et al., 2003c), EAFs (Lee et al., 2005; Wang et al., 2010c), power plants (Dyke et al., 2003; Hutson et al., 2009) and diesel engines (Wang et al., 2010b). As yet, no data regarding PCDEs and PBBs in flue gases emitted from combustion facilities have been reported in the literature. Compared to the results of previous studies on EAFs (Wang et al., 2010b, c; Lee et al., 2005), the concentrations of PCDD/Fs, PBDD/Fs and PBDEs obtained in this work are lower, but still within one order of magnitude. Generally, the PCDD/F and PCB concentrations in the flue gases of stationary sources are higher than those in the exhausts of diesel engines or vehicles, although diesel engines seem to emit similar or even higher PBDD/F and PBDE concentrations (Wang et al., 2010b) than stationary sources. The congener profiles of these POPs in the stack flue gases of the EAF are shown in Fig. 2.
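Toxicity concentrations such as 0.0601 ng I-TEQ/Nm3 are obtained by weighting each congener concentration with its toxic equivalency factor (TEF) and summing. A minimal sketch (the two TEF values shown are the standard I-TEF assignments for 2,3,7,8-TCDD and OCDD; the concentrations in the example are illustrative, not measured values):

```python
def teq(congener_concs_ng_nm3, tefs):
    """Toxic-equivalent concentration: sum over congeners of concentration x TEF."""
    return sum(conc * tefs[name] for name, conc in congener_concs_ng_nm3.items())

# Illustrative subset of I-TEFs: 2,3,7,8-TCDD (1.0) and OCDD (0.001)
I_TEFS = {"2,3,7,8-TCDD": 1.0, "OCDD": 0.001}
```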
The congener profiles of PCDD/Fs, PBDD/Fs and PBDEs obtained from this study are consistent with the results of previous works (Hofstadler et al., 2000; Wang et al., 2003c; Lee et al., 2004, 2005; Wang et al., 2010c). The PBDE congeners in the stack flue gases were dominated by the low- to medium-brominated congeners, namely BDE-28, -47, -100 and -99. These abundant low- to medium-brominated congeners may be attributed to the thermal desorption of commercial penta-BDE mixtures, which were impurities in the feeding scrap (Wang et al., 2010c). Among the highly brominated congeners, BDE-209 was the most abundant. BDE-209 could be formed through combustion processes, similar to what occurs in sinter plants (Wang et al., 2010c), and could also originate from undestroyed BDE-209 in the feeding scrap. For the PCBs, the dominant congeners in the stack flue gases were, in order, PCB-118, -105 and -77, showing that tetra- and penta-CBs were the major PCB homologues. This resembles the results found for the PCDF homologues. Although PBBs, like PCBs, were also dominated by the lower halogenated congeners, their bromination pattern was different from that seen for PBDD/Fs.
The PCDE congener profile is dominated by CDE-28 and -99, the lower chlorinated congeners. The homologue profile clearly shows that the formation of PCDEs favors the lower chlorinated congeners, while the highly chlorinated ones remain minor components. Similar phenomena occurred for the PCDFs of the EAF, which had higher fractions of the lower chlorinated congeners compared to other combustion sources, such as vehicles (Chang et al., 2014), incinerators (Wang and Chang-Chien, 2007) and crematories (Wang et al., 2003a). We speculate that Fe2O3, which should be abundant in the fly ashes of EAFs, may shift the formation of PCDEs towards the lower chlorinated congeners (Liu et al., 2013). The close association between PBDEs and PBDD/Fs does not exist for PCDEs and PCDD/Fs. Furthermore, the formation mechanisms of PBDEs and PCDEs, as well as those of PBDD/Fs and PCDD/Fs, are not very similar, suggesting the need for further studies on the relationships among the formation mechanisms of these analogues. The congener profiles of PBDD/Fs and PBDEs, especially PBDD/Fs, were more dominated by the highly halogen-substituted congeners than those of the other POPs, which resulted in their breakthroughs being much larger.
Emission Factors of the POPs Table 6 lists the POP emission factors of EAFs obtained from this study and from elsewhere. The emission factors are calculated based on the total weight of steel produced or of the feedstock, including scraps, alloying agents, flux and coke. For the same measurement, the emission factors based on the product will be 10%-15% higher than those based on the feedstock, because the transformation rate of feedstock to produced steel is commonly around 90% or lower. The emission factors of the investigated EAF in this study are 0.177 µg I-TEQ/tonne-feedstock for PCDD/Fs, 0.0232 µg WHO-TEQ/tonne-feedstock for PCBs, 0.0343 µg/tonne-feedstock for PCDEs, 0.00247 µg TEQ/tonne-feedstock for PBDD/Fs, 0.416 µg/tonne-feedstock for PBBs and 24.0 µg/tonne-feedstock for PBDEs, which are all one order of magnitude lower than those reported in other studies. The much longer sampling time of AMESA should capture more high-emission events, which would tend to increase the measured emission factors. Therefore, we believe that the lower emission factors obtained in this study should be attributed to the influence of the feeding materials and the operating conditions of the EAF (Lee et al., 2005; Wang et al., 2010a, c), not to the sampling by AMESA.
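The feedstock-based and product-based emission factors differ only by the feedstock-to-steel yield. A sketch of the conversion (function names are ours; the ~90% yield is from the text):

```python
def emission_factor_feedstock(total_emission_ug, feedstock_tonnes):
    """Emission factor per tonne of feedstock (scrap, alloying agents, flux, coke)."""
    return total_emission_ug / feedstock_tonnes

def emission_factor_product(ef_feedstock, yield_fraction=0.90):
    """Product-based EF; higher than the feedstock-based one because only
    ~90% (or less) of the feedstock ends up as produced steel."""
    return ef_feedstock / yield_fraction
```

With a 90% yield the product-based value comes out about 11% higher, consistent with the 10%-15% range quoted above.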
This study presents a water purification plant that uses the waste cake from the oil extraction process of Moringa oleifera seeds. The particularity of this purification plant is that it should be autonomous, to work in isolated areas. To this end, the design relies on solar panels and batteries controlled by a Supervisory Control And Data Acquisition (SCADA) system. The main objective of this study is the design and automation of the purification plant so that it can be operated either manually or remotely by means of a web server and a microcontroller in charge of data collection and of processing orders from and to the web platform. To reduce the costs caused by the development and implementation of hardware and software, this project is built on open source systems. Additionally, the plant includes an Energy Management System that optimizes the energy consumption of the control system and actuators. This system is designed in such a way that it can be used independently in isolated mode or connected to the grid in regions where local regulations allow the connection of energy storage systems to the grid.
Introduction Access to drinkable water has been considered a Human Right by the United Nations General Assembly since 2010. In spite of the efforts and cooperation projects related to drinkable water and sanitation access, there are still 663 million people without access to drinkable water according to the World Health Organization [1]. The differences in the quantity and quality of the water supply, especially in impoverished countries, are therefore a main cause of diseases due to intestinal parasites, with a significant effect on the infant population [2]. Additionally, due to regulations regarding the environmental impact in farming and stock-breeding zones, the agro-industry should introduce treatment systems to treat water before returning it to rivers or water basins, in order to eliminate nitrates and fertilizers, which are highly polluting [3]. It is in these scenarios that the necessity to implement compact, portable and autonomous water treatment plants arises. Given the difficulty of reaching these isolated water treatment plants in person, it is desirable to be able to control and supervise their functioning and working state remotely. There are multiple studies related to water treatment processes, such as those presented in the recent comparison by Lopez-Grimau et al. [4], which shows how some studies consider the development of pilot plants to treat urban wastewater while others focus on industrial wastewater. The novelty of the system presented in this study, in comparison to other pilot water treatment plants, is the possibility of being used in isolated locations without access to electricity. In fact, this study presents the working details of the pilot water treatment plant that showed the best environmental results in a previous work [5].
M.M Rafique et al. / Desalination and Water Treatment (2017) 1-8
Considering that the plant should work in isolated places, where human intervention is more difficult, the automation of the plant is justified. However, automation needs control. In this case, automation is done using a Supervisory Control And Data Acquisition (SCADA) system that equally allows the control of the plant in manual or remote mode. Nowadays, software to optimize water and energy resources is used in a wide range of productive sectors [4]. Particularly, in the sector of automated water purification plants, the control of water and energy consumption, together with the monitoring and activation of actuators, is done by SCADA [6]. SCADA plays an important role in industrial communication in real time. These systems combine hardware and software such as the Main Terminal Unit (MTU), Remote Terminal Units (RTU), actuators, sensors and Human Machine Interfaces (HMI) [7]. Moreover, SCADA systems can be designed and implemented in such a way that they can work either with expensive but function-oriented proprietary software or with low-cost open source programming software [8]. Generally, the automation takes advantage of Programmable Logic Controllers (PLC) [9] to run the supervision and control of sensors and actuators, connected through an OPC server to communicate between them [10]. Since it may be installed in isolated locations, the presented prototype should be capable of working powered by distributed electricity generation systems in cases where no electricity grid is within reach, which increases the cost of the solution. This paper also presents an interesting alternative of batteries to power the water treatment plant in such cases. In summary, the proposed system stands out because it is compact, economic, energetically sustainable and reliable, which allows the implementation of the water purification plant almost anywhere.
Methodology The design of the plant has several requirements to follow: First, the plant should be compact; second, the plant should be inexpensive, which is why it uses free software; and third, the plant should be self-sufficient, so the energy consumption of all the control, sensors and actuators has to be measured to correctly dimension the number of solar panels and battery packs needed to store energy. The presented water purification plant follows the basics of Patent P201430600, which takes advantage of gravity to operate. This base plant is totally mechanical, using the differences in height and volume of three containers (water to treat, dissolution, and treatment or agitation containers) to dose the coagulants/flocculants by means of a Venturi tube and to carry out the fast and slow agitation phases that are necessary to properly eliminate the solids in suspension and to disinfect the water [2]. This study uses a similar compact concept, adding an automation system, which entails the need for electric power. Electricity could be obtained by connecting the water treatment plant to the electricity grid, if available, or by using electricity generation systems such as solar panels and lithium-ion batteries, which is the case of the designed prototype presented in this study. Notice that the batteries used in this prototype come from a waste product, as they are taken from dismantled electric vehicles (EV). In fact, EV batteries are considered no longer useful for traction purposes when they have lost 20% of their capacity [11]. When this happens, these batteries are taken out of the car and they are normally recycled. In this case, though, it is considered that 80% capacity is good enough to fulfill the plant's needs. By incorporating re-used EV batteries, the plant incorporates compact batteries, as Li-ion batteries have a higher energy density than all other battery types at the moment [12], and reduces the acquisition costs [13].
The resulting prototype is shown in Fig. 1, and has a tarpaulin to protect it from the rain (upper-right image). Roughly, the system works as follows: In container number 1 (Fig. 1) the system stores the water to treat. In a preliminary stage, the tests were generally done by mixing kaolin into drinkable water, but several synthetic waters were also satisfactorily tested. In the following container (2) there is a beater that shreds the M. oleifera waste cake to prepare the coagulant/flocculant, mixing the M. oleifera with water that is dosed through a Venturi tube into the water to treat. The residual cake used comes from a process that presses M. oleifera seeds to obtain oil. The process to obtain the oil is simple, using standard presses with holes in the lower part that allow the oil to flow while capturing the rest, although other continuous presses could also be used [14]. This residual cake may contain large fragments of seeds; in consequence, the prototype incorporates the aforementioned beater. As is visible in Fig. 1, this container includes a mixer to prepare the coagulant based on the Moringa oleifera residual cake, but it also has a 0.45 µm glass fiber filter. This is done in the central tanks, circled in orange in Fig. 1, with a coagulant dosage of 100 mg/L for turbidities between 30 and 150 NTU. For better results, the system may add 0.25 M sodium chloride (NaCl) to the M. oleifera coagulant in tank number 3. If another coagulant is to be used, the system should inject the predefined dose of that coagulant in this container. This mix is pumped (manually or electrically) into the elevated tank (number 5 in Fig.
1) where the water goes through the processes of fast (150 rpm) and slow (20 rpm) agitation for 10 and 30 min, respectively. After that, there is a 1 h resting period to facilitate natural sedimentation. Containers 4 and 5 are dimensioned so that the slow agitation occurs naturally in the manual configuration, thanks to the 90º entrance of the tube, which has a length of 1 m and a diameter of 20 cm. Treated water is stored in container 6. When purged, all containers send the residual mud to container 7. The purge operation is defined to be done once every 3 treatment batches. The different sizes and positioning of the containers respond to the possibility of using the plant in a manual or an electrically powered mode. In this prototype plant there are two optional tanks: the first one contains water to treat and the last one (the leftmost in Fig. 1) accumulates sludge. This process is better explained in the results section. With the presented configuration, the prototype is able to work in totally isolated areas using the manual configuration (having no instant control of the purification quality). In electrically isolated areas the system could be powered by solar panels and a battery stack, using wireless systems to communicate remotely with the system controller (such as the ones used in mobile phones). Finally, in locations where it can be connected to the electricity grid, there is no need to install the solar panels and batteries. The study continues with the implementation and programming of the SCADA that manages and controls the water purification plant. A representation of the piping and instrumentation diagram (P&ID) is visible in Fig. 2. This diagram shows how water is pumped and how it is transferred from one container to another by pipes and valves. Notice that the coagulant/flocculant, consisting of the residual cake from the oil extraction of M. oleifera seeds, is ground and then mixed with water. To dose and program the agitation times and pauses so that the purification and disinfection processes are done correctly, the system uses a table relating these parameters to the degree of turbidity of the water to treat. As shown in Fig. 2, the system counts on sixteen valves, three electric motors (one for the beater that shreds the residual cake, another to prepare the coagulant and the last one for the fast and slow agitation), five ultrasonic sensors to control the water level in the containers and one turbidity sensor (although another one is expected to be incorporated in the near future). All these elements are also listed in Table 1, together with a description of their use and the PIN used on the Arduino Mega board (which is presented in the results section). All these variables are essential for the correct use of the purification plant in the remote mode. The control of the water treatment plant relies on the low-cost microcontroller Atmega 2560, which is powerful enough and easy to integrate into SCADA systems following industrial communication protocols [15]. Additionally, the Atmega 2560 is open source hardware installed on an Arduino Mega board. For communications between the automated water treatment plant and the user, the network counts on a master (the web platform), in charge of receiving and sending orders from and to the screen controller. As depicted in Fig. 3, the communication between these two devices can be done over a LAN or WAN network using the WebSocket internet communication protocol detailed in document RFC 6455 [16]. The control interface (HMI) and the data storage follow a LAMP (Linux, Apache, MySQL, PHP) architecture due to the collection of open source elements that it can combine to create all types of web applications [17]. In our case, the web server is Apache, the database is MySQL managed by Phpmyadmin, and for the rest of the control applications the system uses Java, due to its greater robustness, stability [18] and security compared to PHP [19].
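The dosing table mentioned above can be implemented as a simple turbidity-to-dose lookup. A sketch (only the 100 mg/L dose for 30-150 NTU is stated for this plant; the lower band and its 50 mg/L dose are assumptions based on the 25/50/100 mg/L doses tested in the cited literature):

```python
def coagulant_dose_mg_per_l(turbidity_ntu):
    """Return an M. oleifera coagulant dose for a measured turbidity (illustrative)."""
    if turbidity_ntu <= 0:
        raise ValueError("turbidity must be positive")
    if turbidity_ntu < 30:
        return 50.0   # assumed lower band
    if turbidity_ntu <= 150:
        return 100.0  # dose stated in the study for 30-150 NTU
    raise ValueError("turbidity outside the tested 30-150 NTU range")
```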
The web platform is configured on a server in the TR5 building of the Terrassa Campus of the Universitat Politècnica de Catalunya. All the aforementioned structure is implemented between this server and the water purification pilot plant installed on the rooftop of this same building. Finally, the HMI interface design was done in the Netbeans integrated development environment, which is compatible with Java. Netbeans is a robust and user-friendly environment that supports several programming languages and web applications. To dimension the electricity generation system, the study measured the consumption and power of the SCADA, sensors, motors and actuators of the water purification plant. For the energy storage, the study takes into account that the system should be able to work for 2 days even if there is no sun, in the cases where the plant is located in isolated places. However, since the actual implementation of the water treatment plant is on the rooftop of the R5 building of the Terrassa Campus of the UPC, there is no need to install that amount of solar panels or storage capacity, as the treatment plant may take energy from the electricity grid if necessary. In consequence, an energy management system (EMS) was specially designed to make the most profitable use of energy. The fact that the prototype re-uses EV battery modules permitted a reduction in costs and provides additional value to a waste product from the automotive sector that still has 80% of its initial capacity [20]. The last part of the study presents a comparison of the raw and treated water characteristics to evaluate the efficiency of the proposed process, based on previous works that studied the life cycle assessment of this methodology in Burkina Faso using two coagulants: the traditional aluminum sulfate (Al2(SO4)3) and M. oleifera.
Results A representative scheme of the water treatment process, including all the control and automation elements, is visible in Fig. 4. As can be appreciated, the minimized system counts on four tanks. Tanks 1 and 2 are needed to shred the M. oleifera and to prepare the coagulant, respectively. Then, by means of a Venturi tube, the coagulant/flocculant is dosed naturally when the water to treat passes through a transversal tube. This mix enters the third tank, where the processes of fast and slow agitation (15 min for the fast agitation and 1 h in slow agitation mode) take place. During this process, the coagulant/flocculant catches the suspended solids, which fall to the bottom of the tank by decantation. The fourth tank is conceived to store all the treated, drinkable water. It can also be appreciated that there is a purge system in all the tanks to clean them. The SCADA relies on an Arduino Mega 2560 together with an Ethernet Wiznet W5100 board, whose characteristics are presented in Table 2.
Both boards were assembled to build the controller. These boards and the developed algorithm permit the control of all the sensors and actuators of the system. The Ethernet board assigns an IP address to the Arduino card and configures it with the TCP/IP protocol to allow remote access through an intranet or over the internet. All the other sensors in the treatment plant are described below:
• Ultrasonic module HC-SR04: distance precision 2-450 cm, measurement angle <15º and working voltage of 5 V
• Turbidity meter: analog 0-5 V output signal
• Limit switches: incorporated into the electro-valves.
Regarding the actuators, the system includes:
• Water pump Calpeda MMM 1/AE: nominal power of 0.37 kW (0.5 HP), a flow of 1-4.2 m3/h and a head range of 16.3-22 m.
• Electric beater Novital: nominal power 0.72 kW, diameters 2.5, 4, 6 and 8 mm.
• Mixer Rubimix9: nominal power of 1.2 kW, two velocities: 620 rpm and 820 rpm.
• Electro-valves: voltage 12 V DC and 1.8 A current.
Some laboratory tests were performed to evaluate the automation system. In the first place, the system was built substituting the sensors with power regulators, LEDs and push buttons, as Fig. 5 (left) shows. This process was useful to validate the electric schema (Fig. 5, right). Then, to validate and verify the program, all sensors and actuators were added one by one, independently, until it was clear that the system was able to control all of them correctly. More details of the plant are accessible in [21]. Finally, tests were done on the definitive circuit and the whole device was installed in the prototype water treatment plant. To design the HMI, a client, a server and a controller were defined. The controller receives the state values from sensors and actuators through the messages from the client (the purification plant in this case). The server sends HTML code to present results graphically and stores the operations in a database. Communications through the web socket between the server and the controller of the plant were verified prior to the final implementation. The system refreshes the parameters that rule the plant every second. In fact, shorter refresh periods entailed problems in the reception and sending of messages. Thus, as the water treatment process does not require fast responses, this delay in time response is acceptable. In the actual implementation on the rooftop of the R5 building at the university, when the water treatment plant works in automatic mode, it is able to take energy from the different power sources. These sources are: solar panels, second-life EV batteries and the electricity grid that powers the entire building. The EMS chooses the power source by calculating the best economic option, taking into account the building and plant consumption together with the electricity tariff, as graphically depicted in Fig. 6. The water treatment plant is capable of treating 1000 l of water in about 2 h. Thus, the energy consumption of the plant, taking into account all the elements described and that it should work 16 h to produce 8000 l of drinkable water, is close to 11.178 kWh per day.
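The EMS decision described above (choosing the best economic option among solar, second-life batteries and the grid) can be sketched as a simple priority rule. All names and thresholds here are illustrative assumptions, not the actual EMS algorithm:

```python
def choose_power_source(solar_kw, load_kw, battery_soc_kwh, grid_price_eur_kwh,
                        min_reserve_kwh=2.0, expensive_tariff=0.15):
    """Pick a power source: free solar first, then battery when the grid tariff
    is high and the reserve allows, otherwise the grid."""
    if solar_kw >= load_kw:
        return "solar"
    if grid_price_eur_kwh > expensive_tariff and battery_soc_kwh > min_reserve_kwh:
        return "battery"
    return "grid"
```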
In such circumstances, considering the characteristics of the installed solar panels (210 W/m2), and that the plant should be able to work for almost 2 days without any additional power supply in the case of being installed in isolated locations, the plant needs an area of 40 m2 of solar panels and a battery capacity of near 22 kWh, which is almost the average battery capacity of current commercial EVs. Finally, a summary of the raw and treated water characteristics, presented in further detail in [23], is described to verify the goodness of the process.
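The battery figure follows directly from the stated daily consumption and the 2-day autonomy requirement; the panel-area formula below is only a lower-bound estimate under an assumed number of peak-sun hours (the 40 m2 quoted in the text presumably includes additional margins for battery recharge and panel derating):

```python
def size_storage_kwh(daily_load_kwh=11.178, autonomy_days=2):
    """Battery capacity needed to ride through `autonomy_days` without sun."""
    return daily_load_kwh * autonomy_days

def min_panel_area_m2(daily_load_kwh=11.178, panel_w_per_m2=210.0, peak_sun_hours=4.0):
    """Lower bound on PV area: daily energy / (panel power density x sun hours).
    peak_sun_hours is an assumption; real sizing adds recharge/derating margins."""
    return daily_load_kwh / (panel_w_per_m2 / 1000.0 * peak_sun_hours)
```

With the defaults, the storage comes to about 22.4 kWh, matching the near-22 kWh battery quoted in the text.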
The dosage of coagulant applied was based on the studies carried out by Pritchard et al. [24] and by Bhuptawat et al. [25], which comprised doses of 25 mg/L, 50 mg/L and 100 mg/L for turbidities within the range of 30 to 150 NTU. These studies demonstrated that concentrations of 100 mg/L reached 95% turbidity removal, which was the initial concentration selected to carry out the tests (Table 3). As is apparent in Table 3, the lower concentrations of M. oleifera and alum did not comply with the international standards. In consequence, to increase the effectiveness of M. oleifera as a natural coagulant, the addition of 0.25 M sodium chloride (NaCl) increased the turbidity removal dramatically (going from 40 NTU to 3.3 NTU). In addition, it should be mentioned that the pH of the water was not altered by M. oleifera, and its impact on conductivity is minimal, remaining within the international standards [8]. Moreover, Fig. 7 compares the visual aspect of the raw water used in the tests against the treated water. As can be observed, the process clearly improves the turbidity of the sample. The chemical analysis done afterwards confirmed the good response of the process. As mentioned above, the capacity of M. oleifera as a coagulant/flocculant is clearly boosted when mixing it with 0.25 M sodium chloride (NaCl). From the experience of developing this prototype, there are some aspects that should be considered for its future replicability. For example, the addition of a UPS (uninterruptible power supply) is recommended to avoid the loss of data or control of the plant in case of a shutdown. Secondly, communication messages through the web socket are limited to 120 characters, which is too low for some applications, which might need auxiliary systems to avoid collapse and connection losses.
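The turbidity improvement quoted above corresponds to a removal efficiency that can be computed with a one-line sketch (the function name is ours):

```python
def turbidity_removal_percent(raw_ntu, treated_ntu):
    """Percentage of turbidity removed by the treatment."""
    return 100.0 * (raw_ntu - treated_ntu) / raw_ntu
```

With the reported 40 NTU raw and 3.3 NTU treated values, this gives about 91.8% removal.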
10
97718750
Introduction Purines are a class of heterocyclic compounds which play an important role in many biological processes [1-5]. The most important natural occurrence of purines is in the nucleotides and nucleic acids, compounds which perform some of the most crucial functions in fundamental metabolism. The chemotherapeutic uses of purines and purine analogues have prompted tremendous efforts towards their synthesis, both in academia and in the pharmaceutical industry [6-9]. As the purine ring system is a fusion of two aromatic heterocycles, pyrimidine and imidazole, a logical starting point for ring synthesis is an appropriately substituted pyrimidine or imidazole from which the second ring can be constructed by a cyclization process [10,11].
2
97718750
Experimental All solvents were purified and dried using established procedures. The 1H NMR spectra were recorded on Hitachi-Perkin-Elmer R24B (60 MHz) or Bruker XL 300 (500 MHz) instruments (with J-values given in Hz), 13C NMR spectra on either a Bruker WP 80 or XL300 instrument, and IR spectra on a Shimadzu IR-435 spectrophotometer. Mass spectra were recorded on a Kratos Concept instrument. Melting points were measured on an Electrothermal digital melting point apparatus and are uncorrected.
3
97718750
General procedure for the preparation of 1-aryl-4-cyano-5-[(ethoxymethylene)amino]imidazoles (4a-b) A mixture of 5-amino-1-aryl-4-cyanoimidazole (0.5 g), triethyl orthoformate (12.0 molar equivalents), and acetic anhydride (6.0 molar equivalents) was heated gently at 70-80 °C under an argon atmosphere for several hours. Once TLC confirmed complete consumption of the starting material, the resulting yellow-brown solution was evaporated under vacuum to give a residue which, upon treatment with a mixture of dry diethyl ether/hexane (1:1), afforded a precipitate. This was filtered off, washed with the same mixture and dried under vacuum to give the required imidates 4a and 4b as solids in 81-87% yield. The products were recrystallised from a mixture of dry diethyl ether and hexane (1:1). In the infrared spectra, the cyano and C=N stretching vibrations were observed in the ranges of 2210-2220 and 1630-1650 cm-1, respectively.

The 1H NMR spectra of the isolated ethoxyimidates (4a-b) showed the H-2 proton of the imidazole ring in the range δ 7.26-8.28 ppm. The H-7 proton appeared in the range δ 8.28-8.66 ppm. The CH2 and CH3 of the ethoxy group had the expected quartet and triplet patterns in the regions δ 4.31-4.5 and δ 1.32-1.55 ppm, respectively. The other bands were in agreement with the expected structures. The 13C NMR spectra of the imidazoles had the expected number of bands, with the C-2 carbon of the imidazole ring in the region δ 135.4-140.9 ppm, the C-7 carbon at δ 159.9-165.5 ppm and the C-4 carbon within the region δ 98.9-103.0 ppm.
The imidates (4a-b) were converted to the corresponding 9-phenyl-9H-purin-6-amines (5a-b) by treatment with ammonia in the minimum amount of methanol. The reaction was carried out under an argon atmosphere at room temperature. During the first 20 minutes a white precipitate started to form. After 2-3 h TLC showed no starting material, and filtration of the reaction mixture gave the purines as powders in 67-83% yield. The purines (5a-b) were fully characterized by microanalysis and spectroscopic methods, and the elemental analyses and mass spectra of the isolated 9-phenyl-9H-purin-6-amines (5a-b) were satisfactory. In the infrared spectra, the NH stretching vibrations were observed as 2-3 bands in the range 3300-3150 cm-1 and the C=N absorption band in the range 1650-1660 cm-1. The NH2 protons were observed in the range δ 5.70-5.93 ppm, the H-2 proton of the purine system appeared in the region δ 8.12-8.26 ppm and the H-8 proton was seen as a singlet in the range δ 8.08-8.22 ppm. The 13C NMR spectra of the compounds (5a-b) had the expected number of peaks. The C-8 carbon of the imidazole ring appeared in the region 143.5-144.0 ppm, and the C-2 and C-6 carbons of the purine system appeared at δ 152.0-152.6 and 158.2-158.4 ppm, respectively.
4
234577814
Ciprofloxacin (Cipro) is a broad-spectrum antibiotic used against both Gram (+) and Gram (−) bacteria. Its biological half-life is very short (4-5 h) and its conventional administration forms present a limited absorption efficiency. For this reason, the aim of this work was to study other administration strategies based on topical films. Sodium alginate (SA), a naturally occurring polymer, and a recombinant elastin-like polymer (rELP) produced by advanced genetic engineering techniques were evaluated as potential carrier systems. The films were obtained by the casting technique, adding the Cipro by direct dispersion in the polymer solution using 16.6% w/w rELP or 1.5% w/w SA. The in vitro release assays were performed at 37 °C in physiological solution with orbital shaking at 90 rpm. Cipro concentration was determined by ultraviolet (UV) spectrophotometry at 276 nm. The release profiles were analyzed and adjusted using the Lumped model developed and validated by our research group. Parameters of pharmaceutical interest were calculated and compared for both polymer-Cipro systems: the time required to reach 80% of the drug dissolved (t80%), the dissolution efficiency (DE) and the mean dissolution time (MDT). The SA-Cipro platform released 80% of the drug in 35 min, while this parameter was 209 min for the rELP-Cipro system. The MDT80% was 8.9 and 53 min for the SA-Cipro and rELP-Cipro systems, respectively, while the DE, evaluated at 200 min, was 66.6 and 58.8 for each platform, respectively. These parameter values demonstrate that the rELP films were able to modulate the drug release rate, while release from the SA films can be considered immediate. Therefore, both systems are promising strategies for the topical application of Cipro.
1
234577814
Introduction Ciprofloxacin (Cipro) is a pale yellow crystalline powder. It corresponds to the chemical structure of 1-cyclopropyl-6-fluoro-1,4-dihydro-4-oxo-7-(1-piperazinyl)-quinoline-3-carboxylic acid (Figure 1), which belongs to the family of quinolones. The central structural unit is a quinolone ring with a fluorine atom in position 6, a piperazine group in position 7, a cyclopropyl ring in position 1 and a carboxyl group in position 3 [1]. The Cipro molecule has amphoteric characteristics due to the presence of the carboxyl and amino groups (Figure 2). It is well known that Cipro can form complexes with certain multivalent cations, according to the degree of ionization and the prevalence of certain chemical variants present in the aqueous solution [2]. The antibacterial effects of Cipro are due to the inhibition of bacterial topoisomerase IV and DNA gyrase, preventing the replication and transcription of bacterial DNA [3]. Cipro belongs to the group of synthetic fluoroquinolone antibiotics with broad antimicrobial activity. It is commonly used for urinary tract and intestinal infections, among others, but it has a very short biological half-life of approximately 4 to 5 h [4]. On the other hand, the limited absorption efficiency of the drug in conventional form prompted the development of new delivery systems. Among them, we can mention transdermal systems, which have numerous advantages such as application at a specific site, painless application, less frequent replacement, and greater dosage flexibility. Among the materials used to prepare these systems, polymers stand out, particularly those that are biodegradable, biocompatible and from natural sources, such as chitosan, sodium alginate (SA) and cellulose. SA is an anionic polysaccharide derived mainly from brown algae and bacteria. This polymer represents an outstanding class of materials for its biocompatibility, biodegradability, low toxicity and low relative cost [5].
On the other hand, recombinant elastin-like polymers (rELPs) are self-gelling, biodegradable, and biocompatible protein polymers, tailor-designed for different applications in human medicine [6]. rELPs are a class of protein polymers with promising applications in the fields of biomedicine and nano-biotechnology. They are synthesized by recombinant DNA technology, produced by fermentation of Escherichia coli, and purified by thermo-dependent reversible segregation cycles. This polymer is capable of self-assembling, forming nanoparticles at low concentrations and hydrogels at high concentrations with very strong and irreversible stability. The molecular mass of this polymer is 101.12 kDa [7]. The aim of this work was to evaluate the feasibility of developing films based on rELP and SA for the controlled release of Cipro. Within this framework, films loaded with Cipro were prepared and characterized by evaluating drug release profiles. The data obtained were analyzed using the "Lumped" model, which allowed determining the initial release rate and parameters of pharmaceutical relevance, such as the mean dissolution time (MDT), the time to release 80% of the drug (t80%) and the dissolution efficiency (DE) [8,9], in order to compare the different formulations developed.
2
234577814
rELP Films Preparation To obtain the films, 10 g of aqueous solutions were prepared with a concentration of 16.6% w/w of rELP, to which 10 mg of Cipro was added. They were subsequently kept at 0 °C for 10 min to ensure complete dissolution of the polymer. These solutions were poured onto glass plates covered with non-stick material and placed in an oven at 37 °C for 24 h.
5
234577814
Sodium Alginate Films Preparation For these films, 0.45 g of SA and 100 mg of Cipro were weighed, and 30 mL of distilled water was added. This solution was homogenized under magnetic stirring (150 rpm) at room temperature and then poured into Petri dishes and oven-dried at 37 °C for 24 h. Finally, crosslinking was carried out for 7 min in a 0.2 M calcium chloride solution.
6
234577814
In Vitro Drug Release Tests To carry out the release tests, 1 × 1 cm² samples were cut from the different films and their weights and thicknesses were determined (Table 1). The samples were carefully placed in test tubes containing 3 mL of physiological solution, used as the release medium, at 37 °C and with orbital shaking at 90 rpm. At pre-established time intervals, samples were taken by complete removal of the release medium, replacing the same volume with fresh medium to re-establish the maximum driving force at each sampling point.
7
234577814
Data Analysis The data obtained from the release tests were analyzed using a second-order kinetic model, called the Lumped model, developed and validated by our research group [8,9]. To compare the release profiles, the experimental data were adjusted using the Polymath 6.0 program and parameters of pharmaceutical relevance were calculated: the MDT, the time to release 80% of the drug (t80%) and the DE. Furthermore, the model makes it possible to determine the initial release rate (a).
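A minimal sketch of how these parameters can be derived, assuming the Lumped model takes a hyperbolic second-order form Mt(%) = a·t/(1 + b·t) (consistent with the stated units of a and b and an initial release rate equal to a); the numeric values of a and b below are illustrative, not the fitted ones reported by the authors:

```python
# Sketch of the pharmaceutical parameters derived from a Lumped-type model.
# The functional form Mt(%) = a*t / (1 + b*t) is an assumption consistent
# with the text; the authors' exact equation may differ.

def lumped(t, a, b):
    """Cumulative percentage of drug released at time t (min)."""
    return a * t / (1 + b * t)

def m_infinity(a, b):
    """Plateau of the hyperbola: total drug available for dissolution (%)."""
    return a / b

def t80(a, b):
    # Solve a*t/(1+b*t) = 0.8*(a/b)  ->  t = 4/b
    return 4 / b

def dissolution_efficiency(a, b, t_f, n=2000):
    """Area under Mt up to t_f as a % of the 100%-dissolution rectangle."""
    dt = t_f / n
    area = sum(lumped(i * dt, a, b) * dt for i in range(1, n + 1))
    return area / (100 * t_f) * 100

a, b = 10.0, 0.1          # illustrative values, %/min and 1/min
print(m_infinity(a, b))   # 100.0
print(t80(a, b))          # 40.0 min
print(round(dissolution_efficiency(a, b, 200), 1))
```

In practice a and b would come from nonlinear regression against the release data (the text uses Polymath 6.0 for this step); the derived quantities then follow directly from the fitted parameters.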
8
234577814
In Vitro Drug Release Tests In vitro drug release tests are used to characterize the release of a drug from a certain pharmaceutical form, the dissolution test being the most relevant. According to the United States Pharmacopeia (USP), drug dissolution and release tests are required for dosage forms in which absorption of the drug is necessary for the product to exert the desired therapeutic effect [10]. A central goal in the modified release of drugs is to establish the desired release kinetics of a given drug for a specific application. The experimental data obtained from the Cipro release tests of the two systems are shown in Figure 3.

In the platforms developed, the drug is distributed homogeneously in a continuous matrix formed by the polymeric network that controls the release rate, forming what is called a monolithic system. In this type of device, release is usually controlled by diffusion through the matrix material or through aqueous pores, and an initial burst of drug release from the surface is often observed. Over time, the rate of drug release decreases as the distance the drug must diffuse to reach the surface increases. This can be seen in Figure 3, where a high initial rate of drug release from both polymer platforms is observed due to the transfer phenomenon caused by the presence of drug on the surface of the films. Then, a moving front of solvent advances through the polymeric film, allowing the Cipro to diffuse to the surface so that it is available for dissolution in the release medium. As the distance between the surface and the moving front increases with time, the release rate decreases.

The object of any mathematical model is: (i) to accurately represent the processes associated with drug release, (ii) to describe/summarize experimental data with parametric equations or moments, and (iii) to predict processes under varying conditions.
However, when describing the processes involved, some of the models developed are too complex to be useful in practice. Under these premises, the mathematical model used to adjust the experimental values was the Lumped model developed by our research group in pharmaceutical technology (Equation (1)). It considers the grouped effect of diffusion within the film and the transfer to the physiological solution:

Mt(%) = a·t / (1 + b·t)    (1)

where Mt(%) is the cumulative percentage amount of drug released at time t. The equation parameters a (%/min) and b (min-1) can be obtained graphically. However, the best procedure to fit the model to the experimental data is through a nonlinear regression analysis using the values of a and b found graphically as a first approximation. Nonlinear regression analysis was performed using the Polymath 6.0 program. The model adjusted well to the experimental values for the two polymers evaluated (Figure 3).

The model also allows us to determine the initial dissolution rate, since the rate at any time is dMt/dt = a/(1 + b·t)². Therefore, when t = 0, the initial rate of dissolution is a. Table 2 shows the values of the parameters a and b of the equation, as well as other parameters of pharmaceutical relevance, such as t80%, DE, MDT and M∞ = a/b, the total amount of drug available for dissolution.

It is observed that the SA systems have an initial rate about 6 times higher than that of the rELP systems. Furthermore, the t80% parameter, which is the time required to reach dissolution of 80% of the drug available for dissolution, is 35 min for the SA films. The pharmacopeia states that if this parameter is lower than 45 min, the release can be considered immediate [11], as is the case for the SA platforms. In a system that modulates the dissolution of a drug, a high t80% value is desirable, as is the case with the rELP films, which indicates a delay or control in the release process.
The DE is defined as the area under the dissolution profile up to a certain time (tF), expressed as the percentage of the area of the rectangle described by 100% dissolution at the same time. These DE values are very close for both systems. Finally, the MDT value is a widely used pharmaceutical parameter to characterize the release rate of drugs from a specific dosage form that provides information about the ability to delay the release of the active ingredient from the polymer platform. A high MDT value indicates a greater ability to delay release. In this case, the behavior of MDT80% is correlated with that presented by t80%, showing a value about 6 times higher for rELP systems than for SA ones. It is known that for topical products, understanding of safety and efficacy is based on the release of the active drug from its dosage form to the surface of the skin. Once the drug is in contact with the surface of the skin, it penetrates through the stratum corneum to achieve its pharmacological action. For this reason, the determination of the release rate through in vitro studies is a relevant parameter to control the quality of a topical product, in the same way that dissolution tests are important for solid dosage forms administered via the oral route.
9
55402924
A Gram-negative bacterium isolated from the sub-surface water of Ikang River, Niger Delta region, Nigeria produced an unusual biosurfactant in waste frying oil-minimal medium. Cultural and biochemical characterizations as well as 16S rRNA sequencing identified the bacterium as a strain of Pseudomonas aeruginosa with 100% sequence homology with Pseudomonas aeruginosa strain HNYM41. Biochemical characterizations, thin layer chromatography (TLC), high performance liquid chromatography (HPLC) and Fourier transform-infrared (FTIR) spectrometry identified the active compound as a glycolipopeptide (peptidoglycolipid) composed of 40.36% carbohydrates, 20.16% proteins and 34.56% lipids. The biosurfactant reduced surface tension of water from 72.00 to 24.62 dynes/cm at a critical micelle concentration (CMC) of 20.80 mg/L indicating excellent effectiveness and efficiency properties. Commendable oil-washing property (79.92% oil recovery) with an elution rate of 0.68 mL/min at 70°C, foaming and foam stability, excellent emulsification activity in kerosene, crude oil and palm oil and a significant (P = 0.000; R = 0.9901) oil solubilization property indicate excellent oil recovery, detergency and remediation potentials of the biosurfactant. Oil displacement, emulsifying and antimicrobial activities of the compound were relatively stable at relevant temperatures, pH and NaCl levels suggesting suitability for applications in hydrophobic compound remediation, emulsion stabilization and preservation of formulations.
1
55402924
Introduction Surface active compounds, otherwise called surfactants, are amphiphilic molecules with hydrophilic and hydrophobic domains which reduce the free surface enthalpy per unit area [1] at air/water interfaces and the interfacial tension at oil/water interfaces [2]. They are among the most sought-after process chemicals worldwide. Their applications range from agriculture and environment [3] [4] to industries including food, cosmetics and pharmaceuticals, not to mention petroleum [5]. These applications derive from their nature and the physicochemical properties which determine their various types, including glycolipids, lipopeptides, neutral lipids, phospholipids and the polymeric types [6]-[8]. Synthetic chemistry is at the forefront of production of these molecules, derived predominantly from petrochemicals, which makes them cheap and commercially available [9]. Nevertheless, the bulk applications of these chemicals, especially in environmental bioremediation [10] [11] and to a lesser extent the industries, whose effluents still end up in the environment, have been a source of environmental concern owing to their toxicity and environmental incompatibility [7]. Green surfactants or biologically-derived surface-active molecules, especially those of microbial origin called biosurfactants, are suitable alternatives to their chemical counterparts by reason of their wider applications, biodegradability, low to non-toxicity and environmental compatibility [12]. Biosurfactant-producing microbial strains from different taxonomic groups, especially bacteria [13] [14] and yeasts [15] [16], have been isolated and employed in production. In recent times, a new area, called green chemistry, has developed in which biosurfactants are applied in the production of nanoparticles [17]. In the industries, most of the biosurfactants are used as emulsifiers, but extensive applications in this respect have been limited by relatively high production and recovery costs [1].
Production economics is frequently encountered as the major drawback of biotechnological processes, including biosurfactant production, and has greatly limited its commercial applications. However, production costs can be reduced by careful selection of producing strains with improved yield, improved product formation rates and use of cheap (often waste) substrates. The recommended protocol for isolation of biosurfactant-producing bacteria involves a combination of screening concepts [18]. However, multiple substrates are required for successful isolation and selection of a diversity of biosurfactant-producing bacteria with abilities to produce novel biosurfactant types [13]. The successful use of wastes like olive oil mill effluent, dairy waste and waste frying oil as substrates for biosurfactant production has been reported by a few researchers, with encouraging results [19]-[21]. Here, we report the isolation of a strain of Pseudomonas aeruginosa from the mesotidal waters of Ikang River, Niger Delta area, Nigeria, with the ability to elaborate an unusual surface-active compound when grown on waste frying sunflower oil. A waste frying oil disposal problem currently exists in Calabar, the capital city of Cross River State, Nigeria, where it is emptied into drainages, resulting in an unbearable stench in the capital city. The government has already shut down a number of fast-food outlets in the city, thereby worsening the unemployment problem of the state and country. Utilization of the waste frying oil for biosurfactant production will go a long way towards solving the waste oil disposal and, subsequently, unemployment problems. To the best of our knowledge, this is the third time a strain of Pseudomonas aeruginosa has been reported to produce a glycolipopeptide biosurfactant, but the first report of that production on waste frying oil.
2
55402924
Medium formulation for biosurfactant production Purified cultures of all morphologically-distinct bacteria were grown in minimal medium supplemented with different carbon sources, including glucose, glycerol, rice processing effluent, crude oil and waste frying oil at 1% (v/v or w/v), as sole source of carbon and energy. The minimal medium contained (g/L): KH2PO4 1.0; K2HPO4 0.5; MgSO4.7H2O 0.2; NaCl 0.5; NH4Cl 1.0; FeSO4.7H2O 0.01, with the pH adjusted to 7.0 using 1 M HCl/1 M NaOH. Twenty-milliliter volumes of minimal medium were dispensed into 100 mL Erlenmeyer flasks. Flasks were sterilized by autoclaving at 121°C for 15 min and inoculated, upon cooling, with 2% (v/v) of an 18 h-old Luria broth culture of each bacterial isolate. Flasks were incubated at room temperature (28 ± 2°C) on a rotary shaker agitating at 150 rpm for 72 h.
3
55402924
Qualitative screening of isolates for biosurfactant production Cell-free fermentation broth from each flask, corresponding to each bacterium, was obtained by centrifugation at 8,000 x g for 10 min followed by membrane filtration (0.2 µm, Millipore) and subjected to qualitative screening for biosurfactant production by a combination of the rapid drop collapse test [13], emulsifying activity test [22], salt aggregation test [23] and oil displacement test [6].
4
55402924
Quantitative biosurfactant screening The initial quantification of biosurfactant in the cell-free broth of bacteria which tested positive to at least 75% of the qualitative screen tests followed the oil-displacement assay. This assay is sensitive enough to detect the presence of 10 µg or 10 nmol of biosurfactant in sample solutions [6]. In this test, 15 µL of crude oil (Nigerian medium crude-Adaax) was added to the surface of 40 mL of distilled water in a Petri dish of diameter 15 cm to form a thin uniform oil layer. Plates were allowed to equilibrate for 1 h. Thereafter, 10 µL of each cell-free fermentation broth was gently placed on the center of the oil layer. The diameter of the clear halo visualized under visible light was measured with a meter rule after 30 s and the area of the clear zone calculated using the formula A = πr²; where A is the area of the oil film, r the radius and π a constant of value 3.14. The larger the diameter of the clear halo, the larger the area and hence the greater the amount of biosurfactant. All determinations were made in triplicate. Biosurfactant hyper-producing strains were selected by consideration of the mean oil-displaced areas of their cell-free fermentation broths.
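The halo-to-area conversion above is a direct application of A = πr²; a minimal sketch with hypothetical triplicate readings:

```python
import math

def displaced_area_cm2(halo_diameter_cm):
    """Clear-zone area from the measured halo diameter (A = pi * r^2)."""
    r = halo_diameter_cm / 2
    return math.pi * r ** 2

# Hypothetical triplicate halo diameters in cm (not measured values)
readings = [4.0, 4.2, 3.8]
areas = [displaced_area_cm2(d) for d in readings]
mean_area = sum(areas) / len(areas)
print(round(mean_area, 2))  # mean oil-displaced area, cm^2
```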
5
55402924
Identification of biosurfactant-producing bacterium The selected bacterium was identified morphologically and biochemically using the MICROGEN ID Kit (Microgen Bioproducts Limited, UK) in conjunction with the Microgen identification system software, as well as by 16S rRNA sequencing. The sequencing protocol made use of primers 271 and 1492R and utilized a sequence mix composed of NZYTaq 2 x Green Master Mix (Nzytech). Amplification was done by denaturation at 94°C for 3 min, cooling to 5°C for 1 min and raising the temperature again to 72°C. This was repeated through 30 cycles and then held at 72°C for 10 min. The polymerase chain reaction (PCR) products were purified with the JETQUICK PCR Purification Spin Kit and used as template for the sequencing reaction. Sequences were compared to GenBank sequences using the standard nucleotide basic local alignment search tool (BLAST). The bacterium was deposited at the University of Calabar Collection of Microorganisms (UCCM) and has since been given the collection's code name.
6
55402924
2.4. Determination of the effectiveness of biosurfactants by surface tension reduction measurement The surface tensions of sterile clear amber cell-free biosurfactant solutions of four isolates were determined by the ring method [20] at room temperature with a tensiometer (CSC Du-Nouy Tensiometer) fitted with a platinum ring. A volume of 70 mL of each cell-free biosurfactant broth was dispensed into 100 mL beakers and placed on the seat of the tensiometer. The seat was then raised by an adjustment knob until contact was made between the surface of biosurfactant liquid and the platinum ring. The liquid film produced beneath the ring was stretched as attempt was made to bring the ring out of the liquid by means of the adjustment knob. The force needed to break the ring free of the liquid to the surface was read off a scale calibrated in dynes/cm. This was repeated 5 times for each of the biosurfactant solutions and mean determinations obtained and presented as surface tension of biosurfactants from the bacterial strains tested.
7
55402924
2.5. Recovery and characterization of biosurfactant 2.5.1 Biosurfactant recovery The most effective biosurfactant was recovered from the sterile clear amber liquid (obtained by centrifugation of the fermentation broth at 8,000 x g for 10 min followed by sterilization by membrane filtration) by acidification of 5 mL of sterile biosurfactant solution to pH 2.0 with 6 N HCl [24]. Thereafter, the acidified biosurfactant treatments were allowed to stand for 10 h at 4°C, after which equal volumes of the acid-treated biosurfactant and a chloroform-methanol mixture (2:1) were prepared in separatory funnels. The preparations were allowed to stand for 30 min and the bottom layer (organic phase) was separated and subjected to rotary evaporation at 35°C. The brown oily substance that ensued was weighed and expressed as g/L. For TLC characterization, some plates were sprayed with a carbohydrate-detecting reagent [13] followed by heating at 125°C, some were steamed in iodine vapour for the detection of lipids (fatty acids), and some were sprayed with ninhydrin reagent containing 0.5 g ninhydrin in 100 mL anhydrous acetone for the detection of peptides (proteins). Negative ninhydrin tests prompted digestion of the biosurfactant with 6 N HCl with heating at 105°C for 24 h for the detection of free amino acids [28], after which the ninhydrin test was repeated. Spot colours were noted. Surface-active fractions were confirmed by the oil displacement activity of needle-point scrapings from the plates.
8
55402924
Biosurfactant analysis by high performance liquid chromatography (HPLC) The active fractions of crude biosurfactant obtained from analytical TLC were subjected to HPLC analysis using a Gemini C18 column (100 x 4.6 mm) with a particle diameter of 5 µm, coupled to a Varian 335 diode array detector for UV detection over the wavelength range of 200-220 nm. Sample analysis followed a linear gradient starting with 85% eluent A (0.1% ortho-phosphoric acid) and 15% eluent B (acetonitrile) at 0 min, increasing eluent B to 100% after 40 min, at an eluent flow rate of 1.0 mL/min [29].
9
55402924
Biosurfactant analysis by Fourier transform-infrared (FT-IR) spectrometry The infrared spectra of the partially-purified biosurfactant were recorded on a Fourier transform-infrared system (Spectrum BX-Perkin Elmer) in the 4000-400 cm-1 spectral region at 2 cm-1 resolution. The sample was spread on a 0.23 mm KBr cell and the cell inserted into the IR system. The spectra were displayed on a connected computer monitor after Fourier transformation.
10
55402924
Evaluation of biosurfactant effectiveness and efficiency The effectiveness of the crude biosurfactant, given by surface tension reduction, was determined by the ring method [20] at room temperature with a tensiometer (CSC Du-Nouy Tensiometer) fitted with a platinum ring. The efficiency of the biosurfactant, given by its critical micelle concentration (CMC), was determined by measuring the surface tension values of increasing concentrations of the biosurfactant. The log-transformed biosurfactant concentrations were regressed on their respective surface tension values using a second-order polynomial function. The CMC was defined as the minimum biosurfactant concentration above which no further reduction in surface tension occurred.
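As a simplified alternative to the polynomial-regression procedure described above, the CMC can be read off a dilution series as the point where the surface tension curve flattens; the data below are hypothetical, not the measured values:

```python
def estimate_cmc(concs_mg_l, tensions, tol=0.5):
    """
    Crude CMC estimate: the lowest concentration beyond which surface
    tension stops dropping by more than `tol` dynes/cm. This is a
    simplification of the polynomial-regression approach in the text.
    Points must be ordered by increasing concentration.
    """
    for i in range(len(tensions) - 1):
        if tensions[i] - tensions[i + 1] < tol:
            return concs_mg_l[i]
    return concs_mg_l[-1]

# Hypothetical dilution series (mg/L) and measured tensions (dynes/cm)
concs = [5, 10, 20, 40, 80]
st    = [55.0, 40.0, 24.7, 24.6, 24.6]
print(estimate_cmc(concs, st))  # 20
```

Above the plateau, added surfactant forms micelles instead of packing the interface, which is why the tension stops decreasing; a real analysis would fit both branches of the curve rather than use a fixed tolerance.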
11
55402924
2.7. Activity characterization of biosurfactant 2.7.1. Biosurfactant-enhanced oil recovery (washing activity) by the sand pack method Biosurfactant-enhanced recovery of Bonny light crude oil was conducted using the modified sand pack column technique [30]. One hundred and fifty grams (150 g) of 140 µm sand particles was packed into a glass column of 20 mm x 25 mm x 85 mm dimensions with a 100 µm pore size sieve. The column was fitted with a tap outlet at the bottom to permit escape of recovered oil. A volume of 100 mL of Nigerian medium crude oil was poured into the column and allowed to stand for 3 h. Then 100 mL of sterile biosurfactant solution was added to the column and the column incubated at 30, 50 and 70°C to ascertain the influence of temperature on biosurfactant-enhanced recovery. The experiment was set up in triplicate with 50 mL of Milli-Q water serving as control. The volume of oil released by biosurfactant enhancement was measured at 30 min intervals for 3 h at the different temperatures and compared with that of the control. The oil recovery rate was determined and expressed as mL/min.
12
55402924
Biosurfactant-enhanced solubilization of Nigerian medium crude The crude oil solubilization test was performed following a batch solubilization technique [31]. Twenty-five milliliters (25 mL) of Nigerian medium crude oil (ADAAX Oil Nigeria) and different concentrations of crude biosurfactant (0, 50, 100, 200, 300, 500, 700, 1000, 1300, 1650 and 2000 mg/L) solutions were mixed in equal volumes in glass-stoppered 250 mL Pyrex separatory funnels. The funnels were placed on a rotary shaker agitating at 200 rpm at 30°C for 36 h. A 12 h settling period was allowed, followed by withdrawal of the aqueous phase from the bottom of the funnel with minimal disturbance. Samples, now called water soluble fractions (WSF), were analyzed for total petroleum hydrocarbons (TPH) by the n-hexane method. Absorbances of the n-hexane extracts from the different treatments were taken at 450 nm with a HACH DR 300 spectrophotometer and amounts determined gravimetrically from a standard curve. Regression statistics (Excel 2007) were used to analyze the data obtained to show the relationship between biosurfactant concentration and solubilization of crude oil at the 95% confidence limit.
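The regression step can be sketched with a plain least-squares fit of TPH against biosurfactant dose; the data points below are hypothetical, not the measured values:

```python
def linear_fit(x, y):
    """Ordinary least-squares slope, intercept and correlation coefficient r."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / (sxx * syy) ** 0.5
    return slope, intercept, r

# Hypothetical data: biosurfactant dose (mg/L) vs TPH solubilized (mg/L)
dose = [0, 100, 300, 500, 1000, 2000]
tph  = [2.0, 9.5, 25.1, 40.0, 79.8, 160.5]
slope, intercept, r = linear_fit(dose, tph)
print(round(r, 3))  # r close to 1 indicates a strong linear relationship
```

A significant positive slope with r near 1, as the paper reports (R = 0.9901), indicates that solubilized TPH scales linearly with biosurfactant concentration over the tested range.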
13
55402924
Assessment of antimicrobial activity of biosurfactant
Assessments were conducted by the agar disc diffusion method [28] in Mueller-Hinton agar for bacteria and Potato dextrose agar for fungi. Discs of 6 mm diameter were punched from filter paper (Whatman No. 3; Millipore) and sterilized by autoclaving in capped bottles. Aliquots of 20 µL (50 µg/mL) of crude biosurfactant solution were absorbed by each disc and allowed to equilibrate for 24 h. Equilibrated discs were aseptically introduced onto the center of Mueller-Hinton and Potato dextrose agar plates preseeded with appropriate dilutions of test cultures/spores. Cell/spore densities of test cultures were as follows (cfu/mL): 10⁶
Assessment of foaming property of biosurfactant
The foam power and stability of the glycolipopeptide were evaluated according to the method of Chen et al. [32] with slight modifications. A volume of 200 mL of sterile biosurfactant solution was allowed to flow through a burette from a height of 90 cm into a 500 mL measuring cylinder. The turbulence generated foam, whose height was noted immediately and again after 5 min. Foam heights were also measured every 1 h, then every 24 h. The foam height at time 0 min was considered the foam power, while the R5, defined as the ratio of the foam height at 5 min to that at 0 min, was considered an indication of foam stability.
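The two metrics defined above reduce to simple ratios: foam power is the height at time 0, and R5 is h(5 min)/h(0 min). A tiny sketch, with illustrative heights:

```python
# Foam power and R5 stability ratio as defined in the text.
# The heights used below are illustrative, not measured values.
def foam_metrics(h0_cm, h5_cm):
    """Return (foam_power_cm, r5) from foam heights at 0 and 5 min."""
    return h0_cm, h5_cm / h0_cm

power, r5 = foam_metrics(7.2, 7.2)  # a foam that kept its initial height
print(power, r5)
```

An R5 of 1 means the foam height was unchanged after 5 min, i.e. maximal short-term stability.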
2.8. Stability characterization of biosurfactant
2.8.1. Temperature stability
To determine the thermal stability of a 5 g/L crude biosurfactant solution, the test biosurfactant preparation was maintained at temperatures of 20, 40, 60, 80, 100, 120, 130 and 140°C for 15 min, cooled to room temperature, and the oil displacement, emulsification and antimicrobial activity tests of the glycolipopeptide performed as described by Abouseoud et al. [33].
pH stability
The effect of pH on biosurfactant activity was determined by adjusting the pH of 5 mL of 0.5% (w/v) crude biosurfactant solution with 1 M HCl/1 M NaOH over a range from 4 to 11; the solutions were held for 15 min. The oil displacement, emulsification and antimicrobial activity tests of the glycolipopeptide were conducted as described previously [33].
Salinity (NaCl) stability
A crude biosurfactant concentration of 0.5% (w/v) was prepared by dissolving 0.5 g of crude biosurfactant in 100 mL of NaCl solutions of different concentrations: 5, 10, 15, 20 and 25%, at pH 7.0. The preparations were held at 30°C for 15 min to determine the effect of salt concentration on the activities of the glycolipopeptide. Oil displacement, emulsification and antimicrobial activity tests were performed as previously described [33].
3.1. Substrate-specific isolation of biosurfactant-producing bacteria
The results of the isolation of biosurfactant-producing bacteria from the 20 samples analyzed in our study, as mediated by the different substrates, are presented in Table 1. The table reveals that a total of 858 morphologically distinct bacteria were isolated from all 20 samples, with Ifondo water (IFW) harboring the highest number of distinct isolates (69). The table also reveals that a total of 141 (16.43%) biosurfactant-producing bacteria were selected using the different substrates, with waste-frying oil contributing 50 (35.46%). Generally, miscible substrates selected fewer biosurfactant-producing bacteria than hydrophobic substrates. A two-way analysis of variance revealed a significant (P < 0.05) influence of substrates on the isolation of biosurfactant-producing bacteria.
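The substrate-effect test mentioned above can be sketched as a two-way ANOVA without replication (substrates as rows, sample kinds as columns); the count matrix below is invented for illustration and is not the paper's Table 1.

```python
# Pure-Python two-way ANOVA without replication (randomized-block layout).
# Rows = substrates, columns = sample kinds; counts are invented.
def two_way_anova(table):
    """Return (F_rows, F_cols) for a rows x cols table, no replication."""
    r, c = len(table), len(table[0])
    grand = sum(sum(row) for row in table) / (r * c)
    row_means = [sum(row) / c for row in table]
    col_means = [sum(table[i][j] for i in range(r)) / r for j in range(c)]
    ss_rows = c * sum((m - grand) ** 2 for m in row_means)
    ss_cols = r * sum((m - grand) ** 2 for m in col_means)
    ss_tot = sum((table[i][j] - grand) ** 2
                 for i in range(r) for j in range(c))
    ss_err = ss_tot - ss_rows - ss_cols
    ms_err = ss_err / ((r - 1) * (c - 1))
    return (ss_rows / (r - 1)) / ms_err, (ss_cols / (c - 1)) / ms_err

counts = [          # substrates (rows) x sample kinds (columns), invented
    [12, 10, 14],   # waste-frying oil
    [6, 5, 7],      # crude oil
    [3, 2, 4],      # a miscible substrate
]
f_substrate, f_sample = two_way_anova(counts)
print(round(f_substrate, 2), round(f_sample, 2))
```

A large F for the substrate factor, compared against the F distribution with the matching degrees of freedom, corresponds to the significant substrate influence reported above.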
3.2. Quantitative screening for industrially-relevant biosurfactant-producing bacteria
Table 2 presents the results of the quantitative screening of primarily positive biosurfactant-producing bacteria from the three major kinds of samples. The results show that only 28 (19.59%) of the total biosurfactant-producing bacteria were truly positive for biosurfactant production by the oil displacement assay method. Mean oil-displaced areas of the 4 best isolates were IKW1 (66.5 cm²) > IFW (55.3 cm²) > OB6 (52.8 cm²) > R15B (50.3 cm²). The results also show very clearly that 3 (75%) of the best isolates were obtained from water samples.
3.3. Identification of biosurfactant-producing bacteria
A summary of the results of the characterization tests leading to the identification of the best 4 biosurfactant-producing bacteria is presented in Table 3. The table reveals that isolate IKW1 was Pseudomonas aeruginosa, with a 100% sequence homology with Pseudomonas aeruginosa strain HNYM41 (GenBank accession number JN999891A). Isolate R15B, on the other hand, was identified as Bacillus cereus, with a 100% sequence homology with Bacillus cereus strain F2 (accession number JQ579629A). The table also reveals that 3 (75%) of the 4 best biosurfactant-producing bacteria were Gram-negative. The high performance liquid chromatography (HPLC) analysis of preparative thin layer chromatographic fractions revealed just one peak, indicating that the biosurfactant was purified to a 95% purity level. Spectroscopic analysis indicated the presence of amino and carboxylic groups. These characteristics, in combination with the biochemical and chromatographic results, strongly suggest carbohydrate (glyco), lipid (lipo) and protein (peptide) composition for this biosurfactant. The surface-active compound could therefore be safely referred to as a glycolipopeptide (or a peptidoglycolipid).
3.6. Determination of glycolipopeptide efficiency with the critical micelle concentration (CMC)
The efficiency of the glycolipopeptide biosurfactant, given by its critical micelle concentration (data not shown), was 20.80 mg/L. The second-order polynomial used to fit the regression model revealed a significant goodness-of-fit (P = 0.000; R² = 0.9683) of the model.
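One common way to read a CMC off surface-tension data, assumed here purely for illustration (the authors fitted a second-order polynomial instead), is to intersect a straight line through the descending branch of the curve with the plateau level:

```python
# Illustrative CMC estimate: fit a line to the falling branch of the
# surface-tension curve and intersect it with the plateau tension.
# This is NOT the authors' polynomial procedure; data are invented.
def estimate_cmc(conc, tension, plateau_points=3):
    """Return the concentration where the falling branch meets the plateau."""
    plateau = sum(tension[-plateau_points:]) / plateau_points
    # least-squares line through the clearly descending points
    pts = [(c, t) for c, t in zip(conc, tension) if t > plateau + 0.5]
    n = len(pts)
    mx = sum(c for c, _ in pts) / n
    my = sum(t for _, t in pts) / n
    slope = (sum((c - mx) * (t - my) for c, t in pts)
             / sum((c - mx) ** 2 for c, _ in pts))
    intercept = my - slope * mx
    return (plateau - intercept) / slope

conc    = [0, 5, 10, 15, 20, 25, 30, 40]    # mg/L, illustrative
tension = [68, 58, 48, 38, 30, 30, 30, 30]  # mN/m, illustrative
print(round(estimate_cmc(conc, tension), 1))
```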
3.7. Enhanced oil recovery (washing activity), solubilization, emulsification and foaming activities of glycolipopeptide
Results of the sand pack experiment demonstrating the oil recovery potential of the glycolipopeptide (data not shown) revealed that the highest crude oil recovery rate of 0.87 mL/min occurred at 50°C at the 120th min of incubation. The biosurfactant-enhanced recovery rate of crude oil at 30°C increased linearly with time until it peaked at 0.63 mL/min at the 150th min. A near-perfect parabolic curve with a peak of 0.68 mL/min at 90 min was attained when the temperature was raised to 70°C, with an oil recovery effectiveness of 79.92%. Only 21.37% of the oil was recovered with Milli-Q water (control), with a constantly increasing recovery rate throughout the 3 h holding time. The results of the solubilization potentials of the glycolipopeptide revealed a significant (P < 0.05) positive linear relationship (R² = 0.9901), oil solubilization increasing as biosurfactant concentration increased. The emulsifying activity of the glycolipopeptide was tested with kerosene, crude oil and palm oil as hydrophobic compounds. The results showed that the emulsification indices of the surface-active compound were 79.71%, 84.87% and 87.54% in kerosene, crude oil and palm oil, respectively. Results of the foaming experiment revealed that the active compound could produce small, densely-packed foam with a height of 7.2 ± 0.3 cm at time 0 min. The foam was stable for more than 48 h but less than 72 h. The R5 of the foam was 1.
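Emulsification indices like those quoted above are conventionally computed as the emulsion-layer height divided by the total liquid-column height, times 100 (E24 when read after 24 h); the heights below are illustrative, not measured values.

```python
# Conventional emulsification-index calculation (E24 when read at 24 h).
# Heights are invented placeholders.
def emulsification_index(emulsion_height_cm, total_height_cm):
    """Percentage of the liquid column occupied by the emulsion layer."""
    return 100.0 * emulsion_height_cm / total_height_cm

print(round(emulsification_index(4.38, 5.0), 2))
```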
3.8. Antimicrobial potential of glycolipopeptide
The results of the antimicrobial potentials of the glycolipopeptide showed that the biosurfactant inhibited Bacillus subtilis UCCM 0006 the most, with a zone diameter of 34 mm. However, the surface-active compound showed no inhibitory activity against Serratia sp. UCCM 0003. The overall spectrum of antimicrobial coverage of the glycolipopeptide was narrow.
3.9. Stability characterization of glycolipopeptide activities
The results of the effect of temperature, pH and NaCl on the oil displacement activity of the glycolipopeptide are presented in Figure 2. Figure 2A reveals that the oil displacement activity of the glycolipopeptide was stable up to 80°C, after which it dropped gradually. The glycolipopeptide was observed, in Figure 2B, to be stable at alkaline pH levels, with maximal stability of the oil displacement activity at pH between 7 and 9. The oil displacement activity of the surface-active compound (Figure 2C) decreased gradually with increasing NaCl concentration; at NaCl concentrations of up to 10%, it still demonstrated commendable displacement activity. Results of the effect of temperature, pH and NaCl on the emulsifying activity of the glycolipopeptide are presented in Figure 3. Figure 3A shows that the emulsifying activity of the glycolipopeptide increased as temperature increased: an activity of 79.71% at 30°C in kerosene increased to 91.43% at 140°C, while that of 82.87% in palm oil increased to 95.49% at 140°C. Figure 3B shows the influence of pH on the emulsifying activity of the biosurfactant and reveals that the activity increased as the pH increased from 4 to 7 and stabilized in the alkaline region. The influence of NaCl on the emulsifying activity of the biosurfactant is shown in Figure 3C, which reveals that 77.21% of the emulsifying activity was retained at 10% NaCl. Figure 4 presents the results of the effect of temperature, pH and NaCl concentrations on the antimicrobial activity of the glycolipopeptide. Figure 4A shows that the antimicrobial activity was retained up to 60°C and reduced thereafter up to 140°C. Figure 4B shows that the antimicrobial activity of the surface-active compound was high at acidic and extremely alkaline pH but moderate at neutral and weakly alkaline pH (8 to 9). The influence of NaCl concentrations on the antimicrobial activity of the biosurfactant is presented in Figure 4C, which reveals that the activity remained stable up to 10% NaCl concentration but gradually decreased at higher NaCl concentrations.
5. Conclusion
Pseudomonas aeruginosa strain IKW1 was isolated from the mesotidal waters of the Ikang River, Niger Delta area, Nigeria, and demonstrated commendable ability to elaborate a rare but effective and efficient surface-active compound identified as a glycolipopeptide. The active compound demonstrated excellent oil-washing, solubilization, emulsification, foaming and antimicrobial properties. These activities were stable at moderately high temperatures, at alkaline pH and at NaCl concentrations below 10%. The bacterium is recommended for large-scale production of the biosurfactant on waste frying oil as a restaurant waste management option and for applications in tertiary oil recovery, bioremediation of hydrocarbon-impacted environments and the development of detergent, food and pharmaceutical preparations where emulsion development, stabilization and preservation are desired.
Organically modified "Keggin-type" oxopolymetallates of formula [R]4[SiW11O40(SiR')2] (R = Bu4N; R' = -C2H5, -C10H21, -CH=CH2, -CH2CH=CH2, -OH, -C6H5, -C10H7, -C6H4NH2 (o, p), -C6H4NMe2 (p), -C6H4CH=CH2, -C6H4CF3, -C6H4CH=CHC6H5) were synthesized. The oxide core exhibited reversible redox properties that could be tuned by the choice of the organic modifier. The modifier-induced variations in the W NMR chemical shifts were also measured, and both effects were correlated with the electron donor-acceptor properties of the organic radical. The magnitudes of these effects were compared with those caused by the electrolyte.
Introduction
Over the last decade, there has been great interest in the study of polyoxometallates (POM) as models for transition metal oxides [1]. These compact oxide clusters of small size (around 10 Å) present unique redox properties and the ability to mimic transition metal oxide features [2]. The redox properties are sensitive to the POM composition and structure. Strong effects are also measured with changes of electrolyte [3]. Recently, the grafting of organic radicals to a Keggin-type oxide core was described [4,5,6]. As a first example of the 'modifier effect', we previously reported the grafting of polymerizable groups to POM structures as a powerful way to develop mixed organic-inorganic polymers with adjustable properties [7,8]. In this paper, the interactions between the organic modifier and the oxide core are measured for organically modified POMs (OMPOMs) with various substituents. Redox potentials (cyclic voltammetry) and NMR chemical shifts (29Si and 183W) are used as sensitive probes for such interactions. The OMPOMs were synthesized following two procedures previously described [5]. Trichlorosilane (Cl3SiR') in water or trialkoxysilane ((OEt)3SiR') in acidified water was added to the lacunary Keggin POM [SiW11O39]8- (Table 1). The characterization of a few OMPOMs has already been detailed by elemental analysis, IR, time-of-flight mass spectrometry, and 29Si and 183W NMR. These analyses assert the grafting of two R' groups, symmetrically anchored to the edges of the hole in the lacunar [SiW11O39]8- cluster, as proposed by Knoth [9] (Fig. 1). All compounds were purified twice by recrystallization in DMF/water or DMF/acetone mixtures and the purity of these compounds was shown to be above 95% (the main impurity is [Bu4N]4[SiW12O40]).
NMR spectroscopy
1H, 13C-{1H} and 29Si-{1H} NMR spectra were recorded to check the purity of the compounds (Bruker AC250 spectrometer). 183W NMR spectra were recorded on a Bruker AM500 spectrometer following usual procedures [5]. OMPOM salts were dissolved in DMF/DMSO-d6 mixtures.
Table 1. Properties of [Bu4N]4[SiW11O40(SiR')2] salts. Synthesis procedures for the organo-silicon precursor: (a) (OEt)3SiR' commercial, (b) Cl3SiR' commercial, (c) (OEt)3SiR' from Si(OEt)4; R'Br; Mg/ether, (d) 183W: 1.0 M solution of Na2WO4 (pH = 10). Given the low sensitivities of these techniques, measurements could only be obtained when sufficient quantities of the compounds were available. The labels W1, ..., W6 are arbitrarily assigned with regard to the POM structure. Electrochemistry: reduction potentials (vs. SCE) for the first two redox stages in Bu4NBF4 (0.1 M)/DMF electrolytes, scan rate 50 mV/s.
Electrochemistry
Electrochemistry measurements were performed in different electrolytes, following usual procedures [10]. In all experiments, the concentration of the electrolyte was 0.1 M in supporting salt, and the concentration of OMPOM was fixed at 10-3 M. The working electrode was a 0.07 cm² disk of glassy carbon, polished before each measurement (1 µm grain size SiC paste). The reference electrode was a saturated calomel electrode (SCE), except in acetonitrile where an Ag/Ag+ electrode had to be used. Solutions were degassed with dried argon prior to measurements. Cyclic voltammetry experiments were performed with scan rates from 1000 mV/s down to 1 mV/s and characteristic potentials were reproducible within 5 mV. Two successive monoelectronic reversible stages proceeded, as shown by the 60 mV measured between reduction and oxidation stages, coulometric measurements and the intensity-scan rate relationship (I ∝ v1/2).
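The two reversibility diagnostics invoked above (a peak separation near 59-60 mV for a one-electron Nernstian couple at 25°C, and a peak current proportional to the square root of the scan rate for a diffusion-controlled wave) can be sketched as quick numeric checks; the potentials and currents below are invented.

```python
# Quick reversibility checks for cyclic voltammetry data.
# All numeric inputs below are illustrative, not measured values.
import math

def peak_separation_ok(e_red_mv, e_ox_mv, tol_mv=10):
    """Nernstian one-electron couple: |Epa - Epc| close to 59 mV at 25 C."""
    return abs(abs(e_ox_mv - e_red_mv) - 59) <= tol_mv

def proportional_to_sqrt_v(scan_rates, peak_currents, rel_tol=0.05):
    """Check i_p / sqrt(v) is constant, as for a diffusion-controlled wave."""
    ratios = [i / math.sqrt(v) for v, i in zip(scan_rates, peak_currents)]
    return (max(ratios) - min(ratios)) / max(ratios) <= rel_tol

print(peak_separation_ok(-1000, -940))
print(proportional_to_sqrt_v([1, 4, 9], [2.0, 4.0, 6.0]))
```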
Results and Discussion
Table 1 presents the properties of 14 OMPOMs as their tetrabutylammonium salts. Variations in the chemical shifts of all nuclei can be seen with the nature of the organic modifier group. 29Si signals are very sharp, and changes in the chemical shifts are noted for both the organically bonded silicon atom and the central oxometallate silicon atom. The first effect is due to σ-π interactions between the organic group and the silicon atoms and parallels those observed for trichloro- or trialkoxysilanes [11]. NMR Si-W couplings were observed on this first signal, and their similar values reflect the identical structure of the different OMPOMs. The magnitude of the effect on the second silicon is much lower, and one should assume a moderate variation of the silicon partial charge [12]. Changes in chemical shifts are also observed for all W nuclei. The magnitude of this effect varies from 0.8 ppm for W1 up to 8.4 ppm for W6. Assuming also that this effect arises mainly from changes in the partial charge on the different tungstens, the stronger effect measured on W5 and W6 asserts the proximity of the organic groups to these nuclei [13].
A significant variation in the reduction potentials, measured in DMF/Bu4NBF4, was observed for the two first redox steps of the different OMPOMs, ranging from -1000 mV (vs. SCE) up to -890 mV. Table 2 reports the data obtained for some of these compounds in different solvents. The ferrocene/ferricinium (Fc/Fc+) couple was used for standardization of all measurements in order to compare the results on a common scale. Severe variation of the redox potentials was observed, depending on both the solvent and the supporting salt. Obviously, the solvent strongly affects the redox potentials (it could be parametrized by the acceptor numbers [14]), while the nature of the supporting salt mainly affects the difference between the redox stages. However, the relative redox potentials of the different OMPOMs are not affected by changing the electrolyte. This fundamental observation demonstrates the intrinsic effect of the organic substituent on the redox potential, which appears to be related to the electronic acceptor/donor behavior of the organic substituent. The ability to be reduced is increased when the oxide core is electron depleted, i.e., when the R' organic group is more electronegative [15]. Figure 2 correlates the data obtained from electrochemistry and 183W NMR measurements on some of these compounds. A strong relationship between these two sets of measurements is evidenced by the parallel effects, and both should be related to the π-acceptor character of the lacunary polyoxometalate [16] and the electronic effect of the organic group. The strong π-conjugation between the organic and inorganic moieties is exalted in comparing the redox potentials of the para and ortho aminophenyl derivatives, respectively -1000 and -890 mV (vs. SCE). Only in the first of these compounds may the lone pair be conjugated with the oxide core, leading to a rather low reduction potential.
This work reports the strong synergy between organic and inorganic components in a series of organically modified heteropolymetalates. Two techniques were used, both measuring the electronic charge on the tungsten core. Electrochemistry averages the charge effect over the whole oxide cluster (EPR of one-electron-reduced species demonstrates the electron delocalization at room temperature), while 183W NMR spectroscopy could provide a selective mapping of the charge on the different tungsten atoms. Moderate conjugation effects were measured, and a fine tuning of the redox potential was obtained by the choice of the modifier group. However, for these different compounds, the magnitude of the intramolecular effect is low in comparison with the solvent effect. This could be due to the siloxane bond, which is certainly not the best electronic junction. Other modification methods of polyoxometallates have already been described [2,4,16]. They could be a key to obtaining functionalized molecules with unusual behavior for the fields of molecular electronics and nonlinear optics.
Figure 2. Correlation between electrochemical data (reduction potentials in Bu4NBF4/DMF electrolyte (vs. SCE)) and 183W NMR chemical shifts (for W6 atoms). The dots represent the decyl, ethyl, allyl, vinyl and phenyl species respectively, from left to right.
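The parallelism between reduction potentials and W6 chemical shifts shown in Figure 2 amounts to a linear correlation; a sketch computing a Pearson coefficient on invented placeholder values (not the paper's data):

```python
# Pearson correlation between reduction potentials and 183W (W6) chemical
# shifts. The value pairs below are invented placeholders.
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

e_red = [-980, -950, -930, -905, -890]  # mV vs SCE (illustrative)
shift = [-2.0, 0.5, 2.5, 5.0, 6.8]      # ppm for W6 (illustrative)
print(round(pearson_r(e_red, shift), 3))
```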
The salts were dissolved in (80-20) DMF/DMSO-d6 mixtures (0.1 to 0.4 M). Variations of the 183W chemical shifts for the different signals were less than ±0.1 ppm with changes of OMPOM concentration and slight changes of solvent composition. The signal of the silicotungstate salt was used as an internal reference. The spectra of the different compounds exhibited six signals.
The growing demand for food worldwide, along with the increasing need for animal protein, leads to the search for sources of meat of higher quality than the conventional ones, such as hare meat. The aim of this study was to characterize the sensorial, physico-chemical and nutritional traits of hare meat (Lepus europaeus Pallas) sourced from hunting funds in the North-East of Romania. The biological material consisted of 79 hares (34 males and 45 females), slaughtered by shooting at the age of about 18 months, during the regular hunting season (1 November to 31 January). Different muscle groups were collected: Longissimus dorsi (LD), Triceps brachii (TB) and Semimembranosus (SM). For the physico-chemical determinations (measurement of pH at 24 and 48 hours, and of water, protein, lipid, fatty acid and ash contents), 237 samples (79 for each muscle group) were analyzed. The results obtained from the sensory analysis were relatively close in score for the three muscle groups studied. The pH value was higher for TB muscles. The highest amount of protein was observed for LD muscles collected from males (21.65%), while the richest in lipids were the female TB muscles (2.38%). Fatty acid levels were predominantly higher for males (for most of the assessed fatty acids). Very favorable PUFA:SFA ratios were identified in LD muscles (1.695 for males and 1.531 for females), in SM muscles (1.679 for males and 1.527 for females), and in TB muscles as well (1.885 for males and 1.820 for females). The variance analysis revealed insignificant gender-related differences for the three muscle groups concerning the sensorial traits, pH level, ash content and energy value. However, for protein, lipid and water levels, highly significant gender-related differences were observed in TB muscles. Also, for some fatty acids, statistically significant differences were found between genders in all three muscle groups.
Meat products are appreciated by consumers, and there is an increasing demand for commodities issued from farming systems observing high standards of animal welfare [23]. The attention of farmers and meat producers has been focused on small mammals, such as domestic rabbits (Oryctolagus cuniculus), which provide high quality meat. Another leporid species, the brown hare (Lepus europaeus Pallas), has also been generating interest from meat producers [2,19,51]. The meat of these animals differs from that of poultry and other farmyard animals [52], but its consumption is not as popular as that of other meats [19,52]. Rabbit meat is healthier than other meats frequently used in human nutrition, such as chicken, beef, and pork [53], being easily digested, lean and rich in proteins (with high levels of essential amino acids), unsaturated lipids (ω3 and ω6), B vitamins, potassium, phosphorus and magnesium, low in sodium and cholesterol and very poor in uric acid [54][55][56][57][58][59][60][61]. The brown hare (Lepus europaeus Pallas) is one of the most popular small game species [20], being sometimes reared for the restocking of hunting and protected areas in Europe [2,20,51]. Some authors have investigated the potential of adding hunted hare meat to the human diet because of its favorable sensory characteristics, high unsaturated fatty acid [57], protein, mineral and vitamin content, low fat content [58,60,61] and an energetic value similar to other meats [51]. Hare meat is classified as red meat, mainly in terms of its high iron (Fe) content [61], but its availability is usually restricted by hunting seasons [52,57,60]. To produce high-quality meat, it is necessary to understand the characteristics of meat quality traits and the factors that control them, but in wild animals it is quite difficult to establish the influence of diet on meat characteristics [2,52].
Hares, like wild rabbits, are herbivores that consume a wide variety of plants and grains that differ qualitatively and nutritionally by season, which may cause large variation in the composition of the meat [59]. There are very few data available in the literature regarding the characterization of hare meat. To our knowledge, only three articles have approached the quality of hunted hare meat, in Austria [57], Croatia [60] and Slovakia [58]; another three recent studies describe the quality of meat collected from farmed brown hare in Italy [20,51] and Poland [2]. The lack of data on the characterization of hare meat led us to carry out this study, whose goals were to assess the sensorial, physico-chemical and nutritional traits of hare meat (Lepus europaeus Pallas) collected from hunting funds in the North-East of Romania.
Materials and methods
The biological material consisted of 79 hare individuals (34 males and 45 females), slaughtered by shooting at the age of about 18 months, during the regular hunting season (1 November to 31 January). Three different muscle groups (LD, Longissimus dorsi; SM, Semimembranosus; TB, Triceps brachii) were collected, owing to their different physical-chemical properties and metabolic types, and in order to cover the main anatomical regions of the carcasses as well (back, LD; hind leg, SM; foreleg, TB). The muscles on the right side of the carcass were used to assess the physical-chemical traits (measurement of pH at 24 and 48 h post-slaughter, and of water, protein, lipid, fatty acid and ash contents), totalling 237 samples (79 for each muscle group). They were first finely ground and homogenized using an electric shredder. The muscle groups on the left side of the carcasses (237 samples, individually packaged, vacuum-sealed and then cooked for one hour at a constant temperature of 80°C in a water bath) were used for sensory analyses, performed by tasting. After cooling, the samples were cut and given to 23 tasters, trained in advance. The assessment sheets of the sensory characteristics were filled in using a five-point hedonic scale (scores from 1 to 5), in which one point represented the least favourable features, while 5 points indicated characteristics which fully satisfied the requirements of the tasters. For example, an extremely pale colour was scored 1, while an intense red colour was scored 5; the global assessment was scored 1 for unacceptable meat, 2 for acceptable meat, 3 for good meat, 4 for very good meat and 5 for exceptional meat. For two consecutive days after slaughter, the meat pH value was measured on chilled samples at 2°C, using a digital pH meter (Hanna Electronics, type 212).
The water, protein and lipid contents were determined using the Food Check near-infrared spectrophotometer (NIRS technology); the energy value was determined by calculation using conventional formulas, and the crude ash content was assessed by calcination (at 550°C for 16 h after a preliminary carbonization) [62][63][64]. The fatty acid content was assessed using the FOSS 6500 spectrophotometer (NIRS technology). The freshly ground samples were placed in sterile Petri dishes, weighed, lyophilized at -110°C for 24 h using a CoolSafe™ SCANVAC lyophilizer, weighed again, vacuum-sealed (in special bags) and stored in a freezer at -80°C until analysis. The following saturated fatty acids (SFA) were assessed: C14:0 (myristic acid), C15:0 (pentadecanoic acid), C16:0 (palmitic acid), C17:0 (heptadecanoic acid) and C18:0 (stearic acid). Among the monounsaturated fatty acids (MUFA, ω7 and ω9), C18:1n-7 (vaccenic acid, cis isomer of oleic acid) and C18:1n-9 (oleic acid) were investigated; a total of nine polyunsaturated fatty acids (PUFA, ω3 and ω6) were also assessed: C18:2n-6 (linoleic), C18:3n-3 (linolenic), C20:2n-6 (eicosadienoic), C20:3n-6 (eicosatrienoic), C20:4n-6 (arachidonic), C20:5n-3 (eicosapentaenoic or EPA), C22:4n-6 (docosatetraenoic), C22:5n-3 (docosapentaenoic or DPA) and C22:6n-3 (docosahexaenoic or DHA) [65][66][67][68][69][70][71][72][73][74][75][76][77][78][79][80][81]. All results were statistically processed through computation of the main descriptors and analysis of variance (single-factor ANOVA), using the GraphPad Prism 7.0 software.
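PUFA:SFA ratios like those reported in the abstract follow directly from summing the measured polyunsaturated and saturated fractions; the g/100 g values below are invented for illustration.

```python
# PUFA:SFA ratio from per-acid measurements. The fatty-acid values
# below are invented placeholders, not the study's data.
def pufa_sfa_ratio(pufa, sfa):
    """Ratio of total polyunsaturated to total saturated fatty acids."""
    return sum(pufa.values()) / sum(sfa.values())

sfa = {"C14:0": 0.02, "C16:0": 0.40, "C18:0": 0.18}            # illustrative
pufa = {"C18:2n-6": 0.55, "C18:3n-3": 0.30, "C20:4n-6": 0.16}  # illustrative
print(round(pufa_sfa_ratio(pufa, sfa), 3))
```

A ratio above about 0.4 is generally regarded as nutritionally favorable, which is the sense in which the abstract calls its values "very favorable".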
Mayenite was recently successfully employed as an active catalyst for trichloroethylene (TCE) oxidation. It was effective in promoting the conversion of TCE into less harmful products (CO2 and HCl) with high activity and selectivity. However, there is a potential limitation to the use of mayenite in the industrial degradation of chlorinated compounds: its limited operating lifespan owing to chlorine poisoning of the catalyst. To overcome this problem, in this work, mayenite-based catalysts loaded with iron (Fe/mayenite) were prepared and tested for TCE oxidation in the gaseous phase. The catalysts were characterized using different physico-chemical techniques, including XRD, ICP, N2-sorption (BET), H2-TPR analysis, SEM-EDX, XPS, FESEM-EDS, and Raman. Fe/mayenite was found to be more active and stable than the pure material for TCE oxidation, maintaining the same selectivity. This result was interpreted as a synergistic effect of the metal and the oxo-anionic species present in the mayenite framework, promoting TCE oxidation while avoiding catalyst deactivation.
Introduction Trichloroethylene (TCE) is a chlorinated volatile organic solvent belonging to the class of dense non aqueous phase liquids (DNAPL) pollutants [1][2][3]. Several strategies have been considered for TCE remediation, including the use of CaO [4], bioremediation [5,6] and adsorption processes with activated charcoal or zeolites [7,8]. In addition, as TCE is highly volatile, it can be easily stripped from the remediation media (water, surfactant solutions, removed soils, etc.) with air flux and directed to further treatments in gas phase [9,10]. In this respect, catalytic heterogeneous oxidation is becoming a popular alternative to thermal incineration for treating exhausted gases rich in TCE, as catalysts lower operative temperatures and improve selectivity of the reaction towards less harmful products, with high benefits in terms of energy consumption and environmental impact. Several heterogeneous catalysts have been developed and tested for gaseous TCE oxidation. Catalytic systems based on noble metals, particularly Pt and Pd, have been extensively employed, showing good results in terms of activity and selectivity, as reported by Gonzalez-Velasco and co-workers in a recently published review [11]. Less-expensive catalysts, based on metallic oxides, have also been prepared as uniform catalyst or supported on high surface materials (e.g., γ-Al 2 O 3 ) [12,13]. Blanch-Raga et al. reported the oxidation of TCE over different mixed oxides derived from hydrotalcites [14], with the Co(Fe/Al) catalyst being the most active (T 50% = 280 • C and T 90% = 340 • C at Gas Hourly Space Velocity, GHSV = 15,000 h −1 and [TCE] = 1000 ppm) due to its acidic and oxidative properties. Zeolites also represent an important type of active catalysts for the oxidation of TCE and many papers have reported on the synergic effect of acidic sites in zeolites [15] with metal catalysts in order to improve the performance of the whole catalytic system. Romero-Saez et al. 
[16] studied the performance of an iron-doped ZSM-5 zeolite for TCE oxidation, finding that a ZSM-5 containing 2 wt% Fe quantitatively oxidizes 1000 ppm of TCE at 500 °C and GHSV = 13,500 h−1. That paper shows that the formation of active iron(III) species, as Fe2O3 nanoparticles, was most likely responsible for the enhanced catalytic performance of the zeolite. Nevertheless, the catalyst suffers some deactivation after 16 h of reaction due to the formation of FeCl3 [16]. Recently, Palomares et al. reported a remarkably high selectivity towards CO2 during TCE oxidation with Cu- and Co-doped beta zeolites. The best results (T50% = 310 °C and T90% = 360 °C at GHSV = 15,000 h−1 and [TCE] = 1000 ppm) were obtained using the Cu-doped zeolite, which combined the acid sites of the zeolite with the redox properties of the copper ions [17]. However, zeolite-based catalysts suffer some drawbacks, which include coke formation, deactivation, and the formation of chlorinated by-products [14]. In previous works, we reported on the oxidation of TCE using the mesoporous calcium aluminate mayenite (Ca12Al14O33) as a catalyst [18][19][20][21][22]; mayenite showed a good overall performance, with high activity and selectivity towards non-toxic compounds, and fair thermal stability and recyclability. As a main drawback, the material shows a certain tendency towards chlorine poisoning, leading to slow deactivation of the catalyst. Mayenite has a zeolite-type structure with interconnected cages and a positive electric charge per unit cell that is balanced by O2− ions (free oxygen ions) [23]. The free oxygen ions can be substituted by other species (Cl−, H−, NH2−, etc.) [24,25] and can migrate from the bulk to the surface at temperatures higher than 400 °C [26], thus conferring on mayenite the oxidative properties exploited in many applications [27][28][29], for instance as a Ni support for the catalytic reforming of tar [30][31][32].
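The light-off metrics T50% and T90% quoted above are simply the temperatures at which TCE conversion reaches 50% and 90%; given a measured light-off curve, they are read off by interpolation. A minimal sketch, using hypothetical conversion data chosen only to loosely mimic the Cu-doped zeolite values cited above (these are not measured data):

```python
import numpy as np

# Hypothetical light-off curve: temperature (°C) vs. TCE conversion (%).
# Illustrative values only, NOT experimental data from any cited work.
temperature = np.array([200, 250, 300, 320, 340, 360, 380, 400])
conversion = np.array([2, 12, 42, 58, 78, 90, 96, 99])

def light_off_temperature(target, temp, conv):
    """Linearly interpolate the temperature at which conversion reaches
    `target` percent (conversion must be monotonically increasing)."""
    return float(np.interp(target, conv, temp))

t50 = light_off_temperature(50, temperature, conversion)
t90 = light_off_temperature(90, temperature, conversion)
print(f"T50% = {t50:.0f} °C, T90% = {t90:.0f} °C")
```

With these illustrative points the interpolation returns T50% = 310 °C and T90% = 360 °C; in practice the same routine is applied to the full experimental light-off curve.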
With the aim of further improving mayenite activity for TCE oxidation and mitigating deactivation of the material, in this work we employ an iron-containing mayenite for the catalytic oxidation of TCE. The performance of the system has been evaluated by means of the light-off curve, and the structural properties of the material have been characterized, before and after the reaction, by different physico-chemical techniques, including XRD, ICP analysis, N2-sorption (BET), H2-TPR analysis, SEM-EDX, FESEM-EDS, XPS, and Raman spectroscopy. We have prepared catalysts with different iron contents and compared the activity and stability of this material with those of pure mayenite. The Fe/mayenite catalyst clearly maintains the crystalline structure of mayenite. Furthermore, no peaks associated with iron oxides were observed in the Fe/mayenite samples; this is due to the low metal loading in the mayenite and its good dispersion on the mayenite support [34]. Table 1 shows the metal loading and specific surface area of the catalysts. The BET surface area of mayenite was 11.7 m2/g, in line with data reported for this type of material [27]. The Fe/mayenite samples had BET surface area values similar to that of mayenite, showing that the incorporation of iron does not modify its textural properties. ICP analysis confirmed that the iron content was close to the nominal value. Fe/mayenite is a porous material (see SI, Figure S1), characterized by large pores with dimensions on the micrometre (macropore) and nanometre (mesopore) scales, composed of calcium, aluminium, oxygen, and iron with approximate contents of 34%, 40%, 26%, and 2 wt%, respectively. The structure and composition of the synthesized materials were also characterized by FESEM-EDS analysis, which yielded similar results but allowed a more detailed mapping of the atomic content. Figure 2 shows the results obtained; the most abundant elements in mayenite, i.e., aluminium, calcium, and oxygen, are clearly observed.
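As a point of comparison for the EDX composition, the theoretical weight fraction of each element in stoichiometric mayenite (Ca12Al14O33) follows directly from standard atomic masses. A minimal sketch (the atomic masses are standard values; note that surface-sensitive EDX readings can deviate appreciably from the bulk stoichiometry):

```python
# Theoretical weight fractions of pure mayenite, Ca12Al14O33,
# computed from standard atomic masses (g/mol).
atomic_mass = {"Ca": 40.078, "Al": 26.982, "O": 15.999}
stoichiometry = {"Ca": 12, "Al": 14, "O": 33}

formula_mass = sum(n * atomic_mass[el] for el, n in stoichiometry.items())
wt_percent = {el: 100.0 * n * atomic_mass[el] / formula_mass
              for el, n in stoichiometry.items()}

for el, w in wt_percent.items():
    print(f"{el}: {w:.1f} wt%")
```

This yields roughly 34.7 wt% Ca, 27.2 wt% Al, and 38.1 wt% O for the undoped bulk phase, so the calcium figure above matches the ideal stoichiometry closely, while the EDX aluminium and oxygen values reflect the surface-weighted (and iron-containing) character of the measurement.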
Iron atoms appear in low quantity and with a homogeneous distribution in the mayenite, demonstrating good iron dispersion on the mayenite support. Moreover, the FESEM images of Fe/mayenite and pure mayenite (not shown) showed no significant differences in terms of morphology, both samples retaining the morphology typical of mayenite. The TPR study of the catalysts is reported in Figure 3. All samples have similar hydrogen consumption but present different TPR profiles. As can be seen, two peaks are observed for mayenite: the first is a smaller band that appears around 550 °C, while the second is more intense, with its maximum at 620 °C. The first corresponds to the dissociative adsorption of H2 in a heterolytic fashion [35] and the second to the reaction of these species with extra-framework Ox− and O22− anions [36]. Iron-containing mayenites show a different profile, with a single band centred at 550 °C for the sample with 1.5% Fe or at 530 °C for the sample containing 2% Fe. These bands are assigned to the reduction of extra-framework Ox− and O22− anions, which in these catalysts coincides with the band assigned to the dissociative adsorption of H2 with consequent water formation, as previously reported for iron oxide-based catalysts [37]. In addition, a small shoulder at 400-450 °C can be observed in the sample with the higher iron content. This shoulder is assigned to the reduction of Fe3+ to Fe2+, as reported by Romero-Saez et al. for Fe/zeolite samples [16]. Quantification of the hydrogen consumption gives similar results for the different samples, as the main species reduced in all the catalysts are the anionic oxygens present in the mayenite. The low iron content of the Fe/mayenite and the only partial reduction of the Fe3+ species result in a hydrogen consumption that is negligible compared with that required for the reduction of the oxygen species.
The results also show a relationship between the iron content and the shift of the extra-framework Ox− and O22− reduction peak towards lower temperatures.
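The TPR quantification discussed above amounts to integrating the detector (TCD) trace over time and converting the peak area into moles of H2 with a calibration factor. A minimal sketch with a synthetic Gaussian peak; the peak shape, heating rate, calibration factor, and sample mass are all illustrative assumptions, not instrument data from this work:

```python
import numpy as np

# Synthetic TCD trace: a Gaussian reduction peak centred at 550 °C,
# roughly where the Fe/mayenite band appears. Illustrative only.
temp = np.linspace(300.0, 800.0, 1001)          # temperature axis, °C
signal = np.exp(-((temp - 550.0) / 40.0) ** 2)  # TCD signal, arbitrary units

beta = 10.0              # assumed heating rate, °C/min (links T to time)
time_min = temp / beta   # elapsed time, min

# Trapezoidal integration of the signal over time -> peak area (a.u. * min).
area = np.sum(0.5 * (signal[1:] + signal[:-1]) * np.diff(time_min))

calibration = 2.0e-6     # assumed mol H2 per (a.u. * min), from a calibration run
sample_mass = 0.05       # assumed catalyst mass, g
h2_uptake = calibration * area / sample_mass    # mol H2 per gram of catalyst
print(f"estimated H2 uptake: {h2_uptake:.2e} mol/g")
```

Comparing such per-gram uptakes across samples is what supports the statement that all catalysts consume similar amounts of hydrogen, dominated by the anionic oxygen species.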